Multiprocessor Systems-on-Chip (MPSoCs) integrating heterogeneous processing elements (CPUs, GPUs, accelerators, memory, I/O modules, etc.) are the de facto design choice to meet the ever-increasing performance-per-watt requirements of modern computing machines. Although at the consumer level the number of processing elements (PEs) is limited to 8-16, in high-end servers the number of PEs can scale to hundreds. A Network-on-Chip (NoC) is a microscale network that facilitates packetized communication among the PEs in such complex computational systems. Due to the heterogeneous integration of the cores, the execution of diverse (serial and parallel) applications on the PEs, application mapping strategies, and many other factors, the design of such NoCs plays a crucial role in ensuring optimum performance of these systems. Designing an optimal NoC architecture poses a performance optimization problem with constraints on power and area. Determination of these optimal network configurations is carried out by guided (heuristic) or unguided (exhaustive search) algorithms that explore the NoC design space. At each step of this design space exploration, a network configuration is simulated for performance, area, and power across a wide range of applications. System-level modeling is required to conduct these simulations so that the timing behavior, energy profile, and area requirements of the network are captured accurately. Depending on the accuracy of the network model, the network configuration, and the application running on the system, these simulations can be extremely slow. For example, running an open-source NoC simulator such as Booksim 2.0 for a small system containing 8 cores takes around 43.45 seconds on a 2.5 GHz dual-core Intel Core i5 machine with 8 GB of 1600 MHz DDR3 memory. An alternative to such network simulation is to use analytical network models based on classical queuing theory, treating each input channel in the NoC router as an M/M/1, M/G/1/N, or G/G/1 queue.
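For reference, the latency estimate that such an analytical model yields for a single channel follows directly from standard M/M/1 queueing results; a minimal sketch (the function name and rates are illustrative, not taken from any simulator):

```python
def mm1_mean_latency(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time (waiting + service) of an M/M/1 queue: W = 1 / (mu - lambda).

    Rates are in packets per cycle; the result is in cycles.
    """
    if arrival_rate >= service_rate:
        # The queue is unstable when offered load reaches the service capacity.
        raise ValueError("arrival_rate must be strictly below service_rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: a channel serving 1 packet/cycle under an offered load of
# 0.5 packets/cycle -> mean latency of 2.0 cycles
latency = mm1_mean_latency(0.5, 1.0)
```

The simplicity of this closed form is exactly why such models are fast, and why their accuracy hinges on the Poisson-arrival and exponential-service assumptions discussed next.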
Such analytical models provide good estimates of network performance metrics such as latency only under certain assumptions, i.e., a Poisson process for the network traffic, an exponential packet service time, and an exponentially distributed packet length. Unfortunately, these assumptions do not hold for real application-based traffic patterns, and the accuracy of the analytical models is therefore disputable. Hence, an accurate NoC performance model with accelerated runtime is required to ameliorate the slow design space exploration process for NoC architectures. To accelerate this exploration, in this thesis we propose Xtreme-NoC, an extreme-gradient-boosting-based NoC latency model. To build this model, we use an accurate system-level simulator (Booksim 2.0) to generate a dataset of NoC latencies. To contrast the proposed model with existing machine learning algorithms, we present a comparative study of different regression models for predicting the latency of NoC architectures. We also compare the results of the proposed model against the latency obtained from system-level simulations. Based on our study, we conclude the following:

1. Our proposed Xtreme-NoC outperforms other machine learning regression models, such as linear regression, support vector regression, and deep neural networks, in predicting the latency of NoC architectures.

2. The Xtreme-NoC model can predict the latency of a NoC architecture with a root mean square error of 5.077 cycles and an R-squared value of 96.16%.

3. The proposed model reduces runtime by a factor of 8513.29 compared to simulation-based latency models.


Naseef Mansoor

Committee Member
Rajeev Bukralia

Committee Member
Mezbahur Rahman

Date of Degree

Document Type

Master of Science (MS)

Computer Information Science

Science, Engineering and Technology

Rights Statement
In Copyright