Articles

Found 2 Documents

Chip Floorplanning Optimization Using Deep Reinforcement Learning Wang, Shikai; Zhang, Haodong; Zhou, Shiji; Sun, Jun; Shen, Qi
International Journal of Computer and Information System (IJCIS) Vol 5, No 2 (2024): IJCIS : Vol 5 - Issue 2 - 2024
Publisher : Institut Teknologi Bisnis AAS Indonesia

DOI: 10.29040/ijcis.v5i2.210

Abstract

This paper presents a new method for chip floorplanning optimization using deep reinforcement learning (DRL) combined with graph neural networks (GNNs). The approach addresses the limitations of traditional floorplanning by applying AI to spatial design and placement decisions. A three-head network architecture, comprising a policy network, a cost network, and a reconstruction head, is introduced to improve feature extraction and overall performance. GNNs are employed for state representation and feature extraction, enabling the capture of intricate topological information from chip netlists. A carefully designed reward function incorporating wire length minimization, area utilization, and timing constraint satisfaction guides the DRL agent toward high-quality floorplan solutions. An exploration bonus based on reconstruction error addresses the sparse reward problem. Extensive testing on the ISPD 2005 benchmarks demonstrates the effectiveness of the proposed approach, which consistently outperforms state-of-the-art methods. Significant improvements include an average 31.4% reduction in half-perimeter wire length (HPWL) and a 34.2% reduction in timing violations compared to the best-performing baseline. The method's scalability and robustness are evaluated, showing consistent performance across circuits of varying size and under different perturbations. This research advances AI-driven electronic design automation and paves the way for more efficient chip design processes.
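The reward structure the abstract describes (wirelength minimization, area utilization, timing constraint satisfaction, plus a reconstruction-error exploration bonus) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights, the scalar inputs, and the exact combination are assumptions.

```python
# Hypothetical weights; the paper does not publish its coefficients.
W_WIRE, W_AREA, W_TIMING, W_BONUS = 1.0, 0.5, 0.5, 0.1

def floorplan_reward(hpwl, area_util, timing_slack, recon_error):
    """Composite floorplanning reward as described in the abstract:
    lower wirelength, denser packing, and satisfied timing are rewarded,
    and the reconstruction head's error is added as an exploration bonus
    to densify the otherwise sparse reward signal."""
    r = -W_WIRE * hpwl                       # minimize half-perimeter wirelength
    r += W_AREA * area_util                  # reward area utilization in [0, 1]
    r += W_TIMING * min(timing_slack, 0.0)   # penalize only timing violations
    r += W_BONUS * recon_error               # poorly reconstructed states are
                                             # unfamiliar, so reward visiting them
    return r
```

The intuition behind the bonus term: states the auxiliary reconstruction head cannot reproduce well have been seen rarely, so rewarding high reconstruction error steers the agent toward unexplored regions of the placement space.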
A Deep Reinforcement Learning Approach for Network-on-Chip Layout Verification and Route Optimization Chen, Jingyi; Wang, Shikai
International Journal of Computer and Information System (IJCIS) Vol 5, No 1 (2024): IJCIS : Vol 5 - Issue 1 - 2024
Publisher : Institut Teknologi Bisnis AAS Indonesia

DOI: 10.29040/ijcis.v6i1.206

Abstract

This paper introduces a deep reinforcement learning approach for optimizing network-on-chip layout verification and route optimization. The proposed method addresses the challenges of increasing design complexity and traditional verification limitations in modern VLSI circuits. A novel three-headed policy gradient network architecture is developed to handle layout verification and routing optimization tasks simultaneously. The framework integrates feature extraction networks for topology analysis, policy networks for decision-making, and value networks for performance evaluation. The system employs a complex-valued reinforcement learning model to capture both spatial and temporal dependencies in NoC designs. Experimental results demonstrate significant improvements across multiple performance metrics: a 23.4% reduction in average packet latency, a 23.6% increase in network throughput, and a 20.8% decrease in power consumption compared to conventional methods. The verification accuracy achieves 98.7% with a false positive rate below 0.5%. The framework maintains consistent performance across various network sizes and traffic patterns, demonstrating robust scalability and practical applicability in real-world chip designs. Implementation results on ISPD 2005 benchmarks validate the effectiveness of the proposed approach, showing superior performance in both verification accuracy and optimization efficiency compared to existing state-of-the-art methods.
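The three-headed architecture this abstract outlines, a shared feature-extraction trunk feeding a policy head for routing decisions, a value head for performance evaluation, and a verification head for layout checking, could be sketched in NumPy as below. Layer sizes, initialization, and the sigmoid verification output are illustrative assumptions; the paper does not specify them here.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(n_in, n_out):
    """Small random weight matrix and zero bias for one dense layer."""
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

class ThreeHeadNoCNet:
    """Sketch of a three-headed policy-gradient network for NoC tasks:
    a shared trunk extracts topology features, then three heads produce
    a routing policy, a state value, and a verification score."""
    def __init__(self, feat_dim=8, hidden=16, n_actions=4):
        self.W0, self.b0 = linear(feat_dim, hidden)
        self.Wp, self.bp = linear(hidden, n_actions)  # policy head
        self.Wv, self.bv = linear(hidden, 1)          # value head
        self.Wc, self.bc = linear(hidden, 1)          # verification head

    def forward(self, x):
        h = np.maximum(x @ self.W0 + self.b0, 0.0)    # shared ReLU trunk
        logits = h @ self.Wp + self.bp
        policy = np.exp(logits - logits.max())        # softmax over actions
        policy /= policy.sum()
        value = (h @ self.Wv + self.bv).item()        # scalar baseline
        verify = 1.0 / (1.0 + np.exp(-(h @ self.Wc + self.bc).item()))
        return policy, value, verify                  # verify in (0, 1)
```

Sharing one trunk across all three heads is a standard multi-task choice: verification and routing both depend on the same topology features, so joint training lets each task regularize the other.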