Semiconductor EDA / IC Design Computing & Storage Cluster Solution

As chip design complexity continues to increase, EDA (Electronic Design Automation) simulation platforms have become a core infrastructure component for semiconductor enterprises.

This article provides a structured overview of:

  • The main functional modules of an EDA simulation platform
  • The computational characteristics and core algorithm features of each module
  • Hardware configuration recommendations based on simulation task types
  • A reference architecture for a high-performance EDA / IC design computing and storage cluster

The objective is to support efficient, scalable, and stable chip development workflows in advanced semiconductor environments.

I. Major Modules of an EDA Simulation Platform

An EDA platform spans the entire IC design flow, from system-level modeling to physical verification. Key modules include:

System-Level Modeling & Simulation

High-level abstraction and algorithm validation using tools such as SystemC and MATLAB/Simulink.
Compute profile: Moderate workload, primarily dependent on single-thread CPU performance.

RTL Design & Functional Simulation

RTL development using Verilog/VHDL, followed by functional verification with simulators such as VCS and Xcelium.
Compute profile: High-frequency CPUs, large memory capacity, multi-thread support.

Formal Verification

Equivalence checking and property verification using formal tools.
Compute profile: Memory-intensive, limited multi-core scalability depending on tool architecture.

Logic Synthesis

Conversion from RTL to gate-level netlist.
Compute profile: Strong dependence on single-core performance, moderate parallelization.

Static Timing Analysis (STA)

Timing closure validation for advanced process nodes.
Compute profile: High memory usage, moderate multi-thread scaling.

Power Analysis

Dynamic and static power estimation from switching activity.
Compute profile: Large waveform data processing, high storage I/O demand.

Place & Route (P&R)

Backend physical implementation.
Compute profile: Compute-intensive, benefits from multi-core parallelism and large memory capacity.

Electromagnetic / Signal Integrity Analysis

IR drop, crosstalk, and power network analysis.
Compute profile: High memory consumption; certain workloads can leverage GPU acceleration.

DRC / LVS Physical Verification

Design Rule Checking and Layout Versus Schematic validation.
Compute profile: Distributed parallel execution; high I/O throughput requirement.

Gate-Level Simulation (GLS)

Final-stage functional and timing verification.
Compute profile: High I/O bandwidth and memory demand; supports distributed simulation.

II. Computational Characteristics by Module

Different EDA workloads exhibit distinct compute patterns:

Module               | CPU Scaling                | Memory Demand | Storage I/O | GPU Acceleration
---------------------|----------------------------|---------------|-------------|-----------------
System Modeling      | Single-thread dominant     | Low           | Low         | Not required
RTL Simulation       | High-frequency multi-core  | Medium–High   | Medium      | Limited
Formal Verification  | Limited scaling            | Very High     | Low         | Not typical
Logic Synthesis      | Single-core dominant       | Medium        | Low         | Not typical
STA                  | Moderate parallelism       | High          | Low         | Not typical
Power Analysis       | Moderate                   | Medium        | High        | Limited
Place & Route        | Strong parallel scaling    | High          | Medium      | Optional
EM / IR Analysis     | Moderate                   | Very High     | Medium      | Supported
DRC / LVS            | Distributed scaling        | Medium        | High        | Not typical
GLS                  | Distributed scaling        | High          | High        | Not typical

This differentiation is critical when designing an optimized compute infrastructure.
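
As a toy illustration of how this differentiation can feed into infrastructure planning, the sketch below maps workload profiles to node pools. All pool names, profile fields, and selection thresholds are illustrative assumptions, not vendor or tool recommendations:

```python
# Toy mapping from EDA workload profile to a scheduler node pool.
# Module names, profile fields, and pool labels are illustrative assumptions.

WORKLOAD_PROFILES = {
    # module: (cpu_scaling, memory_demand, storage_io)
    "rtl_simulation":  ("multi-core",  "high",      "medium"),
    "formal":          ("limited",     "very-high", "low"),
    "synthesis":       ("single-core", "medium",    "low"),
    "place_and_route": ("parallel",    "high",      "medium"),
    "drc_lvs":         ("distributed", "medium",    "high"),
}

def select_pool(module: str) -> str:
    """Pick a node pool based on the module's dominant resource demand."""
    cpu, mem, io = WORKLOAD_PROFILES[module]
    if mem == "very-high":
        return "bigmem"       # fat-memory nodes for formal / EM-IR analysis
    if cpu == "distributed" or io == "high":
        return "throughput"   # many mid-size nodes on fast shared storage
    if cpu == "single-core":
        return "highfreq"     # high-clock nodes for serial-dominant tools
    return "general"

for m in WORKLOAD_PROFILES:
    print(m, "->", select_pool(m))
```

In practice this kind of mapping lives in the job scheduler's queue/pool configuration rather than in application code, but the decision logic is the same.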

III. Hardware Configuration Recommendations by Workload

Functional Simulation & Logic Synthesis

  • CPU: High-frequency Intel Xeon or AMD EPYC
  • Core count: 16–128 cores (for multi-job parallel runs)
  • Memory: 128GB–2TB (depending on design scale)
  • Storage: High-IOPS NVMe SSD
  • Network: 10GbE or InfiniBand
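
The wide 128GB–2TB memory range can be sanity-checked with a back-of-the-envelope calculation. The bytes-per-gate figure and fixed overhead below are rough illustrative assumptions; real footprints vary widely by simulator, design style, and debug options:

```python
# Back-of-the-envelope RAM sizing for concurrent RTL simulation jobs.
# bytes_per_gate and overhead_gb are rough illustrative assumptions.

def sim_memory_gb(gate_count: int, bytes_per_gate: int = 100,
                  overhead_gb: int = 8) -> float:
    """Estimate resident memory for one simulation job, in GB."""
    return gate_count * bytes_per_gate / 1e9 + overhead_gb

def jobs_per_node(node_ram_gb: int, job_gb: float) -> int:
    """How many concurrent jobs fit on one node (no headroom reserved)."""
    return int(node_ram_gb // job_gb)

job = sim_memory_gb(gate_count=50_000_000)  # a 50M-gate design
print(f"per-job: {job:.0f} GB, jobs on a 512GB node: {jobs_per_node(512, job)}")
```

A real capacity plan would also reserve headroom for the OS, tool licenses daemons, and file cache rather than packing nodes to 100%.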

Timing / Power / Gate-Level Simulation

  • CPU: High-frequency multi-core processors (4.0GHz+)
  • Memory: ≥128GB
  • Storage: NVMe + RAID configuration
  • Recommendation: Local high-speed cache disk (e.g., RAM disk) for waveform processing
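
One way to realize the RAM-disk recommendation is to stage waveform scratch files on a tmpfs-backed directory (`/dev/shm` on most Linux distributions). The sketch below falls back to the normal temp directory when tmpfs is unavailable; pointing a simulator's scratch directory at the result is an assumption about the tool's configuration, not a documented option of any specific simulator:

```python
# Stage simulator scratch/waveform files on a tmpfs-backed directory.
# /dev/shm is tmpfs on most Linux systems; the fallback path and the idea
# of redirecting the simulator's scratch dir here are illustrative.
import os
import shutil
import tempfile
from pathlib import Path

def make_scratch_dir(prefix: str = "eda_wave_") -> Path:
    """Create a scratch dir on tmpfs if available, else in the default tmp."""
    shm = Path("/dev/shm")
    base = shm if shm.is_dir() and os.access(shm, os.W_OK) else Path(tempfile.gettempdir())
    return Path(tempfile.mkdtemp(prefix=prefix, dir=base))

def stage(src: Path, scratch: Path) -> Path:
    """Copy a waveform file into the fast scratch area and return its path."""
    dst = scratch / src.name
    shutil.copy2(src, dst)
    return dst

scratch = make_scratch_dir()
# The simulator would then be pointed at `scratch` via its scratch-dir option.
print("scratch area:", scratch)
shutil.rmtree(scratch)  # clean up when the run finishes
```

Note that tmpfs consumes RAM, so its size must be budgeted against the ≥128GB memory recommendation above.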

Backend Place & Route and EM Simulation

  • CPU: Large core count (64+ cores recommended)
  • Memory: 512GB–2TB (large designs require higher capacity)
  • Optional GPU: NVIDIA professional GPU series for acceleration
  • Network: 25GbE or higher for cluster communication

DRC / LVS (Distributed Physical Verification)

  • Per node: 16 cores, 256GB memory
  • Storage: High-concurrency enterprise NVMe RAID or distributed storage (Lustre/NFS)
  • Scheduler: LSF, SLURM, or dedicated EDA workload manager
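
The distributed character of DRC can be illustrated with a toy tile decomposition: the layout is cut into tiles and each tile is checked independently, which is why verification farms scale out well across nodes. The "rule" below (minimum spacing between points) is a stand-in for a real DRC deck, and threads stand in for the separate hosts a real farm would use:

```python
# Toy tile-based parallel DRC: split the die into tiles, check each tile
# independently, then merge violations. The spacing rule is a stand-in,
# not a real DRC deck; threads stand in for separate farm nodes.
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

MIN_SPACING = 2.0  # illustrative minimum spacing, in arbitrary units

def check_tile(tile):
    """Flag shape pairs within one tile that violate minimum spacing."""
    tile_id, shapes = tile
    violations = []
    for (x1, y1), (x2, y2) in combinations(shapes, 2):
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < MIN_SPACING:
            violations.append((tile_id, (x1, y1), (x2, y2)))
    return violations

def run_drc(tiles, workers=4):
    """Check all tiles in parallel and merge the per-tile violation lists."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(check_tile, tiles)
    return [v for tile_violations in results for v in tile_violations]

tiles = [
    ("tile_0_0", [(0, 0), (1, 1), (8, 8)]),  # (0,0)-(1,1) too close
    ("tile_0_1", [(20, 0), (25, 5)]),        # spacing OK
]
print(run_drc(tiles))
```

Real tools also handle rule interactions across tile boundaries (halo regions), which is one reason high-throughput shared storage matters for this workload.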

IV. EDA Simulation Platform Server / Cluster Architecture

To meet high-performance EDA workloads, the following infrastructure tiers are recommended:

Development Workstation

  • 16–128 core CPU
  • 128GB RAM
  • 2TB NVMe SSD
  • Professional GPU for GUI-based visualization

Dedicated Simulation Server Nodes

  • 32+ cores
  • 512GB RAM
  • 4TB+ SSD
  • High-speed networking

Parallel Simulation Cluster

  • 4–32 compute nodes
  • Shared distributed storage (NFS or Lustre)
  • Centralized job scheduling (LSF or SLURM)
  • High-speed switching (25–100GbE)

V. Reference EDA / IC Design Computing & Storage Cluster

To address the computational demands of semiconductor design teams, a modular cluster architecture is proposed, consisting of:

1. Front-end design server (Qty: 1)

  • Dual high-core-count enterprise CPUs
  • 1TB DDR5 ECC memory
  • 2× NVMe SSDs
  • Dual 100GbE network connectivity
  • 2U rackmount configuration

2. Backend design server (Qty: 5)

  • High-frequency workstation-class CPUs
  • 128GB–512GB DDR5 ECC memory
  • 2× NVMe SSDs
  • Professional GPU (48GB class)
  • Dual 25GbE connectivity
  • 4U rack/tower convertible chassis

3. All-Flash Storage Server (Qty: 1)

  • Dual enterprise CPUs
  • 100TB-class NVMe RAID array
  • 100GbE connectivity
  • Designed for high-concurrency EDA I/O workloads

4. High-Capacity Storage Server (Qty: 1)

  • Hybrid NVMe cache + large-capacity HDD array (200TB+)
  • 100GbE connectivity
  • Suitable for long-term project data retention

5. Network & Infrastructure (Qty: 1)

  • 100GbE high-speed core switch
  • Gigabit management switch
  • 42U rack cabinet with PDU
  • Centralized KVM management

Conclusion: Computing Power as the Foundation of Chip Design Innovation

Driven by advanced process nodes, high-density packaging, and emerging 3D IC architectures, EDA platforms face rapidly increasing computational demands.

A properly architected computing and storage cluster — optimized per workload type — significantly improves simulation throughput, resource utilization, and overall project turnaround time.

By aligning infrastructure design with EDA workload characteristics, semiconductor organizations can build scalable, high-efficiency platforms that support continuous innovation in complex IC development environments.