Computing power to answer NASA's complex science and engineering questions
Pleiades, one of the world's most powerful supercomputers, represents NASA's state-of-the-art technology for meeting the agency's supercomputing requirements, enabling scientists and engineers to conduct modeling and simulation for NASA missions and projects. This distributed-memory SGI/HPE ICE cluster is connected with InfiniBand in a dual-plane hypercube topology.
The system contains the following types of Intel Xeon processors: E5-2680v4 (Broadwell), E5-2680v3 (Haswell), E5-2680v2 (Ivy Bridge), and E5-2670 (Sandy Bridge). Pleiades is named after the Pleiades open star cluster.
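Because Pleiades is a distributed-memory system, applications scale across its nodes by exchanging messages over the InfiniBand fabric, most commonly with MPI. The following is a minimal sketch, assuming a standard MPI implementation; the compile and launch commands in the comment are generic examples, not NAS-specific instructions.

```c
/* Minimal MPI program for a distributed-memory cluster.
 * Illustrative build/run (generic, not site-specific):
 *   mpicc hello.c -o hello && mpiexec -n 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    MPI_Get_processor_name(host, &len);    /* name of the node this rank runs on */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```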
| | Broadwell Nodes | Haswell Nodes | Ivy Bridge Nodes | Sandy Bridge Nodes |
|---|---|---|---|---|
| Number of Nodes | 2,016 | 2,052 | 5,256 | 1,800 |
| Processors per Node | 2 fourteen-core processors | 2 twelve-core processors | 2 ten-core processors | 2 eight-core processors |
| Node Types | Intel Xeon E5-2680v4 processors | Intel Xeon E5-2680v3 processors | Intel Xeon E5-2680v2 processors | Intel Xeon E5-2670 processors |
| Processor Speed | 2.4 GHz | 2.5 GHz | 2.8 GHz | 2.6 GHz |
| Cache | 35 MB for 14 cores | 30 MB for 12 cores | 25 MB for 10 cores | 20 MB for 8 cores |
| Memory Type | DDR4 FB-DIMMs | DDR4 FB-DIMMs | DDR3 FB-DIMMs | DDR3 FB-DIMMs |
| Memory Size | 4.6 GB per core, 128 GB per node | 5.3 GB per core, 128 GB per node | 3.2 GB per core, 64 GB per node (plus 3 bigmem nodes with 128 GB per node) | 2 GB per core, 32 GB per node |
| Host Channel Adapter | InfiniBand FDR host channel adapter and switches | InfiniBand FDR host channel adapter and switches | InfiniBand FDR host channel adapter and switches | InfiniBand FDR host channel adapter and switches |
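The per-core memory figures above are simply node memory divided by total cores (for example, 128 GB across 28 Broadwell cores is roughly 4.6 GB per core), which is the budget available to each process when running one MPI rank per core. A small sketch that reproduces the arithmetic, with node data copied from the table:

```c
/* Per-core memory budget for each Pleiades node type.
 * Core counts and node memory sizes are taken from the table above. */
#include <stdio.h>

struct node_type { const char *name; int cores; double gb_per_node; };

int main(void) {
    const struct node_type types[] = {
        {"Broadwell",    2 * 14, 128.0},
        {"Haswell",      2 * 12, 128.0},
        {"Ivy Bridge",   2 * 10,  64.0},
        {"Sandy Bridge", 2 *  8,  32.0},
    };
    for (int i = 0; i < 4; i++)
        printf("%-12s %2d cores, %.1f GB per core\n",
               types[i].name, types[i].cores,
               types[i].gb_per_node / types[i].cores);
    return 0;
}
```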
| | Sandy Bridge + GPU Nodes | Skylake + GPU Nodes | Cascade Lake + GPU Nodes |
|---|---|---|---|
| Number of Nodes | 64 | 19 | 38 |
| Processors per Node | Two 8-core host processors and one GPU coprocessor (2,880 CUDA cores) | Two 18-core host processors; four GPU coprocessors (for 17 nodes); eight GPU coprocessors (for 2 nodes) | Two 24-core host processors and four GPU coprocessors (5,120 CUDA cores) |
| Node Types | Intel Xeon E5-2670 (host); NVIDIA Tesla K40 (GPU) | Intel Xeon Gold 6154 (host); NVIDIA Tesla V100-SXM2-32GB (GPU) | Intel Xeon Platinum 8268 (host); NVIDIA Tesla V100-SXM2-32GB (GPU) |
| Processor Speed | 2.6 GHz (host); 745 MHz (GPU) | 3.0 GHz (host); 1,290 MHz (GPU) | 2.9 GHz (host); 1,290 MHz (GPU) |
| Cache | 20 MB for 8 cores (host) | 24.75 MB shared non-inclusive by 18 cores | 35.75 MB shared non-inclusive by 24 cores |
| Memory Type | DDR3 FB-DIMMS (host); GDDR5 (GPU) | DDR4 FB-DIMMS (host); HBM2 (GPU) | DDR4 FB-DIMMS (host); HBM2 (GPU) |
| Memory Size | 64 GB per node (host); 12 GB per GPU card | 384 GB per node (host); 32 GB per GPU card | 384 GB per node (host); 32 GB per GPU card |
| Host Channel Adapter | InfiniBand FDR host channel adapter and switches (host) | InfiniBand EDR host channel adapter and switches (host) | InfiniBand EDR host channel adapter and switches (host) |
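The per-card figures above (for example, 32 GB of HBM2 per V100) can be confirmed at run time through the CUDA runtime API. A minimal sketch in C, assuming the CUDA toolkit is available on the node; the build line in the comment is illustrative:

```c
/* List the GPUs visible on a node via the CUDA runtime API.
 * Illustrative build (paths are site-dependent): gcc query.c -lcudart -o query */
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA-capable device visible\n");
        return 1;
    }
    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        /* totalGlobalMem is in bytes; clockRate is in kHz */
        printf("GPU %d: %s, %.0f GB, %d MHz, %d SMs\n",
               i, p.name, p.totalGlobalMem / 1073741824.0,
               p.clockRate / 1000, p.multiProcessorCount);
    }
    return 0;
}
```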
| | 8 Front-End Nodes | PBS server pbspl1 | PBS server pbspl3 |
|---|---|---|---|
| Processors per Node | 2 eight-core processors | 2 six-core processors | 2 quad-core processors |
| Processor Types | Xeon E5-2670 (Sandy Bridge) processors | Xeon X5670 (Westmere) processors | Xeon X5355 (Clovertown) processors |
| Processor Speed | 2.6 GHz | 2.93 GHz | 2.66 GHz |
| Memory | 64 GB per node | 72 GB per node | 16 GB per node |
| Connection | 10 Gigabit and 1 Gigabit Ethernet | N/A | N/A |
Can't find what you're looking for? NAS User Support is available 24x7x365:
(800) 331-8737
(650) 604-4444
support@nas.nasa.gov