
Preparing to Run on Pleiades' Ivy Bridge Nodes

Article ID: 446
Posted: 25 Jul, 2013 by Chang S.
Last updated: 03 Apr, 2014 by Moyer M.

To help you prepare for running jobs on Pleiades' Ivy Bridge compute nodes, this short review includes the general node configuration, tips on compiling your code, and PBS script examples.

Overview of Ivy Bridge Nodes

Pleiades has 75 Ivy Bridge racks, each containing 72 nodes. Each node contains two 10-core E5-2680v2 (2.8 GHz) processor chips and 64 GB of memory, providing 3.2 GB of memory per core--the highest among all Pleiades processor types (except for a few bigmem nodes).

The Ivy Bridge nodes are connected to the Pleiades InfiniBand (ib0 and ib1) networks through four-lane Fourteen Data Rate (4X FDR) devices and switches for inter-node communication.

The Lustre filesystems, /nobackuppX, are accessible from the Ivy Bridge nodes.

Compiling Your Code For Ivy Bridge Nodes

Like the Sandy Bridge processor, the Ivy Bridge processor uses Advanced Vector Extensions (AVX), which is a set of instructions for doing Single Instruction Multiple Data (SIMD) operations on Intel architecture processors.

To take advantage of AVX, we recommend that you compile your code on Pleiades with an Intel compiler (version 12 or newer; for example, comp-intel/2012.0.032), using one of the following sets of compiler flags:

  • To run only on Sandy Bridge or Ivy Bridge processors:   -O2 (or -O3) -xAVX
  • To run on all Pleiades processor types:   -O2 (or -O3) -axAVX -xSSE4.1

You can also add the compiler options -ip or -ipo, which allow the compiler to look for ways to better optimize and/or vectorize your code.

To get a report on how well your code is vectorized, add the compiler flag -vec-report2.
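
For example, a minimal build might look like the following sketch (my_code.f90 is a placeholder source file name; use icc in the same way for C code):

module load comp-intel/2012.0.032

# Runs only on Sandy Bridge or Ivy Bridge processors
ifort -O2 -xAVX -ip -vec-report2 -o my_code my_code.f90

# Runs on all Pleiades processor types
ifort -O2 -axAVX -xSSE4.1 -ip -vec-report2 -o my_code my_code.f90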

If you have an MPI code that uses the SGI MPT library, use the MPT module mpi-sgi/mpt.2.06rp16 or a newer SGI MPT module. This is because FDR is supported in MPT 2.06, but not in earlier versions (mpt.1.25, mpt.1.26, mpt.2.01, and all mpt.2.04 modules).
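
As a sketch, building an MPI code against SGI MPT typically looks like the following (my_mpi_code.f90 is a placeholder name; MPT codes are typically linked with -lmpi):

module load comp-intel/2012.0.032 mpi-sgi/mpt.2.06rp16
ifort -O2 -xAVX -ip -o my_mpi_code my_mpi_code.f90 -lmpi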

TIP: Ensure your jobs run correctly on Ivy Bridge before starting production work.

Running PBS Jobs on Ivy Bridge Nodes

To request Ivy Bridge nodes, use :model=ivy in your PBS script:

#PBS -l select=xx:ncpus=yy:model=ivy
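
For example, to request two Ivy Bridge nodes with all 20 cores on each (illustrative values), you could use the script directive or the equivalent qsub command-line form:

#PBS -l select=2:ncpus=20:model=ivy

qsub -l select=2:ncpus=20:model=ivy job_script

where job_script is a placeholder for the name of your PBS script.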

A PBS job that runs a fixed number of processes or threads needs fewer Ivy Bridge nodes than it would nodes of the other Pleiades processor types, for two reasons:

  1. There are 20 cores per Ivy Bridge node, compared with 16 cores per Sandy Bridge node and 12 cores per Westmere node.

    For example, if you have previously run a 240-process job with 15 Sandy Bridge nodes or 20 Westmere nodes, you should request 12 Ivy Bridge nodes instead.

    For Ivy Bridge
    #PBS -lselect=12:ncpus=20:mpiprocs=20:model=ivy
    
    For Sandy Bridge
    #PBS -lselect=15:ncpus=16:mpiprocs=16:model=san
    
    For Westmere
    #PBS -lselect=20:ncpus=12:mpiprocs=12:model=wes
    
    
  2. The Ivy Bridge processor provides more memory, per node and per core, than the other processor types.

    For example, to run a job that needs 2.5 GB of memory per process, you can fit the following number of processes on each node type (a short node-count sketch follows this list):

    • ~9 processes on a 12-core Westmere node with ~22 GB/node
    • 12 processes on a 16-core Sandy Bridge node with ~30 GB/node
    • 20 processes on a 20-core Ivy Bridge node with ~60 GB/node

    Note: For all processor types, a small amount of memory per node is reserved for system usage. Thus, the amount of memory available to a PBS job is slightly less than the total physical memory.
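
The node-count arithmetic above can be sketched as a short bash calculation. This is illustrative only, using the 240-process, 2.5 GB-per-process example and the per-node figures quoted above:

# Illustrative values: 240 processes at 2.5 GB each on Ivy Bridge
# (20 cores and ~60 GB usable per node, per the figures above)
NPROCS=240
MEM_PER_PROC_GB=2.5
CORES_PER_NODE=20
MEM_PER_NODE_GB=60

# Processes per node are limited by both memory and core count
BY_MEM=$(echo "$MEM_PER_NODE_GB / $MEM_PER_PROC_GB" | bc)          # 24 fit by memory
PER_NODE=$(( BY_MEM < CORES_PER_NODE ? BY_MEM : CORES_PER_NODE ))  # capped at 20 cores

# Round up to whole nodes: 240 / 20 = 12
NODES=$(( (NPROCS + PER_NODE - 1) / PER_NODE ))
echo "Request $NODES Ivy Bridge nodes"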

Sample PBS Script For Ivy Bridge

# Request 12 Ivy Bridge nodes, 20 MPI processes per node (240 processes total)
#PBS -lselect=12:ncpus=20:mpiprocs=20:model=ivy
#PBS -q normal
# Load the Intel compiler and SGI MPT modules used to build the code
module load comp-intel/2012.0.032 mpi-sgi/mpt.2.06rp16
# Run from the submission directory and launch 240 MPI processes
cd $PBS_O_WORKDIR
mpiexec -np 240 ./a.out

Note the following known issue: Using mpt.2.06rp16, a job with a total process count of 1021, 1022, 1023 or 1024 will hang when running with 20 processes per node. The workaround is to use mpt.2.06a67 or mpt.2.08r7.
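
For the workaround, a minimal change to the sample script above would be to load one of those modules instead, assuming it follows the same mpi-sgi module naming convention:

module load comp-intel/2012.0.032 mpi-sgi/mpt.2.08r7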

For more information about Ivy Bridge nodes, see the related article Ivy Bridge Processors.


