Computing Resources

 

Kahuna

 Cluster Composition - Machines with Xeon Processors

  • 1 head node with 40 Intel Xeon E5-2660 v2 cores at 2.20 GHz and 128 GB of memory.
  • 1 login node with 40 HT Intel Xeon E5-2660 v2 cores at 2.20 GHz, 128 GB of memory, and 1 Nvidia Quadro 4000 card.
  • 32 GPU nodes, each with 40 HT Intel Xeon E5-2670 v2 cores at 2.50 GHz, 64 GB of memory, and 2 Nvidia Tesla K20M cards.
  • 1 UV20 node with 64 HT Intel Xeon E5-4650L cores at 2.60 GHz, 1 TB of memory, and 1 Intel Xeon Phi card with 57 cores.
  • 1 storage node with 32 HT Intel Xeon E5-2660 cores at 2.20 GHz, 128 GB of memory, and 56 disks of 4 TB each in a RAID 6 array.
  • Interconnection via a Gigabit Ethernet switch and a 40 Gb/s (4X QDR) InfiniBand switch.
  • The cluster is connected to two 20 kVA UPSs in parallel and to the IQ diesel generator.

 Total Cores - Xeon Processors only

  • 1360 cores

 Total Cluster Memory

  • 3.3 TB

 Theoretical Performance (in flops) - CPU peak calculation

  • 2.50 GHz x 8 flops/cycle = 20 Gflops/core
  • 20 physical cores x 20 Gflops/core = 400 Gflops/node (the 40 HT cores correspond to 20 physical cores)
  • 32 nodes x 400 Gflops/node = 12.8 Tflops/rack
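
The same arithmetic can be checked quickly at the shell prompt with bc, as in the minimal sketch below (all values are taken from the list above):

  echo "2.50 * 8" | bc -l          # 20   -> Gflops per core
  echo "20 * 20" | bc -l           # 400  -> Gflops per node
  echo "32 * 400 / 1000" | bc -l   # 12.8 -> Tflops for the 32 GPU nodes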

 

 

Operating System available on the cluster

  • SUSE Linux Enterprise Server 11 SP2
  • Kernel: 3.0.101-0.7.17-default x86_64

Software available on the cluster

  • PBSPro 12.2.4
  • CUDA 6.5
  • Python 2.7 and Python 3.4
  • Anaconda 2.7 and Anaconda 3.4
  • NAMD 2.10 (local multicore, local multicore with CUDA, InfiniBand, and InfiniBand with CUDA builds)
  • VMD 1.9.1
  • CMake 3.0.2
  • GCC 4.3.4, GCC 4.8.2, and GCC 4.9.1
  • OpenMPI 1.6.5 and OpenMPI 1.8.3
  • Gromacs 4.6.7, Gromacs 5.0.2, and Gromacs 5.0.2 (MPI + CUDA)
  • FFTW 3.3
  • Orca 3.0.2
  • PGI 14.9-0 compilers, Intel 2013 and Intel 2015 compilers, and Intel MPI (IMPI)
  • XMGRACE 5.1.24
  • Boost 1.58
  • OpenMX 3.7
  • CGAL 4.6
  • R (base)
  • GAMESS
  • LAMMPS
  • Gaussian

Math Libs

  • LAPACK
  • BLAS
  • MKL

Management and Monitoring Software

  • Mgrclient 1.7.3
  • Ganglia 3.6.0

 

To use any software from the list, you must first load it through the environment modules system, as in the examples below:

module avail (displays the available modules)

module load software/module (loads the chosen module)

module list (displays the currently loaded modules)

module unload software/module (unloads the chosen module)
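
A typical session might look like the sketch below (the module name gromacs/5.0.2 is a hypothetical example; check the exact name with module avail):

  module avail                  # list everything that can be loaded
  module load gromacs/5.0.2     # hypothetical module name; confirm it with module avail
  module list                   # verify that the module is now loaded
  module unload gromacs/5.0.2   # remove it again when done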

 

Jobs and Queues

Jobs are submitted to the processing nodes through queues managed by the PBSPro software; an example submission script is given after the queue descriptions below.


Currently, the following queues are defined on the Kahuna server:

 

Queue    Memory   CPU Time   Walltime   Node   Run   Que   Lm   State
small    --       --         --         1      0     0     --   E R
medium   --       --         --         4      0     0     --   E R
large    --       --         --         8      0     0     --   E R
bigmem   --       --         --         1      0     0     --   E R
serial   --       --         --         1      0     0     --   E R
 

Small: queue running on just 1 node. It has a limit of 20 concurrently running jobs per user.

Medium: queue running on 2 to 4 nodes. It has a limit of 3 concurrently running jobs per user.

Large: queue running on 5 to 8 nodes. It has a limit of 1 concurrently running job per user.

Bigmem: queue running on the service3 (UV20) node (with 1 TB of memory and an Intel Xeon Phi card).

Serial: queue running on just 1 node and 1 processor. The node is dedicated (reserved) to this queue, which has a limit of 40 concurrently running jobs per user.
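
A job is submitted with qsub together with a small submission script. The sketch below is illustrative, not a site-verified template: the job name, queue choice, resource request, and module name are assumptions to be adapted to your application.

  #!/bin/bash
  #PBS -N myjob                   # job name (hypothetical)
  #PBS -q small                   # one of the queues listed above
  #PBS -l select=1:ncpus=40       # 1 node with 40 (HT) cores, in standard PBSPro syntax
  #PBS -l walltime=24:00:00       # requested walltime

  cd $PBS_O_WORKDIR               # run from the directory where qsub was invoked
  module load gromacs/5.0.2       # hypothetical module name; see the modules section above
  ./my_application                # replace with your actual command line

Save it as, for example, myjob.pbs, then submit and monitor it:

  qsub myjob.pbs                  # prints the job ID
  qstat -u $USER                  # shows the status of your jobs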

 

For questions or suggestions, please contact:

adriano.ferruzzi@iqm.unicamp.br

 

 

 
