Computing Resources

 Cluster Racks

 Cluster Composition - Machines with Xeon Processors

  • 1 Head node with 24 cores Intel Xeon E5-2643 v3 at 3.40GHz, 128G memory and one Nvidia Quadro 5000 GPU.
  • 1 Login node with 40 HT cores Intel Xeon E5-2660 v2 at 2.20GHz, 128G memory and one Nvidia Quadro 5000 GPU.
  • 1 Viz-Server node with 40 HT cores Intel Xeon E5-2660 v2 at 2.20GHz, 128G memory and one Nvidia Quadro 5000 GPU.
  • 32 Graphic nodes with 40 HT cores Intel Xeon E5-2670 v2 at 2.50GHz, 64G memory and 2 Nvidia Tesla K20M GPUs.
  • 1 UV20 node with 64 HT cores Intel Xeon E5-4650L at 2.60GHz, 1TB memory and one 57-core Intel Xeon Phi co-processor.
  • 28 Graphic nodes with 48 HT cores Intel Xeon E5-2670 v3 at 2.30GHz, 64G memory and 2 Nvidia Tesla K40M GPUs.
  • 20 Compute nodes with 48 HT cores Intel Xeon E5-2670 v3 at 2.30GHz, 64G memory.
  • 1 Storage node with 32 HT cores Intel Xeon E5-2660 at 2.20GHz, 128G memory and 56 4TB HDDs in RAID6.

 Total Cores - Xeon Processors only

  • 1360 cores

 Total Cluster Memory

  • 3.3 TB

 Theoretical Performance (in flops) - CPU-only calculation

  • 2.50 GHz x 8 flops/cycle = 20 Gflops/core
  • 20 cores x 20 Gflops/core = 400 Gflops/node
  • 32 nodes x 400 Gflops/node = 12.8 Tflops/rack (see the command-line check below)
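The same arithmetic can be reproduced on the command line; the sketch below assumes, as the figures above suggest, that the rack in question is the set of 32 E5-2670 v2 graphic nodes with 20 physical cores per node:

  echo "2.50 * 8" | bc            # Gflops per core  -> 20.00
  echo "2.50 * 8 * 20" | bc       # Gflops per node  -> 400.00
  echo "2.50 * 8 * 20 * 32" | bc  # Gflops per rack  -> 12800.00, i.e. 12.8 Tflops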
 

Operating system available on the cluster:

SUSE Linux Enterprise Server 12 SP 1
Kernel: 3.12.59-60.45-default

The main software packages available on the cluster are:

* All software is installed so that every node of the cluster has access to it, and in a way that
allows multiple versions to coexist without library conflicts.
PBSPro 13.1.0.16
Ambertools 15
Cuda 6.5, Cuda 7.5, Cuda 8.0
Python 2.7 and Python 3.4
Anaconda Python 2.7, Anaconda Python 3.4 and Anaconda Python 3.6
NAMD 2.10, NAMD 2.11 and NAMD 2.12 *All with local, multi-node and GPU options
VMD 1.9.2
Cmake 3.0.2
GCC 4.3, GCC 4.8, GCC 4.9 and GCC 6.2
OpenMPI 1.4.4, OpenMPI 1.6.5, OpenMPI-1.8.3 and OpenMPI-3.0.0
Gromacs-4.6.7, Gromacs 5.0.2, Gromacs-mpi-cuda-5.0.2 and Gromacs-mpi-cuda-5.1.4
FFTW 3.3
Orca 3.0.2 and Orca 4
PGI Compilers 14.9
Intel Compilers 2015, Intel Compilers 2013 and Impi
XMGRACE 5.1.24
Boost 1.58
Openmx 3.7
CGAL 4.6
R-Base
Gamess
Lammps
Gaussian g09
Mpiblast 1.6.0
NWChem 6.6
JRE 1.8
Blast HTC
CCP4 7.0
Eigen 3
HPCToolkit, version 2016.12
OpenFOAM
Ovito 2.4.2
Papi 5.5.1
Rism3d

Math Libs
LAPACK, BLAS and MKL.

Administration and Monitoring Software
Software Management Cluster 3.3
Ganglia 3.6.0
Nagios

To use any software from the list, it must first be loaded through the module system, as in the
examples below and in the sample session that follows:
module avail (lists the available modules)
module load software/module (loads the chosen module)
module list (lists the loaded modules)
module unload software/module (unloads the chosen module)
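For illustration, a typical session could look like the following; the module name namd/2.12 is only an assumed example, so run module avail to see the exact names installed on the cluster:

  module avail                 # list the available modules
  module load namd/2.12        # load an assumed module (illustrative name)
  module list                  # confirm which modules are currently loaded
  module unload namd/2.12      # unload it when no longer needed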

 

 

Jobs and Queues

Job submission to the processing nodes is handled through queues managed by the PBSPro software.


Currently the following queues are defined (Kahuna server):

 

Queue     Memory   CPU Time   Walltime(h)   CPUs   Run   Que   Lm   State
------    ------   --------   -----------   ----   ---   ---   --   -----
route     --       --         72            400    0     0     --   E R
bigmem    --       --         72            64     0     0     --   E R
longa     --       --         480           400    0     0     --   E R


route: this queue allows a maximum of 400 CPUs per user; its walltime limit is 72h.

bigmem: this queue runs on the service3 (UV20) node (with 1TB memory and an Intel Xeon Phi).

longa: this queue allows a maximum of 400 CPUs per user; its walltime limit is 480h.
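For reference, a minimal PBS job script for these queues could look like the sketch below; the queue name, resource request and module name are illustrative assumptions and must be adjusted to the actual job and to the limits of the chosen queue:

  #!/bin/bash
  #PBS -N my_job                  # job name (illustrative)
  #PBS -q route                   # one of the queues listed above
  #PBS -l select=1:ncpus=24       # assumed request: 1 node, 24 CPUs
  #PBS -l walltime=01:00:00       # must stay within the queue walltime limit

  cd $PBS_O_WORKDIR               # run from the directory where qsub was issued
  module load namd/2.12           # assumed module name; check module avail
  # application command goes here

The script would then be submitted with qsub script.pbs and monitored with qstat -u $USER.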

For any questions or suggestions, please contact:

adriano.ferruzzi@iqm.unicamp.br

leandro.zanotto@reitoria.unicamp.br

 

 

 
