Facilities

ACME

ACME is a dedicated local cluster for R&D purposes composed of

  • 28 PowerEdge C6420 nodes, each with:
    • 2 Intel Xeon Gold 6138 or 6230 processors (20C/40T) @2.0 or 2.1 GHz
    • 192 GB RDIMM, 2667MT/s
    • IB FDR port
    • 2 TB SATA HDD @7.2 krpm and 240 GB SATA SSD
  • 2 bullx R424-E4 chassis with 4 computing nodes each:
    • 2 Intel Xeon 8C processors E5-2640 V3 @2.6 GHz
    • DDR4 memory of 64 GB @2133 MHz
    • IB FDR port
    • 1 TB SAS2 disk @7.2 krpm and one 240 GB SSD
  • 2 bullx R421-E4 chassis
    • 2 Intel Xeon 8C processors E5-2640 V3 @2.6 GHz
    • DDR4 memory of 64 GB @2133 MHz
    • 1 TB SATA III disk @7.2 krpm and one 240 GB SSD
    • 2 NVIDIA Tesla P100
  • 2 bullx X450-E5 chassis
    • 4 Intel Xeon Gold 6230 processors (20C/40T) @2.1 GHz
    • 1 NVIDIA Tesla V100 (5120 cores, 640 tensor cores, 16 GB)
    • 1 NVIDIA Tesla T4 (2560 cores, 320 Turing tensor cores, 16 GB)
    • 192GB DDR4
    • 240GB SATA3 SSD
  • 1 bullx X451-E5 chassis
    • 2 AMD EPYC 7402 (Rome) processors (24C/48T) @2.8 GHz
    • 1 NVIDIA Tesla A100 (6912 cores, 432 tensor cores, 40 GB)
    • 256GB DDR4
    • 240GB SATA3 SSD
  • 1 storage server composed of 37 disks (131 TB raw storage)
  • Rpeak = 138.35 TFLOPS, Rtensor = 884 TOPS

XULA

Xula is the CIEMAT in-production cluster composed of:

  • 88 Intel Gold 6148 processors (1,760 cores)
  • 112 Intel Gold 6254 processors (2,016 cores)
  • 180 Intel Gold 6342 processors (4,320 cores)
  • ~36.5 TB of DDR4 RAM
  • Interconnected by InfiniBand HDR200
  • Fully devoted to the execution of jobs
  • 440 TB raw storage (~352 TB usable in RAID6)
  • Rpeak ~722.2 TFLOPS
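
As a sanity check, the quoted Rpeak can be reproduced from the processor counts above. The sketch below assumes 32 double-precision FLOPs per cycle per core (two AVX-512 FMA units, as in these Xeon Gold SKUs) and the nominal base clocks of each model (2.4, 3.1 and 2.8 GHz), which come from the processors' public specifications rather than from this page:

```python
# Back-of-the-envelope check of Xula's quoted Rpeak (~722.2 TFLOPS).
FLOPS_PER_CYCLE = 32  # 2 AVX-512 FMA units x 8 doubles x 2 ops (assumption)

partitions = [
    # (processors, cores per processor, base clock in GHz)
    (88,  20, 2.4),  # Intel Xeon Gold 6148
    (112, 18, 3.1),  # Intel Xeon Gold 6254
    (180, 24, 2.8),  # Intel Xeon Gold 6342
]

rpeak_tflops = sum(
    n * cores * ghz * FLOPS_PER_CYCLE / 1000  # cores*GHz -> GFLOPS -> TFLOPS
    for n, cores, ghz in partitions
)
print(f"Rpeak ~ {rpeak_tflops:.1f} TFLOPS")  # ~722.2, matching the quoted figure
```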

Scientific computing service pricing

If you wish to use the Xula cluster, please visit this link to apply. Pricing information can be found here.

TURGALIUM

The cluster located at CETA-CIEMAT (Trujillo) is composed of:

  • 40 Intel Xeon Gold 6254 processors (1,440 cores)
  • 192 GB DDR4, 240 GB SATA3 SSD
  • Several heterogeneous nodes consisting of:
    • 288 cores, 16x Intel Xeon Gold 6240 @2.6 GHz, 24x NVIDIA V100
    • 256 cores, 64x Intel Xeon E5520 @2.3 GHz, 64x NVIDIA C1060
    • 192 cores, 16x Intel Xeon E5-2680 v3 @2.5 GHz, 8x NVIDIA K80
    • 192 cores, 32x Intel Xeon E5649 @2.5 GHz, 32x NVIDIA M2075
  • EDR IB
  • 1.3 PB of storage
  • Rpeak = 374 TFLOPS

EULER

The former Euler in-production cluster at CIEMAT was composed of:

  • 144 blade nodes Dual Xeon quad-core 3.0 GHz (2 GB per core)
  • 96 blade nodes Dual Xeon quad-core 2.96 GHz (2 GB per core)
  • 3.8 TB of RAM across the 1,920 cores
  • Interconnected by InfiniBand
  • Fully devoted to the execution of jobs
  • Lustre File System (120 TB)
  • Rpeak = 23 TFLOPS; Rmax = 19.55 TFLOPS
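
The quoted peak is consistent with a simple estimate from the node counts and clocks above. The sketch below assumes 4 double-precision FLOPs per cycle per core, typical of SSE-era quad-core Xeons (an assumption, since the exact models are not listed):

```python
# Peak-performance estimate for Euler from the blade counts listed above.
FLOPS_PER_CYCLE = 4  # assumed for SSE-era quad-core Xeons

blades = [
    # (blade nodes, cores per node: dual quad-core, clock in GHz)
    (144, 8, 3.0),
    (96,  8, 2.96),
]

rpeak_tflops = sum(
    nodes * cores * ghz * FLOPS_PER_CYCLE / 1000
    for nodes, cores, ghz in blades
)
print(f"Rpeak ~ {rpeak_tflops:.1f} TFLOPS")  # ~22.9, close to the quoted 23
```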

Free access database of execution logs

This link provides the execution logs of CIEMAT's former supercomputer, Euler, in Parallel Workloads Archive format. It contains traces spanning nine years (2008-2018).
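
Traces in this archive follow the Standard Workload Format (SWF): comment lines start with ';' and each job record is one line of 18 whitespace-separated fields. A minimal parsing sketch, using a synthetic sample record rather than an actual line from the Euler trace:

```python
# Standard Workload Format (SWF): 18 fields per job record, in this order.
SWF_FIELDS = [
    "job_id", "submit_time", "wait_time", "run_time", "allocated_procs",
    "avg_cpu_time", "used_memory", "requested_procs", "requested_time",
    "requested_memory", "status", "user_id", "group_id", "executable",
    "queue", "partition", "preceding_job", "think_time",
]

def parse_swf(lines):
    """Yield one dict per job record, skipping ';' header comment lines."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";"):
            continue
        values = [int(float(v)) for v in line.split()]
        yield dict(zip(SWF_FIELDS, values))

# Synthetic example: job 1 waited 120 s and ran 3600 s on 16 processors.
sample = "1 0 120 3600 16 -1 -1 16 7200 -1 1 3 2 -1 1 -1 -1 -1"
job = next(parse_swf([sample]))
print(job["run_time"], job["allocated_procs"])  # 3600 16
```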

Please cite the following article whenever you download and use it:

Heterogeneous Grid and Cloud clusters of ~400 cores

Sci-Track operates two heterogeneous clusters, located in Madrid and Trujillo, with around 400 CPU cores each (about 800 in total), dedicated to High Throughput Computing, mainly as cloud infrastructure.