The Hopper computing cluster
Hopper is a DELL computing cluster acquired by the CPHT, CMLS, CMAP, and LPP laboratories. All nodes of the cluster are interconnected by a high-speed QDR InfiniBand network.
Important
In 2022, all Hopper nodes were integrated into the Cholesky cluster on a dedicated QDR InfiniBand network.
User resources
- 2 login front-end nodes of the Cholesky cluster (DNS Round Robin for load sharing and fault tolerance)
- A BeeGFS parallel file system providing access to HOME directories and software modules over the Cholesky Ethernet network
- An NFS network file system for WORK directories: 88 TB of usable space
- An Intel Q-Logic InfiniBand QDR (Quad Data Rate) interconnect at 40 Gb/s
CPU resources
Nb of nodes | Nb of CPUs per node | DELL server model | CPU reference | CPU generation | Max. memory | Max. reservable memory |
---|---|---|---|---|---|---|
32 | 2 | PowerEdge C6220 II | Intel(R) Xeon(R) CPU E5-2650v2, 8 cores @ 2.60 GHz | Ivy Bridge EP | 64 GB | 62 GB |
8 | 2 | PowerEdge C6320 | Intel(R) Xeon(R) CPU E5-2640v4, 10 cores @ 2.40 GHz | Broadwell | 64 GB | 62 GB |
SLURM partitions
These are the partitions for CPU computing on Hopper nodes.
Partition Name | Nodes | Cores per node | Max. memory per node | Max. Walltime |
---|---|---|---|---|
hopper | hopper[001-032] | 16 | 64 GB | 24:00:00 |
hopper_cpu20 | hopper[033-040] | 20 | 64 GB | 24:00:00 |
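Jobs are submitted to these partitions with SLURM's `sbatch`. The sketch below requests one full node on the `hopper` partition; the job name, module loads, and executable name are placeholders, and site-specific options (such as an accounting project) may also be required.

```shell
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --partition=hopper        # 16-core nodes (use hopper_cpu20 for the 20-core nodes)
#SBATCH --nodes=1                 # one full node
#SBATCH --ntasks-per-node=16     # match the partition's cores per node
#SBATCH --mem=62G                 # maximum reservable memory per node
#SBATCH --time=24:00:00           # partition walltime limit

# Load your software environment (module names are site-specific)
# module load ...

srun ./my_program                 # my_program is a placeholder for your executable
```

For the `hopper_cpu20` partition, change `--partition` and set `--ntasks-per-node=20` to use all cores of a Broadwell node.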