iCIS Intra Wiki

Servers


Servers C&CZ

  • linux login servers
  • C&CZ administers the Linux cluster nodes for the departments of the beta faculty

Servers ICIS within C&CZ linux cluster

C&CZ maintains the cluster.

About Linux clusters

Info about linux cluster nodes

Policy

  • Use the mail alias users-of-icis-servers@science.ru.nl to announce that you need a machine for a certain time slot (e.g. an article deadline, or proper benchmarks without interference from other processes).
  • Please create a directory named after your username in the scratch directories, so the disk is not polluted with loose files and ownership is clear.
  • Please be considerate to other users: keep the local directories cleaned up, and kill processes you no longer need.
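The scratch-directory rule above can be sketched in a few shell commands. The /scratch path comes from the storage description further down this page; in this sketch a temporary directory stands in for it so the commands run anywhere.

```shell
# Keep all your scratch files under one directory named after your account.
# On the server you would set SCRATCH_ROOT=/scratch (or /scratch-striped);
# a temporary directory stands in here so the sketch runs anywhere.
SCRATCH_ROOT="${SCRATCH_ROOT:-$(mktemp -d)}"
ME="${USER:-$(id -un)}"            # your username
mkdir -p "$SCRATCH_ROOT/$ME"       # one directory per user: ownership is clear
cd "$SCRATCH_ROOT/$ME"
touch results.dat                  # work files live inside your own directory
ls "$SCRATCH_ROOT"                 # the scratch root stays tidy: one entry per user
```

When you are done with a machine, remove your directory again (`rm -rf "$SCRATCH_ROOT/$ME"`) so the disk stays clean for other users.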

Access

For servers within the Linux cluster managed by C&CZ, you must first be granted access to one of the domain groups cluster-is (for DaS), cluster-ds (for DiS), or cluster-mbsd (for SwS), depending on your section, before you can log in to these machines. Each domain group has access to the full cluster. To get access to a domain group, contact the Support_Staff, who can arrange this at C&CZ. These domain groups are only viewable and editable by C&CZ.

When a person is added to one of the domain groups, he/she is also added to the mailing list users-of-icis-servers@science.ru.nl, in accordance with the policy described in the previous section. Once added to this mailing list, you can view its contents on dhz.science.ru.nl.

How to run a job on a linux cluster

You can run jobs on the cluster with the Slurm workload manager.

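A minimal Slurm batch job looks like the sketch below. The resource values are assumptions, not cluster policy; check `sinfo` and the C&CZ cluster documentation for the partitions and limits that actually apply.

```shell
# Write a minimal Slurm job script. The resource values are assumptions;
# adjust them to the cluster's actual partitions and limits.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=demo          # name shown in squeue
#SBATCH --cpus-per-task=4        # CPU cores for this job
#SBATCH --mem=8G                 # memory for this job
#SBATCH --time=01:00:00          # wall-clock limit
hostname                         # the actual work: here, just report the node
EOF
# On the cluster, submit with:  sbatch myjob.sh
# and monitor with:            squeue -u $USER
# The #SBATCH lines are ordinary comments, so the script also runs locally:
bash myjob.sh
```

Because `#SBATCH` directives are plain shell comments, you can test a job script on a login machine with `bash myjob.sh` before submitting it with `sbatch`.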

Administrator details

  • iCIS has three domain groups, one per section, to grant people access to the cluster: cluster-is(for DaS), cluster-ds(for DiS), or cluster-mbsd(for SwS)
  • access can be granted by contacting one of the scientific programmers
  • access is immediately granted to the whole cluster (cn00-cn96.science.ru.nl); however, you are only allowed to use the machines for which you have been granted usage.
  • there are two email addresses for the ICIS cluster node machines:

Overview servers

Overview servers for education

contact Kasper Brink

 cn47.science.ru.nl	OII node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)
 cn48.science.ru.nl	OII node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)


Overview servers per section

  • SwS - contact Harco Kuppens
 none
  • DiS - contact Ronny Wichers Schreur
 none

  • DaS - contact Kasper Brink
 cn77.science.ru.nl	IS node (Dell PowerEdge R720, 2 x Xeon E5-2670 8C 2.6 GHz, 128 GB)
 cn78.science.ru.nl	IS node (Dell PowerEdge R720, 2 x Xeon E5-2670 8C 2.6 GHz, hyperthreading on, 128 GB)
 cn79.science.ru.nl	IS node (Dell PowerEdge R720, 2 x Xeon E5-2670 8C 2.6 GHz, 256 GB)
 cn104.science.ru.nl	DaS node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)
 cn105.science.ru.nl	DaS node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)

Servers for all departments

 cn89.science.ru.nl	DS node (Dell PowerEdge R920, 4 x Xeon E7-4870v2 15C 2.3 GHz, 3072 GB)


dagobert.cs.ru.nl (= cn89.science.ru.nl within the Linux cluster)

  • brand: Dell R920
  • os: Linux Ubuntu 20.04 LTS
  • cpu: 4 processors with 15 cores each (E7-4870 v2 at 2.3 GHz), 60 cores in total. Note: each core supports hyperthreading, so Linux reports 120 CPUs.
  • memory:  3.17 TB = 3170 GB = 3170751 MB = 3170751192 kB 
  • local storage: the machine has local storage for fast reads and writes, instead of your slower network-mounted home directory:
/scratch
RAID mirrored, but there are NO BACKUPS of this directory.
/scratch-striped
striped RAID volume with faster access than /scratch, but less redundancy in case of a hard-disk crash. There are also NO BACKUPS of this directory.
  • description:

ICIS recently (August 2014) acquired a new server, called Dagobert (because of its hefty price). It is a Dell R920 with four processors (E7-4870 v2 at 2.3 GHz) of 15 cores each (60 in total) and 3 TB RAM (1600 MHz). A few pictures of our new server are attached. This server was bought from the Radboud Research Facilities grant to explore new research directions, achieve more relevant scientific results, and cooperate with local organizations. This is the first of three phases of new equipment for ICIS. For now, Dagobert is by far the most powerful server in our whole faculty (with a heavy 4.4 kW power supply and a weight in excess of 30 kg); the next best server has only 256 GB RAM and half the processor power.

cpu details

 $ lscpu
 Architecture:          x86_64
 CPU op-mode(s):        32-bit, 64-bit
 Byte Order:            Little Endian
 CPU(s):                120
 On-line CPU(s) list:   0-119
 Thread(s) per core:    2
 Core(s) per socket:    15
 Socket(s):             4
 NUMA node(s):          4
 Vendor ID:             GenuineIntel
 CPU family:            6
 Model:                 62
 Stepping:              7
 CPU MHz:               2300.154
 BogoMIPS:              4602.40
 Virtualization:        VT-x
 L1d cache:             32K
 L1i cache:             32K
 L2 cache:              256K
 L3 cache:              30720K
 NUMA node0 CPU(s):     0,4,8,12,16,20,24,28,32,36,40,44,48,52,56,60,64,68,72,76,80,84,88,92,96,100,104,108,112,116
 NUMA node1 CPU(s):     1,5,9,13,17,21,25,29,33,37,41,45,49,53,57,61,65,69,73,77,81,85,89,93,97,101,105,109,113,117
 NUMA node2 CPU(s):     2,6,10,14,18,22,26,30,34,38,42,46,50,54,58,62,66,70,74,78,82,86,90,94,98,102,106,110,114,118
 NUMA node3 CPU(s):     3,7,11,15,19,23,27,31,35,39,43,47,51,55,59,63,67,71,75,79,83,87,91,95,99,103,107,111,115,119
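The NUMA lines above show that the CPUs are numbered round-robin across the four nodes (node0 owns 0, 4, 8, …, 116). A memory-bound job can benefit from being pinned to a single node; the sketch below derives node0's CPU list, and the pinning commands are shown as comments (`./my_program` is a placeholder, not a real binary on the server).

```shell
# node0's CPU list from the lscpu output above: 0,4,8,...,116 (step 4, 30 CPUs)
NODE0_CPUS=$(seq -s, 0 4 116)
echo "$NODE0_CPUS"
# Pin a job to node0's CPUs and memory (placeholder program name):
#   numactl --cpunodebind=0 --membind=0 ./my_program
# or, without numactl, pin to the explicit CPU list:
#   taskset -c "$NODE0_CPUS" ./my_program
```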

memory

 $ cat /proc/meminfo | head -1
 MemTotal:       3170751192 kB
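As a check of the memory figures listed earlier on this page: /proc/meminfo reports MemTotal in kB, and the summary line converts it in SI steps of 1000 (with truncation).

```shell
KB=3170751192                 # MemTotal in kB, from /proc/meminfo above
MB=$((KB / 1000))             # 3170751 MB
GB=$((MB / 1000))             # 3170 GB
echo "$KB kB = $MB MB = $GB GB (~3.17 TB)"
```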