[[Category:Public]] [[Category:Hardware]] [[Category:AllPages]]
 
== Servers C&CZ ==


* [http://wiki.science.ru.nl/cncz/Hardware_servers#Login-servers linux login servers]
* C&CZ administers the linux cluster nodes for the departments in the Faculty of Science
 
== Linux cluster ==
 
C&CZ maintains the cluster.
 
'''See the [[Cluster]] page for detailed info about the cluster.'''
 
Below are some additional general notes, plus notes about policy and access.
 
* The [[ClusterHelp]] page contains practical Slurm commands for using the cluster.
* See the [[#Overview Servers|overview sections below]] (cluster and non-cluster) to see which servers each department has.
* [https://cncz.science.ru.nl/en/howto/hardware-servers/#compute-serversclusters C&CZ info about the linux cluster ("cn-cluster") and other clusters]
* [https://sysadmin.science.ru.nl/clusternodes/ basic list of nodes within the "cn-cluster", with owners specified]
* When logged in to a specific cluster node, the command "htop" is convenient for viewing the load on that node.
* Running jobs:
** Using the [https://cncz.science.ru.nl/en/howto/slurm/ Slurm cluster software] you can run a job on the whole cluster or on a partition of it (a minimal example follows below).
** When logged in to a specific machine you can run jobs directly on that machine. However, for nodes managed by Slurm this is discouraged.
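
A minimal sketch of running a job via Slurm (the script name and resource values are made up for illustration; see the C&CZ Slurm page linked above for the authoritative options):

  $ cat myjob.sh                    # hypothetical job script
  #!/bin/bash
  #SBATCH --time=00:10:00           # assumed walltime limit
  #SBATCH --cpus-per-task=1
  hostname                          # replace with your actual work
  $ sbatch myjob.sh                 # submit; output lands in slurm-<jobid>.out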
 
=== '''Policy''' ===
 
* Please create a directory named after your username in the scratch directories, so the disk is not polluted with loose files and ownership is clear (see the example below).
* Please be considerate of other users: keep the local directories cleaned up, and kill processes you no longer need.
* If you need large data storage: <br>
  '''Every compute node/server has a local ''/scratch'' partition/volume.<br> You can use it for storing big, temporary data.'''
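
For example, a per-user scratch directory can be set up as follows (a sketch; the file names are hypothetical, ''/scratch'' is the local partition mentioned above):

  $ mkdir -p /scratch/$USER                   # your own directory, so ownership is clear
  $ cp ~/bigdata.tar /scratch/$USER/          # stage large temporary data locally
  $ rm -rf /scratch/$USER/bigdata.tar         # clean up when you are done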
 
=== '''Access''' and '''usage permission''' ===
 
For servers within the linux cluster managed by C&CZ you must first be granted access to one of the unix groups:
* clustercsedu (for education)
* clusterdis (for DiS)
* clusterdas (for DaS)
* mbsd (for SwS)
* clustericisonly
* csmpi
* clustericis, a meta group consisting of the groups clusterdas, clusterdis, mbsd, csmpi and clustericisonly
To get access to a unix group, contact the [[Support Staff|Support Staff]], who can arrange this via DHZ.
 
Because access is granted to the whole cluster (cn00-cn96.science.ru.nl), you can log in to each machine in the cluster. However, you should only run jobs directly on the machines whose '''usage''' you have been '''granted'''. So ask the owner of a machine for permission before using it.
 
The cluster uses the Slurm software, with which you can only run jobs on a partition of cluster machines if you are a member of the unix groups that are allowed to use that partition. So with Slurm, usage is controlled by granting access to these unix groups.
 
Note: it is possible to log in to all cluster machines and run jobs there directly, but you SHOULD NOT DO THIS; you MUST use Slurm.
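
To check which of these unix groups your account is already in, the standard commands below work on any login node (a sketch; group names as listed above):

  $ id -nG                        # all unix groups of your account
  $ getent group clustericis      # members of one specific group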
 
=== Overview Servers ===
 
==== for education  ====
 
contact Kasper Brink
 
For education the following cluster servers are available:
 
  cn47.science.ru.nl OII node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)
  cn48.science.ru.nl OII node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)
 
OII has one domain group, cluster-csedu. Add students to this domain group to give them access to the cluster.
Usage of the Slurm partition 'csedu' is only allowed for members of the 'csedu' unix group.
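
As a quick access test (assuming the partition name 'csedu' from above), a member of the group can run a trivial one-minute job:

  $ srun --partition=csedu --time=00:01:00 hostname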
 
==== for all departments  ====
 
All departments within iCIS have access to the following cluster machines, bought by iCIS, via the iCIS partition:
 
    icis partition on slurm22:
      cn114.science.ru.nl  cpu: 2 x AMD EPYC 7642 48-core processor, ram: 500 GB
      cn115.science.ru.nl  cpu: 2 x AMD EPYC 7642 48-core processor, ram: 500 GB


The [http://graphs.science.ru.nl/ automatic monitoring of the servers] can be accessed on the web.
When you are added to the cluster unix group of your department, you automatically get access to the iCIS cluster machines.
You can also look at the [http://cricket.science.ru.nl/grapher.cgi?target=%2Fclusternodes cricket server], which also shows who owns each cluster node.
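
A sketch of a batch script targeting these machines (partition 'icis' per the listing above, Slurm account 'icis' as mentioned in the SwS notes below; resource values are examples only):

  $ cat icis-job.sh                 # hypothetical script name
  #!/bin/bash
  #SBATCH --partition=icis
  #SBATCH --account=icis
  #SBATCH --cpus-per-task=8         # example values; each node has 2 x 48 cores
  #SBATCH --mem=32G
  #SBATCH --time=02:00:00
  ./run_experiment                  # hypothetical workload
  $ sbatch icis-job.sh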


==== per section ====

===== DiS =====

contact Ronny Wichers Schreur

  cn108.science.ru.nl/tarzan.cs.ru.nl  DS node (Dell PowerEdge R720, 2 x Xeon E5-2670 8C 2.6 GHz, 128 GB)

===== DaS =====

contact Kasper Brink

  cn77.science.ru.nl   IS node (Dell PowerEdge R720, 2 x Xeon E5-2670 8C 2.6 GHz, 128 GB)
  cn78.science.ru.nl   IS node (Dell PowerEdge R720, 2 x Xeon E5-2670, hyperthreading on, 8C 2.6 GHz, 128 GB)
  cn79.science.ru.nl   IS node (Dell PowerEdge R720, 2 x Xeon E5-2670 8C 2.6 GHz, 256 GB)
  cn104.science.ru.nl  DaS node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)
  cn105.science.ru.nl  DaS node (Supermicro, 2 x Intel Xeon Silver 4214 2.2 GHz, 128 GB, 8x GPU)

The servers above do not seem to support Slurm.
===== SwS =====

contact Harco Kuppens

  '''Cluster:'''

  We have limited access to the clusters:

    * '''slurm22''' with login node '''cnlogin22.science.ru.nl'''

    For info about the cluster and its Slurm software to schedule jobs,
    see https://cncz.science.ru.nl/en/howto/slurm/ which also contains a '''Slurm starter tutorial'''.

  * '''nodes for research'''

    The '''slurm22''' cluster has an '''ICIS partition''' that contains machines that belong to ICIS and that we may use.
    The ICIS partition on '''slurm22''' consists of '''cn114.science.ru.nl''' and '''cn115.science.ru.nl'''.

      icis partition on slurm22:
        cn114.science.ru.nl  cpu: 2 x AMD EPYC 7642 48-core processor, ram: 500 GB
        cn115.science.ru.nl  cpu: 2 x AMD EPYC 7642 48-core processor, ram: 500 GB

    For access you need to be a member of the '''mbsd group'''; you then automatically become a member of
    the '''clustericis''' group, which gives you access to the Slurm account '''icis''' on the slurm22 cluster.
    Ask Harco Kuppens for access.

  * '''nodes for education:'''

      These nodes are all in the '''slurm22''' cluster and were bought for education purposes by Sven-Bodo Scholz
      for use in his course, but they can sometimes be used for research.

      contact: Sven-Bodo Scholz
      order date: 20221202
      location: server room C&CZ; machines are managed by C&CZ
      nodes:
        cn124-cn131 : Dell PowerEdge R250
              cpu: 1 x Intel Xeon E-2378 @ 2.60 GHz, 8 cores, 2 threads per core
              ram: 32 GB
              disk: 1 TB hard drive
        cn132 : Dell PowerEdge R7525
              cpu: 2 x AMD EPYC 7313 16-core processor, 1 thread per core
              gpu: NVIDIA Ampere A30, PCIe, 165 W, 24 GB, passive, double wide, full height
              ram: 128 GB
              disk: 480 GB SSD
              fpga: Xilinx Alveo U200 225 W full-height FPGA
      cluster partitions:
        $ sinfo | head -1; sinfo -a | grep -e 132 -e 124
        PARTITION         AVAIL  TIMELIMIT  NODES  STATE NODELIST
        csmpi_short          up      10:00      8   idle cn[124-131]
        csmpi_long           up   10:00:00      8   idle cn[124-131]
        csmpi_fpga_short     up      10:00      1   idle cn132
        csmpi_fpga_long      up   10:00:00      1   idle cn132
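
Given the partitions above, a short test run on the education nodes might look like this (a sketch; note the 10-minute time limit on csmpi_short):

  $ srun --partition=csmpi_short --nodes=2 --time=00:05:00 hostname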


== Linux servers (non-cluster) ==

=== Overview servers per section ===
==== DiS ====

contact Ronny Wichers Schreur

  britney.cs.ru.nl                      standalone (Katharina Kohls)
  jonsnow.cs.ru.nl                      standalone (Peter Schwabe)

==== DaS ====

contact Kasper Brink

  none

==== SwS ====

contact Harco Kuppens

  '''Alternate servers:'''

    Several subgroups bought servers at Alternate and do the system administration themselves.
    Each server is meant for that subgroup, but you can always ask for access.

    - for group: Robbert Krebbers                '''themelio.cs.ru.nl'''

        contact: Ike Muller
        order date: 2021119
        location: server room Mercator 1
        cpu: AMD Ryzen Threadripper 3960X, 24 cores, 4.5 GHz max
        gpu: Asus GT 1030 SL-BRK, 2 GB D5 (GT1030-SL-2G-BRK)
        ram: 128 GB Corsair Vengeance PRO SL, DDR4-3200 (3200-16)
        disk: 1 TB Samsung 980 Pro SSD
        motherboard: Gigabyte TRX40 AORUS MASTER

    - for group: Sebastian Junges                '''bert.cs.ru.nl'''

        order date: 20231120
        contact: ?
        location: server room Mercator 1
        cpu: AMD Ryzen Threadripper PRO 5965WX, 24 cores, 3.8 GHz (4.5 GHz turbo boost),
             48 threads, 128 MB L3 cache, 128 PCIe 4.0 lanes
        gpu: ASUS DUAL GeForce RTX 4070 OC graphics card, 12 GB GDDR6X
             (1x HDMI, 3x DisplayPort, DLSS 3)
        ram: 512 GB: 8 x Kingston 64 GB ECC Registered DDR4-3200 server memory
             (black, KSM32RD4/64HCR, Server Premier, XMP)
        disk: 2 x Samsung 990 PRO, 2 TB SSD (MZ-V9P2T0BW, PCIe Gen 4.0 x4, NVMe 2.0)
        motherboard: Asus Pro WS WRX80E-SAGE SE WIFI

        2 x Alternate server (May 2024) with specs:  '''UNKNOWN1.cs.ru.nl UNKNOWN2.cs.ru.nl'''
        order date: 20240508
        contact: ?
        location: server room Mercator 1
        cpu: AMD Ryzen 9 7900 (AM5, boxed)
        motherboard: ASUS TUF GAMING B650-E WIFI, with integrated AMD Radeon graphics
        ram: G.Skill Flare X5 DDR5-5600, 96 GB
        ssd: Samsung 980 PRO M.2, 2 TB
        power supply: be quiet! Straight Power 12, 750 W, ATX 3.0
        case: Corsair 4000D Airflow TG, black, ATX

    - for group: Nils Jansen                     '''ernie.cs.ru.nl'''

        order date: 20231120
        contact: ?
        location: server room Mercator
        cpu: AMD Ryzen Threadripper PRO 5965WX, 24 cores, 3.8 GHz (4.5 GHz turbo boost),
             48 threads, 128 MB L3 cache, 128 PCIe 4.0 lanes
        gpu: Inno3D GeForce RTX 4090 X3 OC White, 24 GB GDDR6X video memory, 21 Gbps
        ram: 512 GB: 8 x Kingston 64 GB ECC Registered DDR4-3200 server memory
             (black, KSM32RD4/64HCR, Server Premier, XMP)
        disk: 2 x Samsung 990 PRO, 2 TB SSD (MZ-V9P2T0BW, PCIe Gen 4.0 x4, NVMe 2.0)
        motherboard: Asus Pro WS WRX80E-SAGE SE WIFI

        order date: 20201215                     '''(active)'''
        contact: Christoph Schmidl / Maris Galesloot
        location: M1.01.16
        cpu: Intel Core i9-10980XE, 3.0 GHz (4.6 GHz turbo boost), socket 2066, 18 cores
        gpu: GIGABYTE GeForce RTX 3090 VISION OC 24G
        ram: HyperX 64 GB DDR4-3200 kit
        disk: Samsung 980 PRO 1 TB SSD + WD Blue 6 TB hard drive
        motherboard: ASUS ROG RAMPAGE VI EXTREME ENCORE, socket 2066
