Available resources

The computational resources at UiB

UiB operates supercomputer facilities that serve the high-end computational needs of scientists at Norwegian universities and other national research and industrial organizations. The system is also used for research and development by international organizations, and provides services to European research groups and projects as well as to cooperating scientists from international institutions.

The installations are funded by the Norwegian Research Council through the NOTUR project, the University of Bergen (UiB), the Nansen Environmental and Remote Sensing Center (NERSC), the Institute of Marine Research (IMR), and Uni Research AS. These partners make critical use of the system for scientific research and development, in particular for marine activities ranging from marine molecular biology to large-scale simulation of ocean processes, including the management of ocean resources and environmental monitoring. Heavy use by academic research groups has traditionally come from computational chemistry, computational physics, computational biology, the geosciences, and applied mathematics.

The supercomputer facilities are installed at the High Technology Center in Bergen (HiB) and are managed and operated by UiB. The installation consists of the following parts:

Cray XE6m-200 (hexagon.hpc.uib.no)

  • 204.9 TFlops peak performance (see the arithmetic sketch after this section)
  • 22272 cores
  • AMD Opteron 6276 (2.3GHz "Interlagos")
  • 1392 CPUs (sockets)
  • 696 nodes
  • 32 cores per node
  • 32GB RAM per node (1GB/core)
  • Cray Gemini interconnect
  • 2.5D Torus topology
  • OS: Cray Linux Environment, CLE 5.2 (based on Novell SLES 11 SP3)

It was upgraded from a Cray XT4 in March 2012.
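
The peak figure follows from cores × clock rate × FLOPs per cycle. A minimal Python sketch of that arithmetic, assuming 4 double-precision FLOPs per core per clock (the Interlagos design shares one FMA-capable floating-point unit between the two cores of a module):

  cores = 22272              # 696 nodes x 32 cores per node
  clock_hz = 2.3e9           # AMD Opteron 6276 "Interlagos"
  flops_per_core_cycle = 4   # assumption: 4 DP FLOPs per core per clock
  peak_tflops = cores * clock_hz * flops_per_core_cycle / 1e12
  print(f"{peak_tflops:.1f} TFlops")  # -> 204.9 TFlops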


IBM/HP/Sun/Dell Linux cluster (fimm.hpc.uib.no)

  • 7.7 TFlops
  • 174 nodes, 876 cores (tallied in the sketch after this list)
    • 172 AMD Opteron 250 (2.4 GHz) cores (2 single-core per node)
    • 48 AMD Opteron 2218 (2.6 GHz) cores (2 dual-core per node)
    • 256 Intel Xeon E5420 (2.5 GHz) cores (2 quad-core per node)
    • 256 Intel Xeon E5430 (2.66 GHz) cores (2 quad-core per node)
    • 144 AMD Opteron 2431 (2.4 GHz) cores (2 six-core per node)
  • 11 + 4 TB GPFS parallel filesystem for /work and /home, plus internal disks on all nodes
  • Linux operating system (Rocks/Redhat)
  • Gigabit Ethernet on all nodes
  • Low latency SCI/Dolphin interconnect on 25 nodes
  • Infiniband interconnect on 16 nodes
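
The node and core totals can be tallied from the per-pool core counts above. A small Python sketch; the per-pool node counts are derived from the list (e.g. 172 Opteron 250 cores at 2 cores per node gives 86 nodes), not stated in it:

  pools = {
      # name: (nodes, cores per node)
      "Opteron 250, 2x single-core": (86, 2),
      "Opteron 2218, 2x dual-core": (12, 4),
      "Xeon E5420, 2x quad-core": (32, 8),
      "Xeon E5430, 2x quad-core": (32, 8),
      "Opteron 2431, 2x six-core": (12, 12),
  }
  nodes = sum(n for n, _ in pools.values())
  cores = sum(n * c for n, c in pools.values())
  print(nodes, cores)  # -> 174 876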

The IBM e1350 cluster came online in November 2004. It was upgraded in 2007 with 12 Sun x2200M2 nodes, in 2008 with 32 HP blades, and in 2010 with 12 Dell blades. In addition to grid services for NorGrid and CERN Tier1, it serves a variety of applications, including bioinformatics, physics, geophysics, and chemistry.

IBM Tape Library

The secondary storage device for backup and archiving is based on the IBM 3584 UltraScalable Tape Library. In the current configuration (January 2010) it has approximately 1000 TB of tape capacity (1130 cartridges); the exact amount depends on the mix of LTO2, LTO3, and LTO4 tapes and on the degree of compression. The device originally came with four LTO1 tape drives, which store 100-200 GB per tape. Two LTO2 tape drives, storing 200-400 GB per tape, were added later. In 2006, two of the LTO1 drives were replaced with LTO3 drives that store 400-800 GB per tape, and in 2008 the last two LTO1 drives were replaced by two new LTO4 drives. In addition, four more LTO4 drives were added, bringing the total to six LTO4 drives. LTO4 tapes store 800-1600 GB per tape.
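
To illustrate how the total depends on the cartridge mix, a rough Python sketch; the split of the 1130 cartridges across LTO generations is a hypothetical example, and only the native (uncompressed) per-tape capacities come from the description above:

  native_gb = {"LTO2": 200, "LTO3": 400, "LTO4": 800}  # native capacity per tape
  mix = {"LTO2": 330, "LTO3": 400, "LTO4": 400}        # assumed split of 1130 cartridges
  native_tb = sum(mix[t] * native_gb[t] for t in mix) / 1000
  print(f"{native_tb:.0f} TB native")  # -> 546 TB; ~2:1 compression roughly doubles it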

History

A page that lists all HPC equipment that was once operated by Parallab can be found here.