Available resources

From HPC documentation portal
== Historical Computational Resources at UiB ==


UiB historically had local supercomputer facilities that served the high-end computational needs of scientists at Norwegian universities and other national research and industrial organisations. Additionally, the systems were used for research and development by international organisations and provided services to European research groups (and projects) as well as to cooperating scientists from international institutions.


The installations were funded by the Norwegian Research Council through the NOTUR project, the University of Bergen (UiB), the Nansen Environmental and Remote Sensing Center (NERSC), the Institute for Marine Research (IMR), and Uni Research AS. These partners made critical use of the systems for scientific research and development, in particular targeting marine activities ranging from marine molecular biology to the large-scale simulation of ocean processes, including the management of ocean resources and monitoring of the environment. Heavy use by academic research groups has traditionally come from computational chemistry, computational physics, computational biology, the geosciences, and applied mathematics. Sigma2 is the umbrella organisation that coordinates these efforts; see the [https://www.sigma2.no Sigma2 website] for details.


The last system operated by UiB was the Cray XE6m-200, which was decommissioned on December 28, 2018.


=== Cray XE6m-200 (hexagon.hpc.uib.no), downscaled in 2017 ===
[[File:hexagon_small.jpg|right|130px]]
In 2017, the XE6m-200 was downscaled to 316 compute nodes.
* 93.03 TFlops peak performance
<!--* 63.33 TFlops measured HPL. - this was with 310 nodes, about 68 with 312 -->
* 10112 cores
* AMD Opteron 6276 (2.3GHz "Interlagos")
* 632 CPUs (sockets)
* 316 nodes
* 32 cores per node
* 32GB RAM per node (1GB/core)
* Cray Gemini interconnect
* 2.5D Torus topology
* OS: Cray Linux Environment, CLE 5.2 (Based on Novell Linux SLES11sp3)
 


=== Cray XE6m-200 (hexagon.hpc.uib.no) Full Configuration ===
[[File:hexagon_small.jpg|right|130px]]
* 204.9 TFlops peak performance
* 22272 cores
* AMD Opteron 6276 (2.3GHz "Interlagos")
* 1392 CPUs (sockets)
* 696 nodes
* 32 cores per node
* 32GB RAM per node (1GB/core)
* Cray Gemini interconnect
* 2.5D Torus topology
* OS: Cray Linux Environment, CLE 5.2 (Based on Novell Linux SLES11sp3)
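The quoted peak figures for both configurations follow from cores × clock × flops-per-cycle. A minimal cross-check, assuming 4 double-precision floating-point operations per core per cycle for the Opteron 6276 (an assumption inferred from the quoted totals, not stated on this page):

```python
def peak_tflops(nodes, cores_per_node, ghz, flops_per_cycle=4):
    """Theoretical peak in TFlops: total cores x clock (GHz) x flops/cycle."""
    cores = nodes * cores_per_node
    return cores * ghz * flops_per_cycle / 1000.0  # GFlops -> TFlops

# Downscaled (2017) configuration: 316 nodes x 32 cores = 10112 cores
print(round(peak_tflops(316, 32, 2.3), 2))  # 93.03

# Full configuration: 696 nodes x 32 cores = 22272 cores
print(round(peak_tflops(696, 32, 2.3), 1))  # 204.9
```

Note that this is the theoretical peak (Rpeak); the sustained HPL figure (Rmax) reported to TOP500 is always lower.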


It was upgraded from a Cray XT4 in March 2012. The Cray XT4 was first installed in 2008, when it ranked among the 50 fastest supercomputers in the world, at place 47 on the June 2008 TOP500 list.


=== Older systems ===
A page that lists all HPC equipment that was once operated by Parallab can be found [[Resource History|here]].


UiB's entry on the TOP500 list: [https://www.top500.org/site/49135 University of Bergen TOP500].

Latest revision as of 09:41, 4 August 2020
