Available resources
THE COMPUTATIONAL RESOURCES AT UIB

UiB operates supercomputer facilities that serve the high-end computational needs of scientists at Norwegian universities and other national research and industrial organizations. The system is also used for research and development by international organizations, and it provides services to European research groups and projects as well as to cooperating scientists from international institutions.

The installations are funded by the Norwegian Research Council through the NOTUR project, the University of Bergen (UiB), the Nansen Environmental and Remote Sensing Center (NERSC), the Institute for Marine Research (IMR), and Uni Research AS. These partners make critical use of the system for scientific research and development, in particular targeting marine activities ranging from marine molecular biology to the large-scale simulation of ocean processes, including the management of ocean resources and monitoring of the environment. Heavy use by academic research groups has traditionally come from computational chemistry, computational physics, computational biology, the geosciences, and applied mathematics.

The supercomputer facilities are installed at the High Technology Center in Bergen (HiB) and are managed and operated by UiB. The installation consists of the following parts:

Cray XE6m-200 (hexagon.hpc.uib.no)

  • 204.9 TFlops peak performance
  • 22272 cores
  • AMD Opteron 6276 (2.3GHz "Interlagos")
  • 1392 CPUs (sockets)
  • 696 nodes
  • 32 cores per node
  • 32GB RAM per node (1GB/core)
  • Cray Gemini interconnect
  • 2.5D Torus topology
  • OS: Cray Linux Environment (CLE) 5.2, based on Novell SLES 11 SP3

It was upgraded from Cray XT4 in March 2012.
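
As a sanity check (our arithmetic, not a figure from the page itself): each Interlagos core can retire 4 floating-point operations per cycle (two cores share one 8-FLOP/cycle FPU module), so the quoted peak follows directly from the core count and clock rate:

  22272 cores × 2.3 GHz × 4 FLOPs/cycle ≈ 204.9 TFlops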


Linux cluster fimm.hpc.uib.no

  • 7.7 teraflops
  • 97 nodes, 1328 cores
    • 672 Intel Xeon E5-2670 (2.60 GHz) cores (2 eight-core per node)
    • 256 Intel Xeon E5420 (2.5 GHz) cores (2 quad-core per node)
    • 256 Intel Xeon E5430 (2.66 GHz) cores (2 quad-core per node)
    • 144 AMD Opteron 2431 (2.4 GHz) cores (2 six-core per node)
  • 110 TB Lustre parallel filesystem for /work and /fimm/home, plus internal disks on all nodes
  • Linux operating system (Rocks/CentOS 6.5)
  • Gigabit Ethernet on all nodes
  • InfiniBand interconnect on 16 nodes

The fimm.hpc.uib.no cluster includes legacy hardware: 64 HP blades from 2008, 12 Dell blades from 2010, and 21 Dell blades from 2013. In addition to grid services for NorGrid and CERN Tier1, it serves a variety of applications, including bioinformatics, physics, geophysics, and chemistry.
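
Because the two clusters have quite different node geometries (32 cores per hexagon node, 8 to 16 cores per fimm node), the memory available per MPI rank depends on how many ranks share a node. The following is a minimal, hypothetical sketch (not part of the original page; it assumes an MPI-3 library is installed) that reports the rank count on each node of a job:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);

      char host[MPI_MAX_PROCESSOR_NAME];
      int len, node_rank, node_size;
      MPI_Comm node;

      MPI_Get_processor_name(host, &len);

      /* MPI-3: group the ranks that can share memory, i.e. the ranks
         placed on the same physical node. */
      MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                          MPI_INFO_NULL, &node);
      MPI_Comm_rank(node, &node_rank);
      MPI_Comm_size(node, &node_size);

      /* One line of output per node. */
      if (node_rank == 0)
          printf("%s: %d ranks on this node\n", host, node_size);

      MPI_Comm_free(&node);
      MPI_Finalize();
      return 0;
  }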

IBM Tape Library

The secondary storage device for backup and archiving is based on the IBM 3584 UltraScalable Tape Library. In the current configuration (January 2010) it has approximately 1000 terabytes of tape capacity (1130 cartridges); the exact amount depends on the mix of LTO2, LTO3, and LTO4 tapes and on the degree of compression. The device originally came with four LTO1 tape drives (storing 100-200 GB per tape). Two LTO2 tape drives (200-400 GB per tape) were added later. In 2006, two of the LTO1 drives were replaced with LTO3 drives (400-800 GB per tape). In 2008, the last two LTO1 drives were replaced by two new LTO4 drives, and four more LTO4 drives were added, for a total of six LTO4 drives. LTO4 tapes store between 800 and 1600 GB per tape.
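
As a rough illustration of how the total depends on the tape mix (our arithmetic, assuming a pool of only LTO4 cartridges at native capacity):

  1130 cartridges × 0.8 TB/cartridge ≈ 904 TB

Lower-capacity LTO2 and LTO3 cartridges pull this figure down, while hardware compression pushes it up, which is why the capacity is quoted only as approximately 1000 TB.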

History

A page listing all the HPC equipment once operated by Parallab can be found here.