Data (Hexagon)
Available file systems
Available file systems on Hexagon are:
- /home
- /work
- /shared
For details about each storage area, see the documentation below.
Note: Hexagon nodes do not have internal disks, thus there is no local /scratch file system available on any of the nodes.
User area (home directories): /home
/home is the file system for user home directories ($HOME) on Hexagon.
This file system is relatively small and not mounted on the compute nodes, so it can NOT be used for running jobs. It has quotas enabled; the limits are listed under Disk quota and accounting below. Files are backed up daily, except for folders called "scratch" or "tmp" and their sub-folders.
Work area (temporary data): /work/users
/work is a large external storage shared by all compute nodes on Hexagon. Files are NOT backed up. /work is a Lustre file system where the user can specify the stripe size or stripe count to optimize performance (see examples in Data (Hexagon)#Management of large files).
Only /work and /shared should be used when running jobs, since there is no /home or any local scratch file system available on the compute nodes. An overview and comparison of /work and /shared can be found in the table below.
Note: The /work/users/* directories are subject to automatic deletion depending on modification time, access time and the total usage of the file system. The oldest files will be deleted first. You can find more information about the deletion policy in the Data handling and storage policy document.
Shared area (project data): /shared
/shared is a Lustre file system available on the compute nodes. This file system is intended for projects and other data shared between users and projects.
/shared is a "permanent" storage. It is not subject to automatic deletions (like /work/users), but it has no backup. For comparison between /work and /shared see here.
The maximum space a project can request is 10TB. Projects or groups can purchase extra storage nodes to have this limit extended; please contact us for more details. Group permissions on this file system are enforced according to project quota allocations by a nightly cron job.
What area to use for what data
/home should be used for storing tools, such as application sources and scripts, and any relevant data which must have a backup.
/work/users should be used for running jobs, as the main storage during data processing. All data must be moved off the machine or deleted after processing.
/shared should be used for sharing data between projects, for permanent files and for running jobs.
Policies regarding deletion of temporary data
Detailed policies can be found in the Data handling and storage policy document.
| Feature | /work | /shared |
|---|---|---|
| Available on the compute nodes | Y | Y |
| Automatic deletion | Y | N |
| Backup | N | N |
| Group enforcement | N | Y |
| Performance | Best | Slightly worse |
| Quota | N | Y |
Transferring data to/from the system
Only SSH access is open to Hexagon, so data can only be uploaded or downloaded with scp and sftp. On special request it is possible to bring hard drives to the data center and have the data copied directly by the support staff. Please contact Support to arrange this.
To transfer data to and from Hexagon, use the address:
hexagon.hpc.uib.no
All login nodes on Hexagon have 10Gb network interfaces.
Basic tools (scp, sftp)
Standard scp command and sftp clients can be used.
Please have a look at the Getting started section for a list of programs and examples.
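For example, to upload a file from your local machine to your /work area, or to open an interactive sftp session (the file name and user name below are placeholders; replace them with your own):
scp mydata.tar.gz username@hexagon.hpc.uib.no:/work/users/username/
sftp username@hexagon.hpc.uib.no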
Disk quota and accounting
Default quotas
User quotas
By default you get a soft quota and a hard quota for your home directory.
/home has a 20GB hard limit and a maximum of 1M files enforced per user. The grace time is set to 7 days. If the soft quota is exceeded for more than 7 days, or the hard quota is exceeded, you will not be able to create new files or append to files in your home directory.
You can check your disk usage (in KB), soft quota (quota), hard quota (limit) and inodes (limit) with the command:
quota -Q
OR in human readable form with:
quota -Qs
Note: Intermediate files containing the STDOUT and STDERR of running jobs are placed on the /home file system. If a job produces a lot of output on STDOUT/STDERR, it is recommended to redirect it in the script directly to the /work file system (i.e.: aprun .... executable >& /work/users/$USER/somelog.file) instead of using the "#SBATCH --output" switch. See Job execution (Hexagon)#Relevant examples.
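A minimal job script sketch illustrating this redirection (the executable name, task count and log file name are placeholders):
#!/bin/bash
#SBATCH --ntasks=32
cd /work/users/$USER
# STDOUT and STDERR go straight to /work instead of /home
aprun -n 32 ./my_executable >& /work/users/$USER/my_job.log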
Group quotas
/shared has 10TB hard limit quota enforced per group.
You can check your disk usage on the /shared file system with the command:
lfs quota -g my_group /shared
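Since /work is also a Lustre file system (without an enforced quota), you can check your own usage there in a similar way:
lfs quota -u $USER /work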
Request increased quota (justification)
Users who have a strong demand for disk space can contact Support with a request to increase their quota. Depending on the requirements, a solution on another file system may be offered.
Management of large files
For files located on a Lustre file system, like /work, you may want to change the striping, depending on the client access pattern. By doing this you will load the OSTs optimally and can get the best throughput.
For large files it is better to increase the stripe count (and perhaps the stripe size):
lfs setstripe --stripe-size XM --stripe-count Y "dir"
e.g.
lfs setstripe --stripe-size 8M --stripe-count 4 "dir" # stripe across 4 OSTs using 8MB chunks.
Note that the striping will only take effect on new files created in or copied into the directory.
See more in Lustre.
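To verify the settings you can inspect the striping of a directory or file with the standard lfs command ("dir" is a placeholder):
lfs getstripe "dir"
Note that moving a file within the same file system keeps its old striping; copy it into the directory to create a new file with the new layout.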
Management of many small files
For files located on a Lustre file system, like /work, you may want to change the striping, depending on the client access pattern. By doing this you will load the OSTs optimally and can get better throughput.
For many small files with one client accessing each file, change the stripe count to 1:
lfs setstripe --stripe-count 1 "dir"
Note that the striping will only take effect on new files created in or copied into the directory.
See more in Lustre.
Copying files in parallel
Normally you will copy files on the login nodes, but there are cases when you need to copy a big amount of data from one parallel file system to another.
In this case we recommend using special tools optimized for parallel copying. In general, any big copying operation inside a job script will benefit. For copying on the compute nodes only /work and /shared are available, and we recommend pcp.
pcp
pcp uses MPI. Only /work and /shared are supported.
Use a normal job script (see Job execution (Hexagon)#Create a job (scripts)) to spread the copying over a few nodes.
Example:
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=8
# 16 MPI processes, 8 on each node
# you will be charged for 64 cores
...
module load pcp
cd /work/users/$USER
aprun -B pcp source-dir destination-dir
If copying is part of a computation script, keep the #SBATCH directives the job already has and just use "aprun -B pcp ...".
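A sketch of embedding the copy step in an existing computation script (the executable, directory names and destination under /shared are placeholders):
module load pcp
aprun -n 32 ./my_simulation
# stage the results to /shared in parallel, reusing the nodes of the job
aprun -B pcp /work/users/$USER/results /shared/my_project/results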
mutil
Mutil can use MPI and threads.
Usually pcp is faster than mutil, especially with many small files, and we strongly recommend using pcp with aprun in job scripts. There are some situations where mutil can be preferable, for example copying on the login node with threads; you can try both and see which better suits your task. The --mpi version is less stable than pcp and you can get "DEADLOCK" errors.
"mcp --threads=" works on any file system and can be used on the login node.
Mutil has 2 commands:
- mcp - parallel copy command with the same syntax as "cp"
- msum - parallel "md5sum"
"mcp" has the same syntax as "cp", in addition it has the following options:
--buffer-size=MBYTES read/write buffer size [4]
--direct-read enable use of direct I/O for reads
--direct-write enable use of direct I/O for writes
--double-buffer enable use of double buffering during file I/O
--fadvise-read enable use of posix_fadvise during reads
--fadvise-write enable use of posix_fadvise during writes
--length=LEN copy LEN bytes beginning at --offset
(or 0 if --offset not specified)
--mpi enable use of MPI for multi-node copies
--offset=POS copy --length bytes beginning at POS
(or to end if --length not specified)
--print-stats print performance per file to stderr
--print-stripe print striping changes to stderr
--read-stdin perform a batch of operations read over stdin
in the form 'SRC DST RANGES' where SRC and DST
must be URI-escaped (RFC 3986) file names and
RANGES is zero or more comma-separated ranges of
the form 'START-END' for 0 <= START < END
--skip-chmod retain temporary permissions used during copy
--split-size=MBYTES size to split files for parallelization [0]
--threads=NUMBER number of OpenMP worker threads to use [4]
So a copying example on the login node with threads can look like:
module load mutil
mcp --threads=4 --print-stats -a source-folder destination-path
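After copying, you can verify the data with mutil's parallel checksum tool (a sketch, assuming msum accepts the same --threads option as mcp; the file names are placeholders):
module load mutil
msum --threads=4 source-folder/bigfile destination-path/bigfile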
Compression of data
Infrequently accessed files must be compressed to reduce file system usage.
Parallel tools (pigz, pbzip2, ..)
Hexagon has threaded versions of gzip and bzip2. They have almost linear scaling inside one node. Please see the job script example below for how they can be used:
#!/bin/bash
#SBATCH --ntasks=32
export OMP_NUM_THREADS=32
# load pigz and pbzip2
module load pigz pbzip2
# create tar file using pigz or pbzip2
cd /work/users/$USER
aprun -n1 -N1 -m30000M -d32 tar --use-compress-program pigz -cf tmp.tgz tmp # example for parallel gzip
aprun -n1 -N1 -m30000M -d32 tar --use-compress-program pbzip2 -cf tmp.tbz tmp # example for parallel bzip2
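Decompression of such archives on a compute node can be done the same way (a sketch; tar calls the decompression mode of the chosen program, using the archive names created above):
aprun -n1 -N1 -m30000M -d32 tar --use-compress-program pigz -xf tmp.tgz
aprun -n1 -N1 -m30000M -d32 tar --use-compress-program pbzip2 -xf tmp.tbz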
Tools (gzip, bzip2, ..)
Tools like gzip, bzip2, zip and unrar are in the PATH and are always available on the login nodes. Use the man command to get detailed info.
man bzip2
If you need to perform packing/unpacking/compressing on the compute nodes (recommended for very big files), please load the coreutils-cnl module. E.g.:
module load coreutils-cnl
cd /work/users/$USER
aprun -n 1 tar -cf archive.tar MYDIR
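Unpacking on a compute node works the same way (a sketch, reusing the archive name from the example above):
module load coreutils-cnl
cd /work/users/$USER
aprun -n 1 tar -xf archive.tar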
Binary data (endianness)
Hexagon is an AMD Opteron based machine and therefore uses the little-endian byte order. [1]
Fortran sequential unformatted files created on big-endian machines cannot be read directly on a little-endian system. To work around this issue, you can recompile your Fortran code with one of the following options (see the compile example after this list):
- -byteswapio - for the PGI compiler
- -fconvert=swap - for GNU Fortran
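A minimal compile sketch, assuming the compiler drivers are invoked directly as pgf90 and gfortran (the source and program names are placeholders):
pgf90 -byteswapio -o read_bigendian read_bigendian.f90
gfortran -fconvert=swap -o read_bigendian read_bigendian.f90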
Back-up of data
Hexagon is connected to a secondary storage device (tape robot). The tape robot is used for the storage of backup data and archiving.
For backup policies, please consult the Data handling and storage policy document.
Should you need to restore files from backup, please contact Support.
Archiving data
Hexagon does not currently have any archiving facility for common use.
Previously, it was possible for users to apply for long term storage space in /migrate. This service has been suspended until new funding sources can be found.
Users from the Bjerknes center have archiving space in /bcmhsm. Access can be requested at Support.
Other users are advised to apply for Norstore resources at http://www.norstore.no/.
If you have a specific demand, please contact us at Support.
How to archive data
To archive data on the /bcmhsm or /migrate file systems, simply use the cp or mv commands. It is important to remember that these are tape file systems, and to get the most out of them it is important that they are used correctly:
- Never use /migrate/$USER as a work-area.
- Use /work and /scratch for temporary data.
- Copy files to /migrate after the job is finished, that is, avoid appending to files in /migrate from a running job.
- Never unpack files under /migrate/$USER
- Instead copy the files to /work first.
- Never put small (<100MB) files here, try to make the files larger than 2 GB if possible.
- Instead, pack directories with small files together with tar and, if possible, compress with gzip or bzip2 (see the example after this list).
- Never use tools like cat, grep, etc. directly on files under /migrate/$USER; this will fetch all referenced files from tape.
- Instead, copy the files needed to /work first.
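A packing sketch for small files before archiving (the directory and archive names are placeholders; replace /migrate/$USER with the archive area you actually have access to):
cd /work/users/$USER
tar -czf small_files.tar.gz dir_with_small_files/
cp small_files.tar.gz /migrate/$USER/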
How to retrieve data from archive
To retrieve data from /migrate or /bcmhsm, simply use the "cp" or "mv" commands; to remove files, use "rm".
Closing of user account
A temporary user account (one with a set end date for usage) will be closed 1 week after the date noted in the request/application approval.
A normal user account can be closed at the request of the NOTUR project lead, Uninett Sigma, if the user no longer works at UiB, IMR or an organization that is affiliated with UiB (or has a contract for usage of the machine).
When a user account is closed, the home folder belonging to the user is archived on tape for a period of 6 months. Files on other file systems, e.g. /work, are deleted. On request to Support, the home folder data can be restored by the system administrators.
Privacy of user data
When a user account is created, access to the home and work folders is set so that they cannot be accessed by other users (mode 700). If you want to change the access to your home or work folder to make it readable by your group, try:
chmod g+rX $HOME # home folder access
chmod g+rX /work/users/$USER # work folder access
The project responsible can request via Support the creation of an additional UNIX group. This can be useful if data in the home folder or elsewhere must be shared between several users.
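Once such a group exists, a sketch for making a folder readable by it (the group and folder names are placeholders):
chgrp -R my_project_group /work/users/$USER/shared_data
chmod -R g+rX /work/users/$USER/shared_data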
References
- [1] Endianness, article on Wikipedia.