File systems
This page gives an overview of the file systems of the Umbrella HPC cluster and details the various types of storage services available to end users.
Overview
File system | Quota (space) | Quota (files) | Speed | Shared between nodes | Path | Expiration | Backup | Notes |
---|---|---|---|---|---|---|---|---|
Home | 200 GiB¹ | 1,000,000¹ | Fast¹ | Yes | /home/<login_name> | To be decided | No backup | — |
Scratch-shared | 8 TiB | 3,000,000 | Fast | Yes | /scratch-shared/<login_name> | Files older than 14 days are automatically removed | No backup | — |
Scratch-node | — | — | Very fast | No | $TMPDIR (and /scratch-node) | Data is cleaned at irregular intervals | No backup | Size varies per node |
Project | Varies per project | Varies per project | Varies per project | Yes | /project/<project_name> | Varies per project | No backup | — |
¹ Shown values are defaults. For a number of reasons, some home directories may have a different quota and may reside on slower storage.
Home directories
No backup
There is no backup service available for home directories. Please check the Storage Finder for available options to store your data for the long term!
Every user has their own home directory, which is accessible at /home/<login_name>.
Your home directory has a default capacity quota of 200 GiB and a default inode quota of 1,000,000. To see your current quota usage, run the myquota command.
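If you want to cross-check those numbers with standard tools, the sketch below measures space and inode usage directly (myquota remains the authoritative source, and a full scan can take a while on large directories):

```bash
# Total size of your home directory
du -sh "$HOME"

# Number of files and directories (inodes) under your home directory
find "$HOME" | wc -l
```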
Most home directories reside on fast (NVMe) storage. Some, however, may reside on slower (spinning disk) storage.
The 200 GiB home directory provides ample space for a work environment on the system for most users. If it is not sufficient to accommodate your work environment on the Umbrella Cluster, you can request extra storage space (project space). Think of your home directory as the basis for arranging the work environment for your current computational project on the Umbrella Cluster. Note, however, that home directories are not intended for long-term storage of large data sets. For that purpose, the TU/e Supercomputing Center recommends other (external) storage services, such as the TU/e NetApp or SURF Research Drive. Please consult the Storage Finder, your local hub, or your Research IT representative for available options to store your data for the long term!
Scratch file systems
The scratch file systems are intended as fast temporary storage for use while running a job, and can be accessed by all users with a valid account on the system. There are several types of scratch storage available on the Umbrella Cluster, as listed in the table above. Below, we describe them in detail, including any active quota and backup policies.
Scratch-shared
Automatic cleanup and no backup
For scratch-shared there is an automated expiration policy of 14 days: files and directories that have not been modified in the past 14 days are automatically deleted.
There is no backup service for scratch-shared.
Scratch-shared can be accessed at /scratch-shared/<login_name>. It resides on fast (NVMe) networked storage. Each user can store up to 8 TiB and 3,000,000 files. Files are automatically deleted 14 days after they were last modified.
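To see which of your files are candidates for the next cleanup, you can list everything that has not been modified in the past 14 days with standard tools. A minimal sketch, assuming your <login_name> matches $USER (the actual cleanup job may apply slightly different criteria):

```bash
# List files in scratch-shared not modified in the last 14 days
find "/scratch-shared/$USER" -type f -mtime +14 -print
```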
Scratch-node
Irregular cleanup and no backup
For scratch-node there is an irregular expiration policy: files and directories are removed at irregular intervals and without announcement.
There is no backup service for scratch-node.
Scratch-node can be accessed at $TMPDIR (removed automatically when your job ends) and /scratch-node (cleaned up at irregular intervals). It resides on fast (NVMe) local storage that is attached directly to the compute node. The size of this file system differs per node.
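A common pattern is to stage input data into $TMPDIR at the start of a job, compute against the node-local copy, and copy results back to persistent storage before the job ends. The sketch below assumes a Slurm batch system; my_program, input.dat, and the result paths are placeholders for your own application and data:

```bash
#!/bin/bash
#SBATCH --job-name=tmpdir-example
#SBATCH --time=01:00:00

# Stage input onto node-local scratch (fast, but removed when the job ends)
cp "$HOME/input.dat" "$TMPDIR/"

# Run the computation against the local copy
"$HOME/bin/my_program" "$TMPDIR/input.dat" "$TMPDIR/output.dat"

# Copy results back to persistent storage before the job finishes,
# since $TMPDIR is cleaned automatically afterwards
mkdir -p "$HOME/results"
cp "$TMPDIR/output.dat" "$HOME/results/"
```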
Project spaces
No backup
There is no backup service for project spaces. Please check the Storage Finder for available options to store your data for the long term!
A project space can be used when:
- you need additional storage space, but do not require a backup; or
- you need to share files within a collaboration.
Project spaces are accessible at /project/<project_name>. They can reside on fast (NVMe) or slow (spinning disk) storage, and have project-dependent quota for space and number of files. (Current quota usage can be seen using the myquota command.) By default, accounts on our systems are not provisioned with a project space; project spaces can be requested separately, through the web form.
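Because project spaces are often shared within a collaboration, it can help to make a directory group-writable with the setgid bit set, so that new files automatically inherit the project's group. A minimal sketch, assuming hypothetical names my_project and my_group:

```bash
# Create a shared subdirectory in the project space
mkdir -p /project/my_project/shared

# Assign the project's group and set the setgid bit so new files
# created inside inherit that group
chgrp my_group /project/my_project/shared
chmod g+rwxs /project/my_project/shared
```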
Project spaces are not intended for long-term storage of large data sets. For that purpose, the TU/e Supercomputing Center recommends other (external) storage services, such as the TU/e NetApp or SURF Research Drive. Please consult the Storage Finder, your local hub, or your Research IT representative for available options to store your data for the long term!