File Systems
This page gives an overview of the file systems of the Umbrella HPC cluster and details the various types of file system services available to end-users.
File system | Quota (space) | Quota (files) | Speed | Shared between nodes | Path | Expiration | Backup | Notes |
---|---|---|---|---|---|---|---|---|
Home | 200 GiB ¹ | 1,000,000 ¹ | Fast ¹ | Yes | `/home/<login_name>` | To be decided | No backup | — |
Scratch-shared | 8 TiB | 3,000,000 | Fast | Yes | `/scratch-shared/<login_name>` | Files older than 14 days are automatically removed | No backup | — |
Scratch-node | — | — | Very fast (but see below) | No | `$TMPDIR` (and `/scratch-node`) | Data is cleaned at irregular intervals | No backup | Size varies per node |
Project | Varies per project | Varies per project | Varies per project | Yes | `/project/<project_name>` | Varies per project | No backup | — |

¹ Shown values are defaults. Some home directories may have a different quota and may reside on slower storage.
Home Directories
Every user has their own home directory, which is accessible at `/home/<login_name>`.
Your home directory has a default capacity quota of 200 GiB and a default inode quota of 1,000,000. To see your current quota usage, run the `myquota` command.
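As a quick sketch (assuming a standard shell session on a login node; the exact `myquota` output format may differ), you can check your usage like this:

```bash
# Show current space and inode usage against your quota
myquota

# Cross-check by measuring your home directory directly
du -sh "$HOME"                 # total space used
find "$HOME" -type f | wc -l   # approximate file (inode) count
```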
Most home directories reside on fast (NVMe) storage. Some, however, may reside on slower (spinning disk) storage.
For most users, the 200 GiB home directory provides ample space for a work environment on the system. If it is not sufficient to accommodate your work on the Umbrella Cluster, you can request extra storage space (a project space). Think of your home directory as the basis of the work environment for your current computational project on the Umbrella Cluster.
There is no backup service available for home directories
Please be aware that data on the HPC Cluster (including your home directory) is NOT backed up!
Note
Home directories are not intended for long-term storage of large data sets. For this purpose, the TU/e Supercomputing Center recommends using other (external) storage systems, such as the TU/e NetApp or SURF Research Drive. Please consult the Storage Finder, your local hub, or your Research IT representative for the available options to store your data for the long term!
Scratch File Systems
The scratch file systems are intended as fast temporary storage that can be used while running a job, and can be accessed by all users with a valid account on the system. There are several different types of scratch available on the Umbrella Cluster, as listed in the table above. Below, we describe them in detail, including any active quota and backup policies.
Scratch-shared
Scratch-shared can be accessed at `/scratch-shared/<login_name>`. It resides on fast (NVMe) networked storage. Each user can store up to 8 TiB and 3,000,000 files. Files are automatically deleted 14 days after they were last modified.
Automatic cleanup and no backup
Scratch-shared has an automated expiration policy of 14 days: files and directories that have not been modified in the past 14 days are automatically deleted.
There is no backup service for scratch-shared.
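A minimal sketch of a typical workflow, assuming your login name matches `$USER` and using placeholder directory and file names: stage working data on scratch-shared, keep an eye on files approaching the 14-day limit, and copy anything you want to keep back in time.

```bash
# Stage a working directory on scratch-shared (my_run is a placeholder name)
mkdir -p /scratch-shared/$USER/my_run
cp ~/input/data.csv /scratch-shared/$USER/my_run/

# List files that have not been modified for more than 10 days,
# i.e. files that will expire within the next few days
find /scratch-shared/$USER -type f -mtime +10 -ls

# Copy results you want to keep back to your home or project space in time
cp -r /scratch-shared/$USER/my_run/results ~/results/
```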
Scratch-node
Scratch-node can be accessed at `$TMPDIR` (removed after the job ends) and `/scratch-node` (cleaned up at irregular intervals). On newer nodes it resides on fast (NVMe) local storage that is attached directly to the compute node's CPU; on older nodes it resides on an HDD. The size of this file system differs per node.
Irregular cleanup and no backup
For scratch-node there is an irregular expiration policy. Files and directories are removed at irregular intervals and without prior announcement.
There is no backup service for scratch-node.
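A minimal sketch of using node-local scratch inside a batch job, assuming a Slurm-style scheduler (the job options, input path, and application command below are placeholders, not Umbrella-specific settings):

```bash
#!/bin/bash
#SBATCH --job-name=tmpdir-example
#SBATCH --time=01:00:00

# Copy input data to the node-local scratch; local I/O is fastest
cp ~/input/data.csv "$TMPDIR"/

# Run the computation on the local copy (my_application is a placeholder)
cd "$TMPDIR"
my_application --input data.csv --output results.dat

# $TMPDIR is removed when the job ends, so copy results off the node first
cp results.dat /scratch-shared/$USER/
```

Because `$TMPDIR` is not shared between nodes, this pattern only works for data that a single node needs.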
Project Spaces
A project space can be used when:
- You need additional storage space.
- You need to share files within a collaboration.
Project spaces are accessible at `/project/<project_name>`. They can reside on fast (NVMe) or slow (spinning disk) storage, and have project-dependent quota for space and number of files. (Current quota usage can be seen using the `myquota` command.) By default, accounts on our systems are not provisioned with a project space; project spaces can be requested separately, through the web form.
Project spaces are subject to the following limitations:
- Maximum size: 25 TB
- Maximum project duration: 4 years
- Maximum product of size and duration: TB × months < 100 (see the example below)
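For example, a request for 10 TB over 9 months amounts to 90 TB × months and fits within the limit, whereas 25 TB over 12 months amounts to 300 TB × months and does not.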
Project space requests that fall within these limitations will very likely be honoured. If your storage needs exceed these limitations, you will need to invest in a storage node. Please contact the HPC administrators if this applies to you.
No backup
Please be aware that data on the HPC Cluster (including project spaces) is NOT backed up!
Note
Project spaces are not intended for long-term storage of large data sets. For this purpose, the TU/e Supercomputing Center recommends using other (external) storage systems, such as the TU/e NetApp or SURF Research Drive. Please consult the Storage Finder, your local hub, or your Research IT representative for the available options to store your data for the long term!