167 changes: 167 additions & 0 deletions .github/actions/spelling/allow.txt
@@ -0,0 +1,167 @@
ACLs
ACR
AMD
AWS
Alpstein
Balfrin
Broyden
CFLAGS
CHARMM
CHF
COSMA
CPE
CPMD
CSCS
CWP
CXI
capstor
Ceph
Containerfile
DNS
EDF
EDFs
EMPA
ETHZ
Ehrenfest
Errigal
FFT
Fock
GAPW
GCC
GGA
GPFS
GPG
GPU
GPUs
GPW
GROMACS
GTL
Gaussian
Google
HDD
HPC
HPCP
HPE
HSN
Hartree
iopsstor
Jax
Jira
Keycloak
LAMMPS
LDA
LOCALID
LUMI
Libc
Linaro
Linux
MFA
MLP
MNDO
MPICH
MPS
MeteoSwiss
NAMD
NICs
NVIDIA
NVMe
OTP
OTPs
PASC
PBE
PDUs
PID
PMPI
POSIX
Parrinello
Piz
Plesset
Pulay
RCCL
RDMA
ROCm
RPA
Roboto
Roothaan
SSHService
STMV
Scopi
TOTP
UANs
UserLab
VASP
Waldur
Wannier
XDG
aarch
aarch64
acl
biomolecular
bristen
bytecode
clariden
concretise
concretizer
Comment on lines +104 to +105 (collaborator author):
Note that I haven't made any judgment about e.g. z vs s in words like these. Feel free to bikeshed as much as you want about what goes into this file, but let's keep it practical.

containerised
customised
diagonalisation
eiger
filesystems
groundstate
inodes
lexer
libfabric
multitenancy
podman
prioritised
proactively
quickstart
santis
screenshot
slurm
smartphone
squashfs
srun
ssh
stackinator
stakeholders
subfolders
subtable
subtables
supercomputing
superlu
sysadmin
tcl
tcsh
testuser
timeframe
timelimit
tmpfs
todi
toolbar
toolset
torchaudio
torchvision
treesitter
trilinos
uarch
uenv
uenvs
uids
vCluster
vClusters
venv
versioned
versioning
webhooks
webinar
webpage
website
wikipedia
workaround
workflows
xattr
xattrs
youtube
zstd
1 change: 1 addition & 0 deletions .github/actions/spelling/only.txt
@@ -0,0 +1 @@
docs/.*\.md$
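
As a quick illustration (not part of the PR), the `only.txt` entry can be checked against paths from this diff with Python's `re`, assuming check-spelling applies it as an unanchored regular expression over repository paths:

```python
import re

# only.txt entry: limit spell checking to Markdown files under docs/.
only = re.compile(r"docs/.*\.md$")

for path in [
    "docs/alps/storage.md",             # expected: checked
    "docs/platforms/mlp/index.md",      # expected: checked
    ".github/workflows/spelling.yaml",  # expected: ignored
]:
    print(path, "->", "checked" if only.search(path) else "ignored")
```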
15 changes: 15 additions & 0 deletions .github/actions/spelling/patterns.txt
@@ -0,0 +1,15 @@
# Recognized as "Firec" and "REST" with the regular rules, so in patterns.txt
# instead of allow.txt
FirecREST
RESTful

# markdown figure
^!\[.*\]\(.*\)$

# Most obvious URLs
https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)

# Markdown references (definition and use)
^\[\]\(\){#[a-z-]+}$
\]\(#[a-z-]+\)
\]\[[a-z-]+\]
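
For a rough sanity check (again not part of the PR), the patterns above can be exercised with Python's `re` against lines that appear elsewhere in this diff. check-spelling's own engine masks matching text before tokenizing, and its regex flavour may differ in detail, so treat this as an approximation:

```python
import re

# Regexes copied from patterns.txt above.
patterns = [
    r"FirecREST",
    r"RESTful",
    r"^!\[.*\]\(.*\)$",       # markdown figure
    r"https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)",
    r"^\[\]\(\){#[a-z-]+}$",  # markdown reference definition
    r"\]\(#[a-z-]+\)",        # markdown reference use
    r"\]\[[a-z-]+\]",         # markdown reference use
]

samples = [
    "[](){#ref-alps-vast}",
    "Santis can also be accessed using [FirecREST][ref-firecrest]",
    "More information is available on [the CSCS web site](https://www.cscs.ch/user-lab/allocation-schemes).",
]

for line in samples:
    hits = sum(bool(re.search(p, line)) for p in patterns)
    print(f"{hits} pattern(s) match: {line}")
```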
26 changes: 26 additions & 0 deletions .github/workflows/spelling.yaml
@@ -0,0 +1,26 @@
name: Check Spelling

on:
  pull_request:

jobs:
  spelling:
    name: Check Spelling
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Check spelling
        id: spelling
        uses: check-spelling/check-spelling@v0.0.24
        with:
          check_file_names: 1
          post_comment: 0
          use_magic_file: 1
          warnings: bad-regex,binary-file,deprecated-feature,large-file,limited-references,no-newline-at-eof,noisy-file,non-alpha-in-dictionary,token-is-substring,unexpected-line-ending,whitespace-in-dictionary,minified-file,unsupported-configuration,no-files-to-check
          use_sarif: 1
          extra_dictionary_limit: 20
          extra_dictionaries:
            cspell:software-terms/dict/softwareTerms.txt
            cspell:bash/dict/bash-words.txt
            cspell:companies/dict/companies.txt
            cspell:filetypes/filetypes.txt
2 changes: 1 addition & 1 deletion docs/accounts/index.md
@@ -9,7 +9,7 @@ To get an account you must be invited by a member of CSCS project adminstration
CSCS issues calls for proposals that are announced via the CSCS website and e-mails.
More information about upcoming calls is available on [the CSCS web site](https://www.cscs.ch/user-lab/allocation-schemes).

New PIs who have sucessfully applied for a preparatory project will receive an invitation from CSCS to get an account at CSCS.
New PIs who have successfully applied for a preparatory project will receive an invitation from CSCS to get an account at CSCS.
PIs can then invite members of their groups to join their project.

!!! info
2 changes: 1 addition & 1 deletion docs/alps/hardware.md
@@ -28,7 +28,7 @@ This approach to cooling provides greater efficiency for the rack-level cooling,
information about the network.

* Details about SlingShot 11.
* how many NICS per node
* how many NICs per node
* raw feeds and speeds
* Some OSU benchmark results.
* GPU-aware communication
14 changes: 7 additions & 7 deletions docs/alps/storage.md
@@ -2,15 +2,15 @@
# Alps Storage

Alps has different storage attached, each with characteristics suited to different workloads and use cases.
HPC storage is manged in a separate cluster of nodes that host servers that manage the storage and the physical storage drives.
HPC storage is managed in a separate cluster of nodes that host servers that manage the storage and the physical storage drives.
These separate clusters are on the same Slingshot 11 network as the Alps.

| | Capstor | IOPStor | Vast |
| | Capstor | Iopsstor | Vast |
|--------------|------------------------|------------------------|---------------------|
| Model | HPE ClusterStor E1000D | HPE ClusterStor E1000F | Vast |
| Type | Lustre | Lustre | NFS |
| Capacity | 129 PB raw GridRAID | 7.2 PB raw RAID 10 | 1 PB |
| Number of Drives | 8,480 16 TB HDD | 240 * 30 TB NVME SSD | N/A |
| Number of Drives | 8,480 16 TB HDD | 240 * 30 TB NVMe SSD | N/A |
| Read Speed | 1.19 TB/s | 782 GB/s | 38 GB/s |
| Write Speed | 1.09 TB/s | 393 GB/s | 11 GB/s |
| IOPs | 1.5M | 8.6M read, 24M write | 200k read, 768k write |
@@ -22,19 +22,19 @@ These separate clusters are on the same Slingshot 11 network as the Alps.
Capstor is the largest file system, for storing large amounts of input and output data.
It is used to provide SCRATCH and STORE for different clusters - the precise details are platform-specific.

[](){#ref-alps-iopstor}
## iopstor
[](){#ref-alps-iopsstor}
## iopsstor

!!! todo
small text explaining what iopstor is designed to be used for.
small text explaining what iopsstor is designed to be used for.

[](){#ref-alps-vast}
## vast

The Vast storage is smaller capacity system that is designed for use as home folders.

!!! todo
small text explaining what iopstor is designed to be used for.
small text explaining what iopsstor is designed to be used for.

The mounts, and how they are used for SCRATCH, STORE, PROJECT, HOME would be in the [storage docs][ref-storage-fs]

4 changes: 2 additions & 2 deletions docs/build-install/uenv.md
@@ -72,7 +72,7 @@ uenv start prgenv-gnu/24.11:v1 --view=spack

??? warning "Upstream Spack version"

It is strongly recomended that your version of Spack and the version of Spack in the uenv match when building software on top of an uenv.
It is strongly recommended that your version of Spack and the version of Spack in the uenv match when building software on top of an uenv.

!!! note "Advanced Spack users"

@@ -131,7 +131,7 @@ The `uenv-spack` tool can be used to create a build directory with a template [S

1. Script to build the software stack.
2. `git` clone of the required version of Spack.
3. Spack onfiguration files for the software stack.
3. Spack configuration files for the software stack.
4. Information about the uenv that was used to run `uenv-spack`.
5. Description of the software to build.
6. Template [Spack environment file].
2 changes: 1 addition & 1 deletion docs/clusters/bristen.md
@@ -74,7 +74,7 @@ See the SLURM documentation for instructions on how to run jobs on the [Grace-Ho

### FirecREST

Bristen can also be accessed using [FircREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.
Bristen can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.

Check failure (Code scanning / check-spelling): `Firec` is not a recognized word. (unrecognized-spelling)
Comment (collaborator author):
FirecREST is a word that check-spelling apparently will not accept when it's in allow.txt, because it treats it as two words, Firec and REST. The first is not a recognized word.

patterns.txt allows regexes as a whitelist, so I've added FirecREST to that file instead.
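
A rough sketch of the behaviour being described (this is not check-spelling's actual lexer, just an approximation of case-transition splitting):

```python
import re

# Approximate splitting of a mixed-case identifier on case transitions;
# the real tokenizer in check-spelling is more involved.
def split_tokens(word: str) -> list[str]:
    return re.findall(r"[A-Z][a-z]+|[A-Z]+(?![a-z])|[a-z]+", word)

print(split_tokens("FirecREST"))  # ['Firec', 'REST']; only the second is a known word

# A patterns.txt regex is applied to the raw line, so a literal FirecREST entry
# masks the whole name before any tokenization happens.
line = "Bristen can also be accessed using [FirecREST][ref-firecrest]"
print(re.sub(r"FirecREST", "", line))
```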


### Scheduled Maintenance

2 changes: 1 addition & 1 deletion docs/clusters/clariden.md
@@ -102,7 +102,7 @@ See the SLURM documentation for instructions on how to run jobs on the [Grace-Ho

### FirecREST

Clariden can also be accessed using [FircREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.
Clariden can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.

Check failure (Code scanning / check-spelling): `Firec` is not a recognized word. (unrecognized-spelling)

## Maintenance and status

6 changes: 3 additions & 3 deletions docs/clusters/santis.md
@@ -48,7 +48,7 @@ Currently, the following uenv are provided for the climate and weather community
* `icon/25.1`
* `climana/25.1`

In adition to the climate and weather uenv, all of the
In addition to the climate and weather uenv, all of the

??? example "using uenv provided for other clusters"
You can run uenv that were built for other Alps clusters using the `@` notation.
@@ -102,11 +102,11 @@ See the SLURM documentation for instructions on how to run jobs on the [Grace-Ho
| normal | 1266 | 1-infinite | 1-00:00:00 | 812/371 |
| xfer | 2 | 1 | 1-00:00:00 | 1/1 |
```
The last column shows the number of nodes that have been allocted in currently running jobs (`A`) and the number of jobs that are idle (`I`).
The last column shows the number of nodes that have been allocated in currently running jobs (`A`) and the number of jobs that are idle (`I`).

### FirecREST

Santis can also be accessed using [FircREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.
Santis can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.

## Maintenance and status

2 changes: 1 addition & 1 deletion docs/guides/terminal.md
@@ -48,7 +48,7 @@ Binary applications are generally not portable, for example if you compile or in
A common pattern for installing local software, for example some useful command line utilities like [ripgrep](https://github.com/BurntSushi/ripgrep), is to install them in `$HOME/.local/bin`.
This approach won't work if the same home directory is mounted on two different clusters with different architectures: the version of ripgrep in our example would crash with `Exec format error` on one of the clusters.

Care needs to be taken to store executables, configuration and data for different architecures in separate locations, and automatically configure the login environment to use the correct location when you log into different systems.
Care needs to be taken to store executables, configuration and data for different architectures in separate locations, and automatically configure the login environment to use the correct location when you log into different systems.

The following example:

2 changes: 1 addition & 1 deletion docs/index.md
@@ -32,7 +32,7 @@ The Alps Research infrastructure hosts multiple platforms and clusters targeting

[:octicons-arrow-right-24: Alps Overview](alps/index.md)

Get detailed information about the main components of the infrastructre
Get detailed information about the main components of the infrastructure

[:octicons-arrow-right-24: Alps Clusters](alps/clusters.md)

4 changes: 2 additions & 2 deletions docs/platforms/cwp/index.md
@@ -14,7 +14,7 @@ Project administrators (PIs and deputy PIs) of projects on the CWP can to invite

This is performed using the [project management tool][ref-account-waldur]

Once invited to a project, you will receive an email, which you can need to create an account and configure [multi-factor authentification][ref-mfa] (MFA).
Once invited to a project, you will receive an email, which you can need to create an account and configure [multi-factor authentication][ref-mfa] (MFA).

## Systems

@@ -62,7 +62,7 @@ Scratch is per user - each user gets separate scratch path and quota.
!!! warning "scratch cleanup policy"
Files that have not been accessed in 30 days are automatically deleted.

**Scratch is not intended for permanant storage**: transfer files back to the capstor project storage after job runs.
**Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs.

### Project

8 changes: 4 additions & 4 deletions docs/platforms/mlp/index.md
@@ -36,7 +36,7 @@ There are three main file systems mounted on the MLP clusters Clariden and Brist
| type |mount | filesystem |
| -- | -- | -- |
| Home | /users/$USER | [VAST][ref-alps-vast] |
| Scratch | `/iopstor/scratch/cscs/$USER` | [Iopstor][ref-alps-iopstor] |
| Scratch | `/iopsstor/scratch/cscs/$USER` | [Iopsstor][ref-alps-iopsstor] |
| Project | `/capstor/store/cscs/swissai/<project>` | [Capstor][ref-alps-capstor] |

### Home
@@ -50,15 +50,15 @@ Scratch filesystems provide temporary storage for high-performance I/O for execu
Use scratch to store datasets that will be accessed by jobs, and for job output.
Scratch is per user - each user gets separate scratch path and quota.

* The environment variable `SCRATCH=/iopstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch.
* The environment variable `SCRATCH=/iopsstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch.

!!! warning "scratch cleanup policy"
Files that have not been accessed in 30 days are automatically deleted.

**Scratch is not intended for permanant storage**: transfer files back to the capstor project storage after job runs.
**Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs.

!!! note
There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`, however this is not reccomended for ML workloads for performance reasons.
There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`, however this is not recommended for ML workloads for performance reasons.

### Project
