This landing page provides an overview of the Supercomputer Fugaku operated by RIKEN R-CCS. It aggregates publicly available resources, software, and documentation. More detailed materials are available to approved users through official access programs.
What's new (2026-04): AI/ML workflows on A64FX are under continuous update. See AI / Machine Learning on Fugaku below for links to the official Fugaku AI framework guides and current status notes.
- Overview
- Account / Time Application
- Documentation
- System Access
- Software Ecosystem
- Containerization
- Libraries
- AI / Machine Learning on Fugaku ★
- HPC Applications
- Datasets
- Benchmarks
- Hardware
- Development & Optimization
- Related Projects & Initiatives
- Citation / Acknowledgment
Fugaku is a flagship exascale-class supercomputer developed by RIKEN R-CCS in collaboration with Fujitsu. It is powered by the Arm-based Fujitsu A64FX processor and is designed for high-performance computing (HPC), AI workloads, and data-centric science.
Access to Fugaku is managed through HPCI:
If this is your first attempt at obtaining a Fugaku account, the practical sequence is roughly as follows:
- Check eligibility. Most academic / industry users in Japan apply via the HPCI proposal system. Foreign users may also apply via HPCI; commercial use is supported under the fee-based programs.
- Pick a category.
- Trial use — short-duration, lightweight, recommended for first-time evaluation, porting, and benchmarking.
- Fee-based use — production-grade, longer-duration, charges per node-hour.
- Open-call (general) — competitive scientific proposals; review-based.
- Get an HPCI ID. All access goes through HPCI federated authentication. Register at the HPCI Operating Office before submitting a proposal.
- Submit a proposal through the HPCI portal. ARiSE / JST AI for Science PIs may attach the relevant project number.
- After approval: read the User Guides (R-CCS), then register SSH keys, set up the Spack environment, and submit your first job.
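The final step above can be sketched as a minimal batch script. Fugaku uses the Fujitsu TCS batch system (`pjsub` with `#PJM` directives); the resource-group name, project ID, and Spack setup path below are assumptions, so check the R-CCS User Guides for the values valid for your project.

```shell
#!/bin/bash
#PJM -L "node=1"            # single A64FX node for a first test
#PJM -L "rscgrp=small"      # resource group (assumed name; check your allocation)
#PJM -L "elapse=00:10:00"   # 10-minute wall-clock limit
#PJM -g hpXXXXXX            # your HPCI project ID (placeholder)
#PJM -S

# Load the Spack environment (path as documented in the user guide; verify locally)
. /vol0004/apps/oss/spack/share/spack/setup-env.sh

mpiexec -n 4 ./a.out        # run your freshly built test binary
```

Submit with `pjsub job.sh` and check status with `pjstat`.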
Tip for ARiSE researchers: lightweight inference / surrogate workloads typically fit within Trial use. Pair Fugaku with the AI for Science Supercomputer when GPU acceleration or the latest DL frameworks are required.
- SSH access (standard HPC workflow)
- Spack (when using a sysroot'd LLVM, see the LLVM toolchain via Spack notes)
- Fujitsu compilers (fcc / FCC / frt for C, C++, and Fortran)
- GNU toolchain (via OS and Spack)
- LLVM toolchain
- Fujitsu MPI (MPI-3 compliant)
- OpenMP support on A64FX
- Fujitsu Software Technical Computing Suite (TCSDS)
- BLAS / LAPACK / ScaLAPACK
- FFTW (Fujitsu-optimized)
- SSL2 math libraries (A64FX optimized)
- Fujitsu MPI (MPI-3 compliant)
- MPICH-Tofu (alternative MPI implementation)
More details: Fugaku Software Documentation
- HDF5
- NetCDF
- ADIOS2
- h5py
Reference (official software lists):
- Python · NumPy · SciPy · mpi4py · xarray · ASE
Reference:
Fugaku provides these libraries via Spack and pre-installed system environments.
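Once the Spack- or system-provided Python stack is loaded, a short sanity check confirms that NumPy and its underlying BLAS are working. This sketch uses only NumPy so it runs anywhere; package versions on Fugaku may differ.

```python
import numpy as np

# Small dense problem as a smoke test of the NumPy/BLAS stack
a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = a.T @ a                                      # 3x3 Gram matrix
assert b.shape == (3, 3)
# trace of A^T A equals the squared Frobenius norm of A
assert np.allclose(np.trace(b), (a * a).sum())
print("NumPy OK:", np.__version__)
```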
This section consolidates Fugaku's AI/ML software stack, including the public A64FX optimization references and current evaluation notes. It is a primary reference for ARiSE / AI for Science researchers who plan to run inference, surrogate, or agentic workloads on Fugaku.
- PyTorch (A64FX-related guide) → PyTorch on Fugaku (JP)
- TensorFlow (A64FX-related guide) → TensorFlow on Fugaku (JP)
- Horovod (distributed training) → Fugaku AI framework guide (JP)
These builds incorporate the optimizations described below; users do not typically need to compile their own framework.
Publicly available references for oneDNN and A64FX optimizations include the following repositories and guides:
Key components:
- fujitsu/dnnl_aarch64 — deep-learning kernel implementation for AArch64/SVE.
- fujitsu/pytorch — PyTorch-related information for Fugaku.
- fujitsu/tensorflow — TensorFlow-related information for Fugaku.
- RIKEN-RCCS/A64FX_Tuning_Techniques — practical A64FX optimization guidance.
Recommended starting points:
- fujitsu/dnnl_aarch64: https://github.com/fujitsu/dnnl_aarch64
- fujitsu/pytorch: https://github.com/fujitsu/pytorch
- fujitsu/tensorflow: https://github.com/fujitsu/tensorflow
- oneDNN upstream: https://github.com/oneapi-src/oneDNN
- A64FX SVE tuning techniques: https://github.com/RIKEN-RCCS/A64FX_Tuning_Techniques
- Horovod for data-parallel training over Tofu-D interconnect.
- Process / thread layout recipes for A64FX (4 CMG × 12 cores) — see the User Guides (R-CCS) and the A64FX tuning techniques.
- Apptainer containers for reproducible PyTorch / TensorFlow stacks (see singularity guide).
- ollama on A64FX (re-evaluation, ongoing). Recent versions of ollama and the underlying llama.cpp have substantially improved ARM64 / SVE support. Evaluation on A64FX is ongoing. Public update channels are this repository and related official Fugaku documentation pages.
- vLLM / Triton Inference Server. Currently targeted at GPU systems (see the AI for Science Supercomputer); Fugaku is positioned for CPU-side inference and surrogate workloads.
- dalotia — a tensor data-loader library that makes it easy to integrate inference pipelines into scientific applications
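The process/thread layout mentioned above (one MPI rank per CMG, 12 OpenMP threads each) can be sketched as a job-script fragment. The exact `pjsub` options and environment-variable choices should be verified against the R-CCS User Guides; the binary name is a placeholder.

```shell
#PJM -L "node=1"
#PJM --mpi "proc=4"           # one MPI rank per CMG (A64FX has 4 CMGs)
export OMP_NUM_THREADS=12     # 12 compute cores per CMG
export OMP_PROC_BIND=close    # keep each rank's threads within its CMG
mpiexec -n 4 ./train_or_infer # placeholder binary
```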
- Surrogate models (fast physics approximation)
- Physics-Informed Neural Networks (PINN)
- Hybrid HPC × AI workflows (simulation generates training data; AI accelerates inner loops)
- Distributed inference for agentic / scientific workflows
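The hybrid HPC × AI pattern above, in miniature: an expensive function is sampled to produce training data, and a cheap surrogate replaces it in an inner loop. This toy example uses a polynomial fit in place of a neural network and is purely illustrative.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly physics kernel (here just a smooth function)."""
    return np.sin(x) + 0.1 * x**2

# 1. The simulation generates training data
x_train = np.linspace(0.0, 3.0, 50)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate (degree-6 polynomial instead of a NN)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# 3. The surrogate accelerates the "inner loop"
x_query = np.linspace(0.0, 3.0, 500)
err = np.max(np.abs(surrogate(x_query) - expensive_simulation(x_query)))
assert err < 1e-3   # smooth target, so the fit is tight
print(f"max surrogate error: {err:.2e}")
```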
- AI framework references: see https://riken-rccs.github.io/fugaku-doc/docs/user-guide/sys-use/fugakuaiguide/build/ja/index.html and R-CCS Fugaku research / achievements.
- A64FX architecture / performance papers (Fujitsu Technical Review): https://www.fujitsu.com/global/about/resources/publications/technicalreview/
Have a paper, recipe, or notebook to share? Please open a PR against this README — the AI/ML section is updated on a rolling basis.
- GROMACS
- LAMMPS
- NAMD
- AMBER
- GENESIS (R-CCS)
- Quantum ESPRESSO
- ABINIT
- NWChem
- NTChem (R-CCS)
- Gaussian (commercial)
- SMASH
- VASP
- Quantum ESPRESSO
- OpenMX
- SALMON
- CP2K
- Phonopy
- ALAMODE
- OpenFOAM
- FrontFlow/blue (R-CCS)
- ANSYS Fluent (commercial)
- Simcenter STAR-CCM+
- SCALE (R-CCS)
- WRF
- NEMO
- BWA
- SAMtools
- BEDTools
- Picard
Bioinformatics tools are available via system modules or Spack.
- ParaView
- VisIt
- VMD
- PyMOL
These are pre-installed on Fugaku as part of the visualization suite.
Application list references (official):
Fugaku provides:
- Open-source software via Spack and pre-built environments
- R-CCS-developed applications
- Commercial ISV software (license required)
For a full list of software and availability, check the HPCI Software Resource page:
- F-DATA — A Fugaku workload dataset for job-centric predictive modelling in HPC systems.
- Data repositories (R-CCS projects): https://www.r-ccs.riken.jp/en/research/
- HPL-AI / HPL-MxP: https://github.com/RIKEN-RCCS/hpl-ai
- Fugaku benchmark results: https://www.r-ccs.riken.jp/en/fugaku/research/
- TOP500 entry: https://www.top500.org/system/179807/
- DL4Fugaku & AI/ML benchmarks: see the AI / Machine Learning on Fugaku section.
- A64FX tuning guides: https://github.com/RIKEN-RCCS/A64FX_Tuning_Techniques
- A64FX architecture (Fujitsu): https://www.fujitsu.com/global/products/computing/servers/supercomputer/a64fx/
- Tofu Interconnect D overview: https://www.r-ccs.riken.jp/en/fugaku/about/
- Additional public technical references: https://www.r-ccs.riken.jp/en/fugaku/research/
- Fugaku S3-compatible service: https://github.com/RIKEN-RCCS/fugaku-s3-service-guide
- Parallel file system (Lustre-based, internal)
- Performance tuning: https://github.com/RIKEN-RCCS/A64FX_Tuning_Techniques
- Profiling tools:
- Fujitsu profiler
- Arm MAP / Performance Reports
- AI for Science Supercomputer (GPU companion system, ARiSE platform): https://github.com/RIKEN-RCCS/AI-for-Science-Supercomputer
- RiVault (RIKEN AI inference gateway): https://github.com/RIKEN-RCCS/RiVault
- Fugaku co-design program: https://www.r-ccs.riken.jp/en/fugaku/fs2020/
- Post-Fugaku / next-gen HPC research: https://www.r-ccs.riken.jp/en/
If you use Fugaku resources, please follow the HPCI acknowledgment guidelines.