Welcome to the IstiCusi/phonon-tensor repository! These Docker scripts are designed to facilitate the development and execution of TensorFlow applications with GPU support. The container provides an enhanced development experience with LunarVim as the editor, integrated for better code management and editing in C, C++, and Python.
On some Linux distributions it is hard (if not nearly impossible) to keep all TensorFlow dependencies well resolved. While letting the host keep the newest NVIDIA drivers (including CUDA), this Docker image provides even beginners with a working environment to start GPU-based TensorFlow calculations quickly.
The docker image is based on:
- Ubuntu 22.04
- CUDA 12.3.0
- TensorFlow C/C++ library 2.16.1
To ensure smooth setup and operation of the Docker environment, the host machine needs to meet the following requirements:
- Operating System: The host must be a Linux-based system, as the scripts and configurations are specifically tailored for Linux.
- Docker: Docker must be installed to create and manage the containerized environment. This allows for the isolation of the TensorFlow setup and its dependencies.
- wget: Required for fetching files and resources over the network. Ensure wget is installed to handle downloads within the scripts.
- git: Necessary for cloning the repository and managing version control. Ensure git is installed to access the latest updates and script versions.
- Install the fonts as described here for your terminal: https://www.lunarvim.org/docs/installation/post-install
- Internet access: You need a good internet connection so that packages can be downloaded during the Docker build.
Make sure these tools are installed and properly configured before proceeding with the setup.
- GPU Support: Optimized for leveraging GPUs with TensorFlow to enhance performance for machine learning tasks.
- LunarVim Integration: A Neovim-based development environment set up to improve productivity and user experience for developers.
- Pre-configured Templates: Includes setups for C, C++, and Python, located at /docker/templates/c/, /docker/templates/cpp/, and within the Python environment, respectively. These templates are tailored to provide a quick start for projects.
- Libraries: Key libraries such as ncurses are included to support complex text-based applications within the container.
Follow these steps to get started with the Tensor Docker Container:
Clone the repository and navigate into the directory using the following commands
git clone https://github.com/IstiCusi/phonon-tensor.git
cd phonon-tensor
Execute the installer.sh script to set up the environment. This script will download the necessary TensorFlow libraries, configure directory structures, and build the Docker image. Run the script with: ./installer.sh
The script will prompt you to integrate tensor_run.sh into your .zshrc or .bashrc file for easy access to the container functionalities.
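If you skipped the installer's prompt, the integration amounts to sourcing the run script from your shell startup file. The exact path below is an assumption; check where installer.sh placed tensor_run.sh on your system.

```shell
# Hypothetical line appended to ~/.bashrc or ~/.zshrc by installer.sh;
# adjust the path if your copy of tensor_run.sh lives elsewhere.
source "$HOME/.phonon-docker/tensor_run.sh"
```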
- To start an interactive TensorFlow GPU session using the container, simply run: tensor
- To execute a Python script that utilizes TensorFlow within this environment, use: tensor <file>
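As a first script to pass to tensor, something like the following sketch can verify that the GPU is visible. The file name and function are my own invention; tf.config.list_physical_devices is a real TensorFlow API. TensorFlow is assumed to be present inside the container, and the script degrades gracefully when run elsewhere.

```python
# gpu_check.py - run inside the container with: tensor gpu_check.py

def gpu_summary():
    """Return a one-line summary of TensorFlow GPU visibility."""
    try:
        import tensorflow as tf
    except ImportError:
        # Outside the container TensorFlow may be missing; report it
        # instead of crashing.
        return "TensorFlow not installed here; run this inside the container."
    gpus = tf.config.list_physical_devices("GPU")
    return f"TensorFlow {tf.__version__}, GPUs visible: {len(gpus)}"

if __name__ == "__main__":
    print(gpu_summary())
```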
To uninstall the Docker container and clean up the modifications made during installation:
- Remove the Docker image using the command: docker rmi tensor
- Remove the tensor_run.sh sourcing from your .zshrc or .bashrc file.
- Delete the ~/.phonon-docker directory if no longer needed.
When you start the container with tensor, you will find yourself in the
/home/phonon/ folder. There you will find a workingdir directory that
points to the host folder from which you started Docker. You can therefore easily
access your working files on the host. Be aware that Docker runs under
its own root account (not your machine's root); files built inside the container
are owned by the container's root.
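One practical consequence: files the container writes into workingdir show up on the host as owned by uid 0. A sketch of how to spot and reclaim them on the host (the path name is a hypothetical placeholder):

```shell
# stat prints the numeric owner uid; 0 means the file belongs to root.
touch example.txt
stat -c '%u %n' example.txt
# On the host, reclaim container-created files with (requires sudo):
# sudo chown -R "$USER":"$USER" <path-to-files>
```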
For easy access, you can simply start the LunarVim IDE with vv. It provides
complete Python, C, and C++ syntax completion.
You will find C, C++, and Python project templates at /home/phonon/templates.
Copy them into your workingdir to start projects quickly.
For any issues, questions, or contributions, please feel free to open an issue or submit a pull request on the GitHub repository page. We are grateful to these projects and their contributors for their open-source commitments and encourage users to comply with the respective licenses if they use these tools in their own projects.
NUMA stands for Non-Uniform Memory Access, a computer memory design used in multiprocessors where the memory access time depends on the memory location relative to a processor. In NUMA, the system memory is divided into various nodes. Each node is closely associated with a specific set of CPUs or processors, which forms its local memory. Accessing memory local to a node is faster than non-local memory (memory local to another processor or node). This architecture helps in optimizing the performance of applications by minimizing memory latency.
In some cases, software that interacts with hardware directly, such as TensorFlow with CUDA for GPU-accelerated operations, might log repeated messages about NUMA configuration, especially if it encounters anomalies or default settings that don't match its expectations. One commonly observed message is about negative NUMA node values being read, which should not typically happen as there must be at least one NUMA node.
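To see whether your machine is affected, you can check which PCI devices report the negative node value TensorFlow complains about. The helper below is a hypothetical sketch (function name and structure are my own); it reads the same sysfs numa_node files the workaround writes to, and takes the sysfs root as a parameter so it can be pointed at a test directory.

```python
from pathlib import Path


def devices_with_unknown_numa(sysfs_pci="/sys/bus/pci/devices"):
    """List PCI device paths whose numa_node reads negative (unknown node)."""
    affected = []
    for dev in Path(sysfs_pci).glob("*"):
        node_file = dev / "numa_node"
        if not node_file.is_file():
            continue
        try:
            if int(node_file.read_text().strip()) < 0:
                affected.append(str(dev))
        except (ValueError, OSError):
            # Unreadable or malformed entry: skip it.
            pass
    return sorted(affected)


if __name__ == "__main__":
    for dev in devices_with_unknown_numa():
        print(dev)
```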
The command:
for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done
run on the host (inside the container this cannot be done, because the container does not have real root access to the system) informs the system that at least one NUMA node is present. On such machines this suppresses the annoying warning messages when you are running TensorFlow.
Use this with caution: the setting is not permanent and is generally considered safe, but you never know.
The idea for this workaround comes from yodi (<yodiw.com>)
This project makes use of several external tools and libraries, and we wish to acknowledge their contribution and provide information about their licenses.
TensorFlow is utilized within this Docker container for its powerful machine learning libraries and capabilities. TensorFlow is an open-source platform developed by the TensorFlow team at Google. It is available under the Apache License 2.0. We extend our thanks to the TensorFlow community for developing and maintaining such a powerful tool. More information about TensorFlow and its license can be found at https://www.tensorflow.org.
LunarVim is integrated into our development environment to enhance productivity and usability for coding. LunarVim is built on top of Neovim and is freely available under an open-source license. We thank the LunarVim and Neovim teams for providing such a versatile and powerful tool for developers. Further details about LunarVim and its licensing can be accessed at https://github.com/LunarVim/LunarVim.
Neovim is a fork of Vim aiming to improve user experience, plugins, and GUIs. Neovim is open-source software and is available under the Apache License 2.0. Appreciation goes to the Neovim community for their continuous efforts in improving the developer experience. For more information about Neovim and its licensing, visit https://neovim.io.
- The Docker container could potentially be added to GitHub.
- de-installer script
- extension of the template library and an example library
- explanation of the docker preliminaries for rookies
- better direct one button, one copy, one click installation (maybe including preliminaries)
- change to "host user"
- Add jupyter-book for publication
- TF_CPP_MIN_LOG_LEVEL 3?
- Implement classes (solvers) for STDP using chem. kinetics and check how we could potentially interface this to Keras etc
- Change nvim to NvChad; it is more practical (and has a better theme)
- Implement a direct starter for Jupyter notebooks
- Implement Windows installers and loaders
- Arrange the home directory to point directly to workingdir; more user-friendly
- Add pyglet, manim, sympy
- Figure out how Wayland fits in and how to handle it