
Commit

Replaced single backtick with double backtick to properly highlight code.

twinkarma committed Oct 10, 2017
1 parent 960a9e1 commit 6143cde
Showing 4 changed files with 35 additions and 35 deletions.
14 changes: 7 additions & 7 deletions software/machine-learning/caffe.rst
@@ -47,12 +47,12 @@ This command will show the following, which is now running on a compute node: ::

.. note::

Inside the container, your home directory on the outside e.g. `/jmain01/home/JAD00X/test/test1-test` is mapped to the `/home_directory` folder inside the container.
Inside the container, your home directory on the outside e.g. ``/jmain01/home/JAD00X/test/test1-test`` is mapped to the ``/home_directory`` folder inside the container.

You can test this by using the command:
ls /home_directory

You are now inside the container where `Caffe` is installed. Let's check by asking for the version: ::
You are now inside the container where ``Caffe`` is installed. Let's check by asking for the version: ::

caffe --version

@@ -75,18 +75,18 @@ Firstly navigate to the folder you wish your script to launch from, for example w

cd ~

It is recommended that you create a **script file** e.g. `script.sh`: ::
It is recommended that you create a **script file** e.g. ``script.sh``: ::

#!/bin/bash

# Prints out Caffe's version number
caffe --version

And don't forget to make your `script.sh` executable: ::
And don't forget to make your ``script.sh`` executable: ::

chmod +x script.sh
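
If you want to double-check that the permission change took effect (an illustrative extra, not part of the original page), list the file and look for the execute bits: ::

ls -l script.sh
# the mode string should now include execute permission, e.g. -rwxr-xr-x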

Then create a **Slurm batch script** that is used to launch the code, e.g. `batch.sh`: ::
Then create a **Slurm batch script** that is used to launch the code, e.g. ``batch.sh``: ::

#!/bin/bash

@@ -112,15 +112,15 @@ Then create a **Slurm batch script** that is used to launch the code, e.g. `batc
#Launching the commands within script.sh
/jmain01/apps/docker/caffe-batch -c ./script.sh
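
The ``#SBATCH`` resource requests that sit between the shebang and the launch line are elided by the diff view above. As a rough sketch only (the node, GPU, time and job-name values below are illustrative assumptions, not taken from this commit), a complete ``batch.sh`` might look like: ::

#!/bin/bash

# Illustrative resource requests -- the real directives are elided by the diff above
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH --job-name=caffe-test

#Launching the commands within script.sh
/jmain01/apps/docker/caffe-batch -c ./script.sh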

You can then submit the job using `sbatch`: ::
You can then submit the job using ``sbatch``: ::

sbatch batch.sh

On successful submission, a job ID is given: ::

Submitted batch job 7800
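
While the job is queued or running, the usual Slurm commands can be used to follow it (shown here with the example job ID from above): ::

# List your queued and running jobs
squeue -u $USER

# Show state and accounting information for this job
sacct -j 7800

# Follow the job's standard output file as it is written
tail -f slurm-7800.out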

The output will appear in the slurm standard output file with the corresponding job ID (in this case `slurm-7800.out`). The content of the output is as follows: ::
The output will appear in the slurm standard output file with the corresponding job ID (in this case ``slurm-7800.out``). The content of the output is as follows: ::

==================
== NVIDIA Caffe ==
16 changes: 8 additions & 8 deletions software/machine-learning/tensorflow.rst
@@ -45,12 +45,12 @@ This command will show the following, which is now running on a compute node: ::

.. note::

Inside the container, your home directory on the outside e.g. `/jmain01/home/JAD00X/test/test1-test` is mapped to the `/home_directory` folder inside the container.
Inside the container, your home directory on the outside e.g. ``/jmain01/home/JAD00X/test/test1-test`` is mapped to the ``/home_directory`` folder inside the container.

You can test this by using the command:
ls /home_directory

You can test that `Tensorflow` is running on the GPU with the following python code `tftest.py` ::
You can test that ``Tensorflow`` is running on the GPU with the following python code ``tftest.py`` ::

import tensorflow as tf
# Creates a graph.
@@ -63,7 +63,7 @@ You can test that `Tensorflow` is running on the GPU with the following python c
# Runs the op.
print(sess.run(c))

Run the `tftest.py` script with the following command: ::
Run the ``tftest.py`` script with the following command: ::

python tftest.py
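
On a node with more than one GPU you can pin the test to a specific device with the standard ``CUDA_VISIBLE_DEVICES`` variable (an illustrative extra, not covered by the original page): ::

# Restrict TensorFlow to the first GPU only
CUDA_VISIBLE_DEVICES=0 python tftest.py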

@@ -81,18 +81,18 @@ Firstly navigate to the folder you wish your script to launch from, for example w

cd ~

It is recommended that you create a **script file** e.g. `script.sh`: ::
It is recommended that you create a **script file** e.g. ``script.sh``: ::

#!/bin/bash

# Run the tftest.py script, see previous section for contents
python tftest.py

And don't forget to make your `script.sh` executable: ::
And don't forget to make your ``script.sh`` executable: ::

chmod +x script.sh

Then create a **Slurm batch script** that is used to launch the code, e.g. `batch.sh`: ::
Then create a **Slurm batch script** that is used to launch the code, e.g. ``batch.sh``: ::

#!/bin/bash

@@ -118,15 +118,15 @@ Then create a **Slurm batch script** that is used to launch the code, e.g. `batc
#Launching the commands within script.sh
/jmain01/apps/docker/tensorflow-batch -c ./script.sh

You can then submit the job using `sbatch`: ::
You can then submit the job using ``sbatch``: ::

sbatch batch.sh

On successful submission, a job ID is given: ::

Submitted batch job 7800

The output will appear in the slurm standard output file with the corresponding job ID (in this case `slurm-7800.out`). The content of the output is as follows: ::
The output will appear in the slurm standard output file with the corresponding job ID (in this case ``slurm-7800.out``). The content of the output is as follows: ::

================
== TensorFlow ==
18 changes: 9 additions & 9 deletions software/machine-learning/theano.rst
@@ -47,12 +47,12 @@ This command will show the following, which is now running on a compute node: ::

.. note::

Inside the container, your home directory on the outside e.g. `/jmain01/home/JAD00X/test/test1-test` is mapped to the `/home_directory` folder inside the container.
Inside the container, your home directory on the outside e.g. ``/jmain01/home/JAD00X/test/test1-test`` is mapped to the ``/home_directory`` folder inside the container.

You can test this by using the command:
ls /home_directory

You can test that `Theano` is running on the GPU with the following python code `theanotest.py` ::
You can test that ``Theano`` is running on the GPU with the following python code ``theanotest.py`` ::

from theano import function, config, shared, sandbox
import theano.tensor as T
@@ -77,11 +77,11 @@ You can test that `Theano` is running on the GPU with the following python code
else:
print('Used the gpu')

Run the `theanotest.py` script with the following command: ::
Run the ``theanotest.py`` script with the following command: ::

THEANO_FLAGS="device=gpu" python theanotest.py

The `THEANO_FLAGS` `device` variable can be set to either `cpu` or `gpu`.
The ``THEANO_FLAGS`` ``device`` variable can be set to either ``cpu`` or ``gpu``.
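
For example, to force the same test onto the CPU for comparison (an illustrative variation; the results quoted below are from the ``gpu`` run above): ::

THEANO_FLAGS="device=cpu" python theanotest.py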

Which gives the following results: ::

@@ -109,18 +109,18 @@ Firstly navigate to the folder you wish your script to launch from, for example w

cd ~

It is recommended that you create a **script file** e.g. `script.sh`: ::
It is recommended that you create a **script file** e.g. ``script.sh``: ::

#!/bin/bash

# Run the theanotest.py script, see previous section for contents
python theanotest.py

And don't forget to make your `script.sh` executable: ::
And don't forget to make your ``script.sh`` executable: ::

chmod +x script.sh

Then create a **Slurm batch script** that is used to launch the code, e.g. `batch.sh`: ::
Then create a **Slurm batch script** that is used to launch the code, e.g. ``batch.sh``: ::

#!/bin/bash

@@ -146,15 +146,15 @@ Then create a **Slurm batch script** that is used to launch the code, e.g. `batc
#Launching the commands within script.sh
/jmain01/apps/docker/theano-batch -c ./script.sh

You can then submit the job using `sbatch`: ::
You can then submit the job using ``sbatch``: ::

sbatch batch.sh

On successful submission, a job ID is given: ::

Submitted batch job 7800

The output will appear in the slurm standard output file with the corresponding job ID (in this case `slurm-7800.out`). The content of the output is as follows: ::
The output will appear in the slurm standard output file with the corresponding job ID (in this case ``slurm-7800.out``). The content of the output is as follows: ::

============
== Theano ==
22 changes: 11 additions & 11 deletions software/machine-learning/torch.rst
@@ -49,17 +49,17 @@ This command will show the following, which is now running on a compute node: ::

.. note::

Inside the container, your home directory on the outside e.g. `/jmain01/home/JAD00X/test/test1-test` is mapped to the `/home_directory` folder inside the container.
Inside the container, your home directory on the outside e.g. ``/jmain01/home/JAD00X/test/test1-test`` is mapped to the ``/home_directory`` folder inside the container.

You can test this by using the command:
ls /home_directory

You are now inside the container where `Torch` is installed.
You are now inside the container where ``Torch`` is installed.

Torch console
^^^^^^^^^^^^^

`Torch` can be used interactively by using the `th` command: ::
``Torch`` can be used interactively by using the ``th`` command: ::

th

@@ -73,7 +73,7 @@ Where you will see the torch command prompt: ::

th>

When you're done, type `exit` and then `y` to exit the `Torch` console: ::
When you're done, type ``exit`` and then ``y`` to exit the ``Torch`` console: ::

th> exit
Do you really want to exit ([y]/n)? y
@@ -83,7 +83,7 @@ When you're done, type `exit` and then `y` to exit the `Torch` console: ::
Using LUA script
^^^^^^^^^^^^^^^^

It is also possible to pass a LUA script to the `th` command. For example, create a `test.lua` file in the current directory with the contents: ::
It is also possible to pass a LUA script to the ``th`` command. For example, create a ``test.lua`` file in the current directory with the contents: ::

torch.manualSeed(1234)
-- choose a dimension
@@ -109,7 +109,7 @@ It is also possible to pass a LUA script to the `th` command. For example, creat
print(J(torch.rand(N)))


Call the `test.lua` script by using the command: ::
Call the ``test.lua`` script by using the command: ::

th test.lua
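
For very short experiments a script file may not even be needed; assuming the ``th`` launcher in this image accepts the usual trepl flags (check ``th --help``), a single statement can be run directly: ::

# Print a random 3x3 tensor without writing a .lua file
th -e "print(torch.rand(3,3))"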

@@ -128,19 +128,19 @@ Firstly navigate to the folder you wish your script to launch from, for example w

cd ~

It is recommended that you create a **script file** e.g. `script.sh`: ::
It is recommended that you create a **script file** e.g. ``script.sh``: ::

#!/bin/bash

# Runs a script called test.lua
# see above section for contents
th test.lua

And don't forget to make your `script.sh` executable: ::
And don't forget to make your ``script.sh`` executable: ::

chmod +x script.sh

Then create a **Slurm batch script** that is used to launch the code, e.g. `batch.sh`: ::
Then create a **Slurm batch script** that is used to launch the code, e.g. ``batch.sh``: ::

#!/bin/bash

@@ -165,15 +165,15 @@ Then create a **Slurm batch script** that is used to launch the code, e.g. `batc
#Launching the commands within script.sh
/jmain01/apps/docker/torch-batch -c ./script.sh

You can then submit the job using `sbatch`: ::
You can then submit the job using ``sbatch``: ::

sbatch batch.sh

On successful submission, a job ID is given: ::

Submitted batch job 7800

The output will appear in the slurm standard output file with the corresponding job ID (in this case `slurm-7800.out`). The content of the output is as follows: ::
The output will appear in the slurm standard output file with the corresponding job ID (in this case ``slurm-7800.out``). The content of the output is as follows: ::

______ __ | Torch7
/_ __/__ ________/ / | Scientific computing for Lua.
