.. note::

   In the instructions below, loading both Tensorflow and PyTorch relies on the *PYTHONPATH* environment variable. In principle, this could cause conflicts, but the packages have been compiled and organized by Intel such that no conflicts should arise and both of them can safely be loaded and used at the same time.
Install and run Tensorflow
--------------------------
To install Tensorflow locally, use your preferred package manager (requires root privileges)::

   # dnf install intel-oneapi-tensorflow

The simplest way to make Tensorflow discoverable is to use::

   $ export PYTHONPATH=/<oneapi-root>/tensorflow/latest/lib/python3.9/site-packages:$PYTHONPATH

where *<oneapi-root>* is the main directory of your Intel oneAPI installation, such as */opt/intel/oneapi* (the default, and the location on Discoverer).
You can test that Tensorflow is accessible via::

   $ python -c "import tensorflow"

Only Tensorflow 2.X is available and maintained on the Discoverer HPC servers.

You can run a simple test example like so::

   $ cd /discofs/$(whoami)
   $ sbatch /opt/software/cases/python/intel/tensorflow/test.sbatch

This will run a simple network on the MNIST dataset and write the results to a *.out* file.
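The exact contents of *test.sbatch* are site-specific, but a minimal Slurm batch script for a job like this generally has the following shape (the resource values and script name below are illustrative placeholders, not Discoverer's actual settings)::

   #!/bin/bash
   # Placeholder resources; real values depend on the cluster's partitions.
   #SBATCH --job-name=tf-mnist-test
   #SBATCH --nodes=1
   #SBATCH --ntasks=1
   #SBATCH --cpus-per-task=8
   #SBATCH --time=00:15:00
   #SBATCH --output=%x-%j.out

   # Make the Intel oneAPI Tensorflow build visible to the job's interpreter.
   export PYTHONPATH=/opt/intel/oneapi/tensorflow/latest/lib/python3.9/site-packages:$PYTHONPATH

   # Hypothetical script name; the real one ships with the test case.
   python mnist_example.py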
Install and run PyTorch
-----------------------
The simplest way to make PyTorch discoverable is to use::

   $ export PYTHONPATH=/<oneapi-root>/pytorch/latest/lib/python3.9/site-packages:$PYTHONPATH

where *<oneapi-root>* is the main directory of your Intel oneAPI installation, such as */opt/intel/oneapi* (the default).
You can test that PyTorch is accessible via::

   $ python -c "import torch"
You can run a simple test example like so::

   $ cd /discofs/$(whoami)
   $ sbatch /opt/software/cases/python/intel/pytorch/test_1.sbatch

This will run a simple ML example and output the results into a *.out* file. There are also a number of other example scripts under the */opt/software/cases/python/intel/pytorch* directory.
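Once submitted, the job runs asynchronously. A typical way to follow it and inspect the results afterwards looks like this (the job ID in the sample output is illustrative)::

   $ sbatch /opt/software/cases/python/intel/pytorch/test_1.sbatch
   Submitted batch job 123456
   $ squeue -u $(whoami)    # wait until the job leaves the queue
   $ cat *.out              # inspect the results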
Using Torchvision
-----------------
You can also install and use `Torchvision <https://pytorch.org/vision/stable/index.html>`_ along with Intel oneAPI libraries by installing it via ``pip``. However, in doing so you need to make sure that you are using the ``pip`` executable which is connected to the Tensorflow installation. In other words, your *PATH* needs to point to the directory *<tensorflow-root>/latest/bin*, where *<tensorflow-root>* is the main directory of your Tensorflow installation, such as */opt/intel/oneapi/tensorflow* (the default, and the location on Discoverer). If this is not the case, you should run::

   $ export PATH=<tensorflow-root>/latest/bin:$PATH
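Which ``pip`` actually runs is decided purely by *PATH* order: the shell walks the listed directories left to right and executes the first match. A stdlib-only sketch of that lookup, using two throwaway directories as stand-ins for the real installation trees:

```python
import os
import shutil
import stat
import tempfile

# Two stand-in bin directories, each containing an executable named "pip",
# playing the roles of <tensorflow-root>/latest/bin and some other bin dir.
tf_bin = tempfile.mkdtemp()
other_bin = tempfile.mkdtemp()
for d in (tf_bin, other_bin):
    path = os.path.join(d, "pip")
    with open(path, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# With tf_bin listed first (the effect of `export PATH=<tensorflow-root>/latest/bin:$PATH`),
# lookup resolves to the Tensorflow-side pip.
search_path = os.pathsep.join([tf_bin, other_bin])
found = shutil.which("pip", path=search_path)
print(found == os.path.join(tf_bin, "pip"))   # True
```

Reversing the order of the two directories would make the lookup resolve to the other ``pip`` instead, which is exactly the situation the ``export PATH=...`` line above guards against.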