
Virtual Environments

Info

The guide below is meant for reference only and is not meant to be followed verbatim. You may need to generate your own guide site if you require guidance specific to your own project.

Incompatibility issues

This method is not recommended as it may have unintended consequences for user and group permissions. It is highly recommended that you run locally instead. This section is provided as a complementary reference to demonstrate what is technically possible, especially should you need to debug within the Docker container.

Creating Virtual Environments

If you're planning to use the code-server development workspace described in the previous section, you should start reading from here instead.

Docker Image Debugging

While you might be using your own remote infrastructure for some workflows, you can still use your local machine to execute some steps of the end-to-end machine learning workflow. Hence, we can begin by creating a virtual environment that contains all the dependencies required for this guide. This requires building the Docker image from the Dockerfile (docker/project-cpu.Dockerfile) provided in this template:

Linux:

docker build \
    -t registry.aisingapore.net/project-path/cpu:0.1.0 \
    -f docker/project-cpu.Dockerfile .

macOS (Apple Silicon, forcing an amd64 build):

docker build \
    -t registry.aisingapore.net/project-path/cpu:0.1.0 \
    -f docker/project-cpu.Dockerfile \
    --platform linux/amd64 .

Windows PowerShell:

docker build `
    -t registry.aisingapore.net/project-path/cpu:0.1.0 `
    -f docker/project-cpu.Dockerfile .
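
Once the build completes, you can sanity-check that the image exists locally:

docker image ls registry.aisingapore.net/project-path/cpu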

Using GPUs in Docker

You can build the gpu variant by replacing cpu with gpu in the above commands, i.e.:

  • registry.aisingapore.net/project-path/cpu to registry.aisingapore.net/project-path/gpu
  • docker/project-cpu.Dockerfile to docker/project-gpu.Dockerfile
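
For instance, applying both substitutions to the Linux build command above yields this sketch:

docker build \
    -t registry.aisingapore.net/project-path/gpu:0.1.0 \
    -f docker/project-gpu.Dockerfile .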

Spin up a container from your Docker image by running:

Info

At the time of writing, GPU passthrough only works with Docker Desktop or Podman Desktop.

  • Add --gpus=all in front of the image name for Nvidia GPUs.
  • Add --device=nvidia.com/gpu=all instead if you are using Podman rather than Docker.
  • Add --device=/dev/kfd --device=/dev/dri --group-add video in front of the image name for AMD GPUs; you can also follow this guide.

Linux:

docker run -it --rm \
    -u $(id -u):$(id -g) \
    -v ./:/home/aisg/project \
    registry.aisingapore.net/project-path/cpu:0.1.0 \
    bash

macOS:

docker run -it --rm \
    -v ./:/home/aisg/project \
    registry.aisingapore.net/project-path/cpu:0.1.0 \
    bash


Windows PowerShell:

docker run -it --rm `
    -v .\:/home/aisg/project `
    registry.aisingapore.net/project-path/cpu:0.1.0 `
    bash
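
As a sketch, combining the Linux run command with the Nvidia flag from the note above gives the following for the gpu image variant:

docker run -it --rm --gpus=all \
    -u $(id -u):$(id -g) \
    -v ./:/home/aisg/project \
    registry.aisingapore.net/project-path/gpu:0.1.0 \
    bash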

You can either run python or install IPython to get an interactive shell for writing and testing snippets within the environment:

pip install ipython
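
You can then launch an interactive session, or fall back to a quick one-liner with the plain interpreter to confirm the environment works:

# Start an IPython session inside the container
ipython

# Or check the interpreter version without entering a shell
python -c "import sys; print(sys.version)"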

Why use IPython when there is the Python interpreter?

There are a number of things IPython does that the Python interpreter lacks:

  • Robust command history with search capabilities, allowing you to navigate through previously executed commands easily
  • Auto-completion, syntax highlighting and other development tools that make coding easier and faster
  • Enhanced debugging tools and interactive exception handling
  • Support for hooks and plugins that enhance the IPython experience

Using Virtual Conda Environments Within VSCode

While VSCode can make use of different virtual Python environments, some additional steps are required for the VSCode server to detect the conda environments that you have created.

  • Ensure that you are in the project folder you intend to work on. You can open a folder through File > Open Folder....

  • Install the VSCode extensions ms-python.python and ms-toolsai.jupyter. After installing these extensions, restart VSCode. If you wish to restart VSCode in-place, you can use the shortcut Ctrl + Shift + P, enter Developer: Reload Window in the prompt, and press Enter.
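
    If a code CLI is available in your workspace (an assumption; the binary may be code or code-server depending on your setup), these extensions can also be installed from the terminal:

    code --install-extension ms-python.python
    code --install-extension ms-toolsai.jupyter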

  • Ensure that you have ipykernel installed in the conda environment that you intend to use. This template by default lists the library as a dependency under requirements.txt. You can check for the library like so:

    Linux/macOS:

    conda activate project
    conda list | grep "ipykernel"

    Windows PowerShell:

    conda activate project
    conda list | Select-String "ipykernel"

Output should look similar to:

ipykernel  X.XX.XX  pypi_0  pypi
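
If ipykernel turns out to be missing, you can install it into the active conda environment before proceeding; this is a plain pip install, not specific to this template:

pip install ipykernel
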
  • Now enter Ctrl + Shift + P again and execute Python: Select Interpreter. Provide the path to the Python executable within the conda environment that you intend to use, for example: path/to/conda_env/bin/python.
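
    If you are unsure of the path, conda can list the prefix of each environment; append bin/python to the relevant prefix (on Windows, the executable sits at the prefix root as python.exe):

    conda env list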

  • Open up any Jupyter notebook and click on the Select Kernel button in the top right-hand corner. You will be presented with a selection of Python interpreters. Select the one that corresponds to the environment you intend to use.

  • Test out the kernel by running the cells in the sample notebook provided under notebooks/sample-pytorch-notebook.ipynb.
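
As an optional sanity check, you can also list the kernel specs that Jupyter detects from a terminal within the environment (assuming the jupyter CLI is installed there):

jupyter kernelspec list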