In this tutorial, we show how to start working with Neu.ro:
Install the CLI client;
Understand core concepts;
Start developing on GPU in Jupyter Notebooks.
To start working with the CLI, you have two options:
Use the Web Terminal
Install Neu.ro CLI on your machine and run neuro login
The first option is recommended for exploring the platform; the second is better suited for working on your own projects.
Neu.ro CLI requires Python 3 (recommended: 3.7, required: >=3.6). We suggest using the Anaconda Python 3.7 Distribution.
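To check which Python version your shell uses, run:
python --version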
pip install -U neuro-cli neuro-extras neuro-flow
neuro login
While there are several ways to make Neu.ro CLI work on Windows, we highly recommend using the Anaconda Python 3.7 Distribution with default installation settings.
When you have it up and running, run the following commands in Conda Prompt:
conda install -c conda-forge make
conda install -c conda-forge git
pip install -U neuro-cli neuro-extras neuro-flow
pip install -U certifi
neuro login
To make sure that all commands you can find in our documentation work properly, don't forget to run bash every time you open Conda Prompt.
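After installation, you can verify that the CLI is reachable from your shell (if this flag is not available in your version, neuro --help works as well):
neuro --version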
On the Neu.ro Core level, you will work with jobs, environments, and storage. More specifically, a job (an execution unit) runs in a given environment (a Docker container) on a given preset (a combination of CPU, GPU, and memory resources allocated for the job), with one or more storage instances (block or object storage) attached.
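Schematically, a single command ties these concepts together (an illustrative sketch; it assumes a storage:demo folder, which we create later in this tutorial):
# --preset picks the resources (CPU, GPU, memory) allocated to the job
# --volume attaches the storage folder storage:demo to /demo inside the job container
# ubuntu is the environment (Docker image); echo Hello is the command the job executes
neuro run --preset cpu-small --volume storage:demo:/demo:rw ubuntu echo Hello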
Let's walk through a few examples.
Run a job on CPU which prints “Hello, World!” and shuts down:
neuro run --preset cpu-small --name test ubuntu echo Hello, World!
Executing this command will result in an output like this:
√ Job ID: job-7dd12c3c-ae8d-4492-bdb9-99509fda4f8c
√ Name: test
- Status: pending Creating
- Status: pending Scheduling
√ Http URL: https://test--jane-doe.jobs.neuro-compute.org.neu.ro
√ The job will die in a day. See --life-span option documentation for details.
√ Status: succeeded
√ =========== Job is running in terminal mode ===========
√ (If you don't see a command prompt, try pressing enter)
√ (Use Ctrl-P Ctrl-Q key sequence to detach from the job)
Hello, World!
Run a job in the Neu.ro default environment (neuromation/base) on GPU which checks if CUDA is available in this environment:
neuro run --preset gpu-k80-small --name test neuromation/base python -c "import torch; print(torch.cuda.is_available());"
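If the preset provides a working GPU, the job's output should end with:
True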
Check the presets you can use:
neuro config show
Create a directory demo in your platform storage root:
neuro mkdir -p storage:demo
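To confirm that the directory was created, you can list your storage root:
neuro ls storage: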
Run a job which mounts the demo directory on storage to the /demo directory in the job container and creates a file in it:
neuro run --preset cpu-small --name test --volume storage:demo:/demo:rw ubuntu bash -c "echo Hello >> /demo/hello.txt"
Check that the file is on storage:
neuro ls storage:demo
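If everything went well, the listing contains the new file:
hello.txt
To print its contents, you can mount the same directory read-only in another job:
neuro run --preset cpu-small --volume storage:demo:/demo:ro ubuntu cat /demo/hello.txt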
While you can run a Jupyter Notebooks session with one command in the CLI or with one click in the web UI, we recommend project-based development. To simplify the process, we provide a project template as part of the Neu.ro Toolbox. This template provides the folder structure and integrations with several recommended tools.
To initialize a new project from the template, run:
neuro project init
This command asks several questions about your project:
project_name [Name of the project]: Neuro Tutorial
project_slug [neuro-tutorial]:
code_directory [modules]:
You can press Enter if you agree with the suggested choice.
To navigate to the project directory, run:
cd neuro-tutorial
The structure of the project's folder will look like this:
neuro-tutorial
├── .neuro/            <- neuro and neuro-flow CLI configuration files
├── config/            <- configuration files for various integrations
├── data/              <- training and testing datasets (we do not keep it under source control)
├── notebooks/         <- Jupyter notebooks
├── modules/           <- source code of models
├── results/           <- training artifacts
├── .gitignore         <- default .gitignore for a Python ML project
├── .neuro.toml        <- autogenerated config file
├── HELP.md            <- autogenerated template reference
├── README.md          <- autogenerated informational file
├── Dockerfile         <- description of the base image used for your project
├── apt.txt            <- list of system packages to be installed in the training environment
├── requirements.txt   <- list of Python dependencies to be installed in the training environment
└── setup.cfg          <- linter settings (Python code quality checking)
The template contains the .neuro/live.yaml configuration file for neuro-flow, which guarantees the contract between the structure shown above, the base environment we provide, and manipulations with storage and jobs. For example, the upload command synchronizes sub-folders on your local machine with sub-folders on the persistent platform storage, and those are in turn synchronized with the corresponding sub-folders in job containers.
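For example, to push your local notebooks folder to the platform storage (the counterpart of the neuro-flow download command used later in this tutorial), run:
neuro-flow upload notebooks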
To set up the project environment, run:
neuro-flow build myimage
When this command is executed, the system packages from apt.txt and the pip dependencies from requirements.txt are installed into the base environment, which already contains CUDA support and the most popular ML/AI frameworks, such as TensorFlow and PyTorch.
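For example, to add an extra dependency to the environment, you can append it to requirements.txt (the package and version here are just an illustration) and rebuild the image:
echo "scikit-learn==0.24.1" >> requirements.txt
neuro-flow build myimage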
To start a Jupyter Notebooks session on GPU, run:
neuro-flow run jupyter
This command opens the Jupyter Notebooks interface in your default browser.
Now, when you edit notebooks, they are updated on your platform storage. To download them locally (for example, to put them under version control), run:
neuro-flow download notebooks
Don’t forget to terminate your job when you no longer need it (the files won’t disappear after that):
neuro-flow kill jupyter
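To verify that no jobs are left running, you can list your active jobs:
neuro ps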
To check how many GPU and CPU hours you have left, run:
neuro config show-quota