In this tutorial, we show how to start working with Neu.ro:
- Install the CLI client;
- Understand the core concepts;
- Start developing on GPU in Jupyter Notebooks.
To start working with the CLI, you have two options:
- Use the Web Terminal;
- Install the Neu.ro CLI on your machine.

The first option is recommended for exploring the platform, while the second is better for working on your projects.
Neu.ro CLI requires Python 3 (3.6 or newer; 3.7 recommended). We suggest using the Anaconda Python 3.7 Distribution.
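Before installing, you can confirm that your interpreter satisfies this requirement. A minimal sketch (plain Python, no Neu.ro-specific code):

```python
import sys

# The Neu.ro CLI requires Python >= 3.6, so fail fast before invoking pip.
REQUIRED = (3, 6)

if sys.version_info < REQUIRED:
    raise SystemExit(
        "Python %d.%d or newer is required, found %d.%d"
        % (REQUIRED + (sys.version_info.major, sys.version_info.minor))
    )
print("Python version OK")
```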
pip install -U neuromation
neuro login
While there are several ways to make the Neu.ro CLI work on Windows, we highly recommend using the Anaconda Python 3.7 Distribution with default installation settings.
When you have it up and running, run the following commands in Conda Prompt:
conda install -c conda-forge make
conda install -c conda-forge git
pip install -U neuromation
pip install -U certifi
neuro login
At the Neu.ro Core level, you work with jobs, environments, and storage. More specifically, you run a job (an execution unit) in a given environment (a Docker container) on a given preset (a combination of CPU, GPU, and memory resources allocated for the job) with one or more parts of storage (block or object storage) attached.
Let us show several examples.
Run a job on CPU which prints “Hello, World!” and shuts down:
neuro run --preset cpu-small --name test ubuntu echo Hello, World!
Upon execution of this command, you'll see output like this:
Job ID: job-2b743322-f53a-4211-be4e-5d493e6cc770
Status: pending
Name: test
Http URL: https://test--johndoe.jobs.neuro-ai-public.org.neu.ro
Shortcuts:
  neuro status test     # check job status
  neuro logs test       # monitor job stdout
  neuro top test        # display real-time job telemetry
  neuro exec test bash  # execute bash shell to the job
  neuro kill test       # kill job
Status: pending Creating
Status: pending Scheduling
Status: pending ContainerCreating
Status: succeeded
Terminal is attached to the remote job, so you receive the job's output.
Use 'Ctrl-C' to detach (it will NOT terminate the job), or restart the
job with `--detach` option.
Hello, World!
Run a job in the Neu.ro default environment (neuromation/base) on GPU which checks whether CUDA is available in this environment:
neuro run --preset gpu-small --name test neuromation/base python -c "import torch; print(torch.cuda.is_available());"
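If you'd like to try the same check locally before submitting the job, here is a sketch that degrades gracefully when torch is not installed on your machine (inside the job's neuromation/base environment, torch is available):

```python
# Same check the job runs, wrapped for machines without torch installed.
try:
    import torch
    print(torch.cuda.is_available())  # True if a CUDA-capable GPU is usable
except ImportError:
    print("torch is not installed locally; run the check inside the job")
```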
Check the presets you can use:
neuro config show
Create a directory demo in your platform storage root:
neuro mkdir -p storage:demo
Run a job which mounts the demo directory on storage to the /demo directory in the job container and creates a file in it:
neuro run --preset cpu-small --name test --volume storage:demo:/demo:rw ubuntu bash -c "echo Hello >> /demo/hello.txt"
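To see what the job does inside the container, you can mirror it locally; a sketch using a local ./demo directory as a stand-in for the mounted storage:demo volume:

```shell
# Local stand-in for the job's /demo mount (hypothetical path ./demo).
mkdir -p demo
echo Hello >> demo/hello.txt   # the same append the job performs
cat demo/hello.txt
```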
Check that the file is on storage:
neuro ls storage:demo
While you can start a Jupyter Notebooks session with one command in the CLI or with one click in the web UI, we recommend project-based development. To simplify the process, we provide a project template, which is a part of the Neu.ro Toolbox; it sets up the folder structure and integrations with several recommended tools.
To initialize a new project from the template, run:
neuro project init
This command asks several questions about your project:
project_name [Name of the project]: Neuro Tutorial
project_slug [neuro-tutorial]:
code_directory [modules]:
You can press Enter if you agree with the suggested choice.
To navigate to the project directory, run:

cd neuro-tutorial
After you execute the command mentioned above, you get the following structure:
neuro-tutorial
├── config/            <- configuration files for various integrations
├── data/              <- training and testing datasets (we do not keep it under source control)
├── notebooks/         <- Jupyter notebooks
├── modules/           <- source code of models
├── results/           <- training artifacts
├── .gitignore         <- default .gitignore for a Python ML project
├── HELP.md            <- autogenerated template reference
├── Makefile           <- various ML development tasks (see `make help`)
├── README.md          <- autogenerated informational file
├── apt.txt            <- list of system packages to be installed in the training environment
├── requirements.txt   <- list of Python dependencies to be installed in the training environment
└── setup.cfg          <- linter settings (Python code quality checking)
The template contains a Makefile, which guarantees the contract between the structure shown above, the base environment we provide, and operations on storage and jobs. For example, through the download-* commands, sub-folders on your local machine are synced with sub-folders on persistent platform storage, and those sub-folders are synced with the corresponding sub-folders in job containers.
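As an illustration only (the generated Makefile is the source of truth; run `make help` for the actual targets), such a sync target might look roughly like this, assuming `neuro cp -r` for recursive storage transfers and a hypothetical PROJECT variable:

```makefile
# Hypothetical sketch, not the template's actual Makefile.
PROJECT = neuro-tutorial

download-notebooks:
	neuro cp -r storage:$(PROJECT)/notebooks .
```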
To set up the project environment, run:

make setup
Upon execution of this command, system packages from apt.txt and pip dependencies from requirements.txt are installed into the base environment, which already contains CUDA support and the most popular ML/AI frameworks, like TensorFlow and PyTorch.
To start a Jupyter Notebooks session on GPU, run:

make jupyter
This command opens the Jupyter Notebooks interface in your default browser.
Now, when you edit notebooks, they are updated on your platform storage. To download them locally (for example, to put them under version control), run:

make download-notebooks
Don’t forget to terminate your job when you no longer need it (the files won’t disappear after that):

make kill-jupyter
To check how much GPU and CPU hours you have left, run:
neuro config show-quota