In this tutorial, we describe the recommended way to train a simple machine learning model on the Neuro platform. As our ML engineers prefer PyTorch over other ML frameworks, we show the training and evaluation of one of the basic PyTorch examples.
We assume that you have already signed up to the platform, installed the Neuro CLI, and logged in to the platform (see Getting Started).
To simplify working with the Neuro Platform and to help establish best practices in the ML environment, we provide a flow template. This template consists of the recommended directories and files and is designed to operate smoothly with our base environment.
To use it, install the cookiecutter package and initialize cookiecutter-neuro-project:
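For example (this assumes pip is available; the repository path below is the commonly used one for this template, so adjust it if yours differs):

```
$ pip install cookiecutter
$ cookiecutter gh:neuro-inc/cookiecutter-neuro-project
```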
You will then need to provide some information about the new project:
project_name [Name of the project]: Neuro Tutorial
project_dir [neuro-tutorial]:
project_id [neuro-tutorial]:
code_directory [modules]: rnn
preserve Neuro Flow template hints [yes]:
Flow configuration structure
After you execute the command mentioned above, you get the following structure:
neuro-tutorial
├── .github/ <- GitHub workflows and a dependabot.yml file
├── .neuro/ <- neuro and neuro-flow CLI configuration files
├── config/ <- configuration files for various integrations
├── data/ <- training and testing datasets (not kept under source control)
├── notebooks/ <- Jupyter notebooks
├── rnn/ <- models' source code
├── results/ <- training artifacts
├── .gitignore <- default .gitignore file for a Python ML project
├── .neuro.toml <- autogenerated config file
├── .neuroignore <- a file telling Neuro to ignore the results/ folder
├── HELP.md <- autogenerated template reference
├── README.md <- autogenerated informational file
├── Dockerfile <- description of the base image used for your project
├── apt.txt <- list of system packages to be installed in the training environment
├── requirements.txt <- list of Python dependencies to be installed in the training environment
├── setup.cfg <- linter settings (Python code quality checking)
└── update_actions.py <- helper script for updating GitHub Actions
When you run a job (for example, via neuro-flow run jupyter), the directories are mounted to the job as follows:
| Mount Point | Description | Storage URI |
| --- | --- | --- |
| /project/data/ | Training / testing data | storage:neuro-tutorial/data/ |
| /project/rnn/ | User's Python code | storage:neuro-tutorial/rnn/ |
| /project/notebooks/ | User's Jupyter notebooks | storage:neuro-tutorial/notebooks/ |
| /project/results/ | Logs and results | storage:neuro-tutorial/results/ |
This mapping is defined as variables in the top section of Makefile and can be adjusted if needed.
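In neuro-flow-based versions of the template, the same mapping is described by the volumes section of .neuro/live.yaml. As a sketch (the exact keys and volume names may differ between template versions):

```yaml
volumes:
  data:
    remote: storage:neuro-tutorial/data
    mount: /project/data
    local: data
  code:
    remote: storage:neuro-tutorial/rnn
    mount: /project/rnn
    local: rnn
  notebooks:
    remote: storage:neuro-tutorial/notebooks
    mount: /project/notebooks
    local: notebooks
  results:
    remote: storage:neuro-tutorial/results
    mount: /project/results
    local: results
```

Each entry ties together a storage location (remote), a mount point inside the job (mount), and a local directory used for uploads (local).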
Filling the flow
Now we need to fill the newly created flow with content:
When you start working with a flow on the Neuro platform, the basic flow looks as follows: you set up the remote environment, upload data and code to your storage, run training, and evaluate the results.
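Concretely, this loop boils down to three neuro-flow commands, each of which is covered in detail below:

```
$ neuro-flow build train
$ neuro-flow upload ALL
$ neuro-flow run train
```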
To set up the remote environment, run
$ neuro-flow build train
This command will run a lightweight job (via neuro run), upload apt.txt and requirements.txt, which list your dependencies (via neuro cp), install those dependencies (via neuro exec), perform other preparatory steps, and then create the base image from this job and push it to the platform (via neuro save, which works similarly to docker commit).
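For illustration only, the low-level steps that neuro-flow drives look roughly like the following; the job name, base image, and storage paths here are hypothetical, and neuro-flow handles all of this for you:

```
$ neuro run --name setup-job neuromation/base sleep infinity     # lightweight job
$ neuro cp apt.txt storage:neuro-tutorial/apt.txt                # upload dependency lists
$ neuro cp requirements.txt storage:neuro-tutorial/requirements.txt
$ neuro exec setup-job "pip install -r requirements.txt"         # install dependencies
$ neuro save setup-job image:neuro-tutorial                      # commit the job to an image
```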
To upload data and code to your storage, run
$ neuro-flow upload ALL
To run the training job, you need to specify the training script in .neuro/live.yaml and then run neuro-flow run train:
open .neuro/live.yaml in an editor,
find the following lines (make sure you're looking at the train job, not multitrain, which has a very similar section):