Getting Started
Introduction
There are two things you will need to do before you start working with Neu.ro:
After this, you're free to explore the platform and its functionality. As a good starting point, we've included a section about development on GPU with Jupyter Notebooks.
Installing the CLI
Web Terminal doesn't require installation and can quickly get you familiar with Neu.ro, allowing you to work with the platform in a browser.
However, installing Neu.ro CLI locally may prove more effective for long-term use:
You won't need to pay for simply running a job, as you do in the Web UI.
Your source code and other local files will be saved directly on your machine.
Installation instructions
Installing via pipx
Our neuro-all package, available via pipx, will automatically install all required components:
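A minimal sketch, assuming pipx itself still needs to be installed; neuro-all is the bundle package this guide refers to:

```shell
# Install pipx first if you don't have it, then install the full Neu.ro bundle
python3 -m pip install --user pipx
pipx install neuro-all
```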
Installing via pip
You can also install all of the components through pip.
Neu.ro CLI requires Python 3 installed (recommended: 3.8; required: 3.7.9 or newer). We suggest using the Anaconda Python 3.8 Distribution.
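With a suitable Python version available, the installation and first login could look like this sketch:

```shell
# Install the full Neu.ro bundle and log in (a browser window will open)
pip install -U neuro-all
neuro login
```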
If your machine doesn't have a GUI, use the following command instead of neuro login:
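A sketch of the headless flow; check neuro --help if the command name differs in your CLI version:

```shell
# Prints an authentication URL; open it on any machine with a browser
# and paste the resulting authorization code back into the terminal
neuro login-headless
```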
Understanding the core concepts
On the Neu.ro Core level, you will work with jobs, environments, and storage. To be more specific, a job (an execution unit) runs in a given environment (Docker container) on a given preset (a combination of CPU, GPU, and memory resources allocated for this job) with several storage instances (block or object storage) attached.
Here are some examples.
Hello, World!
Run a job on CPU which prints “Hello, World!” and shuts down:
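The command might look like the following sketch; cpu-small is an assumed preset name, so substitute one available on your cluster:

```shell
# Run a one-off CPU job that echoes a greeting and exits
neuro run --preset cpu-small ubuntu -- echo "Hello, World!"
```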
Executing this command will show the job's status progression and, at the end, the “Hello, World!” message in its output.
A simple GPU job
Run a job on GPU in the default Neu.ro environment (neuromation/base) that checks if CUDA is available in this environment:
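A sketch using PyTorch (included in the base image) to query CUDA; the exact image tag may differ on your cluster:

```shell
# Check CUDA availability from inside the base image on a GPU preset
neuro run --preset gpu-small neuromation/base -- python -c "import torch; print(torch.cuda.is_available())"
```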
We used the gpu-small preset for this job. To see the full list of presets you can use, run the following command:
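The presets are listed as part of your configuration:

```shell
# Shows the current cluster, user, and available resource presets
neuro config show
```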
Working with platform storage
Create a new demo directory in the root directory of your platform storage:
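A sketch; -p creates missing parent directories, mirroring POSIX mkdir:

```shell
# Create the demo directory on platform storage
neuro mkdir -p storage:demo
```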
Run a job that mounts the demo directory from platform storage to the /demo directory in the job container and creates a file in it:
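A sketch, assuming the cpu-small preset; the --volume value follows the storage:&lt;path&gt;:&lt;mount-point&gt;:&lt;mode&gt; pattern:

```shell
# Mount storage:demo at /demo (read-write) and write a file into it
neuro run --preset cpu-small --volume storage:demo:/demo:rw ubuntu -- bash -c "echo Hello > /demo/hello.txt"
```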
Check that the file you have just created is actually on the storage:
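For example:

```shell
# List the contents of the demo directory on storage
neuro ls storage:demo
```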
Developing on GPU with Jupyter Notebooks
Development in Jupyter Notebooks is a good example of how the Neuro Platform can be used. While you can run a Jupyter Notebooks session in one command through the CLI or in one click in the Web UI, we recommend project-based development. To simplify the process, we provide a project template based on the cookiecutter package, which is a part of the Neu.ro Toolbox. This template provides the necessary basic folder structure and integrations with several recommended tools.
Initializing a Neuro cookiecutter project
First, you will need to install the cookiecutter package via pip or pipx:
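Either option works; pipx keeps the tool isolated from your project dependencies:

```shell
pip install cookiecutter
# or:
pipx install cookiecutter
```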
Now, to initialize a new Neuro cookiecutter project, run:
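A sketch, assuming the template lives in the neuro-inc GitHub organization (verify the repository name against the current docs):

```shell
cookiecutter gh:neuro-inc/cookiecutter-neuro-project
```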
This command will prompt you to enter some info about your new project:
Default values are shown in square brackets [ ]; press Enter to accept them.
To navigate to the project directory, run:
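The directory name depends on the project slug you entered above; "neuro-project" here is purely a hypothetical example:

```shell
cd neuro-project
```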
Project structure
The structure of the project's folder will look like this:
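A rough sketch assembled from the files mentioned in this guide; your generated project may contain additional files:

```
neuro-project/
├── neuro/
│   └── live.yaml        # neuro-flow configuration
├── code/
│   └── train.py         # training script
├── notebooks/           # Jupyter notebooks
├── apt.txt              # system packages for the environment
└── requirements.txt     # pip dependencies
```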
The template contains the neuro/live.yaml configuration file for neuro-flow. This file guarantees a proper connection between the project structure, the base environment that we provide, and actions with storage and jobs. For example, the upload command synchronizes sub-folders on your local machine with sub-folders on the persistent platform storage, and those sub-folders are synchronized with the corresponding sub-folders in job containers.
Setting up the environment and running Jupyter
To set up the project environment, run:
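The exact command depends on your template version; with neuro-flow, the setup step is typically an image build. The image name myimage is an assumption — use the name defined in neuro/live.yaml:

```shell
# Build the project image, installing apt.txt and requirements.txt on top of the base environment
neuro-flow build myimage
```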
When these commands are executed, system packages from apt.txt and pip dependencies from requirements.txt are installed into the base environment. It supports CUDA by default and contains the most popular ML/AI frameworks, such as TensorFlow and PyTorch.
For Jupyter Notebooks to run properly, the train.py script and the notebook itself should be available on the storage. Upload the code directory containing this file to the storage by using the following command:
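Assuming the template defines a code volume in neuro/live.yaml, the upload is one command:

```shell
# Sync the local code directory to platform storage
neuro-flow upload code
```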
Now you need to choose a preset on which you want to run your Jupyter jobs. To view the list of presets available on the current cluster, run:
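As before, the presets appear in your configuration output:

```shell
neuro config show
```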
To start a Jupyter Notebooks session, run:
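A sketch, assuming the template names the job jupyter in neuro/live.yaml:

```shell
# Start the jupyter job defined in the flow configuration
neuro-flow run jupyter
```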
This command will open the Jupyter Notebooks interface in your default browser.
You can also adjust the jupyter job's configuration by specifying the preset argument to reflect your preferred preset:
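A sketch of what this could look like in neuro/live.yaml; gpu-small serves only as an example value:

```yaml
jobs:
  jupyter:
    preset: gpu-small
```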
After this, each time you run a Jupyter job, it will use the specified preset by default, without the need for you to provide it in a CLI command.
You can find more information about job description arguments here.
Now, when you edit notebooks, they are updated on your platform storage. To download them locally (for example, to save them under a version control system), run:
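Assuming the template defines a notebooks volume, the download mirrors the upload:

```shell
# Copy notebooks from platform storage back to your local machine
neuro-flow download notebooks
```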
Don’t forget to terminate your job when you no longer need it (the files won’t disappear after that):
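For the jupyter job from this guide, a sketch:

```shell
# Stop the jupyter job; files on storage are preserved
neuro-flow kill jupyter
```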
To check how many credits you have left, run:
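In recent CLI versions the remaining credits are shown alongside the rest of your configuration; check neuro --help if yours differs:

```shell
neuro config show
```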