Running Your Code
Often, you don't start a project from scratch. Instead, you use someone else's code or your own existing code as a baseline and develop your solution on top of it. This guide demonstrates how to take an existing code base, convert it into a Neu.ro project, and start developing on the platform.

Prerequisites

    1. Make sure that you have the Neuro CLI installed and logged in.
    2. Install the neuro-flow package:
$ pip install -U neuro-flow
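If you want to double-check that both tools are set up correctly, you can query them from the terminal. This is just a quick sanity check, assuming a recent Neuro CLI (the exact subcommands may vary between versions; run neuro --help if in doubt):

$ neuro config show    # should print your user name and current cluster if you are logged in
$ neuro-flow --help    # confirms that neuro-flow is installed and on PATH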

Configuration

As an example, we'll use a GitHub repo that contains PyTorch implementations of Aspect-Based Sentiment Analysis models (see Attentional Encoder Network for Targeted Sentiment Classification for more details).
First, let's clone the repo and navigate to the created folder:
$ git clone git@github.com:songyouwei/ABSA-PyTorch.git
$ cd ABSA-PyTorch
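Before containerizing the project, it's worth glancing at its dependencies, which we'll install into the image in the next step:

$ cat requirements.txt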
Now, we need to add a couple of files:
    Dockerfile contains a very basic Docker image configuration. We need this file to build a custom Docker image that is based on the public pytorch/pytorch image and contains this repo's requirements (conveniently listed by the repo maintainer in requirements.txt):
Dockerfile
FROM pytorch/pytorch
COPY . /cfg
RUN pip install --progress-bar=off -U --no-cache-dir -r /cfg/requirements.txt
    .neuro/live.yml contains a minimal configuration that allows us to run this repo's scripts right on the platform through handy short commands:
.neuro/live.yml
kind: live
title: Sentiment Analysis Training
id: absa

volumes:
  project:
    remote: storage:${{ flow.id }}
    mount: /project
    local: .

images:
  pytorch:
    ref: image:${{ flow.id }}:v1.0
    dockerfile: ${{ flow.workspace }}/Dockerfile
    context: ${{ flow.workspace }}

jobs:
  train:
    image: ${{ images.pytorch.ref }}
    preset: gpu-small
    volumes:
      - ${{ volumes.project.ref_rw }}
    bash: |
      cd ${{ volumes.project.mount }}
      python train.py --model_name bert_spc --dataset restaurant
Here is a brief explanation of this config:
    The volumes section declares connections between your local file system and the platform storage; here we state that the entire project folder should be uploaded to the storage:absa folder on the platform and mounted inside jobs at /project;
    The images section declares the Docker images created in this project; here we declare our image, which is described in the Dockerfile above;
    The jobs section is where the action happens; here we declare a train job that runs our training script with a couple of parameters.
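For reference, neuro-flow substitutes the ${{ ... }} expressions when commands run. With id: absa as in the config above, the relevant lines resolve roughly as follows (the workspace path is illustrative; flow.workspace points at the project root on your machine):

remote: storage:${{ flow.id }}                  # -> storage:absa
ref: image:${{ flow.id }}:v1.0                  # -> image:absa:v1.0
dockerfile: ${{ flow.workspace }}/Dockerfile    # -> <project root>/Dockerfile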

Running code

Now it's time to run several commands that set up the project environment and run training.
    First, create the volumes and upload the project to the platform storage:
$ neuro-flow mkvolumes
$ neuro-flow upload ALL
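If you'd like to confirm that the upload worked, you can list the remote folder with the Neuro CLI. A quick check, assuming the storage:absa path declared by the flow (see neuro --help for the exact storage commands in your CLI version):

$ neuro ls storage:absa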
    Then, build an image:
$ neuro-flow build pytorch
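When the build finishes, the image is pushed to the platform registry under the name declared in live.yml (image:absa:v1.0). One way to verify that it's there, assuming a recent Neuro CLI, is to list your images:

$ neuro image ls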
    Finally, run training:
$ neuro-flow run train
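While training runs (and after it finishes), you can inspect the job with neuro-flow's job management commands. A sketch, using the train job id from the config above; check neuro-flow --help for the exact set of commands in your version:

$ neuro-flow ps          # list this flow's jobs and their statuses
$ neuro-flow logs train  # stream the training logs
$ neuro-flow kill train  # stop the job if needed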
Please run neuro-flow --help to get more information about available commands.