You can upload your datasets to the Platform using Neuro CLI. Neuro CLI supports basic file system operations for copying and moving files to and from the platform storage.
From your terminal or command prompt, change to the directory containing your dataset, and run:
neuro cp -r data/ storage:data/
storage:data/ indicates that the destination is on the platform storage. In a similar fashion,
neuro cp -r storage:data/ data/
downloads the dataset to your current local directory.
You can access your dataset from within a container by passing a --volume parameter to neuro run when starting a new job.
Filebrowser is a web-based file management interface. You can use it to view and manage your storage on Neuro Platform. To start the job and open the tool in your default browser, run the following command:
neuro run \
--name filebrowser \
--preset cpu-small \
--http 80 \
--volume storage::/srv:rw \
--browse \
filebrowser/filebrowser
To work with your dataset from within a container, troubleshoot a model, or get shell access to a GPU instance, you can execute a command shell within a running job in interactive mode.
To do so, copy the id of a running job (run neuro ps to see the list), and run:
neuro exec <job-id or job-name> bash
For example:
neuro exec training bash
This command starts bash within the running job and connects your terminal to it.
Assuming you have a Docker image named helloworld built on your local machine, you can push it to the Neuro Platform by running:
neuro push helloworld
After that, you can start the job by running:
neuro run image:helloworld
To kill all jobs that are currently running on your behalf, run the following command:
neuro kill `neuro -q ps -o <user>`
For example:
neuro kill `neuro -q ps -o mariyadavydova`
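The backticks here are ordinary shell command substitution: the job ids printed by the inner neuro -q ps command become the arguments of neuro kill. A quick local illustration of the mechanism, with echo standing in for both commands:

```shell
# Command substitution replaces `...` with the inner command's output,
# so the outer command receives each job id as a separate argument.
echo kill `echo job-one job-two`
# prints: kill job-one job-two
```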
Sometimes you want to execute two or three commands in a job without having to connect to it. For example, you may want to change the working directory and run training. To achieve this, wrap your commands in a "bash -c '<commands>'" call, like this:
"bash -c 'cd /project && python mnist/train.py'"
There are two ways to get the output of your running job:
- Run it without the --detach option, so the output is streamed directly to your terminal.
- Connect to a running job's output with neuro logs <JOB>, where JOB is either the id or the name of your job.
In some cases, Python buffers the output of your script, so you won't see anything until the job finishes. To overcome this problem, pass the -u option to python, like this:
"bash -c 'cd /project && python -u mnist/train.py'"
To check the usage of storage space, run the following command:
$ neuro run -v storage://:/var/storage ubuntu du -h -d 1 /var/storage
If you want to check storage space in any specific folder, just provide the folder name to this command in the following way:
$ neuro run -v storage:FOLDER_NAME:/var/storage ubuntu du -h -d 1 /var/storage
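The du -h -d 1 part of this command is standard Unix usage: it prints a human-readable total for each immediate subdirectory, one level deep. You can try the same flags on any local directory; the /tmp/du-demo tree below is created purely for illustration:

```shell
# Build a small directory tree and measure it one level deep.
mkdir -p /tmp/du-demo/data
head -c 4096 /dev/zero > /tmp/du-demo/data/sample.bin
# One line per immediate subdirectory, plus a total for /tmp/du-demo itself.
du -h -d 1 /tmp/du-demo
```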