Mirror of https://github.com/clearml/clearml-agent
Synced 2025-06-26 18:16:15 +00:00

Compare commits (48 commits)
Commit SHA1s:

c43084825c, f1abee91dd, c6b04edc34, 50b847f4f7, 1f53a06299, 257dd95401,
1736d205bb, 6fef58df6c, 473a8de8bb, ff6272f48f, 1b5bcebd10, c4344d3afd,
45a44b087a, c58ffdb9f8, 54d9d77294, ce02385420, 87ffd95eaa, 522dd85d7b,
3651c85fcd, 566427d550, cc99077c92, 5f112447f7, 22c5f043aa, 860ff8911c,
799b292146, fffe8e1c3f, 8245293f7f, 6563ce70c8, 829b1d8f15, f6be64a4b5,
21f6a73f66, 77c4c79a2f, 2ad929fa00, 53f511f536, 7c87797a40, 272fa07c29,
6ce9cf7c2a, abb30ac2b8, 5bb257c46c, c65b28ed92, fce8eb6782, 9cb71b9526,
38e02ca5cd, 06bfea80bc, e660c7f2be, fc28467080, 8d47905982, a6a0b01f71
README.md (140 lines changed)
@@ -1,4 +1,4 @@
-# TRAINS Agent
+# Allegro Trains Agent
 ## Deep Learning DevOps For Everyone - Now supporting all platforms (Linux, macOS, and Windows)

 "All the Deep-Learning DevOps your research needs, and then some... Because ain't nobody got time for that"

@@ -8,27 +8,29 @@
 [](https://img.shields.io/pypi/v/trains-agent.svg)
 [](https://pypi.python.org/pypi/trains-agent/)

-**TRAINS Agent is an AI experiment cluster solution.**
+### Help improve Trains by filling our 2-min [user survey](https://allegro.ai/lp/trains-user-survey/)
+
+**Trains Agent is an AI experiment cluster solution.**

 It is a zero-configuration, fire-and-forget execution agent, which combined with trains-server provides a full AI cluster solution.

 **Full AutoML in 5 steps**
-1. Install the [TRAINS server](https://github.com/allegroai/trains-agent) (or use our [open server](https://demoapp.trains.allegro.ai))
-2. `pip install trains-agent` ([install](#installing-the-trains-agent) the TRAINS agent on any GPU machine: on-premises / cloud / ...)
-3. Add [TRAINS](https://github.com/allegroai/trains) to your code with just 2 lines & run it once (on your machine / laptop)
+1. Install the [Trains Server](https://github.com/allegroai/trains-agent) (or use our [open server](https://demoapp.trains.allegro.ai))
+2. `pip install trains-agent` ([install](#installing-the-trains-agent) the Trains Agent on any GPU machine: on-premises / cloud / ...)
+3. Add [Trains](https://github.com/allegroai/trains) to your code with just 2 lines & run it once (on your machine / laptop)
 4. Change the [parameters](#using-the-trains-agent) in the UI & schedule for [execution](#using-the-trains-agent) (or automate with an [AutoML pipeline](#automl-and-orchestration-pipelines-))
 5. :chart_with_downwards_trend: :chart_with_upwards_trend: :eyes: :beer:


-**Using the TRAINS agent, you can now set up a dynamic cluster with \*epsilon DevOps**
+**Using the Trains Agent, you can now set up a dynamic cluster with \*epsilon DevOps**

 *epsilon - Because we are scientists :triangular_ruler: and nothing is really zero work

-(Experience TRAINS live at [https://demoapp.trains.allegro.ai](https://demoapp.trains.allegro.ai))
+(Experience Trains live at [https://demoapp.trains.allegro.ai](https://demoapp.trains.allegro.ai))
 <a href="https://demoapp.trains.allegro.ai"><img src="https://raw.githubusercontent.com/allegroai/trains-agent/9f1e86c1ca45c984ee13edc9353c7b10c55d7257/docs/screenshots.gif" width="100%"></a>

 ## Simple, Flexible Experiment Orchestration
-**The TRAINS Agent was built to address the DL/ML R&D DevOps needs:**
+**The Trains Agent was built to address the DL/ML R&D DevOps needs:**

 * Easily add & remove machines from the cluster
 * Reuse machines without the need for any dedicated containers or images
@@ -49,30 +51,30 @@ If you are considering K8S for your research, also consider that you will soon b
 In our experience, handling and building the environments, having to package every experiment in a docker, managing those hundreds (or more) containers and building pipelines on top of it all, is very complicated (also, it's usually out of scope for the research team, and overwhelming even for the DevOps team).

 We feel there has to be a better way, that can be just as powerful for R&D and at the same time allow integration with K8S **when the need arises**.
-(If you already have a K8S cluster for AI, detailed instructions on how to integrate TRAINS into your K8S cluster are *coming soon*.)
+(If you already have a K8S cluster for AI, detailed instructions on how to integrate Trains into your K8S cluster are [here](https://github.com/allegroai/trains-server-k8s/tree/master/trains-server-chart) with the included [helm chart](https://github.com/allegroai/trains-server-helm).)


-## Using the TRAINS Agent
+## Using the Trains Agent
 **Full scale HPC with a click of a button**

-TRAINS Agent is a job scheduler that listens on job queue(s), pulls jobs, sets the job environments, executes the job and monitors its progress.
+The Trains Agent is a job scheduler that listens on job queue(s), pulls jobs, sets up the job environment, executes the job and monitors its progress.

-Any 'Draft' experiment can be scheduled for execution by a TRAINS agent.
+Any 'Draft' experiment can be scheduled for execution by a Trains agent.

 A previously run experiment can be put into 'Draft' state by either of two methods:
 * Using the **'Reset'** action from the experiment right-click context menu in the
-  TRAINS UI - This will clear any results and artifacts the previous run had created.
+  Trains UI - This will clear any results and artifacts the previous run had created.
 * Using the **'Clone'** action from the experiment right-click context menu in the
-  TRAINS UI - This will create a new 'Draft' experiment with the same configuration as the original experiment.
+  Trains UI - This will create a new 'Draft' experiment with the same configuration as the original experiment.

 An experiment is scheduled for execution using the **'Enqueue'** action from the experiment
-right-click context menu in the TRAINS UI and selecting the execution queue.
+right-click context menu in the Trains UI and selecting the execution queue.

 See [creating an experiment and enqueuing it for execution](#from-scratch).

-Once an experiment is enqueued, it will be picked up and executed by a TRAINS agent monitoring this queue.
+Once an experiment is enqueued, it will be picked up and executed by a Trains agent monitoring this queue.

-The TRAINS UI Workers & Queues page provides ongoing execution information:
+The Trains UI Workers & Queues page provides ongoing execution information:
 - Workers Tab: Monitor your cluster
   - Review available resources
   - Monitor machine statistics (CPU / GPU / Disk / Network)
@@ -81,16 +83,16 @@ The TRAINS UI Workers & Queues page provides ongoing execution information:
   - Cancel or abort job execution
   - Move jobs between execution queues

-### What The TRAINS Agent Actually Does
-The TRAINS agent executes experiments using the following process:
+### What The Trains Agent Actually Does
+The Trains Agent executes experiments using the following process:
 - Create a new virtual environment (or launch the selected docker image)
 - Clone the code into the virtual environment (or inside the docker)
 - Install python packages based on the package requirements listed for the experiment
-  - Special note for PyTorch: The TRAINS agent will automatically select the
+  - Special note for PyTorch: The Trains Agent will automatically select the
     torch packages based on the CUDA_VERSION environment variable of the machine
 - Execute the code, while monitoring the process
-- Log all stdout/stderr in the TRAINS UI, including the cloning and installation process, for easy debugging
-- Monitor the execution and allow you to manually abort the job using the TRAINS UI (or, in the unfortunate case of a code crash, catch the error and signal the experiment has failed)
+- Log all stdout/stderr in the Trains UI, including the cloning and installation process, for easy debugging
+- Monitor the execution and allow you to manually abort the job using the Trains UI (or, in the unfortunate case of a code crash, catch the error and signal the experiment has failed)

 ### System Design & Flow
 ```text
@@ -98,24 +100,24 @@ The TRAINS agent executes experiments using the following process:
 | GPU Machine |
 Development Machine | |
 +------------------------+ | +-------------+ |
-| Data Scientist's | +--------------+ | |TRAINS Agent | |
+| Data Scientist's | +--------------+ | |Trains Agent | |
 | DL/ML Code | | WEB UI | | | | |
 | | | | | | +---------+ | |
 | | | | | | | DL/ML | | |
 | | +--------------+ | | | Code | | |
 | | User Clones Exp #1 / . . . . . . . / | | | | | |
 | +-------------------+ | into Exp #2 / . . . . . . . / | | +---------+ | |
-| | TRAINS | | +---------------/-_____________-/ | | | |
+| | Trains | | +---------------/-_____________-/ | | | |
 | +---------+---------+ | | | | ^ | |
 +-----------|------------+ | | +------|------+ |
 | | +--------|--------+
 Auto-Magically | |
-Creates Exp #1 | The TRAINS Agent
+Creates Exp #1 | The Trains Agent
 \ User Change Hyper-Parameters Pulls Exp #2, setup the
 | | environment & clone code.
 | | Start execution with the
 +------------|------------+ | +--------------------+ new set of Hyper-Parameters.
-| +---------v---------+ | | | TRAINS-SERVER | |
+| +---------v---------+ | | | Trains Server | |
 | | Experiment #1 | | | | | |
 | +-------------------+ | | | Execution Queue | |
 | || | | | | |
@@ -126,17 +128,17 @@ Development Machine |
 | | ------------->---------------+ | |
 | | User Send Exp #2 | |Execute Exp #2 +--------------------+
 | | For Execution | +---------------+ |
-| TRAINS-SERVER | | |
+| Trains Server | | |
 +-------------------------+ +--------------------+
 ```
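The process the agent runs (listed under "What The Trains Agent Actually Does") maps to a handful of ordinary shell steps. As a rough sketch of what the agent automates for each pulled experiment (the repository URL and paths here are illustrative, not the agent's actual internals):

```bash
# Approximate manual equivalent of one agent run (illustrative only)
python3 -m venv /tmp/exp-venv                 # fresh virtual environment per experiment
. /tmp/exp-venv/bin/activate
git clone https://github.com/your-org/your-repo.git /tmp/exp-code   # clone the experiment's code
cd /tmp/exp-code
pip install -r requirements.txt              # packages listed for the experiment
python train.py                              # run the job; the agent also streams stdout/stderr to the UI
```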

-### Installing the TRAINS Agent
+### Installing the Trains Agent

 ```bash
 pip install trains-agent
 ```

-### TRAINS Agent Usage Examples
+### Trains Agent Usage Examples

 Full Interface and capabilities are available with
 ```bash
@@ -144,29 +146,30 @@ trains-agent --help
 trains-agent daemon --help
 ```

-### Configuring the TRAINS Agent
+### Configuring the Trains Agent

 ```bash
 trains-agent init
 ```

-Note: The TRAINS agent uses a cache folder to cache pip packages, apt packages and cloned repositories. The default TRAINS Agent cache folder is `~/.trains`
+Note: The Trains Agent uses a cache folder to cache pip packages, apt packages and cloned repositories. The default Trains Agent cache folder is `~/.trains`.

 See full details in your configuration file at `~/trains.conf`

-Note: The **TRAINS agent** extends the **TRAINS** configuration file `~/trains.conf`
+Note: The **Trains agent** extends the **Trains** configuration file `~/trains.conf`
 They are designed to share the same configuration file, see example [here](docs/trains.conf)

-### Running the TRAINS Agent
+### Running the Trains Agent

-For debug and experimentation, start the TRAINS agent in `foreground` mode, where all the output is printed to the screen
+For debug and experimentation, start the Trains agent in `foreground` mode, where all the output is printed to the screen
 ```bash
 trains-agent daemon --queue default --foreground
 ```

 For actual service mode, all the stdout will be stored automatically into a temporary file (no need to pipe)
+Notice: with the `--detached` flag, the *trains-agent* will run in the background
 ```bash
-trains-agent daemon --queue default
+trains-agent daemon --detached --queue default
 ```

 GPU allocation is controlled via the standard OS environment `NVIDIA_VISIBLE_DEVICES` or the `--gpus` flag (or disabled with `--cpu-only`).
@@ -175,42 +178,44 @@ If no flag is set, and `NVIDIA_VISIBLE_DEVICES` variable doesn't exist, all GPU'
 If the `--cpu-only` flag is set, or `NVIDIA_VISIBLE_DEVICES` is an empty string (""), no GPU will be allocated for the `trains-agent`

 Example: spin two agents, one per GPU on the same machine:
+Notice: with the `--detached` flag, the *trains-agent* will run in the background
 ```bash
-trains-agent daemon --gpus 0 --queue default &
-trains-agent daemon --gpus 1 --queue default &
+trains-agent daemon --detached --gpus 0 --queue default
+trains-agent daemon --detached --gpus 1 --queue default
 ```

 Example: spin two agents, pulling from the dedicated `dual_gpu` queue, two GPUs per agent
 ```bash
-trains-agent daemon --gpus 0,1 --queue dual_gpu &
-trains-agent daemon --gpus 2,3 --queue dual_gpu &
+trains-agent daemon --detached --gpus 0,1 --queue dual_gpu
+trains-agent daemon --detached --gpus 2,3 --queue dual_gpu
 ```

-#### Starting the TRAINS Agent in docker mode
+#### Starting the Trains Agent in docker mode

-For debug and experimentation, start the TRAINS agent in `foreground` mode, where all the output is printed to the screen
+For debug and experimentation, start the Trains agent in `foreground` mode, where all the output is printed to the screen
 ```bash
 trains-agent daemon --queue default --docker --foreground
 ```

 For actual service mode, all the stdout will be stored automatically into a file (no need to pipe)
+Notice: with the `--detached` flag, the *trains-agent* will run in the background
 ```bash
-trains-agent daemon --queue default --docker
+trains-agent daemon --detached --queue default --docker
 ```

 Example: spin two agents, one per GPU on the same machine, with the default nvidia/cuda docker:
 ```bash
-trains-agent daemon --gpus 0 --queue default --docker nvidia/cuda &
-trains-agent daemon --gpus 1 --queue default --docker nvidia/cuda &
+trains-agent daemon --detached --gpus 0 --queue default --docker nvidia/cuda
+trains-agent daemon --detached --gpus 1 --queue default --docker nvidia/cuda
 ```

 Example: spin two agents, pulling from the dedicated `dual_gpu` queue, two GPUs per agent, with the default nvidia/cuda docker:
 ```bash
-trains-agent daemon --gpus 0,1 --queue dual_gpu --docker nvidia/cuda &
-trains-agent daemon --gpus 2,3 --queue dual_gpu --docker nvidia/cuda &
+trains-agent daemon --detached --gpus 0,1 --queue dual_gpu --docker nvidia/cuda
+trains-agent daemon --detached --gpus 2,3 --queue dual_gpu --docker nvidia/cuda
 ```

-#### Starting the TRAINS Agent - Priority Queues
+#### Starting the Trains Agent - Priority Queues

 Priority Queues are also supported, example use case:

@@ -218,14 +223,14 @@ High priority queue: `important_jobs` Low priority queue: `default`
 ```bash
 trains-agent daemon --queue important_jobs default
 ```
-The **TRAINS agent** will first try to pull jobs from the `important_jobs` queue; only then will it fetch a job from the `default` queue.
+The **Trains Agent** will first try to pull jobs from the `important_jobs` queue; only then will it fetch a job from the `default` queue.

 Adding queues, managing job order within a queue, and moving jobs between queues are all available using the Web UI; see the example on our [open server](https://demoapp.trains.allegro.ai/workers-and-queues/queues)

-# How do I create an experiment on the TRAINS server? <a name="from-scratch"></a>
-* Integrate [TRAINS](https://github.com/allegroai/trains) with your code
+## How do I create an experiment on the Trains Server? <a name="from-scratch"></a>
+* Integrate [Trains](https://github.com/allegroai/trains) with your code
 * Execute the code on your machine (Manually / PyCharm / Jupyter Notebook)
-* As your code is running, **TRAINS** creates an experiment logging all the necessary execution information:
+* As your code is running, **Trains** creates an experiment logging all the necessary execution information:
   - Git repository link and commit ID (or an entire jupyter notebook)
   - Git diff (we're not saying you never commit and push, but still...)
   - Python packages used by your code (including specific versions used)
@@ -234,7 +239,7 @@ Adding queues, managing job order within a queue and moving jobs between queues,

 You now have a 'template' of your experiment with everything required for automated execution

-* In the TRAINS UI, right-click on the experiment and select 'clone'. A copy of your experiment will be created.
+* In the Trains UI, right-click on the experiment and select 'clone'. A copy of your experiment will be created.
 * You now have a new draft experiment cloned from your original experiment, feel free to edit it
   - Change the Hyper-Parameters
   - Switch to the latest code base of the repository
@@ -243,10 +248,31 @@ Adding queues, managing job order within a queue and moving jobs between queues,
   - Or simply change nothing to run the same experiment again...
 * Schedule the newly created experiment for execution: right-click the experiment and select 'enqueue'

-# AutoML and Orchestration Pipelines <a name="automl-pipes"></a>
-The TRAINS Agent can also be used to implement AutoML orchestration and Experiment Pipelines in conjunction with the TRAINS package.
+## Trains-Agent Services Mode <a name="services"></a>

-Sample AutoML & Orchestration examples can be found in the TRAINS [example/automl](https://github.com/allegroai/trains/tree/master/examples/automl) folder.
+Trains-Agent Services is a special mode of Trains-Agent that provides the ability to launch long-lasting jobs
+that previously had to be executed on local / dedicated machines. It allows a single agent to
+launch multiple dockers (Tasks) for different use cases. To name a few: an auto-scaler service (spinning instances
+when the need arises and the budget allows), controllers (implementing pipelines and more sophisticated DevOps logic),
+optimizers (such as Hyper-parameter Optimization or sweeping), and applications (such as interactive Bokeh apps for
+increased data transparency)
+
+Trains-Agent Services mode will spin **any** task enqueued into the specified queue.
+Every task launched by Trains-Agent Services will be registered as a new node in the system,
+providing tracking and transparency capabilities.
+Currently, trains-agent in services mode supports a CPU-only configuration. Trains-agent services mode can be launched alongside GPU agents.
+
+```bash
+trains-agent daemon --services-mode --detached --queue services --create-queue --docker ubuntu:18.04 --cpu-only
+```
+
+**Note**: It is the user's responsibility to make sure the proper tasks are pushed into the specified queue.
+
+## AutoML and Orchestration Pipelines <a name="automl-pipes"></a>
+The Trains Agent can also be used to implement AutoML orchestration and Experiment Pipelines in conjunction with the Trains package.
+
+Sample AutoML & Orchestration examples can be found in the Trains [example/automl](https://github.com/allegroai/trains/tree/master/examples/automl) folder.

 AutoML examples
 - [Toy Keras training experiment](https://github.com/allegroai/trains/blob/master/examples/automl/automl_base_template_keras_simple.py)
@@ -259,3 +285,7 @@ Experiment Pipeline examples
   - This example will "process data", and once done, will launch a copy of the 'second step' experiment-template
 - [Second step experiment](https://github.com/allegroai/trains/blob/master/examples/automl/toy_base_task.py)
   - In order to create an experiment-template in the system, this code must be executed once manually

 ## License

 Apache License, Version 2.0 (see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0.html) for more information)
docker/agent/Dockerfile (new file, 18 lines)
@@ -0,0 +1,18 @@
+# syntax = docker/dockerfile
+FROM nvidia/cuda
+
+WORKDIR /usr/agent
+
+COPY . /usr/agent
+
+RUN apt-get update
+RUN apt-get dist-upgrade -y
+RUN apt-get install -y curl python3-pip git
+RUN curl -sSL https://get.docker.com/ | sh
+RUN python3 -m pip install -U pip
+RUN python3 -m pip install trains-agent
+RUN python3 -m pip install -U "cryptography>=2.9"
+
+ENV TRAINS_DOCKER_SKIP_GPUS_FLAG=1
+
+ENTRYPOINT ["/usr/agent/entrypoint.sh"]
docker/agent/entrypoint.sh (new executable file, 19 lines)
@@ -0,0 +1,19 @@
+#!/bin/sh
+
+LOWER_PIP_UPDATE_VERSION="$(echo "$PIP_UPDATE_VERSION" | tr '[:upper:]' '[:lower:]')"
+LOWER_TRAINS_AGENT_UPDATE_VERSION="$(echo "$TRAINS_AGENT_UPDATE_VERSION" | tr '[:upper:]' '[:lower:]')"
+
+if [ "$LOWER_PIP_UPDATE_VERSION" = "yes" ] || [ "$LOWER_PIP_UPDATE_VERSION" = "true" ] ; then
+    python3 -m pip install -U pip
+elif [ ! -z "$LOWER_PIP_UPDATE_VERSION" ] ; then
+    python3 -m pip install pip$LOWER_PIP_UPDATE_VERSION ;
+fi
+
+echo "TRAINS_AGENT_UPDATE_VERSION = $LOWER_TRAINS_AGENT_UPDATE_VERSION"
+if [ "$LOWER_TRAINS_AGENT_UPDATE_VERSION" = "yes" ] || [ "$LOWER_TRAINS_AGENT_UPDATE_VERSION" = "true" ] ; then
+    python3 -m pip install trains-agent -U
+elif [ ! -z "$LOWER_TRAINS_AGENT_UPDATE_VERSION" ] ; then
+    python3 -m pip install trains-agent$LOWER_TRAINS_AGENT_UPDATE_VERSION ;
+fi
+
+python3 -m trains_agent daemon --docker "$TRAINS_AGENT_DEFAULT_BASE_DOCKER" --force-current-version $TRAINS_AGENT_EXTRA_ARGS
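Taken together, the Dockerfile and entrypoint give a self-updating agent container: the entrypoint optionally upgrades pip and trains-agent from environment variables, then starts the daemon. A usage sketch (the image tag and version pin are illustrative):

```bash
# Build the image from the repository root (tag is illustrative)
docker build -t trains-agent -f docker/agent/Dockerfile .

# "yes"/"true" upgrades to latest; any other non-empty value is appended
# verbatim to "pip install trains-agent<value>", so a pin like "==0.16.0" also works
docker run --rm \
  -e PIP_UPDATE_VERSION=true \
  -e TRAINS_AGENT_UPDATE_VERSION="==0.16.0" \
  -e TRAINS_AGENT_DEFAULT_BASE_DOCKER=nvidia/cuda \
  -e TRAINS_AGENT_EXTRA_ARGS="--queue default" \
  trains-agent
```

In practice the container also needs the host's docker socket mounted (e.g. `-v /var/run/docker.sock:/var/run/docker.sock`) so the agent can launch task containers; that detail is omitted above for brevity.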
docker/services/Dockerfile (new file, 16 lines)
@@ -0,0 +1,16 @@
+# syntax = docker/dockerfile
+FROM ubuntu:18.04
+
+WORKDIR /usr/agent
+
+COPY . /usr/agent
+
+RUN apt-get update
+RUN apt-get dist-upgrade -y
+RUN apt-get install -y curl python3-pip git
+RUN curl -sSL https://get.docker.com/ | sh
+RUN python3 -m pip install -U pip
+RUN python3 -m pip install trains-agent
+RUN python3 -m pip install -U "cryptography>=2.9"
+
+ENTRYPOINT ["/usr/agent/entrypoint.sh"]
docker/services/entrypoint.sh (new executable file, 14 lines)
@@ -0,0 +1,14 @@
+#!/bin/sh
+
+if [ -z "$TRAINS_FILES_HOST" ]; then
+    TRAINS_HOST_IP=${TRAINS_HOST_IP:-$(curl -s https://ifconfig.me/ip)}
+fi
+
+TRAINS_FILES_HOST=${TRAINS_FILES_HOST:-"http://$TRAINS_HOST_IP:8081"}
+TRAINS_WEB_HOST=${TRAINS_WEB_HOST:-"http://$TRAINS_HOST_IP:8080"}
+TRAINS_API_HOST=${TRAINS_API_HOST:-"http://$TRAINS_HOST_IP:8008"}
+
+echo $TRAINS_FILES_HOST $TRAINS_WEB_HOST $TRAINS_API_HOST 1>&2
+
+python3 -m pip install -q -U "trains-agent${TRAINS_AGENT_UPDATE_VERSION}"
+trains-agent daemon --services-mode --queue services --create-queue --docker $TRAINS_AGENT_DEFAULT_BASE_DOCKER --cpu-only $TRAINS_AGENT_EXTRA_ARGS
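The endpoint defaulting is worth tracing: the IP lookup only runs when TRAINS_FILES_HOST is unset, so either set all three endpoints explicitly or none of them (setting only TRAINS_FILES_HOST would leave the web/api hosts pointing at an empty IP). A sketch, with illustrative host names and image tag:

```bash
# Let the entrypoint derive all three endpoints from one IP
TRAINS_HOST_IP=10.0.0.5 sh docker/services/entrypoint.sh
# -> http://10.0.0.5:8081 http://10.0.0.5:8080 http://10.0.0.5:8008

# Or pin every endpoint explicitly and skip the lookup entirely
docker run --rm \
  -e TRAINS_FILES_HOST=http://trains.example.com:8081 \
  -e TRAINS_WEB_HOST=http://trains.example.com:8080 \
  -e TRAINS_API_HOST=http://trains.example.com:8008 \
  trains-agent-services
```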
@@ -13,11 +13,13 @@ api {
 }

 agent {
-    # Set GIT user/pass credentials
-    # leave blank for GIT SSH credentials
+    # Set GIT user/pass credentials (if user/pass are set, GIT protocol will be set to https)
+    # leave blank for GIT SSH credentials (set force_git_ssh_protocol=true to force SSH protocol)
     git_user=""
     git_pass=""

+    # Force GIT protocol to use SSH regardless of the git url (assumes GIT user/pass are blank)
+    force_git_ssh_protocol: false
+
     # unique name of this worker, if None, created based on hostname:process_id
     # Overridden with os environment: TRAINS_WORKER_NAME
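For one-off runs the same credentials can be supplied through the environment instead of the configuration file; the variable names below follow the agent's TRAINS_* override convention and should be treated as an assumption if your version differs:

```bash
# Per-run git credentials without editing trains.conf
# (names assumed from the agent's environment-override convention)
TRAINS_AGENT_GIT_USER=ci-bot \
TRAINS_AGENT_GIT_PASS=personal-access-token \
trains-agent daemon --queue default
```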
@@ -55,6 +57,10 @@ agent {

     # additional conda channels to use when installing with conda package manager
     conda_channels: ["pytorch", "conda-forge", ]

+    # set to True to support torch nightly build installation,
+    # notice: torch nightly builds are ephemeral and are deleted from time to time
+    torch_nightly: false,
     },

     # target folder for virtual environments builds, created when executing experiment

@@ -82,9 +88,9 @@ agent {
     # reload configuration file every daemon execution
     reload_config: false,

-    # pip cache folder used mapped into docker, for python package caching
+    # pip cache folder mapped into docker, used for python package caching
     docker_pip_cache = ~/.trains/pip-cache
-    # apt cache folder used mapped into docker, for ubuntu package caching
+    # apt cache folder mapped into docker, used for ubuntu package caching
     docker_apt_cache = ~/.trains/apt-cache

     # optional arguments to pass to docker image
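These two cache folders are what make repeated docker runs cheap: wheels and deb packages fetched by one task are reused by the next. Conceptually the mapping amounts to something like the following (the in-container mount points are illustrative; the agent chooses its own):

```bash
# Rough illustration of the cache bind-mounts the agent sets up for a task container
docker run --rm \
  -v "$HOME/.trains/pip-cache":/root/.cache/pip \
  -v "$HOME/.trains/apt-cache":/var/cache/apt/archives \
  nvidia/cuda \
  bash -c "pip3 install torch"   # later tasks hit the shared wheel cache
```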
@@ -105,6 +111,11 @@ agent {
     # optional arguments to pass to docker image
     # arguments: ["--ipc=host"]
     }

+    # CUDA versions used for Conda setup & solving PyTorch wheel packages
+    # It should be detected automatically. Override with os environment CUDA_VERSION / CUDNN_VERSION
+    # cuda_version: 10.1
+    # cudnn_version: 7.6
 }

 sdk {
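Since the comment says detection can be overridden from the environment, forcing a particular torch wheel resolution is a one-liner:

```bash
# Force the CUDA/cuDNN versions used to resolve PyTorch wheels for this run
CUDA_VERSION=10.1 CUDNN_VERSION=7.6 trains-agent daemon --queue default
```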
@@ -3,7 +3,6 @@ enum34>=0.9 ; python_version < '3.6'
 furl>=2.0.0
 future>=0.16.0
 humanfriendly>=2.1
 jsonmodels>=2.2
 jsonschema>=2.6.0
 pathlib2>=2.3.0
 psutil>=3.4.2
setup.py (27 lines changed)
@@ -4,28 +4,31 @@ TRAINS-AGENT DevOps for machine/deep learning
 https://github.com/allegroai/trains-agent
 """

+import os.path
 # Always prefer setuptools over distutils
 from setuptools import setup, find_packages
-from six import exec_
-from pathlib2 import Path
+
+
+def read_text(filepath):
+    with open(filepath, "r") as f:
+        return f.read()
+

-here = Path(__file__).resolve().parent
+here = os.path.dirname(__file__)
 # Get the long description from the README file
-long_description = (here / 'README.md').read_text()
+long_description = read_text(os.path.join(here, 'README.md'))


-def read_version_string():
-    result = {}
-    exec_((here / 'trains_agent/version.py').read_text(), result)
-    return result['__version__']
+def read_version_string(version_file):
+    for line in read_text(version_file).splitlines():
+        if line.startswith('__version__'):
+            delim = '"' if '"' in line else "'"
+            return line.split(delim)[1]
+    else:
+        raise RuntimeError("Unable to find version string.")


-version = read_version_string()
-
-requirements = (here / 'requirements.txt').read_text().splitlines()
+version = read_version_string("trains_agent/version.py")
+
+requirements = read_text(os.path.join(here, 'requirements.txt')).splitlines()

 setup(
     name='trains_agent',
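The rewritten read_version_string() scans version.py line by line instead of exec-ing it; the same extraction can be sanity-checked from a shell (a throwaway one-liner, not part of the build):

```bash
# Print the quoted value from the first __version__ line
# (tr normalizes single quotes to double, mirroring the delim fallback in setup.py)
grep -m1 '^__version__' trains_agent/version.py | tr "'" '"' | cut -d'"' -f2
```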
@@ -30,6 +30,6 @@ from trains_agent.helper.repo import VCS
     ),
 )
 def test(url, expected):
-    result = VCS.resolve_ssh_url(url)
+    result = VCS.replace_ssh_url(url)
     expected = expected or url
     assert result == expected
@@ -20,6 +20,8 @@ from .interface import get_parser
 def run_command(parser, args, command_name):
+    debug = args.debug
+    session.Session.set_debug_mode(debug)

     if command_name and command_name.lower() in ('config', 'init'):
         command_class = commands.Config
     elif len(command_name.split('.')) < 2:
@@ -9,10 +9,14 @@
     # worker_name: "trains-agent-machine1"
     worker_name: ""

-    # Set GIT user/pass credentials for cloning code, leave blank for GIT SSH credentials.
+    # Set GIT user/pass credentials (if user/pass are set, GIT protocol will be set to https)
+    # leave blank for GIT SSH credentials (set force_git_ssh_protocol=true to force SSH protocol)
     # git_user: ""
     # git_pass: ""

+    # Force GIT protocol to use SSH regardless of the git url (assumes GIT user/pass are blank)
+    force_git_ssh_protocol: false
+
     # Set the python version to use when creating the virtual environment and launching the experiment
     # Example values: "/usr/bin/python3" or "/usr/local/bin/python3.6"
     # The default is the python executing the trains_agent

@@ -26,7 +30,7 @@
     type: pip,

     # specify pip version to use (examples "<20", "==19.3.1", "", empty string will install the latest version)
-    pip_version: "<20",
+    pip_version: "<20.2",

     # virtual environment inherits packages from system
     system_site_packages: false,

@@ -39,6 +43,10 @@

     # additional conda channels to use when installing with conda package manager
     conda_channels: ["defaults", "conda-forge", "pytorch", ]

+    # set to True to support torch nightly build installation,
+    # notice: torch nightly builds are ephemeral and are deleted from time to time
+    torch_nightly: false,
     },

     # target folder for virtual environments builds, created when executing experiment

@@ -66,9 +74,9 @@
     # reload configuration file every daemon execution
     reload_config: false,

-    # pip cache folder used mapped into docker, for python package caching
+    # pip cache folder mapped into docker, used for python package caching
     docker_pip_cache = ~/.trains/pip-cache
-    # apt cache folder used mapped into docker, for ubuntu package caching
+    # apt cache folder mapped into docker, used for ubuntu package caching
     docker_apt_cache = ~/.trains/apt-cache

     # optional arguments to pass to docker image
@@ -151,7 +151,7 @@ class CreateCredentialsRequest(Request):
|
||||
|
||||
_service = "auth"
|
||||
_action = "create_credentials"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'additionalProperties': False,
|
||||
'definitions': {},
|
||||
@@ -169,7 +169,7 @@ class CreateCredentialsResponse(Response):
|
||||
"""
|
||||
_service = "auth"
|
||||
_action = "create_credentials"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -230,7 +230,7 @@ class EditUserRequest(Request):
|
||||
|
||||
_service = "auth"
|
||||
_action = "edit_user"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -287,7 +287,7 @@ class EditUserResponse(Response):
|
||||
"""
|
||||
_service = "auth"
|
||||
_action = "edit_user"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -347,7 +347,7 @@ class GetCredentialsRequest(Request):
|
||||
|
||||
_service = "auth"
|
||||
_action = "get_credentials"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'additionalProperties': False,
|
||||
'definitions': {},
|
||||
@@ -365,7 +365,7 @@ class GetCredentialsResponse(Response):
|
||||
"""
|
||||
_service = "auth"
|
||||
_action = "get_credentials"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -433,7 +433,7 @@ class LoginRequest(Request):
|
||||
|
||||
_service = "auth"
|
||||
_action = "login"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -474,7 +474,7 @@ class LoginResponse(Response):
|
||||
"""
|
||||
_service = "auth"
|
||||
_action = "login"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -510,7 +510,7 @@ class LogoutRequest(Request):
|
||||
|
||||
_service = "auth"
|
||||
_action = "logout"
|
||||
_version = "2.2"
|
||||
_version = "2.4"
|
||||
_schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}
|
||||
|
||||
|
||||
@@ -521,7 +521,7 @@ class LogoutResponse(Response):
|
||||
"""
|
||||
_service = "auth"
|
||||
_action = "logout"
|
||||
_version = "2.2"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}
|
||||
|
||||
@@ -537,7 +537,7 @@ class RevokeCredentialsRequest(Request):
|
||||
|
||||
_service = "auth"
|
||||
_action = "revoke_credentials"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -577,7 +577,7 @@ class RevokeCredentialsResponse(Response):
|
||||
"""
|
||||
_service = "auth"
|
||||
_action = "revoke_credentials"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
|
||||
@@ -19,7 +19,7 @@ class ApiexRequest(Request):
|
||||
|
||||
_service = "debug"
|
||||
_action = "apiex"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {'definitions': {}, 'properties': {}, 'required': [], 'type': 'object'}
|
||||
|
||||
|
||||
@@ -30,7 +30,7 @@ class ApiexResponse(Response):
|
||||
"""
|
||||
_service = "debug"
|
||||
_action = "apiex"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'definitions': {}, 'properties': {}, 'type': 'object'}
|
||||
|
||||
@@ -43,7 +43,7 @@ class EchoRequest(Request):
|
||||
|
||||
_service = "debug"
|
||||
_action = "echo"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {'definitions': {}, 'properties': {}, 'type': 'object'}
|
||||
|
||||
|
||||
@@ -54,7 +54,7 @@ class EchoResponse(Response):
|
||||
"""
|
||||
_service = "debug"
|
||||
_action = "echo"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'definitions': {}, 'properties': {}, 'type': 'object'}
|
||||
|
||||
@@ -65,7 +65,7 @@ class ExRequest(Request):
|
||||
|
||||
_service = "debug"
|
||||
_action = "ex"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {'definitions': {}, 'properties': {}, 'required': [], 'type': 'object'}
|
||||
|
||||
|
||||
@@ -76,7 +76,7 @@ class ExResponse(Response):
|
||||
"""
|
||||
_service = "debug"
|
||||
_action = "ex"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'definitions': {}, 'properties': {}, 'type': 'object'}
|
||||
|
||||
@@ -89,7 +89,7 @@ class PingRequest(Request):
|
||||
|
||||
_service = "debug"
|
||||
_action = "ping"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {'definitions': {}, 'properties': {}, 'type': 'object'}
|
||||
|
||||
|
||||
@@ -102,7 +102,7 @@ class PingResponse(Response):
|
||||
"""
|
||||
_service = "debug"
|
||||
_action = "ping"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -141,7 +141,7 @@ class PingAuthRequest(Request):
|
||||
|
||||
_service = "debug"
|
||||
_action = "ping_auth"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {'definitions': {}, 'properties': {}, 'type': 'object'}
|
||||
|
||||
|
||||
@@ -154,7 +154,7 @@ class PingAuthResponse(Response):
|
||||
"""
|
||||
_service = "debug"
|
||||
_action = "ping_auth"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
|
||||
@@ -734,7 +734,7 @@ class AddRequest(CompoundRequest):
|
||||
|
||||
_service = "events"
|
||||
_action = "add"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_item_prop_name = "event"
|
||||
_schema = {
|
||||
'anyOf': [
|
||||
@@ -926,7 +926,7 @@ class AddResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "add"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'additionalProperties': True, 'definitions': {}, 'type': 'object'}
|
||||
|
||||
@@ -939,7 +939,7 @@ class AddBatchRequest(BatchRequest):
|
||||
|
||||
_service = "events"
|
||||
_action = "add_batch"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_batched_request_cls = AddRequest
|
||||
|
||||
|
||||
@@ -954,7 +954,7 @@ class AddBatchResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "add_batch"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1015,7 +1015,7 @@ class DebugImagesRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "debug_images"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1098,7 +1098,7 @@ class DebugImagesResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "debug_images"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1213,7 +1213,7 @@ class DeleteForTaskRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "delete_for_task"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
|
||||
@@ -1248,7 +1248,7 @@ class DeleteForTaskResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "delete_for_task"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1293,7 +1293,7 @@ class DownloadTaskLogRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "download_task_log"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1366,7 +1366,7 @@ class DownloadTaskLogResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "download_task_log"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'definitions': {}, 'type': 'string'}
|
||||
|
||||
@@ -1385,7 +1385,7 @@ class GetMultiTaskPlotsRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_multi_task_plots"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1472,7 +1472,7 @@ class GetMultiTaskPlotsResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_multi_task_plots"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1571,7 +1571,7 @@ class GetScalarMetricDataRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_scalar_metric_data"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1628,7 +1628,7 @@ class GetScalarMetricDataResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_scalar_metric_data"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1730,7 +1730,7 @@ class GetScalarMetricsAndVariantsRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_scalar_metrics_and_variants"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'task': {'description': 'task ID', 'type': 'string'}},
|
||||
@@ -1765,7 +1765,7 @@ class GetScalarMetricsAndVariantsResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_scalar_metrics_and_variants"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1811,7 +1811,7 @@ class GetTaskEventsRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_task_events"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1928,7 +1928,7 @@ class GetTaskEventsResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_task_events"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2028,7 +2028,7 @@ class GetTaskLatestScalarValuesRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_task_latest_scalar_values"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
|
||||
@@ -2063,7 +2063,7 @@ class GetTaskLatestScalarValuesResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_task_latest_scalar_values"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2141,7 +2141,7 @@ class GetTaskLogRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_task_log"
|
||||
_version = "1.7"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2254,7 +2254,7 @@ class GetTaskLogResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_task_log"
|
||||
_version = "1.7"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2358,7 +2358,7 @@ class GetTaskPlotsRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_task_plots"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2439,7 +2439,7 @@ class GetTaskPlotsResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_task_plots"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2537,7 +2537,7 @@ class GetVectorMetricsAndVariantsRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "get_vector_metrics_and_variants"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
|
||||
@@ -2572,7 +2572,7 @@ class GetVectorMetricsAndVariantsResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "get_vector_metrics_and_variants"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2623,7 +2623,7 @@ class MultiTaskScalarMetricsIterHistogramRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "multi_task_scalar_metrics_iter_histogram"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'scalar_key_enum': {'enum': ['iter', 'timestamp', 'iso_time'], 'type': 'string'},
|
||||
@@ -2712,7 +2712,7 @@ class MultiTaskScalarMetricsIterHistogramResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "multi_task_scalar_metrics_iter_histogram"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'additionalProperties': True, 'definitions': {}, 'type': 'object'}
|
||||
|
||||
@@ -2734,7 +2734,7 @@ class ScalarMetricsIterHistogramRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "scalar_metrics_iter_histogram"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'scalar_key_enum': {'enum': ['iter', 'timestamp', 'iso_time'], 'type': 'string'},
|
||||
@@ -2816,7 +2816,7 @@ class ScalarMetricsIterHistogramResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "scalar_metrics_iter_histogram"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2860,7 +2860,7 @@ class VectorMetricsIterHistogramRequest(Request):
|
||||
|
||||
_service = "events"
|
||||
_action = "vector_metrics_iter_histogram"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2927,7 +2927,7 @@ class VectorMetricsIterHistogramResponse(Response):
|
||||
"""
|
||||
_service = "events"
|
||||
_action = "vector_metrics_iter_histogram"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
|
||||
@@ -464,7 +464,7 @@ class CreateRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "create"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -720,7 +720,7 @@ class CreateResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "create"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -779,7 +779,7 @@ class DeleteRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "delete"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -834,7 +834,7 @@ class DeleteResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "delete"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -904,7 +904,7 @@ class EditRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "edit"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1175,7 +1175,7 @@ class EditResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "edit"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1279,7 +1279,7 @@ class GetAllRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "get_all"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'multi_field_pattern_data': {
|
||||
@@ -1647,7 +1647,7 @@ class GetAllResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "get_all"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -1770,7 +1770,7 @@ class GetByIdRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "get_by_id"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'model': {'description': 'Model id', 'type': 'string'}},
|
||||
@@ -1805,7 +1805,7 @@ class GetByIdResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "get_by_id"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -1925,7 +1925,7 @@ class GetByTaskIdRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "get_by_task_id"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1961,7 +1961,7 @@ class GetByTaskIdResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "get_by_task_id"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -2087,7 +2087,7 @@ class SetReadyRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "set_ready"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2164,7 +2164,7 @@ class SetReadyResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "set_ready"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2276,7 +2276,7 @@ class UpdateRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "update"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2502,7 +2502,7 @@ class UpdateResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "update"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2581,7 +2581,7 @@ class UpdateForTaskRequest(Request):
|
||||
|
||||
_service = "models"
|
||||
_action = "update_for_task"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2752,7 +2752,7 @@ class UpdateForTaskResponse(Response):
|
||||
"""
|
||||
_service = "models"
|
||||
_action = "update_for_task"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
|
||||
@@ -1518,7 +1518,7 @@ class CloseRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "close"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1612,7 +1612,7 @@ class CloseResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "close"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1682,7 +1682,7 @@ class CompletedRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "completed"
|
||||
_version = "2.2"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -1776,7 +1776,7 @@ class CompletedResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "completed"
|
||||
_version = "2.2"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -1862,7 +1862,7 @@ class CreateRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "create"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'artifact': {
|
||||
@@ -2229,7 +2229,7 @@ class CreateResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "create"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2280,7 +2280,7 @@ class DeleteRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "delete"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2403,7 +2403,7 @@ class DeleteResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "delete"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2547,7 +2547,7 @@ class DequeueRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "dequeue"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -2624,7 +2624,7 @@ class DequeueResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "dequeue"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -2733,7 +2733,7 @@ class EditRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "edit"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'artifact': {
|
||||
@@ -3123,7 +3123,7 @@ class EditResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "edit"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -3201,7 +3201,7 @@ class EnqueueRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "enqueue"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -3296,7 +3296,7 @@ class EnqueueResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "enqueue"
|
||||
_version = "1.5"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -3386,7 +3386,7 @@ class FailedRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "failed"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -3480,7 +3480,7 @@ class FailedResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "failed"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -3587,7 +3587,7 @@ class GetAllRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "get_all"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'multi_field_pattern_data': {
|
||||
@@ -3986,7 +3986,7 @@ class GetAllResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "get_all"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -4373,7 +4373,7 @@ class GetByIdRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "get_by_id"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
|
||||
@@ -4408,7 +4408,7 @@ class GetByIdResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "get_by_id"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {
|
||||
@@ -4792,7 +4792,7 @@ class PingRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "ping"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
|
||||
@@ -4825,7 +4825,7 @@ class PingResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "ping"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}
|
||||
|
||||
@@ -4853,7 +4853,7 @@ class PublishRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "publish"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -4967,7 +4967,7 @@ class PublishResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "publish"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -5057,7 +5057,7 @@ class ResetRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "reset"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -5160,7 +5160,7 @@ class ResetResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "reset"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -5305,7 +5305,7 @@ class SetRequirementsRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "set_requirements"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -5362,7 +5362,7 @@ class SetRequirementsResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "set_requirements"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -5431,7 +5431,7 @@ class StartedRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "started"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -5527,7 +5527,7 @@ class StartedResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "started"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -5617,7 +5617,7 @@ class StopRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "stop"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -5711,7 +5711,7 @@ class StopResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "stop"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -5780,7 +5780,7 @@ class StoppedRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "stopped"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -5874,7 +5874,7 @@ class StoppedResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "stopped"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -5952,7 +5952,7 @@ class UpdateRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "update"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
'properties': {
|
||||
@@ -6120,7 +6120,7 @@ class UpdateResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "update"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -6183,7 +6183,7 @@ class UpdateBatchRequest(BatchRequest):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "update_batch"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_batched_request_cls = UpdateRequest
|
||||
|
||||
|
||||
@@ -6196,7 +6196,7 @@ class UpdateBatchResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "update_batch"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {
|
||||
'definitions': {},
|
||||
@@ -6261,7 +6261,7 @@ class ValidateRequest(Request):
|
||||
|
||||
_service = "tasks"
|
||||
_action = "validate"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
_schema = {
|
||||
'definitions': {
|
||||
'artifact': {
|
||||
@@ -6614,7 +6614,7 @@ class ValidateResponse(Response):
|
||||
"""
|
||||
_service = "tasks"
|
||||
_action = "validate"
|
||||
_version = "2.1"
|
||||
_version = "2.4"
|
||||
|
||||
_schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}
|
||||
|
||||
|
||||
@@ -151,7 +151,7 @@ class CreateCredentialsRequest(Request):

     _service = "auth"
     _action = "create_credentials"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'additionalProperties': False,
         'definitions': {},
@@ -169,7 +169,7 @@ class CreateCredentialsResponse(Response):
     """
     _service = "auth"
     _action = "create_credentials"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -230,7 +230,7 @@ class EditUserRequest(Request):

     _service = "auth"
     _action = "edit_user"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -287,7 +287,7 @@ class EditUserResponse(Response):
     """
     _service = "auth"
     _action = "edit_user"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -347,7 +347,7 @@ class GetCredentialsRequest(Request):

     _service = "auth"
     _action = "get_credentials"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'additionalProperties': False,
         'definitions': {},
@@ -365,7 +365,7 @@ class GetCredentialsResponse(Response):
     """
     _service = "auth"
     _action = "get_credentials"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -433,7 +433,7 @@ class LoginRequest(Request):

     _service = "auth"
     _action = "login"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -474,7 +474,7 @@ class LoginResponse(Response):
     """
     _service = "auth"
     _action = "login"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -510,7 +510,7 @@ class LogoutRequest(Request):

     _service = "auth"
     _action = "logout"
-    _version = "2.2"
+    _version = "2.5"
     _schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}


@@ -521,7 +521,7 @@ class LogoutResponse(Response):
     """
     _service = "auth"
     _action = "logout"
-    _version = "2.2"
+    _version = "2.5"

     _schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}

@@ -537,7 +537,7 @@ class RevokeCredentialsRequest(Request):

     _service = "auth"
     _action = "revoke_credentials"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -577,7 +577,7 @@ class RevokeCredentialsResponse(Response):
     """
     _service = "auth"
     _action = "revoke_credentials"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -19,7 +19,7 @@ class ApiexRequest(Request):

     _service = "debug"
     _action = "apiex"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {'definitions': {}, 'properties': {}, 'required': [], 'type': 'object'}


@@ -30,7 +30,7 @@ class ApiexResponse(Response):
     """
     _service = "debug"
     _action = "apiex"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}

@@ -43,7 +43,7 @@ class EchoRequest(Request):

     _service = "debug"
     _action = "echo"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}


@@ -54,7 +54,7 @@ class EchoResponse(Response):
     """
     _service = "debug"
     _action = "echo"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}

@@ -65,7 +65,7 @@ class ExRequest(Request):

     _service = "debug"
     _action = "ex"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {'definitions': {}, 'properties': {}, 'required': [], 'type': 'object'}


@@ -76,7 +76,7 @@ class ExResponse(Response):
     """
     _service = "debug"
     _action = "ex"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}

@@ -89,7 +89,7 @@ class PingRequest(Request):

     _service = "debug"
     _action = "ping"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}


@@ -102,7 +102,7 @@ class PingResponse(Response):
     """
     _service = "debug"
     _action = "ping"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -141,7 +141,7 @@ class PingAuthRequest(Request):

     _service = "debug"
     _action = "ping_auth"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}


@@ -154,7 +154,7 @@ class PingAuthResponse(Response):
     """
     _service = "debug"
     _action = "ping_auth"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -734,7 +734,7 @@ class AddRequest(CompoundRequest):

     _service = "events"
     _action = "add"
-    _version = "2.1"
+    _version = "2.5"
     _item_prop_name = "event"
     _schema = {
         'anyOf': [
@@ -926,7 +926,7 @@ class AddResponse(Response):
     """
     _service = "events"
     _action = "add"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {'additionalProperties': True, 'definitions': {}, 'type': 'object'}

@@ -939,7 +939,7 @@ class AddBatchRequest(BatchRequest):

     _service = "events"
     _action = "add_batch"
-    _version = "2.1"
+    _version = "2.5"
     _batched_request_cls = AddRequest


@@ -954,7 +954,7 @@ class AddBatchResponse(Response):
     """
     _service = "events"
     _action = "add_batch"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1015,7 +1015,7 @@ class DebugImagesRequest(Request):

     _service = "events"
     _action = "debug_images"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1098,7 +1098,7 @@ class DebugImagesResponse(Response):
     """
     _service = "events"
     _action = "debug_images"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1215,7 +1215,7 @@ class DeleteForTaskRequest(Request):

     _service = "events"
     _action = "delete_for_task"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1271,7 +1271,7 @@ class DeleteForTaskResponse(Response):
     """
     _service = "events"
     _action = "delete_for_task"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1316,7 +1316,7 @@ class DownloadTaskLogRequest(Request):

     _service = "events"
     _action = "download_task_log"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1389,7 +1389,7 @@ class DownloadTaskLogResponse(Response):
     """
     _service = "events"
     _action = "download_task_log"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {'definitions': {}, 'type': 'string'}

@@ -1408,7 +1408,7 @@ class GetMultiTaskPlotsRequest(Request):

     _service = "events"
     _action = "get_multi_task_plots"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1495,7 +1495,7 @@ class GetMultiTaskPlotsResponse(Response):
     """
     _service = "events"
     _action = "get_multi_task_plots"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1594,7 +1594,7 @@ class GetScalarMetricDataRequest(Request):

     _service = "events"
     _action = "get_scalar_metric_data"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1651,7 +1651,7 @@ class GetScalarMetricDataResponse(Response):
     """
     _service = "events"
     _action = "get_scalar_metric_data"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1753,7 +1753,7 @@ class GetScalarMetricsAndVariantsRequest(Request):

     _service = "events"
     _action = "get_scalar_metrics_and_variants"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'task': {'description': 'task ID', 'type': 'string'}},
@@ -1788,7 +1788,7 @@ class GetScalarMetricsAndVariantsResponse(Response):
     """
     _service = "events"
     _action = "get_scalar_metrics_and_variants"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1834,7 +1834,7 @@ class GetTaskEventsRequest(Request):

     _service = "events"
     _action = "get_task_events"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1951,7 +1951,7 @@ class GetTaskEventsResponse(Response):
     """
     _service = "events"
     _action = "get_task_events"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2051,7 +2051,7 @@ class GetTaskLatestScalarValuesRequest(Request):

     _service = "events"
     _action = "get_task_latest_scalar_values"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
@@ -2086,7 +2086,7 @@ class GetTaskLatestScalarValuesResponse(Response):
     """
     _service = "events"
     _action = "get_task_latest_scalar_values"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2164,7 +2164,7 @@ class GetTaskLogRequest(Request):

     _service = "events"
     _action = "get_task_log"
-    _version = "1.7"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2277,7 +2277,7 @@ class GetTaskLogResponse(Response):
     """
     _service = "events"
     _action = "get_task_log"
-    _version = "1.7"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2381,7 +2381,7 @@ class GetTaskPlotsRequest(Request):

     _service = "events"
     _action = "get_task_plots"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2462,7 +2462,7 @@ class GetTaskPlotsResponse(Response):
     """
     _service = "events"
     _action = "get_task_plots"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2560,7 +2560,7 @@ class GetVectorMetricsAndVariantsRequest(Request):

     _service = "events"
     _action = "get_vector_metrics_and_variants"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
@@ -2595,7 +2595,7 @@ class GetVectorMetricsAndVariantsResponse(Response):
     """
     _service = "events"
     _action = "get_vector_metrics_and_variants"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2646,7 +2646,7 @@ class MultiTaskScalarMetricsIterHistogramRequest(Request):

     _service = "events"
     _action = "multi_task_scalar_metrics_iter_histogram"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'scalar_key_enum': {'enum': ['iter', 'timestamp', 'iso_time'], 'type': 'string'},
@@ -2735,7 +2735,7 @@ class MultiTaskScalarMetricsIterHistogramResponse(Response):
     """
     _service = "events"
     _action = "multi_task_scalar_metrics_iter_histogram"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {'additionalProperties': True, 'definitions': {}, 'type': 'object'}

@@ -2757,7 +2757,7 @@ class ScalarMetricsIterHistogramRequest(Request):

     _service = "events"
     _action = "scalar_metrics_iter_histogram"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'scalar_key_enum': {'enum': ['iter', 'timestamp', 'iso_time'], 'type': 'string'},
@@ -2839,7 +2839,7 @@ class ScalarMetricsIterHistogramResponse(Response):
     """
     _service = "events"
     _action = "scalar_metrics_iter_histogram"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2883,7 +2883,7 @@ class VectorMetricsIterHistogramRequest(Request):

     _service = "events"
     _action = "vector_metrics_iter_histogram"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2950,7 +2950,7 @@ class VectorMetricsIterHistogramResponse(Response):
     """
     _service = "events"
     _action = "vector_metrics_iter_histogram"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -464,7 +464,7 @@ class CreateRequest(Request):

     _service = "models"
     _action = "create"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -720,7 +720,7 @@ class CreateResponse(Response):
     """
     _service = "models"
     _action = "create"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -779,7 +779,7 @@ class DeleteRequest(Request):

     _service = "models"
     _action = "delete"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -834,7 +834,7 @@ class DeleteResponse(Response):
     """
     _service = "models"
     _action = "delete"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -904,7 +904,7 @@ class EditRequest(Request):

     _service = "models"
     _action = "edit"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1175,7 +1175,7 @@ class EditResponse(Response):
     """
     _service = "models"
     _action = "edit"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1279,7 +1279,7 @@ class GetAllRequest(Request):

     _service = "models"
     _action = "get_all"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'multi_field_pattern_data': {
@@ -1647,7 +1647,7 @@ class GetAllResponse(Response):
     """
     _service = "models"
     _action = "get_all"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1770,7 +1770,7 @@ class GetByIdRequest(Request):

     _service = "models"
     _action = "get_by_id"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'model': {'description': 'Model id', 'type': 'string'}},
@@ -1805,7 +1805,7 @@ class GetByIdResponse(Response):
     """
     _service = "models"
     _action = "get_by_id"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1925,7 +1925,7 @@ class GetByTaskIdRequest(Request):

     _service = "models"
     _action = "get_by_task_id"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1961,7 +1961,7 @@ class GetByTaskIdResponse(Response):
     """
     _service = "models"
     _action = "get_by_task_id"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -2087,7 +2087,7 @@ class SetReadyRequest(Request):

     _service = "models"
     _action = "set_ready"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2164,7 +2164,7 @@ class SetReadyResponse(Response):
     """
     _service = "models"
     _action = "set_ready"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2276,7 +2276,7 @@ class UpdateRequest(Request):

     _service = "models"
     _action = "update"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2502,7 +2502,7 @@ class UpdateResponse(Response):
     """
     _service = "models"
     _action = "update"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2581,7 +2581,7 @@ class UpdateForTaskRequest(Request):

     _service = "models"
     _action = "update_for_task"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2752,7 +2752,7 @@ class UpdateForTaskResponse(Response):
     """
     _service = "models"
     _action = "update_for_task"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -365,7 +365,7 @@ class AddTaskRequest(Request):

     _service = "queues"
     _action = "add_task"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -417,7 +417,7 @@ class AddTaskResponse(Response):
     """
     _service = "queues"
     _action = "add_task"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -466,7 +466,7 @@ class CreateRequest(Request):

     _service = "queues"
     _action = "create"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -548,7 +548,7 @@ class CreateResponse(Response):
     """
     _service = "queues"
     _action = "create"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -588,7 +588,7 @@ class DeleteRequest(Request):

     _service = "queues"
     _action = "delete"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -644,7 +644,7 @@ class DeleteResponse(Response):
     """
     _service = "queues"
     _action = "delete"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -714,7 +714,7 @@ class GetAllRequest(Request):

     _service = "queues"
     _action = "get_all"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -918,7 +918,7 @@ class GetAllResponse(Response):
     """
     _service = "queues"
     _action = "get_all"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1017,7 +1017,7 @@ class GetByIdRequest(Request):

     _service = "queues"
     _action = "get_by_id"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'queue': {'description': 'Queue ID', 'type': 'string'}},
@@ -1052,7 +1052,7 @@ class GetByIdResponse(Response):
     """
     _service = "queues"
     _action = "get_by_id"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1144,7 +1144,7 @@ class GetDefaultRequest(Request):

     _service = "queues"
     _action = "get_default"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'additionalProperties': False,
         'definitions': {},
@@ -1164,7 +1164,7 @@ class GetDefaultResponse(Response):
     """
     _service = "queues"
     _action = "get_default"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1217,7 +1217,7 @@ class GetNextTaskRequest(Request):

     _service = "queues"
     _action = "get_next_task"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'queue': {'description': 'Queue id', 'type': 'string'}},
@@ -1252,7 +1252,7 @@ class GetNextTaskResponse(Response):
     """
     _service = "queues"
     _action = "get_next_task"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1319,7 +1319,7 @@ class GetQueueMetricsRequest(Request):

     _service = "queues"
     _action = "get_queue_metrics"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1420,7 +1420,7 @@ class GetQueueMetricsResponse(Response):
     """
     _service = "queues"
     _action = "get_queue_metrics"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1494,7 +1494,7 @@ class MoveTaskBackwardRequest(Request):

     _service = "queues"
     _action = "move_task_backward"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1567,7 +1567,7 @@ class MoveTaskBackwardResponse(Response):
     """
     _service = "queues"
     _action = "move_task_backward"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1615,7 +1615,7 @@ class MoveTaskForwardRequest(Request):

     _service = "queues"
     _action = "move_task_forward"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1688,7 +1688,7 @@ class MoveTaskForwardResponse(Response):
     """
     _service = "queues"
     _action = "move_task_forward"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1731,7 +1731,7 @@ class MoveTaskToBackRequest(Request):

     _service = "queues"
     _action = "move_task_to_back"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1784,7 +1784,7 @@ class MoveTaskToBackResponse(Response):
     """
     _service = "queues"
     _action = "move_task_to_back"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1827,7 +1827,7 @@ class MoveTaskToFrontRequest(Request):

     _service = "queues"
     _action = "move_task_to_front"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1880,7 +1880,7 @@ class MoveTaskToFrontResponse(Response):
     """
     _service = "queues"
     _action = "move_task_to_front"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1925,7 +1925,7 @@ class RemoveTaskRequest(Request):

     _service = "queues"
     _action = "remove_task"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1977,7 +1977,7 @@ class RemoveTaskResponse(Response):
     """
     _service = "queues"
     _action = "remove_task"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2028,7 +2028,7 @@ class UpdateRequest(Request):

     _service = "queues"
     _action = "update"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2127,7 +2127,7 @@ class UpdateResponse(Response):
     """
     _service = "queues"
     _action = "update"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1826,7 +1826,7 @@ class CloneResponse(Response):
     """
     _service = "tasks"
     _action = "clone"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -1870,7 +1870,7 @@ class CloseRequest(Request):

     _service = "tasks"
     _action = "close"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1964,7 +1964,7 @@ class CloseResponse(Response):
     """
     _service = "tasks"
     _action = "close"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2034,7 +2034,7 @@ class CompletedRequest(Request):

     _service = "tasks"
     _action = "completed"
-    _version = "2.2"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2128,7 +2128,7 @@ class CompletedResponse(Response):
     """
     _service = "tasks"
     _action = "completed"
-    _version = "2.2"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2214,7 +2214,7 @@ class CreateRequest(Request):

     _service = "tasks"
     _action = "create"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'artifact': {
@@ -2588,7 +2588,7 @@ class CreateResponse(Response):
     """
     _service = "tasks"
     _action = "create"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2639,7 +2639,7 @@ class DeleteRequest(Request):

     _service = "tasks"
     _action = "delete"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2762,7 +2762,7 @@ class DeleteResponse(Response):
     """
     _service = "tasks"
     _action = "delete"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -2906,7 +2906,7 @@ class DequeueRequest(Request):

     _service = "tasks"
     _action = "dequeue"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2983,7 +2983,7 @@ class DequeueResponse(Response):
     """
     _service = "tasks"
     _action = "dequeue"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -3092,7 +3092,7 @@ class EditRequest(Request):

     _service = "tasks"
     _action = "edit"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'artifact': {
@@ -3486,7 +3486,7 @@ class EditResponse(Response):
     """
     _service = "tasks"
     _action = "edit"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -3564,7 +3564,7 @@ class EnqueueRequest(Request):

     _service = "tasks"
     _action = "enqueue"
-    _version = "1.5"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -3659,7 +3659,7 @@ class EnqueueResponse(Response):
     """
     _service = "tasks"
     _action = "enqueue"
-    _version = "1.5"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -3749,7 +3749,7 @@ class FailedRequest(Request):

     _service = "tasks"
     _action = "failed"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -3843,7 +3843,7 @@ class FailedResponse(Response):
     """
     _service = "tasks"
     _action = "failed"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -3950,7 +3950,7 @@ class GetAllRequest(Request):

     _service = "tasks"
     _action = "get_all"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'multi_field_pattern_data': {
@@ -4354,7 +4354,7 @@ class GetAllResponse(Response):
     """
     _service = "tasks"
     _action = "get_all"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -4748,7 +4748,7 @@ class GetByIdRequest(Request):

     _service = "tasks"
     _action = "get_by_id"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
@@ -4783,7 +4783,7 @@ class GetByIdResponse(Response):
     """
     _service = "tasks"
     _action = "get_by_id"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -5174,7 +5174,7 @@ class PingRequest(Request):

     _service = "tasks"
     _action = "ping"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {'task': {'description': 'Task ID', 'type': 'string'}},
@@ -5207,7 +5207,7 @@ class PingResponse(Response):
     """
     _service = "tasks"
     _action = "ping"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}

@@ -5235,7 +5235,7 @@ class PublishRequest(Request):

     _service = "tasks"
     _action = "publish"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -5349,7 +5349,7 @@ class PublishResponse(Response):
     """
     _service = "tasks"
     _action = "publish"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -5439,7 +5439,7 @@ class ResetRequest(Request):

     _service = "tasks"
     _action = "reset"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -5544,7 +5544,7 @@ class ResetResponse(Response):
     """
     _service = "tasks"
     _action = "reset"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -5707,7 +5707,7 @@ class SetRequirementsRequest(Request):

     _service = "tasks"
     _action = "set_requirements"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -5764,7 +5764,7 @@ class SetRequirementsResponse(Response):
     """
     _service = "tasks"
     _action = "set_requirements"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -5833,7 +5833,7 @@ class StartedRequest(Request):

     _service = "tasks"
     _action = "started"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -5929,7 +5929,7 @@ class StartedResponse(Response):
     """
     _service = "tasks"
     _action = "started"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -6019,7 +6019,7 @@ class StopRequest(Request):

     _service = "tasks"
     _action = "stop"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -6113,7 +6113,7 @@ class StopResponse(Response):
     """
     _service = "tasks"
     _action = "stop"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -6182,7 +6182,7 @@ class StoppedRequest(Request):

     _service = "tasks"
     _action = "stopped"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -6276,7 +6276,7 @@ class StoppedResponse(Response):
     """
     _service = "tasks"
     _action = "stopped"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -6354,7 +6354,7 @@ class UpdateRequest(Request):

     _service = "tasks"
     _action = "update"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -6522,7 +6522,7 @@ class UpdateResponse(Response):
     """
     _service = "tasks"
     _action = "update"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -6585,7 +6585,7 @@ class UpdateBatchRequest(BatchRequest):

     _service = "tasks"
     _action = "update_batch"
-    _version = "2.1"
+    _version = "2.5"
     _batched_request_cls = UpdateRequest


@@ -6598,7 +6598,7 @@ class UpdateBatchResponse(Response):
     """
     _service = "tasks"
     _action = "update_batch"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {
         'definitions': {},
@@ -6663,7 +6663,7 @@ class ValidateRequest(Request):

     _service = "tasks"
     _action = "validate"
-    _version = "2.1"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'artifact': {
@@ -7023,7 +7023,7 @@ class ValidateResponse(Response):
     """
     _service = "tasks"
     _action = "validate"
-    _version = "2.1"
+    _version = "2.5"

     _schema = {'additionalProperties': False, 'definitions': {}, 'type': 'object'}

@@ -1237,7 +1237,7 @@ class GetActivityReportRequest(Request):

     _service = "workers"
     _action = "get_activity_report"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1319,7 +1319,7 @@ class GetActivityReportResponse(Response):
     """
     _service = "workers"
     _action = "get_activity_report"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1405,7 +1405,7 @@ class GetAllRequest(Request):

     _service = "workers"
     _action = "get_all"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1447,7 +1447,7 @@ class GetAllResponse(Response):
     """
     _service = "workers"
     _action = "get_all"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1601,7 +1601,7 @@ class GetMetricKeysRequest(Request):

     _service = "workers"
     _action = "get_metric_keys"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -1644,7 +1644,7 @@ class GetMetricKeysResponse(Response):
     """
     _service = "workers"
     _action = "get_metric_keys"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1718,7 +1718,7 @@ class GetStatsRequest(Request):

     _service = "workers"
     _action = "get_stats"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'aggregation_type': {
@@ -1880,7 +1880,7 @@ class GetStatsResponse(Response):
     """
     _service = "workers"
     _action = "get_stats"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {
         'definitions': {
@@ -1991,7 +1991,7 @@ class RegisterRequest(Request):

     _service = "workers"
     _action = "register"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2071,7 +2071,7 @@ class RegisterResponse(Response):
     """
     _service = "workers"
     _action = "register"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}

@@ -2099,7 +2099,7 @@ class StatusReportRequest(Request):

     _service = "workers"
     _action = "status_report"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {
             'machine_stats': {
@@ -2299,7 +2299,7 @@ class StatusReportResponse(Response):
     """
     _service = "workers"
     _action = "status_report"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}

@@ -2314,7 +2314,7 @@ class UnregisterRequest(Request):

     _service = "workers"
     _action = "unregister"
-    _version = "2.4"
+    _version = "2.5"
     _schema = {
         'definitions': {},
         'properties': {
@@ -2352,7 +2352,7 @@ class UnregisterResponse(Response):
     """
     _service = "workers"
     _action = "unregister"
-    _version = "2.4"
+    _version = "2.5"

     _schema = {'definitions': {}, 'properties': {}, 'type': 'object'}

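All of the hunks above follow one convention: every generated request/response class names its backend endpoint through `_service` and `_action`, and pins the API schema revision it was generated from through `_version`, so these commits are pure version bumps (1.5/1.7/2.1/2.2/2.4 to 2.4/2.5) with no behavioral change. A minimal, self-contained sketch of that convention follows; the real `Request` base class and session wiring live elsewhere in `trains_agent.backend_api` and the stand-in here is only for illustration.

```python
# Illustrative sketch only -- the real Request base class and the code that
# turns these attributes into HTTP calls live elsewhere in trains_agent.
class Request(object):
    pass


class StoppedRequest(Request):
    _service = "tasks"    # backend service the call is routed to
    _action = "stopped"   # endpoint action within that service
    _version = "2.5"      # API schema revision the class was generated from

    @property
    def endpoint(self):
        # e.g. "tasks.stopped", issued against API version 2.5
        return "{}.{}".format(self._service, self._action)


print(StoppedRequest().endpoint)  # tasks.stopped
```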
9 trains_agent/backend_api/session/jsonmodels/__init__.py Normal file
@@ -0,0 +1,9 @@
+# coding: utf-8
+
+__author__ = 'Szczepan Cieślik'
+__email__ = 'szczepan.cieslik@gmail.com'
+__version__ = '2.4'
+
+from . import models
+from . import fields
+from . import errors
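The package metadata above identifies this as a vendored copy of Szczepan Cieślik's `jsonmodels` library (version 2.4), bundled under `trains_agent.backend_api.session`. Assuming the bundled `models.Base` behaves like upstream jsonmodels (`models.py` is imported above but not part of this listing), typical usage looks like the sketch below.

```python
# Hedged usage sketch: assumes the vendored package mirrors upstream
# jsonmodels, where models.Base turns class-level field descriptors into
# validated, castable attributes.
from trains_agent.backend_api.session.jsonmodels import models, fields


class Person(models.Base):
    name = fields.StringField(required=True)
    age = fields.IntField()


person = Person(name='Ada')
person.age = '36'          # IntField.parse_value casts the string to int
person.validate()          # raises errors.ValidationError on invalid data
print(person.to_struct())  # {'name': 'Ada', 'age': 36}
```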
230 trains_agent/backend_api/session/jsonmodels/builders.py Normal file
@@ -0,0 +1,230 @@
+"""Builders to generate in memory representation of model and fields tree."""
+
+from __future__ import absolute_import
+
+from collections import defaultdict
+
+import six
+
+from . import errors
+from .fields import NotSet
+
+
+class Builder(object):
+
+    def __init__(self, parent=None, nullable=False, default=NotSet):
+        self.parent = parent
+        self.types_builders = {}
+        self.types_count = defaultdict(int)
+        self.definitions = set()
+        self.nullable = nullable
+        self.default = default
+
+    @property
+    def has_default(self):
+        return self.default is not NotSet
+
+    def register_type(self, type, builder):
+        if self.parent:
+            return self.parent.register_type(type, builder)
+
+        self.types_count[type] += 1
+        if type not in self.types_builders:
+            self.types_builders[type] = builder
+
+    def get_builder(self, type):
+        if self.parent:
+            return self.parent.get_builder(type)
+
+        return self.types_builders[type]
+
+    def count_type(self, type):
+        if self.parent:
+            return self.parent.count_type(type)
+
+        return self.types_count[type]
+
+    @staticmethod
+    def maybe_build(value):
+        return value.build() if isinstance(value, Builder) else value
+
+    def add_definition(self, builder):
+        if self.parent:
+            return self.parent.add_definition(builder)
+
+        self.definitions.add(builder)
+
+
+class ObjectBuilder(Builder):
+
+    def __init__(self, model_type, *args, **kwargs):
+        super(ObjectBuilder, self).__init__(*args, **kwargs)
+        self.properties = {}
+        self.required = []
+        self.type = model_type
+
+        self.register_type(self.type, self)
+
+    def add_field(self, name, field, schema):
+        _apply_validators_modifications(schema, field)
+        self.properties[name] = schema
+        if field.required:
+            self.required.append(name)
+
+    def build(self):
+        builder = self.get_builder(self.type)
+        if self.is_definition and not self.is_root:
+            self.add_definition(builder)
+            [self.maybe_build(value) for _, value in self.properties.items()]
+            return '#/definitions/{name}'.format(name=self.type_name)
+        else:
+            return builder.build_definition(nullable=self.nullable)
+
+    @property
+    def type_name(self):
+        module_name = '{module}.{name}'.format(
+            module=self.type.__module__,
+            name=self.type.__name__,
+        )
+        return module_name.replace('.', '_').lower()
+
+    def build_definition(self, add_defintitions=True, nullable=False):
+        properties = dict(
+            (name, self.maybe_build(value))
+            for name, value
+            in self.properties.items()
+        )
+        schema = {
+            'type': 'object',
+            'additionalProperties': False,
+            'properties': properties,
+        }
+        if self.required:
+            schema['required'] = self.required
+        if self.definitions and add_defintitions:
+            schema['definitions'] = dict(
+                (builder.type_name, builder.build_definition(False, False))
+                for builder in self.definitions
+            )
+        return schema
+
+    @property
+    def is_definition(self):
+        if self.count_type(self.type) > 1:
+            return True
+        elif self.parent:
+            return self.parent.is_definition
+        else:
+            return False
+
+    @property
+    def is_root(self):
+        return not bool(self.parent)
+
+
+def _apply_validators_modifications(field_schema, field):
+    for validator in field.validators:
+        try:
+            validator.modify_schema(field_schema)
+        except AttributeError:
+            pass
+
+
+class PrimitiveBuilder(Builder):
+
+    def __init__(self, type, *args, **kwargs):
+        super(PrimitiveBuilder, self).__init__(*args, **kwargs)
+        self.type = type
+
+    def build(self):
+        schema = {}
+        if issubclass(self.type, six.string_types):
+            obj_type = 'string'
+        elif issubclass(self.type, bool):
+            obj_type = 'boolean'
+        elif issubclass(self.type, int):
+            obj_type = 'number'
+        elif issubclass(self.type, float):
+            obj_type = 'number'
+        else:
+            raise errors.FieldNotSupported(
+                "Can't specify value schema!", self.type
+            )
+
+        if self.nullable:
+            obj_type = [obj_type, 'null']
+        schema['type'] = obj_type
+
+        if self.has_default:
+            schema["default"] = self.default
+
+        return schema
+
+
+class ListBuilder(Builder):
+
+    def __init__(self, *args, **kwargs):
+        super(ListBuilder, self).__init__(*args, **kwargs)
+        self.schemas = []
+
+    def add_type_schema(self, schema):
+        self.schemas.append(schema)
+
+    def build(self):
+        schema = {'type': 'array'}
+        if self.nullable:
+            self.add_type_schema({'type': 'null'})
+
+        if self.has_default:
+            schema["default"] = [self.to_struct(i) for i in self.default]
+
+        schemas = [self.maybe_build(s) for s in self.schemas]
+        if len(schemas) == 1:
+            items = schemas[0]
+        else:
+            items = {'oneOf': schemas}
+
+        schema['items'] = items
+        return schema
+
+    @property
+    def is_definition(self):
+        return self.parent.is_definition
+
+    @staticmethod
+    def to_struct(item):
+        from .models import Base
+        if isinstance(item, Base):
+            return item.to_struct()
+        return item
+
+
+class EmbeddedBuilder(Builder):
+
+    def __init__(self, *args, **kwargs):
+        super(EmbeddedBuilder, self).__init__(*args, **kwargs)
+        self.schemas = []
+
+    def add_type_schema(self, schema):
+        self.schemas.append(schema)
+
+    def build(self):
+        if self.nullable:
+            self.add_type_schema({'type': 'null'})
+
+        schemas = [self.maybe_build(schema) for schema in self.schemas]
+        if len(schemas) == 1:
+            schema = schemas[0]
+        else:
+            schema = {'oneOf': schemas}
+
+        if self.has_default:
+            # The default value of EmbeddedField is expected to be an instance
+            # of a subclass of models.Base, thus have `to_struct`
+            schema["default"] = self.default.to_struct()
+
+        return schema
+
+    @property
+    def is_definition(self):
+        return self.parent.is_definition
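Each builder above emits one fragment of a JSON schema: `ObjectBuilder` collects a model's fields (promoting types used more than once into shared `definitions`), while `PrimitiveBuilder`, `ListBuilder`, and `EmbeddedBuilder` map leaf values. `PrimitiveBuilder` is simple enough to exercise directly; the import path in this sketch is just the vendored location introduced by this commit.

```python
# Direct demonstration of PrimitiveBuilder.build() as defined above; the
# import path assumes the vendored package layout shown in this diff.
from trains_agent.backend_api.session.jsonmodels.builders import PrimitiveBuilder

print(PrimitiveBuilder(int).build())
# {'type': 'number'}

print(PrimitiveBuilder(str, nullable=True, default='n/a').build())
# {'type': ['string', 'null'], 'default': 'n/a'}
```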
21 trains_agent/backend_api/session/jsonmodels/collections.py Normal file
@@ -0,0 +1,21 @@
+
+
+class ModelCollection(list):
+
+    """`ModelCollection` is list which validates stored values.
+
+    Validation is made with use of field passed to `__init__` at each point,
+    when new value is assigned.
+
+    """
+
+    def __init__(self, field):
+        self.field = field
+
+    def append(self, value):
+        self.field.validate_single_value(value)
+        super(ModelCollection, self).append(value)
+
+    def __setitem__(self, key, value):
+        self.field.validate_single_value(value)
+        super(ModelCollection, self).__setitem__(key, value)
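`ModelCollection` gives `ListField` a list that re-validates on every `append` and item assignment, so a typed list can never silently hold a wrong-typed element. A small demonstration, with import paths again assuming the vendored layout:

```python
# Demonstrates the validate-on-insert behavior of ModelCollection using a
# ListField typed to str; paths assume the vendored package layout above.
from trains_agent.backend_api.session.jsonmodels.collections import ModelCollection
from trains_agent.backend_api.session.jsonmodels.errors import ValidationError
from trains_agent.backend_api.session.jsonmodels.fields import ListField

tags = ModelCollection(ListField(items_types=(str,)))
tags.append('gpu')       # ok: passes validate_single_value
try:
    tags.append(42)      # wrong item type for this field
except ValidationError as exc:
    print(exc)           # All items must be instances of "str", and not "int".
```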
15 trains_agent/backend_api/session/jsonmodels/errors.py Normal file
@@ -0,0 +1,15 @@
+
+
+class ValidationError(RuntimeError):
+
+    pass
+
+
+class FieldNotFound(RuntimeError):
+
+    pass
+
+
+class FieldNotSupported(ValueError):
+
+    pass
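These three exceptions are the package's entire error surface. For example, handing `PrimitiveBuilder` a type it has no JSON-schema mapping for raises `FieldNotSupported`:

```python
# Quick check of the FieldNotSupported path in builders.PrimitiveBuilder
# (vendored import paths assumed, as in the other sketches).
from trains_agent.backend_api.session.jsonmodels.builders import PrimitiveBuilder
from trains_agent.backend_api.session.jsonmodels.errors import FieldNotSupported

try:
    PrimitiveBuilder(dict).build()  # dict has no primitive schema mapping here
except FieldNotSupported as exc:
    print(exc)  # ("Can't specify value schema!", <class 'dict'>)
```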
488 trains_agent/backend_api/session/jsonmodels/fields.py Normal file
@@ -0,0 +1,488 @@
+import datetime
+import re
+from weakref import WeakKeyDictionary
+
+import six
+from dateutil.parser import parse
+
+from .errors import ValidationError
+from .collections import ModelCollection
+
+
+# unique marker for "no default value specified". None is not good enough since
+# it is a completely valid default value.
+NotSet = object()
+
+
+class BaseField(object):
+
+    """Base class for all fields."""
+
+    types = None
+
+    def __init__(
+            self,
+            required=False,
+            nullable=False,
+            help_text=None,
+            validators=None,
+            default=NotSet,
+            name=None):
+        self.memory = WeakKeyDictionary()
+        self.required = required
+        self.help_text = help_text
+        self.nullable = nullable
+        self._assign_validators(validators)
+        self.name = name
+        self._validate_name()
+        if default is not NotSet:
+            self.validate(default)
+        self._default = default
+
+    @property
+    def has_default(self):
+        return self._default is not NotSet
+
+    def _assign_validators(self, validators):
+        if validators and not isinstance(validators, list):
+            validators = [validators]
+        self.validators = validators or []
+
+    def __set__(self, instance, value):
+        self._finish_initialization(type(instance))
+        value = self.parse_value(value)
+        self.validate(value)
+        self.memory[instance._cache_key] = value
+
+    def __get__(self, instance, owner=None):
+        if instance is None:
+            self._finish_initialization(owner)
+            return self
+
+        self._finish_initialization(type(instance))
+
+        self._check_value(instance)
+        return self.memory[instance._cache_key]
+
+    def _finish_initialization(self, owner):
+        pass
+
+    def _check_value(self, obj):
+        if obj._cache_key not in self.memory:
+            self.__set__(obj, self.get_default_value())
+
+    def validate_for_object(self, obj):
+        value = self.__get__(obj)
+        self.validate(value)
+
+    def validate(self, value):
+        self._check_types()
+        self._validate_against_types(value)
+        self._check_against_required(value)
+        self._validate_with_custom_validators(value)
+
+    def _check_against_required(self, value):
+        if value is None and self.required:
+            raise ValidationError('Field is required!')
+
+    def _validate_against_types(self, value):
+        if value is not None and not isinstance(value, self.types):
+            raise ValidationError(
+                'Value is wrong, expected type "{types}"'.format(
+                    types=', '.join([t.__name__ for t in self.types])
+                ),
+                value,
+            )
+
+    def _check_types(self):
+        if self.types is None:
+            raise ValidationError(
+                'Field "{type}" is not usable, try '
+                'different field type.'.format(type=type(self).__name__))
+
+    def to_struct(self, value):
+        """Cast value to Python structure."""
+        return value
+
+    def parse_value(self, value):
+        """Parse value from primitive to desired format.
+
+        Each field can parse value to form it wants it to be (like string or
+        int).
+
+        """
+        return value
+
+    def _validate_with_custom_validators(self, value):
+        if value is None and self.nullable:
+            return
+
+        for validator in self.validators:
+            try:
+                validator.validate(value)
+            except AttributeError:
+                validator(value)
+
+    def get_default_value(self):
+        """Get default value for field.
+
+        Each field can specify its default.
+
+        """
+        return self._default if self.has_default else None
+
+    def _validate_name(self):
+        if self.name is None:
+            return
+        if not re.match('^[A-Za-z_](([\w\-]*)?\w+)?$', self.name):
+            raise ValueError('Wrong name', self.name)
+
+    def structue_name(self, default):
+        return self.name if self.name is not None else default
+
+
+class StringField(BaseField):
+
+    """String field."""
+
+    types = six.string_types
+
+
+class IntField(BaseField):
+
+    """Integer field."""
+
+    types = (int,)
+
+    def parse_value(self, value):
+        """Cast value to `int`, e.g. from string or long"""
+        parsed = super(IntField, self).parse_value(value)
+        if parsed is None:
+            return parsed
+        return int(parsed)
+
+
+class FloatField(BaseField):
+
+    """Float field."""
+
+    types = (float, int)
+
+
+class BoolField(BaseField):
+
+    """Bool field."""
+
+    types = (bool,)
+
+    def parse_value(self, value):
+        """Cast value to `bool`."""
+        parsed = super(BoolField, self).parse_value(value)
+        return bool(parsed) if parsed is not None else None
+
+
+class ListField(BaseField):
+
+    """List field."""
+
+    types = (list,)
+
+    def __init__(self, items_types=None, *args, **kwargs):
+        """Init.
+
+        `ListField` is **always not required**. If you want to control number
+        of items use validators.
+
+        """
+        self._assign_types(items_types)
+        super(ListField, self).__init__(*args, **kwargs)
+        self.required = False
+
+    def get_default_value(self):
+        default = super(ListField, self).get_default_value()
+        if default is None:
+            return ModelCollection(self)
+        return default
+
+    def _assign_types(self, items_types):
+        if items_types:
+            try:
+                self.items_types = tuple(items_types)
+            except TypeError:
+                self.items_types = items_types,
+        else:
+            self.items_types = tuple()
+
+        types = []
+        for type_ in self.items_types:
+            if isinstance(type_, six.string_types):
+                types.append(_LazyType(type_))
+            else:
+                types.append(type_)
+        self.items_types = tuple(types)
+
+    def validate(self, value):
+        super(ListField, self).validate(value)
+
+        if len(self.items_types) == 0:
+            return
+
+        for item in value:
+            self.validate_single_value(item)
+
+    def validate_single_value(self, item):
+        if len(self.items_types) == 0:
+            return
+
+        if not isinstance(item, self.items_types):
+            raise ValidationError(
+                'All items must be instances '
+                'of "{types}", and not "{type}".'.format(
+                    types=', '.join([t.__name__ for t in self.items_types]),
+                    type=type(item).__name__,
+                ))
+
+    def parse_value(self, values):
+        """Cast value to proper collection."""
+        result = self.get_default_value()
+
+        if not values:
+            return result
+
+        if not isinstance(values, list):
+            return values
+
+        return [self._cast_value(value) for value in values]
+
+    def _cast_value(self, value):
+        if isinstance(value, self.items_types):
+            return value
+        else:
+            if len(self.items_types) != 1:
+                tpl = 'Cannot decide which type to choose from "{types}".'
+                raise ValidationError(
+                    tpl.format(
+                        types=', '.join([t.__name__ for t in self.items_types])
+                    )
+                )
+            return self.items_types[0](**value)
+
+    def _finish_initialization(self, owner):
+        super(ListField, self)._finish_initialization(owner)
+
+        types = []
+        for type in self.items_types:
+            if isinstance(type, _LazyType):
+                types.append(type.evaluate(owner))
+            else:
+                types.append(type)
+        self.items_types = tuple(types)
+
+    def _elem_to_struct(self, value):
+        try:
+            return value.to_struct()
+        except AttributeError:
+            return value
+
+    def to_struct(self, values):
+        return [self._elem_to_struct(v) for v in values]
+
+
+class EmbeddedField(BaseField):
+
+    """Field for embedded models."""
+
+    def __init__(self, model_types, *args, **kwargs):
+        self._assign_model_types(model_types)
+        super(EmbeddedField, self).__init__(*args, **kwargs)
+
+    def _assign_model_types(self, model_types):
+        if not isinstance(model_types, (list, tuple)):
+            model_types = (model_types,)
+
+        types = []
+        for type_ in model_types:
+            if isinstance(type_, six.string_types):
+                types.append(_LazyType(type_))
+            else:
+                types.append(type_)
+        self.types = tuple(types)
+
+    def _finish_initialization(self, owner):
+        super(EmbeddedField, self)._finish_initialization(owner)
+
+        types = []
+        for type in self.types:
+            if isinstance(type, _LazyType):
+                types.append(type.evaluate(owner))
+            else:
+                types.append(type)
+        self.types = tuple(types)
+
+    def validate(self, value):
+        super(EmbeddedField, self).validate(value)
+        try:
+            value.validate()
+        except AttributeError:
+            pass
+
+    def parse_value(self, value):
+        """Parse value to proper model type."""
+        if not isinstance(value, dict):
+            return value
+
+        embed_type = self._get_embed_type()
+        return embed_type(**value)
+
+    def _get_embed_type(self):
+        if len(self.types) != 1:
+            raise ValidationError(
+                'Cannot decide which type to choose from "{types}".'.format(
+                    types=', '.join([t.__name__ for t in self.types])
+                )
+            )
+        return self.types[0]
+
+    def to_struct(self, value):
+        return value.to_struct()
+
+
+class _LazyType(object):
+
+    def __init__(self, path):
+        self.path = path
+
+    def evaluate(self, base_cls):
+        module, type_name = _evaluate_path(self.path, base_cls)
+        return _import(module, type_name)
+
+
+def _evaluate_path(relative_path, base_cls):
+    base_module = base_cls.__module__
+
+    modules = _get_modules(relative_path, base_module)
+
+    type_name = modules.pop()
+    module = '.'.join(modules)
+    if not module:
+        module = base_module
+    return module, type_name
+
+
+def _get_modules(relative_path, base_module):
+    canonical_path = relative_path.lstrip('.')
+    canonical_modules = canonical_path.split('.')
+
+    if not relative_path.startswith('.'):
+        return canonical_modules
+
+    parents_amount = len(relative_path) - len(canonical_path)
+    parent_modules = base_module.split('.')
+    parents_amount = max(0, parents_amount - 1)
+    if parents_amount > len(parent_modules):
+        raise ValueError("Can't evaluate path '{}'".format(relative_path))
+    return parent_modules[:parents_amount * -1] + canonical_modules
+
+
+def _import(module_name, type_name):
+    module = __import__(module_name, fromlist=[type_name])
+    try:
+        return getattr(module, type_name)
+    except AttributeError:
+        raise ValueError(
+            "Can't find type '{}.{}'.".format(module_name, type_name))
+
+
+class TimeField(StringField):
|
||||
|
||||
"""Time field."""
|
||||
|
||||
types = (datetime.time,)
|
||||
|
||||
def __init__(self, str_format=None, *args, **kwargs):
|
||||
"""Init.
|
||||
|
||||
:param str str_format: Format to cast time to (if `None` - casting to
|
||||
ISO 8601 format).
|
||||
|
||||
"""
|
||||
self.str_format = str_format
|
||||
super(TimeField, self).__init__(*args, **kwargs)
|
||||
|
||||
def to_struct(self, value):
|
||||
"""Cast `time` object to string."""
|
||||
if self.str_format:
|
||||
return value.strftime(self.str_format)
|
||||
return value.isoformat()
|
||||
|
||||
def parse_value(self, value):
|
||||
"""Parse string into instance of `time`."""
|
||||
if value is None:
|
||||
return value
|
||||
if isinstance(value, datetime.time):
|
||||
return value
|
||||
return parse(value).timetz()
|
||||
|
||||
|
||||
class DateField(StringField):
|
||||
|
||||
"""Date field."""
|
||||
|
||||
types = (datetime.date,)
|
||||
default_format = '%Y-%m-%d'
|
||||
|
||||
def __init__(self, str_format=None, *args, **kwargs):
|
||||
"""Init.
|
||||
|
||||
:param str str_format: Format to cast date to (if `None` - casting to
|
||||
%Y-%m-%d format).
|
||||
|
||||
"""
|
||||
self.str_format = str_format
|
||||
super(DateField, self).__init__(*args, **kwargs)
|
||||
|
||||
def to_struct(self, value):
|
||||
"""Cast `date` object to string."""
|
||||
if self.str_format:
|
||||
return value.strftime(self.str_format)
|
||||
return value.strftime(self.default_format)
|
||||
|
||||
def parse_value(self, value):
|
||||
"""Parse string into instance of `date`."""
|
||||
if value is None:
|
||||
return value
|
||||
if isinstance(value, datetime.date):
|
||||
return value
|
||||
return parse(value).date()
|
||||
|
||||
|
||||
class DateTimeField(StringField):
|
||||
|
||||
"""Datetime field."""
|
||||
|
||||
types = (datetime.datetime,)
|
||||
|
||||
def __init__(self, str_format=None, *args, **kwargs):
|
||||
"""Init.
|
||||
|
||||
:param str str_format: Format to cast datetime to (if `None` - casting
|
||||
to ISO 8601 format).
|
||||
|
||||
"""
|
||||
self.str_format = str_format
|
||||
super(DateTimeField, self).__init__(*args, **kwargs)
|
||||
|
||||
def to_struct(self, value):
|
||||
"""Cast `datetime` object to string."""
|
||||
if self.str_format:
|
||||
return value.strftime(self.str_format)
|
||||
return value.isoformat()
|
||||
|
||||
def parse_value(self, value):
|
||||
"""Parse string into instance of `datetime`."""
|
||||
if isinstance(value, datetime.datetime):
|
||||
return value
|
||||
if value:
|
||||
return parse(value)
|
||||
else:
|
||||
return None
|
||||
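For orientation, here is a minimal usage sketch of the field types above; it is not part of the diff. The `Person` model name and its fields are hypothetical, and it assumes the `Base` model class defined in models.py just below:

```python
# A hedged sketch, not part of the diff: composing the field types above.
# `Person` and its field names are hypothetical examples.
from trains_agent.backend_api.session.jsonmodels import fields, models


class Person(models.Base):
    name = fields.StringField(required=True)
    age = fields.IntField()
    height = fields.FloatField()
    is_active = fields.BoolField()
    nicknames = fields.ListField(items_types=[str])  # never required, see __init__ above


person = Person(name='Ada', age='36')  # IntField.parse_value casts '36' -> 36
person.validate()
print(person.age)  # 36
```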

trains_agent/backend_api/session/jsonmodels/models.py (new file, 154 lines)
@@ -0,0 +1,154 @@
import six

from . import parsers, errors
from .fields import BaseField
from .errors import ValidationError


class JsonmodelMeta(type):

    def __new__(cls, name, bases, attributes):
        cls.validate_fields(attributes)
        return super(cls, cls).__new__(cls, name, bases, attributes)

    @staticmethod
    def validate_fields(attributes):
        fields = {
            key: value for key, value in attributes.items()
            if isinstance(value, BaseField)
        }
        taken_names = set()
        for name, field in fields.items():
            structue_name = field.structue_name(name)
            if structue_name in taken_names:
                raise ValueError('Name taken', structue_name, name)
            taken_names.add(structue_name)


class Base(six.with_metaclass(JsonmodelMeta, object)):

    """Base class for all models."""

    def __init__(self, **kwargs):
        self._cache_key = _CacheKey()
        self.populate(**kwargs)

    def populate(self, **values):
        """Populate values to fields. Skip non-existing."""
        values = values.copy()
        fields = list(self.iterate_with_name())
        for _, structure_name, field in fields:
            if structure_name in values:
                field.__set__(self, values.pop(structure_name))
        for name, _, field in fields:
            if name in values:
                field.__set__(self, values.pop(name))

    def get_field(self, field_name):
        """Get field associated with given attribute."""
        for attr_name, field in self:
            if field_name == attr_name:
                return field

        raise errors.FieldNotFound('Field not found', field_name)

    def __iter__(self):
        """Iterate through fields and values."""
        for name, field in self.iterate_over_fields():
            yield name, field

    def validate(self):
        """Explicitly validate all the fields."""
        for name, field in self:
            try:
                field.validate_for_object(self)
            except ValidationError as error:
                raise ValidationError(
                    "Error for field '{name}'.".format(name=name),
                    error,
                )

    @classmethod
    def iterate_over_fields(cls):
        """Iterate through fields as `(attribute_name, field_instance)`."""
        for attr in dir(cls):
            clsattr = getattr(cls, attr)
            if isinstance(clsattr, BaseField):
                yield attr, clsattr

    @classmethod
    def iterate_with_name(cls):
        """Iterate over fields, but also give `structure_name`.

        Format is `(attribute_name, structure_name, field_instance)`.
        Structure name is the name under which the value is seen in the
        structure and schema (in primitives) and only there.
        """
        for attr_name, field in cls.iterate_over_fields():
            structure_name = field.structue_name(attr_name)
            yield attr_name, structure_name, field

    def to_struct(self):
        """Cast model to Python structure."""
        return parsers.to_struct(self)

    @classmethod
    def to_json_schema(cls):
        """Generate JSON schema for model."""
        return parsers.to_json_schema(cls)

    def __repr__(self):
        attrs = {}
        for name, _ in self:
            try:
                attr = getattr(self, name)
                if attr is not None:
                    attrs[name] = repr(attr)
            except ValidationError:
                pass

        return '{class_name}({fields})'.format(
            class_name=self.__class__.__name__,
            fields=', '.join(
                '{0[0]}={0[1]}'.format(x) for x in sorted(attrs.items())
            ),
        )

    def __str__(self):
        return '{name} object'.format(name=self.__class__.__name__)

    def __setattr__(self, name, value):
        try:
            return super(Base, self).__setattr__(name, value)
        except ValidationError as error:
            raise ValidationError(
                "Error for field '{name}'.".format(name=name),
                error
            )

    def __eq__(self, other):
        if type(other) is not type(self):
            return False

        for name, _ in self.iterate_over_fields():
            try:
                our = getattr(self, name)
            except errors.ValidationError:
                our = None

            try:
                their = getattr(other, name)
            except errors.ValidationError:
                their = None

            if our != their:
                return False

        return True

    def __ne__(self, other):
        return not (self == other)


class _CacheKey(object):
    """Object to identify model in memory."""
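A hedged sketch of the `Base` model lifecycle defined above (the `Cat` model is a hypothetical example, not part of the diff):

```python
# A hedged sketch of the Base model lifecycle above (`Cat` is hypothetical).
from trains_agent.backend_api.session.jsonmodels import fields, models


class Cat(models.Base):
    name = fields.StringField(required=True)
    breed = fields.StringField()


cat = Cat()
cat.populate(name='Garfield', unknown='ignored')  # unknown keys are skipped
cat.validate()           # raises ValidationError if a required field is missing
print(cat.to_struct())   # {'name': 'Garfield'}
print(repr(cat))         # Cat(name='Garfield')
```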

trains_agent/backend_api/session/jsonmodels/parsers.py (new file, 106 lines)
@@ -0,0 +1,106 @@
"""Parsers to change model structure into different ones."""
|
||||
import inspect
|
||||
|
||||
from . import fields, builders, errors
|
||||
|
||||
|
||||
def to_struct(model):
|
||||
"""Cast instance of model to python structure.
|
||||
|
||||
:param model: Model to be casted.
|
||||
:rtype: ``dict``
|
||||
|
||||
"""
|
||||
model.validate()
|
||||
|
||||
resp = {}
|
||||
for _, name, field in model.iterate_with_name():
|
||||
value = field.__get__(model)
|
||||
if value is None:
|
||||
continue
|
||||
|
||||
value = field.to_struct(value)
|
||||
resp[name] = value
|
||||
return resp
|
||||
|
||||
|
||||
def to_json_schema(cls):
|
||||
"""Generate JSON schema for given class.
|
||||
|
||||
:param cls: Class to be casted.
|
||||
:rtype: ``dict``
|
||||
|
||||
"""
|
||||
builder = build_json_schema(cls)
|
||||
return builder.build()
|
||||
|
||||
|
||||
def build_json_schema(value, parent_builder=None):
|
||||
from .models import Base
|
||||
|
||||
cls = value if inspect.isclass(value) else value.__class__
|
||||
if issubclass(cls, Base):
|
||||
return build_json_schema_object(cls, parent_builder)
|
||||
else:
|
||||
return build_json_schema_primitive(cls, parent_builder)
|
||||
|
||||
|
||||
def build_json_schema_object(cls, parent_builder=None):
|
||||
builder = builders.ObjectBuilder(cls, parent_builder)
|
||||
if builder.count_type(builder.type) > 1:
|
||||
return builder
|
||||
for _, name, field in cls.iterate_with_name():
|
||||
if isinstance(field, fields.EmbeddedField):
|
||||
builder.add_field(name, field, _parse_embedded(field, builder))
|
||||
elif isinstance(field, fields.ListField):
|
||||
builder.add_field(name, field, _parse_list(field, builder))
|
||||
else:
|
||||
builder.add_field(
|
||||
name, field, _create_primitive_field_schema(field))
|
||||
return builder
|
||||
|
||||
|
||||
def _parse_list(field, parent_builder):
|
||||
builder = builders.ListBuilder(
|
||||
parent_builder, field.nullable, default=field._default)
|
||||
for type in field.items_types:
|
||||
builder.add_type_schema(build_json_schema(type, builder))
|
||||
return builder
|
||||
|
||||
|
||||
def _parse_embedded(field, parent_builder):
|
||||
builder = builders.EmbeddedBuilder(
|
||||
parent_builder, field.nullable, default=field._default)
|
||||
for type in field.types:
|
||||
builder.add_type_schema(build_json_schema(type, builder))
|
||||
return builder
|
||||
|
||||
|
||||
def build_json_schema_primitive(cls, parent_builder):
|
||||
builder = builders.PrimitiveBuilder(cls, parent_builder)
|
||||
return builder
|
||||
|
||||
|
||||
def _create_primitive_field_schema(field):
|
||||
if isinstance(field, fields.StringField):
|
||||
obj_type = 'string'
|
||||
elif isinstance(field, fields.IntField):
|
||||
obj_type = 'number'
|
||||
elif isinstance(field, fields.FloatField):
|
||||
obj_type = 'float'
|
||||
elif isinstance(field, fields.BoolField):
|
||||
obj_type = 'boolean'
|
||||
else:
|
||||
raise errors.FieldNotSupported(
|
||||
'Field {field} is not supported!'.format(
|
||||
field=type(field).__class__.__name__))
|
||||
|
||||
if field.nullable:
|
||||
obj_type = [obj_type, 'null']
|
||||
|
||||
schema = {'type': obj_type}
|
||||
|
||||
if field.has_default:
|
||||
schema["default"] = field._default
|
||||
|
||||
return schema
|
||||
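Given the parsers above, generating a JSON schema from a model class might look like the following sketch (reusing the hypothetical `Cat` model from the earlier sketch; the exact output shape depends on the builders module, which is outside this section):

```python
# A hedged sketch: JSON-schema generation via the parsers above,
# reusing the hypothetical Cat model from the previous sketch.
schema = Cat.to_json_schema()
# Roughly: {'type': 'object',
#           'properties': {'name': {'type': 'string'},
#                          'breed': {'type': 'string'}},
#           ...}
```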

trains_agent/backend_api/session/jsonmodels/utilities.py (new file, 156 lines)
@@ -0,0 +1,156 @@
from __future__ import absolute_import

import six
import re
from collections import namedtuple

SCALAR_TYPES = tuple(list(six.string_types) + [int, float, bool])

ECMA_TO_PYTHON_FLAGS = {
    'i': re.I,
    'm': re.M,
}

PYTHON_TO_ECMA_FLAGS = dict(
    (value, key) for key, value in ECMA_TO_PYTHON_FLAGS.items()
)

PythonRegex = namedtuple('PythonRegex', ['regex', 'flags'])


def _normalize_string_type(value):
    if isinstance(value, six.string_types):
        return six.text_type(value)
    else:
        return value


def _compare_dicts(one, two):
    if len(one) != len(two):
        return False

    for key, value in one.items():
        if key not in one or key not in two:
            return False

        if not compare_schemas(one[key], two[key]):
            return False
    return True


def _compare_lists(one, two):
    if len(one) != len(two):
        return False

    they_match = False
    for first_item in one:
        for second_item in two:
            if they_match:
                continue
            they_match = compare_schemas(first_item, second_item)
    return they_match


def _assert_same_types(one, two):
    if not isinstance(one, type(two)) or not isinstance(two, type(one)):
        raise RuntimeError('Types mismatch! "{type1}" and "{type2}".'.format(
            type1=type(one).__name__, type2=type(two).__name__))


def compare_schemas(one, two):
    """Compare two structures that represent JSON schemas.

    Normal comparison can't be used here, because in JSON schema lists DO NOT
    keep order (while Python lists do), so this must be taken into account
    during comparison.

    Note this won't check all configurations, only the first one that seems
    to match, which can lead to wrong results.

    :param one: First schema to compare.
    :param two: Second schema to compare.
    :rtype: `bool`

    """
    one = _normalize_string_type(one)
    two = _normalize_string_type(two)

    _assert_same_types(one, two)

    if isinstance(one, list):
        return _compare_lists(one, two)
    elif isinstance(one, dict):
        return _compare_dicts(one, two)
    elif isinstance(one, SCALAR_TYPES):
        return one == two
    elif one is None:
        return one is two
    else:
        raise RuntimeError('Not allowed type "{type}"'.format(
            type=type(one).__name__))


def is_ecma_regex(regex):
    """Check if given regex is of type ECMA 262 or not.

    :rtype: bool

    """
    parts = regex.split('/')

    if len(parts) == 1:
        return False

    if len(parts) < 3:
        raise ValueError('Given regex isn\'t ECMA regex nor Python regex.')
    parts.pop()
    parts.append('')

    raw_regex = '/'.join(parts)
    if raw_regex.startswith('/') and raw_regex.endswith('/'):
        return True
    return False


def convert_ecma_regex_to_python(value):
    """Convert ECMA 262 regex to Python tuple with regex and flags.

    If given value is already a Python regex it will be returned unchanged.

    :param string value: ECMA regex.
    :return: 2-tuple with `regex` and `flags`
    :rtype: namedtuple

    """
    if not is_ecma_regex(value):
        return PythonRegex(value, [])

    parts = value.split('/')
    flags = parts.pop()

    try:
        result_flags = [ECMA_TO_PYTHON_FLAGS[f] for f in flags]
    except KeyError:
        raise ValueError('Wrong flags "{}".'.format(flags))

    return PythonRegex('/'.join(parts[1:]), result_flags)


def convert_python_regex_to_ecma(value, flags=[]):
    """Convert Python regex to ECMA 262 regex.

    If given value is already an ECMA regex it will be returned unchanged.

    :param string value: Python regex.
    :param list flags: List of flags (allowed flags: `re.I`, `re.M`)
    :return: ECMA 262 regex
    :rtype: str

    """
    if is_ecma_regex(value):
        return value

    result_flags = [PYTHON_TO_ECMA_FLAGS[f] for f in flags]
    result_flags = ''.join(result_flags)

    return '/{value}/{flags}'.format(value=value, flags=result_flags)
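A quick sketch of the ECMA 262 / Python regex round-trip implemented above:

```python
# Sketch of the ECMA 262 <-> Python regex round-trip implemented above.
from trains_agent.backend_api.session.jsonmodels.utilities import (
    convert_ecma_regex_to_python, convert_python_regex_to_ecma, is_ecma_regex)

py = convert_ecma_regex_to_python('/^[a-z]+$/i')
print(py)  # PythonRegex(regex='^[a-z]+$', flags=[re.IGNORECASE])
print(convert_python_regex_to_ecma(py.regex, py.flags))  # /^[a-z]+$/i
print(is_ecma_regex('^[a-z]+$'))  # False: a plain Python regex has no /.../ wrapper
```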

trains_agent/backend_api/session/jsonmodels/validators.py (new file, 202 lines)
@@ -0,0 +1,202 @@
"""Predefined validators."""
|
||||
import re
|
||||
|
||||
from six.moves import reduce
|
||||
|
||||
from .errors import ValidationError
|
||||
from . import utilities
|
||||
|
||||
|
||||
class Min(object):
|
||||
|
||||
"""Validator for minimum value."""
|
||||
|
||||
def __init__(self, minimum_value, exclusive=False):
|
||||
"""Init.
|
||||
|
||||
:param minimum_value: Minimum value for validator.
|
||||
:param bool exclusive: If `True`, then validated value must be strongly
|
||||
lower than given threshold.
|
||||
|
||||
"""
|
||||
self.minimum_value = minimum_value
|
||||
self.exclusive = exclusive
|
||||
|
||||
def validate(self, value):
|
||||
"""Validate value."""
|
||||
if self.exclusive:
|
||||
if value <= self.minimum_value:
|
||||
tpl = "'{value}' is lower or equal than minimum ('{min}')."
|
||||
raise ValidationError(
|
||||
tpl.format(value=value, min=self.minimum_value))
|
||||
else:
|
||||
if value < self.minimum_value:
|
||||
raise ValidationError(
|
||||
"'{value}' is lower than minimum ('{min}').".format(
|
||||
value=value, min=self.minimum_value))
|
||||
|
||||
def modify_schema(self, field_schema):
|
||||
"""Modify field schema."""
|
||||
field_schema['minimum'] = self.minimum_value
|
||||
if self.exclusive:
|
||||
field_schema['exclusiveMinimum'] = True
|
||||
|
||||
|
||||
class Max(object):
|
||||
|
||||
"""Validator for maximum value."""
|
||||
|
||||
def __init__(self, maximum_value, exclusive=False):
|
||||
"""Init.
|
||||
|
||||
:param maximum_value: Maximum value for validator.
|
||||
:param bool exclusive: If `True`, then validated value must be strongly
|
||||
bigger than given threshold.
|
||||
|
||||
"""
|
||||
self.maximum_value = maximum_value
|
||||
self.exclusive = exclusive
|
||||
|
||||
def validate(self, value):
|
||||
"""Validate value."""
|
||||
if self.exclusive:
|
||||
if value >= self.maximum_value:
|
||||
tpl = "'{val}' is bigger or equal than maximum ('{max}')."
|
||||
raise ValidationError(
|
||||
tpl.format(val=value, max=self.maximum_value))
|
||||
else:
|
||||
if value > self.maximum_value:
|
||||
raise ValidationError(
|
||||
"'{value}' is bigger than maximum ('{max}').".format(
|
||||
value=value, max=self.maximum_value))
|
||||
|
||||
def modify_schema(self, field_schema):
|
||||
"""Modify field schema."""
|
||||
field_schema['maximum'] = self.maximum_value
|
||||
if self.exclusive:
|
||||
field_schema['exclusiveMaximum'] = True
|
||||
|
||||
|
||||
class Regex(object):
|
||||
|
||||
"""Validator for regular expressions."""
|
||||
|
||||
FLAGS = {
|
||||
'ignorecase': re.I,
|
||||
'multiline': re.M,
|
||||
}
|
||||
|
||||
def __init__(self, pattern, **flags):
|
||||
"""Init.
|
||||
|
||||
Note, that if given pattern is ECMA regex, given flags will be
|
||||
**completely ignored** and taken from given regex.
|
||||
|
||||
|
||||
:param string pattern: Pattern of regex.
|
||||
:param bool flags: Flags used for the regex matching.
|
||||
Allowed flag names are in the `FLAGS` attribute. The flag value
|
||||
does not matter as long as it evaluates to True.
|
||||
Flags with False values will be ignored.
|
||||
Invalid flags will be ignored.
|
||||
|
||||
"""
|
||||
if utilities.is_ecma_regex(pattern):
|
||||
result = utilities.convert_ecma_regex_to_python(pattern)
|
||||
self.pattern, self.flags = result
|
||||
else:
|
||||
self.pattern = pattern
|
||||
self.flags = [self.FLAGS[key] for key, value in flags.items()
|
||||
if key in self.FLAGS and value]
|
||||
|
||||
def validate(self, value):
|
||||
"""Validate value."""
|
||||
flags = self._calculate_flags()
|
||||
|
||||
try:
|
||||
result = re.search(self.pattern, value, flags)
|
||||
except TypeError as te:
|
||||
raise ValidationError(*te.args)
|
||||
|
||||
if not result:
|
||||
raise ValidationError(
|
||||
'Value "{value}" did not match pattern "{pattern}".'.format(
|
||||
value=value, pattern=self.pattern
|
||||
))
|
||||
|
||||
def _calculate_flags(self):
|
||||
return reduce(lambda x, y: x | y, self.flags, 0)
|
||||
|
||||
def modify_schema(self, field_schema):
|
||||
"""Modify field schema."""
|
||||
field_schema['pattern'] = utilities.convert_python_regex_to_ecma(
|
||||
self.pattern, self.flags)
|
||||
|
||||
|
||||
class Length(object):
|
||||
|
||||
"""Validator for length."""
|
||||
|
||||
def __init__(self, minimum_value=None, maximum_value=None):
|
||||
"""Init.
|
||||
|
||||
Note that if no `minimum_value` neither `maximum_value` will be
|
||||
specified, `ValueError` will be raised.
|
||||
|
||||
:param int minimum_value: Minimum value (optional).
|
||||
:param int maximum_value: Maximum value (optional).
|
||||
|
||||
"""
|
||||
if minimum_value is None and maximum_value is None:
|
||||
raise ValueError(
|
||||
"Either 'minimum_value' or 'maximum_value' must be specified.")
|
||||
|
||||
self.minimum_value = minimum_value
|
||||
self.maximum_value = maximum_value
|
||||
|
||||
def validate(self, value):
|
||||
"""Validate value."""
|
||||
len_ = len(value)
|
||||
|
||||
if self.minimum_value is not None and len_ < self.minimum_value:
|
||||
tpl = "Value '{val}' length is lower than allowed minimum '{min}'."
|
||||
raise ValidationError(tpl.format(
|
||||
val=value, min=self.minimum_value
|
||||
))
|
||||
|
||||
if self.maximum_value is not None and len_ > self.maximum_value:
|
||||
raise ValidationError(
|
||||
"Value '{val}' length is bigger than "
|
||||
"allowed maximum '{max}'.".format(
|
||||
val=value,
|
||||
max=self.maximum_value,
|
||||
))
|
||||
|
||||
def modify_schema(self, field_schema):
|
||||
"""Modify field schema."""
|
||||
if self.minimum_value:
|
||||
field_schema['minLength'] = self.minimum_value
|
||||
|
||||
if self.maximum_value:
|
||||
field_schema['maxLength'] = self.maximum_value
|
||||
|
||||
|
||||
class Enum(object):
|
||||
|
||||
"""Validator for enums."""
|
||||
|
||||
def __init__(self, *choices):
|
||||
"""Init.
|
||||
|
||||
:param [] choices: Valid choices for the field.
|
||||
"""
|
||||
|
||||
self.choices = list(choices)
|
||||
|
||||
def validate(self, value):
|
||||
if value not in self.choices:
|
||||
tpl = "Value '{val}' is not a valid choice."
|
||||
raise ValidationError(tpl.format(val=value))
|
||||
|
||||
def modify_schema(self, field_schema):
|
||||
field_schema['enum'] = self.choices
|
||||
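A hedged sketch of attaching these validators to model fields; it assumes `BaseField` accepts a `validators` keyword (as in the jsonmodels API mirrored here), and the `ServerEntry` model is a hypothetical example:

```python
# A hedged sketch: attaching the validators above to model fields via the
# `validators` keyword that BaseField accepts (model name hypothetical).
from trains_agent.backend_api.session.jsonmodels import fields, models
from trains_agent.backend_api.session.jsonmodels.validators import Enum, Max, Min, Regex


class ServerEntry(models.Base):
    port = fields.IntField(validators=[Min(1), Max(65535)])
    name = fields.StringField(validators=Regex('^[a-z0-9-]+$', ignorecase=True))
    env = fields.StringField(validators=Enum('dev', 'staging', 'prod'))


entry = ServerEntry(port=8080, name='Worker-01', env='prod')
entry.validate()  # passes: port in range, name matches (case-insensitive), env is a valid choice
```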
@@ -1,10 +1,8 @@
 import requests

 import six
-import jsonmodels.models
-import jsonmodels.fields
-import jsonmodels.errors
+from . import jsonmodels
 from .apimodel import ApiModel
 from .datamodel import NonStrictDataModelMixin

@@ -16,6 +16,7 @@ from .request import Request, BatchRequest
 from .token_manager import TokenManager
 from ..config import load
 from ..utils import get_http_session_with_retry, urllib_log_warning_setup
+from ...backend_config.environment import backward_compatibility_support
 from ...version import __version__

@@ -84,8 +85,11 @@ class Session(TokenManager):
         initialize_logging=True,
         client=None,
         config=None,
+        http_retries_config=None,
         **kwargs
     ):
+        # add backward compatibility support for old environment variables
+        backward_compatibility_support()
+
         if config is not None:
             self.config = config
@@ -126,7 +130,7 @@ class Session(TokenManager):
             raise ValueError("host is required in init or config")

         self.__host = host.strip("/")
-        http_retries_config = self.config.get(
+        http_retries_config = http_retries_config or self.config.get(
             "api.http.retries", ConfigTree()
         ).as_plain_ordered_dict()
         http_retries_config["status_forcelist"] = self._retry_codes
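With the change above, callers can override the retry policy per `Session` instead of relying only on the `api.http.retries` configuration section. A hedged sketch (the credentials are placeholders; the pattern mirrors the `verify_credentials` call further below):

```python
# A hedged sketch: overriding the HTTP retry policy for a single Session.
session = Session(
    api_key='<ACCESS_KEY>',
    secret_key='<SECRET_KEY>',
    host='https://demoapi.trains.allegro.ai',
    http_retries_config={"total": 2},  # fail fast, as verify_credentials() below does
)
```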
@@ -23,3 +23,31 @@ class EnvEntry(Entry):

     def error(self, message):
         print("Environment configuration: {}".format(message))
+
+
+def backward_compatibility_support():
+    from ..definitions import ENVIRONMENT_CONFIG, ENVIRONMENT_SDK_PARAMS, ENVIRONMENT_BACKWARD_COMPATIBLE
+    if not ENVIRONMENT_BACKWARD_COMPATIBLE.get():
+        return
+
+    # Add ALG_ prefix on every TRAINS_ os environment we support
+    for k, v in ENVIRONMENT_CONFIG.items():
+        try:
+            trains_vars = [var for var in v.vars if var.startswith('TRAINS_')]
+            if not trains_vars:
+                continue
+            alg_var = trains_vars[0].replace('TRAINS_', 'ALG_', 1)
+            if alg_var not in v.vars:
+                v.vars = tuple(list(v.vars) + [alg_var])
+        except:
+            continue
+    for k, v in ENVIRONMENT_SDK_PARAMS.items():
+        try:
+            trains_vars = [var for var in v if var.startswith('TRAINS_')]
+            if not trains_vars:
+                continue
+            alg_var = trains_vars[0].replace('TRAINS_', 'ALG_', 1)
+            if alg_var not in v:
+                ENVIRONMENT_SDK_PARAMS[k] = tuple(list(v) + [alg_var])
+        except:
+            continue
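In effect, every supported `TRAINS_`-prefixed variable gains an `ALG_` alias when backward compatibility is enabled. An illustrative sketch (assuming `TRAINS_API_HOST` is among the supported variables):

```python
# Illustrative only: after backward_compatibility_support() runs, an entry
# whose vars were ('TRAINS_API_HOST',) also accepts the ALG_ alias, i.e.
# ('TRAINS_API_HOST', 'ALG_API_HOST'). Either variable configures the entry.
import os

os.environ['ALG_API_HOST'] = 'http://localhost:8008'  # new-style alias
# equivalent to: os.environ['TRAINS_API_HOST'] = 'http://localhost:8008'
```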
@@ -94,9 +94,20 @@ class ServiceCommandSection(BaseCommandSection):

     def __init__(self, *args, **kwargs):
         super(ServiceCommandSection, self).__init__()
+        kwargs = self._verify_command_states(kwargs)
         self._session = self._get_session(*args, **kwargs)
         self._list_formatter = ListFormatter(self.service)

+    @classmethod
+    def _verify_command_states(cls, kwargs):
+        """
+        Conform and enforce command arguments.
+        This is where you can automatically turn switches on/off based on different states.
+        :param kwargs:
+        :return: kwargs
+        """
+        return kwargs
+
     @staticmethod
     def _get_session(*args, **kwargs):
         return Session(*args, **kwargs)
@@ -44,7 +44,7 @@ def main():
     sentinel = ''
     parse_input = '\n'.join(iter(input, sentinel))
     credentials = None
-    api_host = None
+    api_server = None
     web_server = None
     # noinspection PyBroadException
     try:
@@ -52,11 +52,11 @@ def main():
         if parsed:
             # Take the credentials in raw form or from api section
             credentials = get_parsed_field(parsed, ["credentials"])
-            api_host = get_parsed_field(parsed, ["api_server", "host"])
+            api_server = get_parsed_field(parsed, ["api_server", "host"])
             web_server = get_parsed_field(parsed, ["web_server"])
     except Exception:
         credentials = credentials or None
-        api_host = api_host or None
+        api_server = api_server or None
         web_server = web_server or None

     while not credentials or set(credentials) != {"access_key", "secret_key"}:
@@ -65,63 +65,25 @@ def main():

     print('Detected credentials key="{}" secret="{}"'.format(credentials['access_key'],
                                                              credentials['secret_key'][0:4] + "***"))
-    if api_host:
-        api_host = input_url('API Host', api_host)
+    web_input = True
+    if web_server:
+        host = input_url('WEB Host', web_server)
+    elif api_server:
+        web_input = False
+        host = input_url('API Host', api_server)
     else:
         print(host_description)
-        api_host = input_url('API Host', '')
-    parsed_host = verify_url(api_host)
-
-    if parsed_host.netloc.startswith('demoapp.'):
-        # this is our demo server
-        api_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapp.', 'demoapi.', 1) + parsed_host.path
-        web_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
-        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapp.', 'demofiles.', 1) + parsed_host.path
-    elif parsed_host.netloc.startswith('app.'):
-        # this is our application server
-        api_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('app.', 'api.', 1) + parsed_host.path
-        web_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
-        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('app.', 'files.', 1) + parsed_host.path
-    elif parsed_host.netloc.startswith('demoapi.'):
-        print('{} is the api server, we need the web server. Replacing \'demoapi.\' with \'demoapp.\''.format(
-            parsed_host.netloc))
-        api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
-        web_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapi.', 'demoapp.', 1) + parsed_host.path
-        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapi.', 'demofiles.', 1) + parsed_host.path
-    elif parsed_host.netloc.startswith('api.'):
-        print('{} is the api server, we need the web server. Replacing \'api.\' with \'app.\''.format(
-            parsed_host.netloc))
-        api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
-        web_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('api.', 'app.', 1) + parsed_host.path
-        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('api.', 'files.', 1) + parsed_host.path
-    elif parsed_host.port == 8008:
-        print('Port 8008 is the api port. Replacing 8080 with 8008 for Web application')
-        api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
-        web_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8008', ':8080', 1) + parsed_host.path
-        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8008', ':8081', 1) + parsed_host.path
-    elif parsed_host.port == 8080:
-        api_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8080', ':8008', 1) + parsed_host.path
-        web_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
-        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8080', ':8081', 1) + parsed_host.path
-    else:
-        api_host = ''
-        web_host = ''
-        files_host = ''
-        if not parsed_host.port:
-            print('Host port not detected, do you wish to use the default 8080 port n/[y]? ', end='')
-            replace_port = input().lower()
-            if not replace_port or replace_port == 'y' or replace_port == 'yes':
-                api_host = parsed_host.scheme + "://" + parsed_host.netloc + ':8008' + parsed_host.path
-                web_host = parsed_host.scheme + "://" + parsed_host.netloc + ':8080' + parsed_host.path
-                files_host = parsed_host.scheme + "://" + parsed_host.netloc + ':8081' + parsed_host.path
-            elif not replace_port or replace_port.lower() == 'n' or replace_port.lower() == 'no':
-                web_host = input_host_port("Web", parsed_host)
-                api_host = input_host_port("API", parsed_host)
-                files_host = input_host_port("Files", parsed_host)
-        if not api_host:
-            api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        host = input_url('WEB Host', '')
+
+    parsed_host = verify_url(host)
+    api_host, files_host, web_host = parse_host(parsed_host, allow_input=True)
+
+    # one of these two we configured
+    if not web_input:
+        web_host = input_url('Web Application Host', web_host)
+    else:
+        api_host = input_url('API Host', api_host)

-    web_host = input_url('Web Application Host', web_server if web_server else web_host)
     files_host = input_url('File Store Host', files_host)

     print('\nTRAINS Hosts configuration:\nWeb App: {}\nAPI: {}\nFile Store: {}\n'.format(
@@ -208,13 +170,71 @@ def main():
     print('TRAINS-AGENT setup completed successfully.')


+def parse_host(parsed_host, allow_input=True):
+    if parsed_host.netloc.startswith('demoapp.'):
+        # this is our demo server
+        api_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapp.', 'demoapi.', 1) + parsed_host.path
+        web_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapp.', 'demofiles.',
+                                                                             1) + parsed_host.path
+    elif parsed_host.netloc.startswith('app.'):
+        # this is our application server
+        api_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('app.', 'api.', 1) + parsed_host.path
+        web_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('app.', 'files.', 1) + parsed_host.path
+    elif parsed_host.netloc.startswith('demoapi.'):
+        print('{} is the api server, we need the web server. Replacing \'demoapi.\' with \'demoapp.\''.format(
+            parsed_host.netloc))
+        api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        web_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapi.', 'demoapp.', 1) + parsed_host.path
+        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('demoapi.', 'demofiles.',
+                                                                             1) + parsed_host.path
+    elif parsed_host.netloc.startswith('api.'):
+        print('{} is the api server, we need the web server. Replacing \'api.\' with \'app.\''.format(
+            parsed_host.netloc))
+        api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        web_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('api.', 'app.', 1) + parsed_host.path
+        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace('api.', 'files.', 1) + parsed_host.path
+    elif parsed_host.port == 8008:
+        print('Port 8008 is the api port. Replacing 8080 with 8008 for Web application')
+        api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        web_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8008', ':8080', 1) + parsed_host.path
+        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8008', ':8081', 1) + parsed_host.path
+    elif parsed_host.port == 8080:
+        api_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8080', ':8008', 1) + parsed_host.path
+        web_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+        files_host = parsed_host.scheme + "://" + parsed_host.netloc.replace(':8080', ':8081', 1) + parsed_host.path
+    elif allow_input:
+        api_host = ''
+        web_host = ''
+        files_host = ''
+        if not parsed_host.port:
+            print('Host port not detected, do you wish to use the default 8080 port n/[y]? ', end='')
+            replace_port = input().lower()
+            if not replace_port or replace_port == 'y' or replace_port == 'yes':
+                api_host = parsed_host.scheme + "://" + parsed_host.netloc + ':8008' + parsed_host.path
+                web_host = parsed_host.scheme + "://" + parsed_host.netloc + ':8080' + parsed_host.path
+                files_host = parsed_host.scheme + "://" + parsed_host.netloc + ':8081' + parsed_host.path
+            elif not replace_port or replace_port.lower() == 'n' or replace_port.lower() == 'no':
+                web_host = input_host_port("Web", parsed_host)
+                api_host = input_host_port("API", parsed_host)
+                files_host = input_host_port("Files", parsed_host)
+        if not api_host:
+            api_host = parsed_host.scheme + "://" + parsed_host.netloc + parsed_host.path
+    else:
+        raise ValueError("Could not parse host name")
+
+    return api_host, files_host, web_host
+
+
 def verify_credentials(api_host, credentials):
     """check if the credentials are valid"""
     # noinspection PyBroadException
     try:
         print('Verifying credentials ...')
         if api_host:
-            Session(api_key=credentials['access_key'], secret_key=credentials['secret_key'], host=api_host)
+            Session(api_key=credentials['access_key'], secret_key=credentials['secret_key'], host=api_host,
+                    http_retries_config={"total": 2})
             print('Credentials verified!')
             return True
         else:
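A sketch of how `parse_host` above maps a single entered URL onto the three service endpoints (the hostname is made up, and the import path of `parse_host` is assumed for illustration):

```python
# Illustrative mapping performed by parse_host() above (hostname made up).
from six.moves.urllib.parse import urlparse

api, files, web = parse_host(urlparse('https://app.mycompany.com'), allow_input=False)
# api   == 'https://api.mycompany.com'
# files == 'https://files.mycompany.com'
# web   == 'https://app.mycompany.com'
```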
@@ -256,7 +276,7 @@ def read_manual_credentials():

 def input_url(host_type, host=None):
     while True:
-        print('{} configured to: [{}] '.format(host_type, host), end='')
+        print('{} configured to: {}'.format(host_type, '[{}] '.format(host) if host else ''), end='')
         parse_input = input()
         if host and (not parse_input or parse_input.lower() == 'yes' or parse_input.lower() == 'y'):
             break
@@ -270,11 +290,12 @@ def input_url(host_type, host=None):

 def input_host_port(host_type, parsed_host):
     print('Enter port for {} host '.format(host_type), end='')
     replace_port = input().lower()
-    return parsed_host.scheme + "://" + parsed_host.netloc + (':{}'.format(replace_port) if replace_port else '') + \
-           parsed_host.path
+    return parsed_host.scheme + "://" + parsed_host.netloc + (
+        ':{}'.format(replace_port) if replace_port else '') + parsed_host.path


 def verify_url(parse_input):
     # noinspection PyBroadException
     try:
         if not parse_input.startswith('http://') and not parse_input.startswith('https://'):
             # if we have a specific port, use http prefix, otherwise assume https

@@ -39,8 +39,10 @@ from trains_agent.definitions import (
     PROGRAM_NAME,
     DEFAULT_VENV_UPDATE_URL,
     ENV_TASK_EXECUTE_AS_USER,
-    ENV_K8S_HOST_MOUNT,
-    ENV_TASK_EXTRA_PYTHON_PATH)
+    ENV_DOCKER_HOST_MOUNT,
+    ENV_TASK_EXTRA_PYTHON_PATH,
+    ENV_AGENT_GIT_USER,
+    ENV_AGENT_GIT_PASS)
 from trains_agent.definitions import WORKING_REPOSITORY_DIR, PIP_EXTRA_INDICES
 from trains_agent.errors import APIError, CommandFailedError, Sigterm
 from trains_agent.helper.base import (
@@ -71,6 +73,7 @@ from trains_agent.helper.package.base import PackageManager
 from trains_agent.helper.package.conda_api import CondaAPI
 from trains_agent.helper.package.horovod_req import HorovodRequirement
+from trains_agent.helper.package.external_req import ExternalRequirements
 from trains_agent.helper.package.pip_api.system import SystemPip
 from trains_agent.helper.package.pip_api.venv import VirtualenvPip
 from trains_agent.helper.package.poetry_api import PoetryConfig, PoetryAPI
 from trains_agent.helper.package.pytorch import PytorchRequirement
@@ -98,6 +101,8 @@ from .events import Events

 log = logging.getLogger(__name__)

+DOCKER_ROOT_CONF_FILE = "/root/trains.conf"
+DOCKER_DEFAULT_CONF_FILE = "/root/default_trains.conf"

 @attr.s
 class LiteralScriptManager(object):
@@ -258,7 +263,7 @@ class TaskStopSignal(object):
             )
             return TaskStopReason.stopped

-        if status in self.unexpected_statuses and "worker" not in message:
+        if status in self.unexpected_statuses:  # and "worker" not in message:
             self.command.log("unexpected status change, task will terminate")
             return TaskStopReason.status_changed
@@ -306,6 +311,12 @@ class Worker(ServiceCommandSection):
     # machine status update intervals, seconds
     _machine_update_interval = 30.0

+    # message printed before starting task logging,
+    # it will be parsed by services_mode, to identify internal docker logging start
+    _task_logging_start_message = "Running task '{}'"
+    # last message before passing control to the actual task
+    _task_logging_pass_control_message = "Running task id [{}]:"
+
     @property
     def service(self):
         """ Worker command service endpoint """
@@ -359,6 +370,7 @@ class Worker(ServiceCommandSection):
         self.queues = ()
         self.venv_folder = None  # type: Optional[Text]
         self.package_api = None  # type: PackageManager
+        self.global_package_api = None

         self.is_venv_update = self._session.config.agent.venv_update.enabled
         self.poetry = PoetryConfig(self._session)
@@ -371,8 +383,24 @@ class Worker(ServiceCommandSection):
         self._docker_force_pull = self._session.config.get("agent.docker_force_pull", False)
         self._daemon_foreground = None
         self._standalone_mode = None
+        self._services_mode = None
+        self._force_current_version = None
+
+    @classmethod
+    def _verify_command_states(cls, kwargs):
+        """
+        Conform and enforce command arguments.
+        This is where you can automatically turn switches on/off based on different states.
+        :param kwargs:
+        :return: kwargs
+        """
+        if kwargs.get('services_mode'):
+            kwargs['cpu_only'] = True
+            kwargs['docker'] = kwargs.get('docker') or []
+            kwargs['gpus'] = None
+
+        return kwargs

     def _get_requirements_manager(self, os_override=None, base_interpreter=None):
         requirements_manager = RequirementsManager(
             self._session, base_interpreter=base_interpreter
@@ -411,7 +439,9 @@ class Worker(ServiceCommandSection):
         :param docker: Docker image in which the execution task will run
         """
         # start new process and execute task id
-        print("Running task '{}'".format(task_id))
+        # "Running task '{}'".format(task_id)
+        print(self._task_logging_start_message.format(task_id))

         # set task status to in_progress so we know it was popped from the queue
         try:
             self._session.send_api(tasks_api.StartedRequest(task=task_id, force=True))
@@ -455,7 +485,13 @@ class Worker(ServiceCommandSection):
             docker_arguments = self._docker_arguments

         # Update docker command
-        full_docker_cmd = self.docker_image_func(docker_image=docker_image, docker_arguments=docker_arguments)
+        if self._services_mode:
+            # if this is services mode, give the docker a unique worker id, as it will register itself.
+            full_docker_cmd = self.docker_image_func(
+                worker_id='{}:service:{}'.format(self.worker_id, task_id),
+                docker_image=docker_image, docker_arguments=docker_arguments)
+        else:
+            full_docker_cmd = self.docker_image_func(docker_image=docker_image, docker_arguments=docker_arguments)
         try:
             self._session.send_api(
                 tasks_api.EditRequest(task_id, force=True, execution=dict(
@@ -463,9 +499,15 @@ class Worker(ServiceCommandSection):
         except Exception:
             pass

-        full_docker_cmd[-1] = full_docker_cmd[-1] + 'execute --disable-monitoring {} --id {}'.format(
-            '--standalone-mode' if self._standalone_mode else '', task_id)
+        # if this is services_mode, change the worker_id to a unique name
+        # and use full-monitoring, so it registers itself as a worker for this specific service.
+        # notice, the internal agent will monitor itself once the docker is up and running
+        full_docker_cmd[-1] = full_docker_cmd[-1] + 'execute {} {} --id {}'.format(
+            '--full-monitoring' if self._services_mode else '--disable-monitoring',
+            '--standalone-mode' if self._standalone_mode else '',
+            task_id)
         cmd = Argv(*full_docker_cmd)
         print('Running Docker:\n{}\n'.format(str(cmd)))
     else:
         cmd = worker_args.get_argv_for_command("execute") + (
             "--disable-monitoring",
@@ -519,12 +561,15 @@ class Worker(ServiceCommandSection):
             self.handle_user_abort(task_id)
             status = ExitStatus.interrupted
         finally:
-            self.handle_task_termination(task_id, status, stop_signal_status)
-            # remove temp files after we sent everything to the backend
-            safe_remove_file(temp_stdout_name)
-            safe_remove_file(temp_stderr_name)
-            if self.docker_image_func:
-                shutdown_docker_process(docker_cmd_contains='--id {}\'\"'.format(task_id))
+            if self._services_mode and stop_signal_status is None:
+                print('Service started, docker running in the background')
+            else:
+                self.handle_task_termination(task_id, status, stop_signal_status)
+                # remove temp files after we sent everything to the backend
+                safe_remove_file(temp_stdout_name)
+                safe_remove_file(temp_stderr_name)
+                if self.docker_image_func:
+                    shutdown_docker_process(docker_cmd_contains='--id {}\'\"'.format(task_id))

     def run_tasks_loop(self, queues, worker_params):
         """
@@ -626,8 +671,18 @@ class Worker(ServiceCommandSection):

         self._session.print_configuration()

     @resolve_names
     def daemon(self, queues, log_level, foreground=False, docker=False, detached=False, **kwargs):
+        # if we do not need to create queues, make sure they are valid
+        # match previous behaviour when we validated queue names before everything else
+        queues = self._resolve_queue_names(queues, create_if_missing=kwargs.get('create_queue', False))
+
+        self._standalone_mode = kwargs.get('standalone_mode', False)
+        self._services_mode = kwargs.get('services_mode', False)
+        # must have docker in services_mode
+        if self._services_mode:
+            kwargs = self._verify_command_states(kwargs)
+            docker = docker or kwargs.get('docker')

         # make sure we only have a single instance,
         # also make sure we set worker_id properly and cache folders
         self._singleton()
@@ -635,19 +690,10 @@ class Worker(ServiceCommandSection):
         # check if we have the latest version
         start_check_update_daemon()

-        self._standalone_mode = kwargs.get('standalone_mode', False)
-
         self.check(**kwargs)
         self.log.debug("starting resource monitor thread")
         print("Worker \"{}\" - ".format(self.worker_id), end='')

-        if queues:
-            queues = return_list(queues)
-            queues = [self._resolve_name(q, "queues") for q in queues]
-        else:
-            default_queue = self._session.send_api(queues_api.GetDefaultRequest())
-            queues = [default_queue.id]
-
         queues_info = [
             self._session.send_api(
                 queues_api.GetByIdRequest(queue)
@@ -672,6 +718,22 @@ class Worker(ServiceCommandSection):
             self.set_docker_variables(docker)
         else:
             self.dump_config()
+            # only in non-docker mode do we have to make sure CUDA is set up
+
+            # make sure we have CUDA set if we have --gpus
+            if kwargs.get('gpus') and self._session.config.get('agent.cuda_version', None) in (None, 0, '0'):
+                message = 'Running with GPUs but no CUDA version was detected!\n' \
+                          '\tSet OS environment CUDA_VERSION & CUDNN_VERSION to the correct version\n' \
+                          '\tExample: export CUDA_VERSION=10.1 or (Windows: set CUDA_VERSION=10.1)'
+                if is_conda(self._session.config):
+                    self._unregister(queues)
+                    safe_remove_file(self.temp_config_path)
+                    raise ValueError(message)
+                else:
+                    warning(message+'\n')
+
+        if self._services_mode:
+            print('Trains-Agent running in services mode')

         self._daemon_foreground = foreground
         if not foreground:
@@ -695,6 +757,8 @@ class Worker(ServiceCommandSection):
                 # in detached mode
                 # fully detach stdin/stdout/stderr and leave main process, running in the background
                 daemonize_process(out_file.fileno())
+                # make sure we update the singleton lock file to the new pid
+                Singleton.update_pid_file()
                 # reprint headers to std file (we are now inside the daemon process)
                 print("Worker \"{}\" :".format(self.worker_id))
                 self._session.print_configuration()
@@ -757,8 +821,12 @@ class Worker(ServiceCommandSection):
     def dump_config(self, config=None):
         def to_json(config):
             return json.dumps(config.as_plain_ordered_dict(), cls=HOCONEncoder, indent=4)
-        Path(self.temp_config_path).write_text(six.text_type(self._session.to_json()
-                                                             if config is None else to_json(config)))
+        try:
+            Path(self.temp_config_path).write_text(
+                six.text_type(self._session.to_json() if config is None else to_json(config)))
+        except Exception:
+            return False
+        return True

     def _log_command_output(
         self,
@@ -775,21 +843,22 @@ class Worker(ServiceCommandSection):
         def _print_file(file_path, prev_line_count):
             with open(file_path, "rb") as f:
                 binary_text = f.read()
-                if not binary_text:
-                    return []
-                # skip the previously printed lines,
-                blines = binary_text.split(b'\n')[prev_line_count:]
-                if not blines:
-                    return blines
-                return decode_binary_lines(blines if blines[-1] else blines[:-1])
+            if not binary_text:
+                return []
+            # skip the previously printed lines,
+            blines = binary_text.split(b'\n')[prev_line_count:]
+            if not blines:
+                return blines
+            return decode_binary_lines(blines if blines[-1] else blines[:-1])

         stdout = open(stdout_path, "wt")
         stderr = open(stderr_path, "wt") if stderr_path else stdout
         stdout_line_count, stdout_last_lines = 0, []
         stderr_line_count, stderr_last_lines = 0, []
+        service_mode_internal_agent_started = None
+        stopping = False
+        status = None
         try:
-            status = None
-            stopping = False
             _last_machine_update_ts = time()
             stop_reason = None
@@ -824,13 +893,24 @@ class Worker(ServiceCommandSection):
                 stderr.flush()

                 # get diff from previous poll
+                printed_lines = _print_file(stdout_path, stdout_line_count)
+                if self._services_mode and not stopping and not status:
+                    # if the internal agent started, we stop logging, it will take over logging.
+                    # if the internal agent started running the task itself, it will return status==0,
+                    # then we can quit the monitoring loop of this process
+                    printed_lines, service_mode_internal_agent_started, status = self._check_if_internal_agent_started(
+                        printed_lines, service_mode_internal_agent_started, task_id)
+                    if status is not None:
+                        stop_reason = 'Service started'
+
                 stdout_line_count += self.send_logs(
-                    task_id, _print_file(stdout_path, stdout_line_count)
+                    task_id, printed_lines
                 )
                 if stderr_path:
                     stderr_line_count += self.send_logs(
                         task_id, _print_file(stderr_path, stderr_line_count)
                     )

         except subprocess.CalledProcessError as ex:
             # non zero return code
             stop_reason = 'Exception occurred'
@@ -846,6 +926,11 @@ class Worker(ServiceCommandSection):
             stop_reason = 'Exception occurred'
             status = -1

+        # if running in services mode, keep the file open
+        # in case the docker was so quick it started and finished, check the stop reason
+        if self._services_mode and service_mode_internal_agent_started and stop_reason == 'Service started':
+            return None, None
+
         stdout.close()
         if stderr_path:
             stderr.close()
@@ -861,6 +946,19 @@ class Worker(ServiceCommandSection):

         return status, stop_reason

+    def _check_if_internal_agent_started(self, printed_lines, service_mode_internal_agent_started, task_id):
+        log_start_msg = self._task_logging_start_message.format(task_id)
+        log_control_end_msg = self._task_logging_pass_control_message.format(task_id)
+        filter_lines = printed_lines if not service_mode_internal_agent_started else []
+        for i, line in enumerate(printed_lines):
+            if not service_mode_internal_agent_started and line.startswith(log_start_msg):
+                service_mode_internal_agent_started = True
+                filter_lines = printed_lines[:i+1]
+            elif line.startswith(log_control_end_msg):
+                return filter_lines, service_mode_internal_agent_started, 0
+
+        return filter_lines, service_mode_internal_agent_started, None
+
     def send_logs(self, task_id, lines, level="DEBUG"):
         """
         Send output lines as log events to backend
@@ -932,6 +1030,8 @@ class Worker(ServiceCommandSection):
         target=None,
         python_version=None,
         docker=None,
+        entry_point=None,
+        install_globally=False,
         **_
     ):
         if not task_id:
@@ -940,7 +1040,7 @@ class Worker(ServiceCommandSection):
         self._session.print_configuration()

         if docker is not False and docker is not None:
-            return self._build_docker(docker, target, task_id)
+            return self._build_docker(docker, target, task_id, entry_point)

         current_task = self._session.api_client.tasks.get_by_id(task_id)
@@ -964,7 +1064,10 @@ class Worker(ServiceCommandSection):
             requested_python_version=python_version)

         if self._default_pip:
-            self.package_api.install_packages(*self._default_pip)
+            if install_globally and self.global_package_api:
+                self.global_package_api.install_packages(*self._default_pip)
+            else:
+                self.package_api.install_packages(*self._default_pip)

         directory, vcs, repo_info = self.get_repo_info(execution, current_task, venv_folder.as_posix())
@@ -974,6 +1077,7 @@ class Worker(ServiceCommandSection):
             requirements_manager=requirements_manager,
             cached_requirements=requirements,
             cwd=vcs.location if vcs and vcs.location else directory,
+            package_api=self.global_package_api if install_globally else None,
         )
         freeze = self.freeze_task_environment(requirements_manager=requirements_manager)
         script_dir = directory
@@ -987,18 +1091,18 @@ class Worker(ServiceCommandSection):
             print("No freeze information available")

         print("Virtual environment: {}".format(venv_folder / 'bin'))
-        print("Source code: {}".format(repo_info.root))
+        print("Source code: {}".format(repo_info.root if repo_info else execution.entry_point))
         print("Entry point: {}".format(Path(script_dir) / execution.entry_point))

         return 0

-    def _build_docker(self, docker, target, task_id):
+    def _build_docker(self, docker, target, task_id, entry_point=None, standalone_mode=True):

         self.temp_config_path = safe_mkstemp(
             suffix=".cfg", prefix=".trains_agent.", text=True, name_only=True
         )
         if not target:
-            ValueError("--target container name must be provided for docker build")
+            target = "task_id_{}".format(task_id)

         temp_config, docker_image_func = self.get_docker_config_cmd(docker)
         self.dump_config(temp_config)
@@ -1015,15 +1119,19 @@ class Worker(ServiceCommandSection):
|
||||
full_docker_cmd = self.docker_image_func(docker_image=task_docker_cmd[0],
|
||||
docker_arguments=task_docker_cmd[1:])
|
||||
else:
|
||||
print('running Task {} inside default docker image: {} {}\n'.format(
|
||||
print('Building Task {} inside default docker image: {} {}\n'.format(
|
||||
task_id, self._docker_image, self._docker_arguments or ''))
|
||||
full_docker_cmd = self.docker_image_func(docker_image=self._docker_image,
|
||||
docker_arguments=self._docker_arguments)
|
||||
end_of_build_marker = "build.done=true"
|
||||
docker_cmd_suffix = ' build --id {} ; ' \
|
||||
'echo "" >> /root/trains.conf ; ' \
|
||||
'echo {} >> /root/trains.conf ; ' \
|
||||
'bash'.format(task_id, end_of_build_marker)
|
||||
docker_cmd_suffix = ' build --id {task_id} --install-globally; ' \
|
||||
'echo "" >> {conf_file} ; ' \
|
||||
'echo {end_of_build_marker} >> {conf_file} ; ' \
|
||||
'bash'.format(
|
||||
task_id=task_id,
|
||||
end_of_build_marker=end_of_build_marker,
|
||||
conf_file=DOCKER_ROOT_CONF_FILE
|
||||
)
|
||||
full_docker_cmd[-1] = full_docker_cmd[-1] + docker_cmd_suffix
|
||||
cmd = Argv(*full_docker_cmd)
|
||||
|
||||
@@ -1053,9 +1161,22 @@ class Worker(ServiceCommandSection):
|
||||
print("Error: cannot locate docker for storage")
|
||||
return
|
||||
|
||||
if entry_point == "clone_task" or entry_point == "reuse_task":
|
||||
change = 'ENTRYPOINT if [ ! -s "{trains_conf}" ] ; then ' \
|
||||
'cp {default_trains_conf} {trains_conf} ; ' \
|
||||
' fi ; trains-agent execute --id {task_id} --standalone-mode {clone}'.format(
|
||||
default_trains_conf=DOCKER_DEFAULT_CONF_FILE,
|
||||
trains_conf=DOCKER_ROOT_CONF_FILE,
|
||||
task_id=task_id,
|
||||
clone=("--clone" if entry_point == "clone_task" else ""),
|
||||
)
|
||||
else:
|
||||
change = 'ENTRYPOINT bash'
|
||||
|
||||
print('Committing docker container to: {}'.format(target))
|
||||
print(commit_docker(container_name=target, docker_id=docker_id))
|
||||
print(commit_docker(container_name=target, docker_id=docker_id, apply_change=change))
|
||||
shutdown_docker_process(docker_id=docker_id)
|
||||
|
||||
return
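For reference, the `--entry-point` support above bakes a startup command into the committed image via docker commit's `--change` flag. A sketch of the Dockerfile instruction the code renders; the two conf-file paths here are placeholders, since the real `DOCKER_ROOT_CONF_FILE` / `DOCKER_DEFAULT_CONF_FILE` constants are defined in `definitions.py` and are not shown in this diff:

```python
# Hypothetical values standing in for the constants from definitions.py.
DOCKER_ROOT_CONF_FILE = '/root/trains.conf'
DOCKER_DEFAULT_CONF_FILE = '/root/default_trains.conf'

change = 'ENTRYPOINT if [ ! -s "{trains_conf}" ] ; then ' \
         'cp {default_trains_conf} {trains_conf} ; ' \
         ' fi ; trains-agent execute --id {task_id} --standalone-mode {clone}'.format(
    default_trains_conf=DOCKER_DEFAULT_CONF_FILE,
    trains_conf=DOCKER_ROOT_CONF_FILE,
    task_id='abc123',  # hypothetical task id
    clone='--clone',   # "clone_task": run a fresh copy; empty string for "reuse_task"
)
print(change)
# ENTRYPOINT if [ ! -s "/root/trains.conf" ] ; then cp /root/default_trains.conf
# /root/trains.conf ;  fi ; trains-agent execute --id abc123 --standalone-mode --clone
```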

@resolve_names
@@ -1073,6 +1194,9 @@ class Worker(ServiceCommandSection):
clone=False,
**_
):

self._standalone_mode = standalone_mode

if not task_id:
raise CommandFailedError("Worker execute must have valid task id")

@@ -1080,8 +1204,12 @@ class Worker(ServiceCommandSection):
current_task = self._session.api_client.tasks.get_by_id(task_id)
if not current_task.id:
pass
except Exception:
raise ValueError("Could not find task id={}".format(task_id))
except Exception as ex:
raise ValueError(
"Could not find task id={} (for host: {})\nException: {}".format(
task_id, self._session.config.get("api.host", ""), ex
)
)

if clone:
try:
@@ -1182,13 +1310,25 @@ class Worker(ServiceCommandSection):
script_dir = (directory if isinstance(directory, Path) else Path(directory)).absolute().as_posix()

# run code
print("Running task id [%s]:" % current_task.id)
# print("Running task id [%s]:" % current_task.id)
print(self._task_logging_pass_control_message.format(current_task.id))
extra = ['-u', ]
if optimization:
extra.append(
WorkerParams(optimization=optimization).get_optimization_flag()
)
extra.append(execution.entry_point)
# check if this is a module load, then load it.
try:
if current_task.script.binary and current_task.script.binary.startswith('python') and \
execution.entry_point and execution.entry_point.split()[0].strip() == '-m':
# we need to split it
import shlex
extra.extend(shlex.split(execution.entry_point))
else:
extra.append(execution.entry_point)
except:
extra.append(execution.entry_point)

command = self.package_api.get_python_command(extra)
print("[{}]$ {}".format(execution.working_dir, command.pretty()))
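The module-load branch above matters because an entry point such as `-m package.module` must be tokenized before it is appended to the interpreter's argv. A quick illustration (the entry-point value is hypothetical):

```python
import shlex

entry_point = '-m my_package.train --epochs 10'  # hypothetical module-style entry point

# Wrong: passed as a single argv token, python would look for a file literally
# named "-m my_package.train --epochs 10"
argv_wrong = ['python', '-u', entry_point]

# Right: split into tokens so -m is recognized as the module switch
argv_right = ['python', '-u'] + shlex.split(entry_point)
print(argv_right)  # ['python', '-u', '-m', 'my_package.train', '--epochs', '10']
```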

@@ -1363,6 +1503,7 @@ class Worker(ServiceCommandSection):

def _get_repo_info(self, execution, task, venv_folder):
try:
self._session.config.put("agent.standalone_mode", self._standalone_mode)
vcs, repo_info = clone_repository_cached(
session=self._session,
execution=execution,
@@ -1508,7 +1649,14 @@ class Worker(ServiceCommandSection):
return None

def install_requirements(
self, execution, repo_info, requirements_manager, cached_requirements=None, cwd=None,
self, execution, repo_info, requirements_manager, cached_requirements=None, cwd=None, package_api=None
):
return self.install_requirements_for_package_api(execution, repo_info, requirements_manager,
cached_requirements=cached_requirements, cwd=cwd,
package_api=package_api if package_api else self.package_api)

def install_requirements_for_package_api(
self, execution, repo_info, requirements_manager, cached_requirements=None, cwd=None, package_api=None,
):
# type: (ExecutionInfo, RepoInfo, RequirementsManager, Optional[dict]) -> None
"""
@@ -1520,27 +1668,28 @@ class Worker(ServiceCommandSection):
:param repo_info: repository information
:param requirements_manager: requirements manager for task
:param cached_requirements: cached requirements from previous run
:param package_api: package_api to be used when installing requirements
"""
if self.package_api:
self.package_api.cwd = cwd
if package_api:
package_api.cwd = cwd
api = self._install_poetry_requirements(repo_info)
if api:
self.package_api = api
package_api = api
return

self.package_api.upgrade_pip()
self.package_api.set_selected_package_manager()
package_api.upgrade_pip()
package_api.set_selected_package_manager()
# always install cython,
# if we have a specific version in the requirements,
# the CythonRequirement(SimpleSubstitution) will reinstall cython with the specific version
if not self.is_conda:
self.package_api.out_of_scope_install_package('Cython')
package_api.out_of_scope_install_package('Cython')

cached_requirements_failed = False
if cached_requirements and ('pip' in cached_requirements or 'conda' in cached_requirements):
self.log("Found task requirements section, trying to install")
try:
self.package_api.load_requirements(cached_requirements)
package_api.load_requirements(cached_requirements)
except Exception as e:
self.log_traceback(e)
cached_requirements_failed = True
@@ -1576,7 +1725,7 @@ class Worker(ServiceCommandSection):
temp_file.write(new_requirements)
temp_file.flush()
# close the file before reading in install_from_file for Windows compatibility
self.package_api.install_from_file(temp_file.name)
package_api.install_from_file(temp_file.name)
except Exception as e:
print('ERROR: Failed installing requirements.txt:\n{}'.format(requirements_text))
raise e
@@ -1584,7 +1733,7 @@ class Worker(ServiceCommandSection):
if self._session.debug_mode and temp_file:
rm_file(temp_file.name)
# call post installation callback
requirements_manager.post_install()
requirements_manager.post_install(self._session)
# mark as successful installation
repo_requirements_installed = True
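The pattern behind this refactor: every `self.package_api.*` call becomes a call on an explicit `package_api` argument, so the same install path can target either the task's virtual environment or the system interpreter when `--install-globally` is used. A compressed sketch of the dispatch; class and method names follow the diff, the bodies are stand-ins:

```python
class WorkerSketch:
    def __init__(self, package_api, global_package_api=None):
        self.package_api = package_api                 # pip bound to the task venv
        self.global_package_api = global_package_api   # pip of the system interpreter

    def install_requirements(self, install_globally=False, **kwargs):
        # pick the target interpreter once...
        api = self.global_package_api if install_globally and self.global_package_api \
            else self.package_api
        # ...then run the shared install logic against it
        return self.install_requirements_for_package_api(package_api=api, **kwargs)

    def install_requirements_for_package_api(self, package_api=None, **kwargs):
        return package_api  # stand-in for upgrade_pip() / load_requirements() / ...

print(WorkerSketch('venv-pip', 'system-pip').install_requirements(install_globally=True))
# system-pip
```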

@@ -1748,7 +1897,8 @@ class Worker(ServiceCommandSection):
base_interpreter=executable_name
)

rm_tree(normalize_path(venv_dir, WORKING_REPOSITORY_DIR))
if not standalone_mode:
rm_tree(normalize_path(venv_dir, WORKING_REPOSITORY_DIR))
package_manager_params = dict(
session=self._session,
python=executable_version_suffix if self.is_conda else executable_name,
@@ -1756,7 +1906,17 @@ class Worker(ServiceCommandSection):
requirements_manager=requirements_manager,
)

if not self.is_conda:
global_package_manager_params = dict(
interpreter=executable_name,
session=self._session,
)

if not self.is_conda and standalone_mode:
# pip with standalone mode
get_pip = partial(VirtualenvPip, **package_manager_params)
self.package_api = get_pip()
self.global_package_api = SystemPip(**global_package_manager_params)
elif not self.is_conda:
if self.is_venv_update:
self.package_api = VenvUpdateAPI(
url=self._session.config["agent.venv_update.url"] or DEFAULT_VENV_UPDATE_URL,
@@ -1767,6 +1927,7 @@ class Worker(ServiceCommandSection):
if first_time:
self.package_api.remove()
self.package_api.create()
self.global_package_api = SystemPip(**global_package_manager_params)
elif standalone_mode:
# conda with standalone mode
get_conda = partial(CondaAPI, **package_manager_params)
@@ -1812,12 +1973,15 @@ class Worker(ServiceCommandSection):
print(requirements_manager.replace(contents))

def get_docker_config_cmd(self, docker_args):
def docker_cmd_functor(default_kwargs, **kwargs):
def docker_cmd_functor(default_kwargs, temp_config, **kwargs):
# Make sure we have created the configuration file for the executor
if not self.dump_config(temp_config):
self.log.warning('Could not update docker configuration file {}'.format(self.temp_config_path))
args = deepcopy(default_kwargs)
args.update(kwargs)
return self._get_docker_cmd(**args)

docker_image = str(os.environ.get("TRAINS_DOCKER_IMAGE") or os.environ.get("ALG_DOCKER_IMAGE") or
docker_image = str(os.environ.get("TRAINS_DOCKER_IMAGE") or
self._session.config.get("agent.default_docker.image", "nvidia/cuda")) \
if not docker_args else docker_args[0]
docker_arguments = docker_image.split(' ') if docker_image else []
@@ -1833,7 +1997,7 @@ class Worker(ServiceCommandSection):
python_version = 'python'+python_version
print("Running in Docker {} mode (v19.03 and above) - using default docker image: {} running {}\n".format(
'*standalone*' if self._standalone_mode else '', docker_image, python_version))
temp_config = self._session.config.copy()
temp_config = deepcopy(self._session.config)
mounted_cache_dir = '/root/.trains/cache'
mounted_pip_dl_dir = '/root/.trains/pip-download-cache'
mounted_vcs_cache = '/root/.trains/vcs-cache'
@@ -1850,6 +2014,8 @@ class Worker(ServiceCommandSection):
temp_config.put("agent.cuda_version", "")
temp_config.put("agent.cudnn_version", "")
temp_config.put("agent.venvs_dir", mounted_venv_dir)
temp_config.put("agent.git_user", (ENV_AGENT_GIT_USER.get() or self._session.config.get("agent.git_user", None)))
temp_config.put("agent.git_pass", (ENV_AGENT_GIT_PASS.get() or self._session.config.get("agent.git_pass", None)))

host_apt_cache = Path(os.path.expandvars(self._session.config.get(
"agent.docker_apt_cache", '~/.trains/apt-cache'))).expanduser().as_posix()
@@ -1914,7 +2080,7 @@ class Worker(ServiceCommandSection):
host_pip_dl=host_pip_dl, mounted_pip_dl=mounted_pip_dl_dir,
host_vcs_cache=host_vcs_cache, mounted_vcs_cache=mounted_vcs_cache,
standalone_mode=self._standalone_mode, force_current_version=self._force_current_version)
return temp_config, partial(docker_cmd_functor, docker_cmd)
return temp_config, partial(docker_cmd_functor, docker_cmd, temp_config)
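The returned functor pre-binds both the default keyword set and the per-run `temp_config` with `functools.partial`, so the caller can later override individual docker arguments without rebuilding the configuration. A minimal sketch of the pattern:

```python
from copy import deepcopy
from functools import partial

def docker_cmd_functor(default_kwargs, temp_config, **kwargs):
    # the real code re-dumps temp_config to disk here, then merges
    # per-call overrides on top of the pre-bound defaults
    args = deepcopy(default_kwargs)
    args.update(kwargs)
    return args

defaults = {'docker_image': 'nvidia/cuda', 'gpu_devices': 'all'}
image_func = partial(docker_cmd_functor, defaults, {'agent.cuda_version': ''})

print(image_func(docker_image='python:3.8'))
# {'docker_image': 'python:3.8', 'gpu_devices': 'all'}
```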

@staticmethod
def _get_docker_cmd(worker_id, docker_image, docker_arguments,
@@ -1946,6 +2112,8 @@ class Worker(ServiceCommandSection):
base_cmd += ['--gpus', 'device='+gpu_devices, ]
# We are using --gpu, so we should not pass NVIDIA_VISIBLE_DEVICES, I think.
# base_cmd += ['-e', 'NVIDIA_VISIBLE_DEVICES=' + gpu_devices, ]
elif gpu_devices.strip() == 'none':
dockers_nvidia_visible_devices = gpu_devices

if docker_arguments:
docker_arguments = list(docker_arguments) \
@@ -1958,7 +2126,7 @@ class Worker(ServiceCommandSection):
base_cmd += [str(a) for a in extra_docker_arguments if a]

# check if running inside a kubernetes
if os.environ.get('KUBERNETES_SERVICE_HOST') and os.environ.get('KUBERNETES_PORT'):
if ENV_DOCKER_HOST_MOUNT.get() or (os.environ.get('KUBERNETES_SERVICE_HOST') and os.environ.get('KUBERNETES_PORT')):
# map network to sibling docker, unless we have other network argument
if not any(a.strip().startswith('--network') for a in base_cmd):
try:
@@ -1970,9 +2138,9 @@ class Worker(ServiceCommandSection):
base_cmd += ['-e', 'NVIDIA_VISIBLE_DEVICES={}'.format(dockers_nvidia_visible_devices)]

# check if we need to map host folders
if os.environ.get(ENV_K8S_HOST_MOUNT):
if ENV_DOCKER_HOST_MOUNT.get():
# expect TRAINS_AGENT_K8S_HOST_MOUNT = '/mnt/host/data:/root/.trains'
k8s_node_mnt, _, k8s_pod_mnt = os.environ.get(ENV_K8S_HOST_MOUNT).partition(':')
k8s_node_mnt, _, k8s_pod_mnt = ENV_DOCKER_HOST_MOUNT.get().partition(':')
# search and replace all the host folders with the k8s
host_mounts = [host_apt_cache, host_pip_cache, host_pip_dl, host_cache, host_vcs_cache]
for i, m in enumerate(host_mounts):
@@ -1986,6 +2154,7 @@ class Worker(ServiceCommandSection):
# copy the configuration file into the mounted folder
new_conf_file = os.path.join(k8s_pod_mnt, '.trains_agent.{}.cfg'.format(quote(worker_id, safe="")))
try:
rm_tree(new_conf_file)
rm_file(new_conf_file)
shutil.copy(conf_file, new_conf_file)
conf_file = new_conf_file.replace(k8s_pod_mnt, k8s_node_mnt)
@@ -2014,6 +2183,15 @@ class Worker(ServiceCommandSection):
except:
pass

if os.environ.get('FORCE_LOCAL_TRAINS_AGENT_WHEEL'):
local_wheel = os.path.expanduser(os.environ.get('FORCE_LOCAL_TRAINS_AGENT_WHEEL'))
docker_wheel = str(Path('/tmp') / Path(local_wheel).name)
base_cmd += ['-v', local_wheel + ':' + docker_wheel]
trains_agent_wheel = '\"{}\"'.format(docker_wheel)
else:
# trains-agent{specify_version}
trains_agent_wheel = 'trains-agent{specify_version}'.format(specify_version=specify_version)

if not standalone_mode:
update_scheme += \
"echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/docker-clean ; " \
@@ -2021,13 +2199,13 @@ class Worker(ServiceCommandSection):
"apt-get update ; " \
"apt-get install -y git libsm6 libxext6 libxrender-dev libglib2.0-0 {python_single_digit}-pip ; " \
"{python} -m pip install -U \"pip{pip_version}\" ; " \
"{python} -m pip install -U trains-agent{specify_version} ; ".format(
"{python} -m pip install -U {trains_agent_wheel} ; ".format(
python_single_digit=python_version.split('.')[0],
python=python_version, pip_version=PackageManager.get_pip_version(),
specify_version=specify_version)
trains_agent_wheel=trains_agent_wheel)

base_cmd += (
['-v', conf_file+':/root/trains.conf'] +
['-v', conf_file+':'+DOCKER_ROOT_CONF_FILE] +
(['-v', host_git_credentials+':/root/.git-credentials'] if host_git_credentials else []) +
(['-v', host_ssh_cache+':/root/.ssh'] if host_ssh_cache else []) +
(['-v', host_apt_cache+':/var/cache/apt/archives'] if host_apt_cache else []) +
@@ -2038,6 +2216,7 @@ class Worker(ServiceCommandSection):
['--rm', docker_image, 'bash', '-c',
update_scheme +
extra_shell_script +
"cp {} {} ; ".format(DOCKER_ROOT_CONF_FILE, DOCKER_DEFAULT_CONF_FILE) +
"NVIDIA_VISIBLE_DEVICES={nv_visible} {python} -u -m trains_agent ".format(
nv_visible=dockers_nvidia_visible_devices, python=python_version)
])
@@ -2058,8 +2237,11 @@ class Worker(ServiceCommandSection):

def set_uid(self, user_uid, user_gid):
from pwd import getpwnam
self.uid = getpwnam(user_uid).pw_uid
self.gid = getpwnam(user_gid).pw_gid
try:
self.uid = getpwnam(user_uid).pw_uid
self.gid = getpwnam(user_gid).pw_gid
except Exception:
raise ValueError("Could not find requested user uid={} gid={}".format(user_uid, user_gid))

def _change_uid(self):
os.setgid(self.gid)
@@ -2068,18 +2250,13 @@ class Worker(ServiceCommandSection):
# create a home folder for our user
trains_agent_home = 'trains_agent_home{}'.format('.'+str(Singleton.get_slot()) if Singleton.get_slot() else '')
try:
home_folder = (Path('/') / trains_agent_home).absolute().as_posix()
home_folder = '/trains_agent_home'
rm_tree(home_folder)
Path(home_folder).mkdir(parents=True, exist_ok=True)
except:
try:
home_folder = (Path.home().parent / trains_agent_home).absolute().as_posix()
rm_tree(home_folder)
Path(home_folder).mkdir(parents=True, exist_ok=True)
except:
home_folder = (Path(gettempdir()) / trains_agent_home).absolute().as_posix()
rm_tree(home_folder)
Path(home_folder).mkdir(parents=True, exist_ok=True)
home_folder = '/home/trains_agent_home'
rm_tree(home_folder)
Path(home_folder).mkdir(parents=True, exist_ok=True)

# move our entire venv into the new home
venv_folder = venv_folder.as_posix()
@@ -2136,14 +2313,37 @@ class Worker(ServiceCommandSection):
else:
worker_name = '{}:cpu'.format(worker_name)

self.worker_id, worker_slot = Singleton.register_instance(unique_worker_id=worker_id, worker_name=worker_name,
api_client=self._session.api_client)
# if we are running in services mode, we allow double register since
# docker-compose will kill instances before they cleanup
self.worker_id, worker_slot = Singleton.register_instance(
unique_worker_id=worker_id, worker_name=worker_name, api_client=self._session.api_client,
allow_double=bool(self._services_mode) and bool(ENV_DOCKER_HOST_MOUNT.get()))

if self.worker_id is None:
error('Instance with the same WORKER_ID [{}] is already running'.format(worker_id))
exit(1)
# update folders based on free slot
self._session.create_cache_folders(slot_index=worker_slot)

def _resolve_queue_names(self, queues, create_if_missing=False):
if not queues:
default_queue = self._session.send_api(queues_api.GetDefaultRequest())
return [default_queue.id]

queues = return_list(queues)
if not create_if_missing:
return [self._resolve_name(q.name, "queues") for q in queues]

queue_ids = []
for q in queues:
try:
q_id = self._resolve_name(q.name, "queues")
except:
self._session.send_api(queues_api.CreateRequest(name=q.name))
q_id = self._resolve_name(q.name, "queues")
queue_ids.append(q_id)
return queue_ids
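The `--create-queue` resolution above follows a try-then-create shape: resolve each queue name, and only on failure create the queue and resolve again. A sketch with a stubbed registry standing in for the backend `queues_api` calls:

```python
# Stubbed registry standing in for the backend queues API.
_queues = {'default': 'q-001'}

def resolve_name(name):
    return _queues[name]  # raises KeyError when the queue is unknown

def create_queue(name):
    _queues[name] = 'q-{:03d}'.format(len(_queues) + 1)

def resolve_queue_names(names, create_if_missing=False):
    if not create_if_missing:
        return [resolve_name(n) for n in names]
    ids = []
    for n in names:
        try:
            ids.append(resolve_name(n))
        except KeyError:
            create_queue(n)           # resolution failed: create, then resolve again
            ids.append(resolve_name(n))
    return ids

print(resolve_queue_names(['default', 'gpu-queue'], create_if_missing=True))
# ['q-001', 'q-002']
```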


if __name__ == "__main__":
pass

@@ -55,23 +55,23 @@ class EnvironmentConfig(object):


ENVIRONMENT_CONFIG = {
"api.api_server": EnvironmentConfig("TRAINS_API_HOST", "ALG_API_HOST"),
"api.api_server": EnvironmentConfig("TRAINS_API_HOST", ),
"api.credentials.access_key": EnvironmentConfig(
"TRAINS_API_ACCESS_KEY", "ALG_API_ACCESS_KEY"
"TRAINS_API_ACCESS_KEY",
),
"api.credentials.secret_key": EnvironmentConfig(
"TRAINS_API_SECRET_KEY", "ALG_API_SECRET_KEY"
"TRAINS_API_SECRET_KEY",
),
"agent.worker_name": EnvironmentConfig("TRAINS_WORKER_NAME", "ALG_WORKER_NAME"),
"agent.worker_id": EnvironmentConfig("TRAINS_WORKER_ID", "ALG_WORKER_ID"),
"agent.worker_name": EnvironmentConfig("TRAINS_WORKER_NAME", ),
"agent.worker_id": EnvironmentConfig("TRAINS_WORKER_ID", ),
"agent.cuda_version": EnvironmentConfig(
"TRAINS_CUDA_VERSION", "ALG_CUDA_VERSION", "CUDA_VERSION"
"TRAINS_CUDA_VERSION", "CUDA_VERSION"
),
"agent.cudnn_version": EnvironmentConfig(
"TRAINS_CUDNN_VERSION", "ALG_CUDNN_VERSION", "CUDNN_VERSION"
"TRAINS_CUDNN_VERSION", "CUDNN_VERSION"
),
"agent.cpu_only": EnvironmentConfig(
"TRAINS_CPU_ONLY", "ALG_CPU_ONLY", "CPU_ONLY", type=bool
"TRAINS_CPU_ONLY", "CPU_ONLY", type=bool
),
"sdk.aws.s3.key": EnvironmentConfig("AWS_ACCESS_KEY_ID"),
"sdk.aws.s3.secret": EnvironmentConfig("AWS_SECRET_ACCESS_KEY"),
@@ -81,15 +81,15 @@ ENVIRONMENT_CONFIG = {
"sdk.google.storage.credentials_json": EnvironmentConfig("GOOGLE_APPLICATION_CREDENTIALS"),
}

CONFIG_FILE_ENV = EnvironmentConfig("ALG_CONFIG_FILE")

ENVIRONMENT_SDK_PARAMS = {
"task_id": ("TRAINS_TASK_ID", "ALG_TASK_ID"),
"config_file": ("TRAINS_CONFIG_FILE", "ALG_CONFIG_FILE", "TRAINS_CONFIG_FILE"),
"log_level": ("TRAINS_LOG_LEVEL", "ALG_LOG_LEVEL"),
"log_to_backend": ("TRAINS_LOG_TASK_TO_BACKEND", "ALG_LOG_TASK_TO_BACKEND"),
"task_id": ("TRAINS_TASK_ID", ),
"config_file": ("TRAINS_CONFIG_FILE", ),
"log_level": ("TRAINS_LOG_LEVEL", ),
"log_to_backend": ("TRAINS_LOG_TASK_TO_BACKEND", ),
}

ENVIRONMENT_BACKWARD_COMPATIBLE = EnvironmentConfig("TRAINS_AGENT_ALG_ENV", type=bool)

VIRTUAL_ENVIRONMENT_PATH = {
"python2": normalize_path(CONFIG_DIR, "py2venv"),
"python3": normalize_path(CONFIG_DIR, "py3venv"),
@@ -113,16 +113,18 @@ HTTP_HEADERS = {
METADATA_EXTENSION = ".json"

DEFAULT_VENV_UPDATE_URL = (
"https://raw.githubusercontent.com/Yelp/venv-update/v3.2.2/venv_update.py"
"https://raw.githubusercontent.com/Yelp/venv-update/v3.2.4/venv_update.py"
)
WORKING_REPOSITORY_DIR = "task_repository"
DEFAULT_VCS_CACHE = normalize_path(CONFIG_DIR, "vcs-cache")
PIP_EXTRA_INDICES = [
]
DEFAULT_PIP_DOWNLOAD_CACHE = normalize_path(CONFIG_DIR, "pip-download-cache")
ENV_AGENT_GIT_USER = EnvironmentConfig('TRAINS_AGENT_GIT_USER')
ENV_AGENT_GIT_PASS = EnvironmentConfig('TRAINS_AGENT_GIT_PASS')
ENV_TASK_EXECUTE_AS_USER = 'TRAINS_AGENT_EXEC_USER'
ENV_TASK_EXTRA_PYTHON_PATH = 'TRAINS_AGENT_EXTRA_PYTHON_PATH'
ENV_K8S_HOST_MOUNT = 'TRAINS_AGENT_K8S_HOST_MOUNT'
ENV_DOCKER_HOST_MOUNT = EnvironmentConfig('TRAINS_AGENT_K8S_HOST_MOUNT', 'TRAINS_AGENT_DOCKER_HOST_MOUNT')
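`EnvironmentConfig` takes several variable names and returns the value of the first one that is set, which is how `ENV_DOCKER_HOST_MOUNT` stays backward compatible with the old K8S-only name. A sketch of that lookup under the assumption that the real class behaves as a first-match resolver (it also handles type conversion and other details not shown here):

```python
import os

class EnvironmentConfig:
    def __init__(self, *names, type=str):
        self.names, self.type = names, type

    def get(self):
        for name in self.names:
            value = os.environ.get(name)
            if value is not None:
                return self.type(value)  # first defined variable wins
        return None

ENV_DOCKER_HOST_MOUNT = EnvironmentConfig(
    'TRAINS_AGENT_K8S_HOST_MOUNT', 'TRAINS_AGENT_DOCKER_HOST_MOUNT')
os.environ['TRAINS_AGENT_DOCKER_HOST_MOUNT'] = '/mnt/host/data:/root/.trains'
print(ENV_DOCKER_HOST_MOUNT.get())  # '/mnt/host/data:/root/.trains'
```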


class FileBuffering(IntEnum):

trains_agent/glue/__init__.py (new file, 1 line)
@@ -0,0 +1 @@

@@ -555,3 +555,17 @@ class ExecutionInfo(NonStrictAttrs):
execution.working_dir = working_dir or ""

return execution


class safe_furl(furl.furl):

@property
def port(self):
return self._port

@port.setter
def port(self, port):
"""
Any port value is valid
"""
self._port = port
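Plain `furl` validates ports and raises on out-of-range or otherwise invalid values, which can break round-tripping some git URLs; `safe_furl` overrides the `port` property so any value is accepted. A hedged usage sketch mirroring the override from the diff:

```python
import furl

class safe_furl(furl.furl):
    @property
    def port(self):
        return self._port

    @port.setter
    def port(self, port):
        # accept any value instead of raising ValueError on invalid ports
        self._port = port

u = safe_furl('ssh://git@example.com/user/repo.git')
u.port = None   # plain furl re-validates here; safe_furl just stores the value
print(u.port)   # None
```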

@@ -111,10 +111,12 @@ class PackageManager(object):
def out_of_scope_install_package(cls, package_name, *args):
if PackageManager._selected_manager is not None:
try:
return PackageManager._selected_manager._install(package_name, *args)
result = PackageManager._selected_manager._install(package_name, *args)
if result not in (0, None, True):
return False
except Exception:
pass
return
return False
return True
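`out_of_scope_install_package` now reports success as a boolean instead of leaking the manager's raw return value, so callers can fail hard on a bad install (as `ExternalRequirements` does below for GIT/HTTPS packages). A sketch of the contract, with `install` standing in for the selected manager's `_install`:

```python
def out_of_scope_install_package(install, package_name, *args):
    try:
        result = install(package_name, *args)
        if result not in (0, None, True):  # non-zero exit code means pip failed
            return False
    except Exception:
        return False
    return True

ok = out_of_scope_install_package(lambda *_: 1, 'git+https://example.com/repo.git')
print(ok)  # False -> the caller raises "Failed installing GIT/HTTPs package ..."
```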

@classmethod
def out_of_scope_freeze(cls):

@@ -262,6 +262,7 @@ class CondaAPI(PackageManager):
# this should happen if experiment was executed on non-conda machine or old trains client
conda_supported_req = requirements['pip'] if requirements.get('conda', None) is None else requirements['conda']
conda_supported_req_names = []
pip_requirements = []
for r in conda_supported_req:
try:
marker = list(parse(r))
@@ -271,6 +272,10 @@ class CondaAPI(PackageManager):
continue

m = MarkerRequirement(marker[0])
# conda does not support version control links
if m.vcs:
pip_requirements.append(m)
continue
conda_supported_req_names.append(m.name.lower())
if m.req.name.lower() == 'matplotlib':
has_matplotlib = True
@@ -287,7 +292,6 @@ class CondaAPI(PackageManager):

reqs.append(m)

pip_requirements = []
# if we have a conda list, the rest should be installed with pip,
if requirements.get('conda', None) is not None:
for r in requirements['pip']:
@@ -374,7 +378,7 @@ class CondaAPI(PackageManager):
print(e)
raise e

self.requirements_manager.post_install()
self.requirements_manager.post_install(self.session)
return True

def _parse_conda_result_bad_packges(self, result_dict):
@@ -416,10 +420,14 @@ class CondaAPI(PackageManager):
try:
print('Executing Conda: {}'.format(command.serialize()))
result = command.get_output(stdin=DEVNULL, **kwargs)
if self.session.debug_mode:
print(result)
except Exception as e:
result = e.output if hasattr(e, 'output') else ''
if self.session.debug_mode:
print(result)
if raw:
raise
result = e.output if hasattr(e, 'output') else ''
if raw:
return result


@@ -3,6 +3,7 @@ from typing import Text

from .base import PackageManager
from .requirements import SimpleSubstitution
from ..base import safe_furl as furl


class ExternalRequirements(SimpleSubstitution):
@@ -22,7 +23,7 @@ class ExternalRequirements(SimpleSubstitution):
return False
return True

def post_install(self):
def post_install(self, session):
post_install_req = self.post_install_req
self.post_install_req = []
for req in post_install_req:
@@ -30,7 +31,30 @@ class ExternalRequirements(SimpleSubstitution):
freeze_base = PackageManager.out_of_scope_freeze() or ''
except:
freeze_base = ''
PackageManager.out_of_scope_install_package(req.tostr(markers=False), "--no-deps")

req_line = req.tostr(markers=False)
if req.req.vcs and req_line.startswith('git+'):
try:
url_no_frag = furl(req_line)
url_no_frag.set(fragment=None)
# reverse replace
fragment = req_line[::-1].replace(url_no_frag.url[::-1], '', 1)[::-1]
vcs_url = req_line[4:]
# reverse replace
vcs_url = vcs_url[::-1].replace(fragment[::-1], '', 1)[::-1]
from ..repo import Git
vcs = Git(session=session, url=vcs_url, location=None, revision=None)
vcs._set_ssh_url()
new_req_line = 'git+{}{}'.format(vcs.url_with_auth, fragment)
if new_req_line != req_line:
url_pass = furl(new_req_line).password
print('Replacing original pip vcs \'{}\' with \'{}\''.format(
req_line, new_req_line.replace(url_pass, '****', 1) if url_pass else new_req_line))
req_line = new_req_line
except Exception:
print('WARNING: Failed parsing pip git install, using original line {}'.format(req_line))

PackageManager.out_of_scope_install_package(req_line, "--no-deps")
try:
freeze_post = PackageManager.out_of_scope_freeze() or ''
package_name = list(set(freeze_post['pip']) - set(freeze_base['pip']))
@@ -38,7 +62,8 @@ class ExternalRequirements(SimpleSubstitution):
self.post_install_req_lookup[package_name[0]] = req.req.line
except:
pass
PackageManager.out_of_scope_install_package(req.tostr(markers=False), "--ignore-installed")
if not PackageManager.out_of_scope_install_package(req_line, "--ignore-installed"):
raise ValueError("Failed installing GIT/HTTPs package \'{}\'".format(req_line))
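The fragment (e.g. `#egg=name`) is peeled off a `git+` requirement with a "reverse replace" trick: replacing the reversed form of the URL once in the reversed line removes the last occurrence rather than the first. A standalone illustration:

```python
req_line = 'git+https://github.com/user/repo.git#egg=repo'
url_no_frag = 'git+https://github.com/user/repo.git'  # furl(...) with fragment=None in the diff

# remove the LAST occurrence of url_no_frag from req_line, keeping the fragment
fragment = req_line[::-1].replace(url_no_frag[::-1], '', 1)[::-1]
vcs_url = req_line[4:]  # strip the 'git+' prefix
vcs_url = vcs_url[::-1].replace(fragment[::-1], '', 1)[::-1]

print(fragment)  # '#egg=repo'
print(vcs_url)   # 'https://github.com/user/repo.git'
```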

def replace(self, req):
"""

@@ -16,7 +16,7 @@ class HorovodRequirement(SimpleSubstitution):
# match both horovod
return req.name and self.name == req.name.lower()

def post_install(self):
def post_install(self, session):
if self.post_install_req:
PackageManager.out_of_scope_install_package(self.post_install_req.tostr(markers=False))
self.post_install_req = None

@@ -1,22 +1,24 @@
import sys
from itertools import chain
from typing import Text
from typing import Text, Optional

from trains_agent.definitions import PIP_EXTRA_INDICES, PROGRAM_NAME
from trains_agent.helper.package.base import PackageManager
from trains_agent.helper.process import Argv, DEVNULL
from trains_agent.session import Session


class SystemPip(PackageManager):

indices_args = None

def __init__(self, interpreter=None):
# type: (Text) -> ()
def __init__(self, interpreter=None, session=None):
# type: (Optional[Text], Optional[Session]) -> ()
"""
Program interface to the system pip.
"""
self._bin = interpreter or sys.executable
self.session = session

@property
def bin(self):

@@ -15,19 +15,17 @@ class VirtualenvPip(SystemPip, PackageManager):
Program interface to virtualenv pip.
Must be given either path to virtualenv or source command.
Either way, ``self.source`` is exposed.
:param session: a Session object for communication
:param python: interpreter path
:param path: path of virtual environment to create/manipulate
:param python: python version
:param interpreter: path of python interpreter
"""
super(VirtualenvPip, self).__init__(
interpreter
or Path(
path,
select_for_platform(linux="bin/python", windows="scripts/python.exe"),
)
session=session,
interpreter=interpreter or Path(
path, select_for_platform(linux="bin/python", windows="scripts/python.exe"))
)
self.session = session
self.path = path
self.requirements_manager = requirements_manager
self.python = python
@@ -39,7 +37,7 @@ class VirtualenvPip(SystemPip, PackageManager):
if isinstance(requirements, dict) and requirements.get("pip"):
requirements["pip"] = self.requirements_manager.replace(requirements["pip"])
super(VirtualenvPip, self).load_requirements(requirements)
self.requirements_manager.post_install()
self.requirements_manager.post_install(self.session)

def create_flags(self):
"""

@@ -74,6 +74,7 @@ class SimplePytorchRequirement(SimpleSubstitution):
packages = ("torch", "torchvision", "torchaudio")

page_lookup_template = 'https://download.pytorch.org/whl/cu{}/torch_stable.html'
nightly_page_lookup_template = 'https://download.pytorch.org/whl/nightly/cu{}/torch_nightly.html'
torch_page_lookup = {
0: 'https://download.pytorch.org/whl/cpu/torch_stable.html',
80: 'https://download.pytorch.org/whl/cu80/torch_stable.html',
@@ -115,11 +116,23 @@ class SimplePytorchRequirement(SimpleSubstitution):
package_manager.add_extra_install_flags(('-f', extra_url))

@classmethod
def get_torch_page(cls, cuda_version):
def get_torch_page(cls, cuda_version, nightly=False):
try:
cuda = int(cuda_version)
except:
cuda = 0

if nightly:
# then try the nightly builds, it might be there...
torch_url = cls.nightly_page_lookup_template.format(cuda)
try:
if requests.get(torch_url, timeout=10).ok:
cls.torch_page_lookup[cuda] = torch_url
return cls.torch_page_lookup[cuda], cuda
except Exception:
pass
return

# first check if key is valid
if cuda in cls.torch_page_lookup:
return cls.torch_page_lookup[cuda], cuda
@@ -180,6 +193,8 @@ class PytorchRequirement(SimpleSubstitution):
except PytorchResolutionError as e:
self.log.warn("will not be able to install pytorch wheels: %s", e.args[0])

self._original_req = []

@property
def is_conda(self):
return self.package_manager == "conda"
@@ -242,13 +257,20 @@ class PytorchRequirement(SimpleSubstitution):
continue
url = '/'.join(torch_url.split('/')[:-1] + l.split('/'))
last_v = v
# if we found an exact match, use it
try:
if req.specs[0][0] == '==' and \
SimpleVersion.compare_versions(req.specs[0][1], '==', v, ignore_sub_versions=False):
break
except:
pass

return url

def get_url_for_platform(self, req):
# check if package is already installed with system packages
try:
if self.config.get("agent.package_manager.system_site_packages"):
if self.config.get("agent.package_manager.system_site_packages", None):
from pip._internal.commands.show import search_packages_info
installed_torch = list(search_packages_info([req.name]))
# notice the comparision order, the first part will make sure we have a valid installed package
@@ -273,6 +295,9 @@ class PytorchRequirement(SimpleSubstitution):

torch_url, torch_url_key = SimplePytorchRequirement.get_torch_page(self.cuda_version)
url = self._get_link_from_torch_page(req, torch_url)
if not url and self.config.get("agent.package_manager.torch_nightly", None):
torch_url, torch_url_key = SimplePytorchRequirement.get_torch_page(self.cuda_version, nightly=True)
url = self._get_link_from_torch_page(req, torch_url)
# try one more time, with a lower cuda version (never fallback to CPU):
while not url and torch_url_key > 0:
previous_cuda_key = torch_url_key
@@ -363,7 +388,10 @@ class PytorchRequirement(SimpleSubstitution):

def replace(self, req):
try:
return self._replace(req)
new_req = self._replace(req)
if new_req:
self._original_req.append((req, new_req))
return new_req
except Exception as e:
message = "Exception when trying to resolve python wheel"
self.log.debug(message, exc_info=True)
@@ -378,17 +406,17 @@ class PytorchRequirement(SimpleSubstitution):
except:
pass

try:
result = self._table_lookup(req)
except Exception as e:
exc = e
else:
self.log.debug('Replacing requirement "%s" with %r', req, result)
return result
# try:
# result = self._table_lookup(req)
# except Exception as e:
# exc = e
# else:
# self.log.debug('Replacing requirement "%s" with %r', req, result)
# return result
# self.log.debug(
# "Could not find Pytorch wheel in table, trying manually constructing URL"
# )

self.log.debug(
"Could not find Pytorch wheel in table, trying manually constructing URL"
)
result = ok = None
# try:
# result, ok = self.get_url_for_platform(req)
@@ -399,7 +427,7 @@ class PytorchRequirement(SimpleSubstitution):
if result:
self.log.debug("URL not found: {}".format(result))
exc = PytorchResolutionError(
"Was not able to find pytorch wheel URL: {}".format(exc)
"Could not find pytorch wheel URL for: {} with cuda {} support".format(req, self.cuda_version)
)
# cancel exception chaining
six.raise_from(exc, None)
@@ -407,6 +435,37 @@ class PytorchRequirement(SimpleSubstitution):
self.log.debug('Replacing requirement "%s" with %r', req, result)
return result

def replace_back(self, list_of_requirements):  # type: (Dict) -> Dict
"""
:param list_of_requirements: {'pip': ['a==1.0', ]}
:return: {'pip': ['a==1.0', ]}
"""
if not self._original_req:
return list_of_requirements
try:
for k, lines in list_of_requirements.items():
# k is either pip/conda
if k not in ('pip', 'conda'):
continue
for i, line in enumerate(lines):
if not line or line.lstrip().startswith('#'):
continue
parts = [p for p in re.split('\s|=|\.|<|>|~|!|@|#', line) if p]
if not parts:
continue
for req, new_req in self._original_req:
if req.req.name == parts[0]:
# support for pip >= 20.1
if '@' in line:
lines[i] = '{} # {}'.format(str(req), str(new_req))
else:
lines[i] = '{} # {}'.format(line, str(new_req))
break
except:
pass

return list_of_requirements
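`replace_back` restores the experiment's original requirement lines in the final freeze, annotating each with the wheel that was actually resolved. A simplified, runnable sketch of the mapping it performs (the `(original, resolved)` pairs here are made up):

```python
import re

# (original line, resolved wheel) pairs recorded by replace()
original_req = [('torch==1.4.0', 'torch==1.4.0+cu101')]

def replace_back(requirements):
    for k, lines in requirements.items():
        if k not in ('pip', 'conda'):
            continue
        for i, line in enumerate(lines):
            if not line or line.lstrip().startswith('#'):
                continue
            parts = [p for p in re.split(r'\s|=|\.|<|>|~|!|@|#', line) if p]
            if not parts:
                continue
            for req, new_req in original_req:
                if req.split('==')[0] == parts[0]:
                    # pip >= 20.1 freezes direct URLs as "name @ url"; restore the spec
                    lines[i] = '{} # {}'.format(req if '@' in line else line, new_req)
                    break
    return requirements

print(replace_back({'pip': ['torch @ https://example.com/torch.whl']}))
# {'pip': ['torch==1.4.0 # torch==1.4.0+cu101']}
```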

MAP = {
"windows": {
"cuda100": {

@@ -54,7 +54,17 @@ class MarkerRequirement(object):

if self.specifier:
parts.append(self.format_specs())

elif self.vcs:
# leave the line as is, let pip handle it
if self.line:
parts = [self.line]
else:
# let's build the line manually
parts = [
self.uri,
'@{}'.format(self.revision) if self.revision else '',
'#subdirectory={}'.format(self.subdirectory) if self.subdirectory else ''
]
else:
parts = [self.uri]

@@ -316,7 +326,7 @@ class RequirementSubstitution(object):
"""
pass

def post_install(self):
def post_install(self, session):
pass

@classmethod
@@ -472,12 +482,13 @@ class RequirementsManager(object):
result = map(self.translator.translate, result)
return join_lines(result)

def post_install(self):
def post_install(self, session):
for h in self.handlers:
try:
h.post_install()
h.post_install(session)
except Exception as ex:
print('RequirementsManager handler {} raised exception: {}'.format(h, ex))
raise

def replace_back(self, requirements):
for h in self.handlers:

@@ -22,7 +22,7 @@ class RequirementsTranslator(object):
self.enabled = config["agent.pip_download_cache.enabled"]
Path(self.cache_dir).mkdir(parents=True, exist_ok=True)
self.config = Config()
self.pip = SystemPip(interpreter=interpreter)
self.pip = SystemPip(interpreter=interpreter, session=self._session)

def download(self, url):
self.pip.download_package(url, cache_dir=self.cache_dir)

@@ -83,7 +83,15 @@ def shutdown_docker_process(docker_cmd_contains=None, docker_id=None):
pass


def commit_docker(container_name, docker_cmd_contains=None, docker_id=None):
def commit_docker(container_name, docker_cmd_contains=None, docker_id=None, apply_change=None):
"""
Commit a docker into a new image
:param str container_name: Name for the new image
:param docker_cmd_contains: partial container id to be committed
:param str docker_id: Id of container to be comitted
:param str apply_change: apply Dockerfile instructions to the image that is created
(see docker commit documentation for '--change').
"""
try:
if not docker_id:
docker_id = get_docker_id(docker_cmd_contains=docker_cmd_contains)
@@ -93,7 +101,8 @@ def commit_docker(container_name, docker_cmd_contains=None, docker_id=None):

if docker_id:
# we found our docker, stop it
output = get_bash_output(cmd='docker commit {} {}'.format(docker_id, container_name))
apply_change = '--change=\'{}\''.format(apply_change) if apply_change else ''
output = get_bash_output(cmd='docker commit {} {} {}'.format(apply_change, docker_id, container_name))
return output
except Exception:
pass
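The `--change` flag lets `docker commit` rewrite image metadata while committing, which is how the build target gets its baked-in ENTRYPOINT. A sketch of the command string the helper assembles (`get_bash_output` simply shells out; the container id and image name below are hypothetical):

```python
def build_commit_cmd(docker_id, container_name, apply_change=None):
    # mirror of the string assembly in commit_docker()
    change = "--change='{}'".format(apply_change) if apply_change else ''
    return 'docker commit {} {} {}'.format(change, docker_id, container_name)

print(build_commit_cmd('f3a2b1c0', 'task_id_abc123', apply_change='ENTRYPOINT bash'))
# docker commit --change='ENTRYPOINT bash' f3a2b1c0 task_id_abc123
```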

@@ -12,6 +12,8 @@ from furl import furl
from pathlib2 import Path

import six

from trains_agent.definitions import ENV_AGENT_GIT_USER, ENV_AGENT_GIT_PASS
from trains_agent.helper.console import ensure_text, ensure_binary
from trains_agent.errors import CommandFailedError
from trains_agent.helper.base import (
@@ -95,7 +97,7 @@ class VCS(object):
:param session: program session
:param url: repository url
:param location: (desired) clone location
:param: desired clone revision
:param revision: desired clone revision
"""
self.session = session
self.log = self.session.get_logger(
@@ -206,7 +208,7 @@ class VCS(object):
)

@classmethod
def resolve_ssh_url(cls, url):
def replace_ssh_url(cls, url):
# type: (Text) -> Text
"""
Replace SSH URL with HTTPS URL when applicable
@@ -240,18 +242,46 @@ class VCS(object):
).url
return url

@classmethod
def replace_http_url(cls, url):
# type: (Text) -> Text
"""
Replace HTTPS URL with SSH URL when applicable
"""
parsed_url = furl(url)
if parsed_url.scheme == "https":
parsed_url.scheme = "ssh"
parsed_url.username = "git"
parsed_url.password = None
# make sure there is no port in the final url (safe_furl support)
parsed_url.port = None
url = parsed_url.url
return url
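`force_git_ssh_protocol` flips clone URLs the opposite way from the user/pass substitution: https becomes ssh with the `git` user and no credentials. A quick illustration of the transformation, using plain `furl` (the agent routes this through `safe_furl` so unusual port values cannot raise):

```python
from furl import furl

def replace_http_url(url):
    parsed = furl(url)
    if parsed.scheme == "https":
        parsed.scheme = "ssh"
        parsed.username = "git"
        parsed.password = None
        parsed.port = None  # drop any explicit port left over from the https form
        url = parsed.url
    return url

print(replace_http_url('https://user:token@github.com/allegroai/trains-agent.git'))
# ssh://git@github.com/allegroai/trains-agent.git
```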
|
||||
|
||||
def _set_ssh_url(self):
|
||||
"""
|
||||
Replace instance URL with SSH substitution result and report to log.
|
||||
According to ``man ssh-add``, ``SSH_AUTH_SOCK`` must be set in order for ``ssh-add`` to work.
|
||||
"""
|
||||
if self.session.config.get('agent.force_git_ssh_protocol', None) and self.url:
|
||||
parsed_url = furl(self.url)
|
||||
if parsed_url.scheme == "https":
|
||||
new_url = self.replace_http_url(self.url)
|
||||
if new_url != self.url:
|
||||
print("Using SSH credentials - replacing https url '{}' with ssh url '{}'".format(
|
||||
self.url, new_url))
|
||||
self.url = new_url
|
||||
return
|
||||
|
||||
if not self.session.config.agent.translate_ssh:
|
||||
return
|
||||
|
||||
ssh_agent_variable = "SSH_AUTH_SOCK"
|
||||
if not getenv(ssh_agent_variable) and (self.session.config.get('agent.git_user', None) and
|
||||
self.session.config.get('agent.git_pass', None)):
|
||||
new_url = self.resolve_ssh_url(self.url)
|
||||
if not getenv(ssh_agent_variable) and (
|
||||
(ENV_AGENT_GIT_USER.get() or self.session.config.get('agent.git_user', None)) and
|
||||
(ENV_AGENT_GIT_PASS.get() or self.session.config.get('agent.git_pass', None))
|
||||
):
|
||||
new_url = self.replace_ssh_url(self.url)
|
||||
if new_url != self.url:
|
||||
print("Using user/pass credentials - replacing ssh url '{}' with https url '{}'".format(
|
||||
self.url, new_url))
|
||||
@@ -392,11 +422,14 @@ class VCS(object):
|
||||
Add username and password to URL if missing from URL and present in config.
|
||||
Does not modify ssh URLs.
|
||||
"""
|
||||
parsed_url = furl(url)
|
||||
try:
|
||||
parsed_url = furl(url)
|
||||
except ValueError:
|
||||
return url
|
||||
if parsed_url.scheme in ["", "ssh"] or parsed_url.scheme.startswith("git"):
|
||||
return parsed_url.url
|
||||
config_user = config.get("agent.{}_user".format(cls.executable_name), None)
|
||||
config_pass = config.get("agent.{}_pass".format(cls.executable_name), None)
|
||||
config_user = ENV_AGENT_GIT_USER.get() or config.get("agent.{}_user".format(cls.executable_name), None)
|
||||
config_pass = ENV_AGENT_GIT_PASS.get() or config.get("agent.{}_pass".format(cls.executable_name), None)
|
||||
if (
|
||||
(not (parsed_url.username and parsed_url.password))
|
||||
and config_user
|
||||
@@ -529,11 +562,16 @@ def clone_repository_cached(session, execution, destination):
|
||||
|
||||
clone_folder_name = Path(str(furl(repo_url).path)).name # type: str
|
||||
clone_folder = Path(destination) / clone_folder_name
|
||||
cached_repo_path = (
|
||||
Path(session.config["agent.vcs_cache.path"]).expanduser()
|
||||
/ "{}.{}".format(clone_folder_name, md5(ensure_binary(repo_url)).hexdigest())
|
||||
/ clone_folder_name
|
||||
) # type: Path
|
||||
|
||||
standalone_mode = session.config.get("agent.standalone_mode", False)
|
||||
if standalone_mode:
|
||||
cached_repo_path = clone_folder
|
||||
else:
|
||||
cached_repo_path = (
|
||||
Path(session.config["agent.vcs_cache.path"]).expanduser()
|
||||
/ "{}.{}".format(clone_folder_name, md5(ensure_binary(repo_url)).hexdigest())
|
||||
/ clone_folder_name
|
||||
) # type: Path
|
||||
|
||||
vcs = VcsFactory.create(
|
||||
session, execution_info=execution, location=cached_repo_path
|
||||
@@ -541,23 +579,25 @@ def clone_repository_cached(session, execution, destination):
|
||||
if not find_executable(vcs.executable_name):
|
||||
raise CommandFailedError(vcs.executable_not_found_error_help())
|
||||
|
||||
if session.config["agent.vcs_cache.enabled"] and cached_repo_path.exists():
|
||||
print('Using cached repository in "{}"'.format(cached_repo_path))
|
||||
else:
|
||||
print("cloning: {}".format(no_password_url))
|
||||
rm_tree(cached_repo_path)
|
||||
# We clone the entire repository, not a specific branch
|
||||
vcs.clone() # branch=execution.branch)
|
||||
if not standalone_mode:
|
||||
if session.config["agent.vcs_cache.enabled"] and cached_repo_path.exists():
|
||||
print('Using cached repository in "{}"'.format(cached_repo_path))
|
||||
|
||||
vcs.pull()
|
||||
rm_tree(destination)
|
||||
shutil.copytree(Text(cached_repo_path), Text(clone_folder))
|
||||
if not clone_folder.is_dir():
|
||||
raise CommandFailedError(
|
||||
"copying of repository failed: from {} to {}".format(
|
||||
cached_repo_path, clone_folder
|
||||
else:
|
||||
print("cloning: {}".format(no_password_url))
|
||||
rm_tree(cached_repo_path)
|
||||
# We clone the entire repository, not a specific branch
|
||||
vcs.clone() # branch=execution.branch)
|
||||
|
||||
vcs.pull()
|
||||
rm_tree(destination)
|
||||
shutil.copytree(Text(cached_repo_path), Text(clone_folder))
|
||||
if not clone_folder.is_dir():
|
||||
raise CommandFailedError(
|
||||
"copying of repository failed: from {} to {}".format(
|
||||
cached_repo_path, clone_folder
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
# checkout in the newly copy destination
|
||||
vcs.location = Text(clone_folder)
|
||||
|
||||
@@ -75,9 +75,15 @@ class ResourceMonitor(object):
|
||||
self._exit_event = Event()
|
||||
self._gpustat_fail = 0
|
||||
self._gpustat = gpustat
|
||||
if not self._gpustat:
|
||||
self._active_gpus = None
|
||||
if os.environ.get('NVIDIA_VISIBLE_DEVICES') == 'none':
|
||||
# NVIDIA_VISIBLE_DEVICES set to none, marks cpu_only flag
|
||||
# active_gpus == False means no GPU reporting
|
||||
self._active_gpus = False
|
||||
elif not self._gpustat:
|
||||
log.warning('Trains-Agent Resource Monitor: GPU monitoring is not available')
|
||||
else:
|
||||
# None means no filtering, report all gpus
|
||||
self._active_gpus = None
|
||||
try:
|
||||
active_gpus = os.environ.get('NVIDIA_VISIBLE_DEVICES', '') or \
|
||||
@@ -244,8 +250,8 @@ class ResourceMonitor(object):
|
||||
stats["io_read_mbs"] = BytesSizes.megabytes(io_stats.read_bytes)
|
||||
stats["io_write_mbs"] = BytesSizes.megabytes(io_stats.write_bytes)
|
||||
|
||||
# check if we can access the gpu statistics
|
||||
if self._gpustat:
|
||||
# check if we need to monitor gpus and if we can access the gpu statistics
|
||||
if self._active_gpus is not False and self._gpustat:
|
||||
try:
|
||||
gpu_stat = self._gpustat.new_query()
|
||||
for i, g in enumerate(gpu_stat.gpus):
|
||||
|
||||
@@ -4,7 +4,7 @@ from time import sleep
|
||||
from glob import glob
|
||||
from tempfile import gettempdir, NamedTemporaryFile
|
||||
|
||||
from trains_agent.definitions import ENV_K8S_HOST_MOUNT
|
||||
from trains_agent.definitions import ENV_DOCKER_HOST_MOUNT
|
||||
from trains_agent.helper.base import warning
|
||||
|
||||
|
||||
@@ -18,9 +18,27 @@ class Singleton(object):
|
||||
_pid_file = None
|
||||
_lock_file_name = sep+prefix+sep+'global.lock'
|
||||
_lock_timeout = 10
|
||||
_pid = None
|
||||
|
||||
@classmethod
|
||||
def register_instance(cls, unique_worker_id=None, worker_name=None, api_client=None):
|
||||
def update_pid_file(cls):
|
||||
new_pid = str(os.getpid())
|
||||
if not cls._pid_file or cls._pid == new_pid:
|
||||
return
|
||||
old_name = cls._pid_file.name
|
||||
parts = cls._pid_file.name.split(os.path.sep)
|
||||
parts[-1] = parts[-1].replace(cls.sep + cls._pid + cls.sep, cls.sep + new_pid + cls.sep)
|
||||
new_pid_file = os.path.sep.join(parts)
|
||||
cls._pid = new_pid
|
||||
cls._pid_file.name = new_pid_file
|
||||
# we need to rename to match new pid
|
||||
try:
|
||||
os.rename(old_name, new_pid_file)
|
||||
except:
|
||||
pass
|
||||
|
||||
@classmethod
|
||||
def register_instance(cls, unique_worker_id=None, worker_name=None, api_client=None, allow_double=False):
|
||||
"""
|
||||
# Exit the process if another instance of us is using the same worker_id
|
||||
|
||||
@@ -47,8 +65,9 @@ class Singleton(object):
|
||||
f.write(bytes(os.getpid()))
|
||||
f.flush()
|
||||
try:
|
||||
ret = cls._register_instance(unique_worker_id=unique_worker_id, worker_name=worker_name,
|
||||
api_client=api_client)
|
||||
ret = cls._register_instance(
|
||||
unique_worker_id=unique_worker_id, worker_name=worker_name,
|
||||
api_client=api_client, allow_double=allow_double)
|
||||
except:
|
||||
ret = None, None
|
||||
|
||||
@@ -60,7 +79,7 @@ class Singleton(object):
|
||||
return ret
|
||||
|
||||
@classmethod
|
||||
def _register_instance(cls, unique_worker_id=None, worker_name=None, api_client=None):
|
||||
def _register_instance(cls, unique_worker_id=None, worker_name=None, api_client=None, allow_double=False):
|
||||
if cls.worker_id:
|
||||
return cls.worker_id, cls.instance_slot
|
||||
# make sure we have a unique name
|
||||
@@ -85,7 +104,7 @@ class Singleton(object):
|
||||
pass
|
||||
|
||||
worker = None
|
||||
if api_client and os.environ.get(ENV_K8S_HOST_MOUNT) and uid:
|
||||
if api_client and ENV_DOCKER_HOST_MOUNT.get() and uid:
|
||||
try:
|
||||
worker = [w for w in api_client.workers.get_all() if w.id == uid]
|
||||
except Exception:
|
||||
@@ -105,7 +124,11 @@ class Singleton(object):
|
||||
continue
|
||||
|
||||
if uid == unique_worker_id:
|
||||
return None, None
|
||||
if allow_double:
|
||||
warning('Instance with the same WORKER_ID [{}] was found on this machine. '
|
||||
'We are ignoring it, make sure this not a mistake.'.format(unique_worker_id))
|
||||
else:
|
||||
return None, None
|
||||
|
||||
slots[slot] = uid
|
||||
|
||||
@@ -124,8 +147,9 @@ class Singleton(object):
|
||||
unique_worker_id = worker_name + cls.worker_name_sep + str(cls.instance_slot)
|
||||
|
||||
# create lock
|
||||
cls._pid_file = NamedTemporaryFile(dir=cls._get_temp_folder(),
|
||||
prefix=cls.prefix + cls.sep + str(os.getpid()) + cls.sep, suffix=cls.ext)
|
||||
cls._pid = str(os.getpid())
|
||||
cls._pid_file = NamedTemporaryFile(
|
||||
dir=cls._get_temp_folder(), prefix=cls.prefix + cls.sep + cls._pid + cls.sep, suffix=cls.ext)
|
||||
cls._pid_file.write(('{}\n{}'.format(unique_worker_id, cls.instance_slot)).encode())
|
||||
cls._pid_file.flush()
|
||||
cls.worker_id = unique_worker_id
|
||||
@@ -134,8 +158,8 @@ class Singleton(object):
|
||||
|
||||
@classmethod
|
||||
def _get_temp_folder(cls):
|
||||
if os.environ.get(ENV_K8S_HOST_MOUNT):
|
||||
return os.environ.get(ENV_K8S_HOST_MOUNT).split(':')[-1]
|
||||
if ENV_DOCKER_HOST_MOUNT.get():
|
||||
return ENV_DOCKER_HOST_MOUNT.get().split(':')[-1]
|
||||
return gettempdir()
|
||||
|
||||
@classmethod
|
||||
|
||||
@@ -30,6 +30,17 @@ WORKER_ARGS = {
|
||||
'type': lambda x: x.upper(),
|
||||
'default': 'INFO',
|
||||
},
|
||||
'--gpus': {
|
||||
'help': 'Specify active GPUs for the daemon to use (docker / virtual environment), '
|
||||
'Equivalent to setting NVIDIA_VISIBLE_DEVICES '
|
||||
'Examples: --gpus 0 or --gpu 0,1,2 or --gpus all',
|
||||
'group': 'Docker support',
|
||||
},
|
||||
'--cpu-only': {
|
||||
'help': 'Disable GPU access for the daemon, only use CPU in either docker or virtual environment',
|
||||
'action': 'store_true',
|
||||
'group': 'Docker support',
|
||||
},
|
||||
}
|
||||
|
||||
DAEMON_ARGS = dict({
|
||||
@@ -45,17 +56,6 @@ DAEMON_ARGS = dict({
|
||||
'default': False,
|
||||
'group': 'Docker support',
|
||||
},
|
||||
'--gpus': {
|
||||
'help': 'Specify active GPUs for the daemon to use (docker / virtual environment), '
|
||||
'Equivalent to setting NVIDIA_VISIBLE_DEVICES '
|
||||
'Examples: --gpus 0 or --gpu 0,1,2 or --gpus all',
|
||||
'group': 'Docker support',
|
||||
},
|
||||
'--cpu-only': {
|
||||
'help': 'Disable GPU access for the daemon, only use CPU in either docker or virtual environment',
|
||||
'action': 'store_true',
|
||||
'group': 'Docker support',
|
||||
},
|
||||
'--force-current-version': {
|
||||
'help': 'Force trains-agent to use the current trains-agent version when running in the docker',
|
||||
'action': 'store_true',
|
||||
@@ -72,6 +72,14 @@ DAEMON_ARGS = dict({
|
||||
'help': 'Do not use any network connects, assume everything is pre-installed',
|
||||
'action': 'store_true',
|
||||
},
|
||||
'--services-mode': {
|
||||
'help': 'Launch multiple long-term docker services. Implies docker & cpu-only flags.',
|
||||
'action': 'store_true',
|
||||
},
|
||||
'--create-queue': {
|
||||
'help': 'Create requested queue if it does not exist already.',
|
||||
'action': 'store_true',
|
||||
},
|
||||
'--detached': {
|
||||
'help': 'Detached mode, run agent in the background',
|
||||
'action': 'store_true',
|
||||
@@ -138,6 +146,12 @@ COMMANDS = {
|
||||
'help': 'Where to build the task\'s virtual environment and source code. '
|
||||
'When used with --docker, target docker image name to create',
|
||||
},
|
||||
'--install-globally': {
|
||||
'help': 'Install required python packages before creating the virtual environment used to execute an '
|
||||
'experiment, and use the \'agent.package_manager.system_site_packages\' virtual env option. '
|
||||
'Note: when --docker is used, install-globally is always true',
|
||||
'action': 'store_true',
|
||||
},
|
||||
'--docker': {
|
||||
'help': 'Build the experiment inside a docker (v19.03 and above). Optional args <image> <arguments> or '
|
||||
'specify default docker image in agent.default_docker.image / agent.default_docker.arguments'
|
||||
```diff
@@ -145,18 +159,15 @@ COMMANDS = {
         'nargs': '*',
         'default': False,
     },
-    '--gpus': {
-        'help': 'Specify active GPUs for the docker to use'
-                'Equivalent to setting NVIDIA_VISIBLE_DEVICES '
-                'Examples: --gpus 0 or --gpu 0,1,2 or --gpus all',
-    },
-    '--cpu-only': {
-        'help': 'Disable GPU access (cpu only) for the docker',
-        'action': 'store_true',
-    },
     '--python-version': {
         'help': 'Virtual environment python version to use',
     },
+    '--entry-point': {
+        'help': 'Run the task in the new docker. There are two options:\nEither add "reuse_task" to run the '
+                'given task in the docker, or "clone_task" to first clone the given task and then run it in the docker',
+        'default': False,
+        'choices': ['reuse_task', 'clone_task'],
+    }
 }, **WORKER_ARGS),
 },
 'list': {
```
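The `--entry-point` choices select between executing the given task as-is or cloning it first. A minimal sketch of that dispatch using the Trains SDK's Task API (`Task.get_task` and `Task.clone` are real SDK classmethods, but the surrounding flow here is an assumption, not the agent's actual build code):

```python
from trains import Task

def resolve_entry_point_task(task_id, entry_point):
    # Illustrative: "clone_task" clones the given task first,
    # "reuse_task" runs the task as-is inside the new docker.
    task = Task.get_task(task_id=task_id)
    if entry_point == 'clone_task':
        task = Task.clone(source_task=task)
    return task  # the agent would then execute this task inside the docker
```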
```diff
@@ -15,7 +15,7 @@ from pyhocon import ConfigFactory, HOCONConverter, ConfigTree
 from trains_agent.backend_api.session import Session as _Session, Request
 from trains_agent.backend_api.session.client import APIClient
 from trains_agent.backend_config.defs import LOCAL_CONFIG_FILE_OVERRIDE_VAR, LOCAL_CONFIG_FILES
-from trains_agent.definitions import ENVIRONMENT_CONFIG, ENV_TASK_EXECUTE_AS_USER
+from trains_agent.definitions import ENVIRONMENT_CONFIG, ENV_TASK_EXECUTE_AS_USER, ENVIRONMENT_BACKWARD_COMPATIBLE
 from trains_agent.errors import APIError
 from trains_agent.helper.base import HOCONEncoder
 from trains_agent.helper.process import Argv
```
```diff
@@ -63,6 +63,7 @@ def tree(*args):

 class Session(_Session):
     version = __version__
+    force_debug = False

     def __init__(self, *args, **kwargs):
         # make sure we set the environment variable so the parent session opens the correct file
```
```diff
@@ -77,12 +78,22 @@ class Session(_Session):
             os.environ['CUDA_VISIBLE_DEVICES'] = os.environ['NVIDIA_VISIBLE_DEVICES'] = 'none'
         if kwargs.get('gpus') and not os.environ.get('KUBERNETES_SERVICE_HOST') \
                 and not os.environ.get('KUBERNETES_PORT'):
-            os.environ['CUDA_VISIBLE_DEVICES'] = os.environ['NVIDIA_VISIBLE_DEVICES'] = kwargs.get('gpus')
+            # CUDA_VISIBLE_DEVICES does not support 'all'
+            if kwargs.get('gpus') == 'all':
+                os.environ.pop('CUDA_VISIBLE_DEVICES', None)
+                os.environ['NVIDIA_VISIBLE_DEVICES'] = kwargs.get('gpus')
+            else:
+                os.environ['CUDA_VISIBLE_DEVICES'] = os.environ['NVIDIA_VISIBLE_DEVICES'] = kwargs.get('gpus')
-        super(Session, self).__init__(*args, **kwargs)
+        if kwargs.get('only_load_config'):
+            from trains_agent.backend_api.config import load
+            self.config = load()
+        else:
+            super(Session, self).__init__(*args, **kwargs)

+        # set force debug mode, if it's on:
+        if Session.force_debug:
+            self.config["agent"]["debug"] = True
+
         self.log = self.get_logger(__name__)
         self.trace = kwargs.get('trace', False)
         self._config_file = kwargs.get('config_file') or \
```
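The special-casing of `'all'` exists because the NVIDIA container runtime accepts `NVIDIA_VISIBLE_DEVICES=all`, while `CUDA_VISIBLE_DEVICES` only understands device indices, so setting it to `all` would hide every GPU from CUDA. A quick, self-contained check of the resulting environment under both inputs (mirroring the branch above, for illustration):

```python
import os

def apply_gpus(gpus):
    # Mirrors the branch above: 'all' is only meaningful to the NVIDIA runtime.
    if gpus == 'all':
        os.environ.pop('CUDA_VISIBLE_DEVICES', None)
        os.environ['NVIDIA_VISIBLE_DEVICES'] = gpus
    else:
        os.environ['CUDA_VISIBLE_DEVICES'] = os.environ['NVIDIA_VISIBLE_DEVICES'] = gpus

apply_gpus('0,1')
assert os.environ['CUDA_VISIBLE_DEVICES'] == '0,1'
apply_gpus('all')
assert 'CUDA_VISIBLE_DEVICES' not in os.environ
assert os.environ['NVIDIA_VISIBLE_DEVICES'] == 'all'
```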
```diff
@@ -95,8 +106,10 @@ class Session(_Session):
             def_python.set("{version.major}.{version.minor}".format(version=sys.version_info))

         # HACK: backwards compatibility
-        os.environ['ALG_CONFIG_FILE'] = self._config_file
-        os.environ['SM_CONFIG_FILE'] = self._config_file
+        if ENVIRONMENT_BACKWARD_COMPATIBLE.get():
+            os.environ['ALG_CONFIG_FILE'] = self._config_file
+            os.environ['SM_CONFIG_FILE'] = self._config_file

         if not self.config.get('api.host', None) and self.config.get('api.api_server', None):
             self.config['api']['host'] = self.config.get('api.api_server')
```
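`ENVIRONMENT_BACKWARD_COMPATIBLE` follows the same pattern as `ENV_DOCKER_HOST_MOUNT` earlier: an object wrapping one or more environment variables behind a `.get()` accessor. A minimal sketch of such a wrapper; the class name, variable name, and boolean coercion rules here are all assumptions, not the agent's actual definitions:

```python
import os

class EnvEntry:
    # Illustrative wrapper: resolves the first defined variable among its keys.
    def __init__(self, *keys, default=None):
        self.keys = keys
        self.default = default

    def get(self):
        for key in self.keys:
            value = os.environ.get(key)
            if value is not None:
                # crude bool coercion for flag-style variables (assumed behavior)
                return value.lower() not in ('', '0', 'false', 'off')
        return self.default

# "TRAINS_AGENT_ALG_ENV" is an assumed variable name for illustration
ENVIRONMENT_BACKWARD_COMPATIBLE = EnvEntry('TRAINS_AGENT_ALG_ENV', default=False)
```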
```diff
@@ -152,9 +165,16 @@ class Session(_Session):
         logger.propagate = True
         return TrainsAgentLogger(logger)

+    @staticmethod
+    def set_debug_mode(enable):
+        if enable:
+            import logging
+            logging.basicConfig(level=logging.DEBUG)
+        Session.force_debug = enable
+
     @property
     def debug_mode(self):
-        return self.config.get("agent.debug", False)
+        return Session.force_debug or self.config.get("agent.debug", False)

     @property
     def config_file(self):
```
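Because `force_debug` is a class attribute, `set_debug_mode` can be called before any `Session` is constructed and still affect every session created afterwards: the constructor copies the flag into `config["agent"]["debug"]`, and `debug_mode` honors it even when the config file says otherwise. A hedged usage sketch (the import path is assumed, and a real constructor call would normally receive the CLI kwargs):

```python
from trains_agent.session import Session  # assumed module path

Session.set_debug_mode(True)   # also switches the root logger to DEBUG
session = Session()            # illustrative; real usage passes CLI kwargs
assert session.debug_mode      # True via Session.force_debug, regardless of agent.debug
```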
```diff
@@ -1 +1 @@
-__version__ = '0.14.1'
+__version__ = '0.15.1'
```