* Fixed: affinity for serving deployments was incorrectly placed under containers; moved it to the pod spec level.
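For context, a minimal sketch of the corrected placement (the surrounding Deployment fields are standard Kubernetes structure; the container name and image are placeholders, not values from the chart):

```yaml
# affinity is a pod-spec field (spec.template.spec), not a container field
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      affinity:                        # correct: pod level
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values: ["linux"]
      containers:
        - name: serving-inference      # placeholder container
          image: example/serving:latest
```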
* Fixed: helm-docs generation
---------
Co-authored-by: = <s.bertl@iaea.org>
Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
* Fixed #256: nodeSelector was incorrectly placed under the container; moved it to the pod spec.
  Added runtimeClassName to the pod spec to allow selecting specific GPU nodes.
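The fix above can be sketched as follows (the node label and RuntimeClass name are illustrative assumptions, not values from the chart):

```yaml
# nodeSelector and runtimeClassName both live at pod-spec level
spec:
  template:
    spec:
      nodeSelector:
        nvidia.com/gpu.present: "true"  # example node label (assumption)
      runtimeClassName: nvidia          # example RuntimeClass for GPU nodes (assumption)
      containers:
        - name: serving-inference       # placeholder container
          image: example/serving:latest
```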
* increment version number
* added artifacthub.io/changes
* update readme.md
* try to fix helm docs generation issue
* update readme.md
* Update README.md
---------
Co-authored-by: IAEA_SG\BERTLS <s.bertl@iaea.org>
Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
* Added: mount file for additional configs in pod
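A hedged sketch of how such an extra-config file mount is commonly wired in a chart (the ConfigMap name and mount path are hypothetical, not the chart's actual names):

```yaml
# Pod spec fragment: mount an additional config file from a ConfigMap
volumes:
  - name: extra-config
    configMap:
      name: clearml-serving-extra-config  # hypothetical ConfigMap name
containers:
  - name: serving-inference
    volumeMounts:
      - name: extra-config
        mountPath: /opt/clearml/config    # hypothetical mount path
        readOnly: true
```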
* Changed: bump up version
* Fixed: missing parameters for deployment
* Fixed: naming typo
* Changed: changelog fixes added
* Added Triton deployment and service; added triton block to the values file; added value for the CLEARML_DEFAULT_TRITON_GRPC_ADDR env variable in the serving-inference deployment.
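A sketch of the wiring described above (the values keys and the service name are assumptions; 8001 is Triton's default gRPC port):

```yaml
# values.yaml fragment (hypothetical keys)
triton:
  enabled: true

# serving-inference deployment env fragment: point it at the Triton service
env:
  - name: CLEARML_DEFAULT_TRITON_GRPC_ADDR
    value: "{{ .Release.Name }}-triton:8001"  # 8001 = Triton's default gRPC port
```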
* re-generated README
* fixed yaml
* added condition to enable triton support
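In Helm, such a gate is typically an `if` wrapped around the whole template (the `.Values.triton.enabled` path is an assumption about this chart's values layout):

```yaml
{{- if .Values.triton.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-triton
# ... rest of the Triton deployment template ...
{{- end }}
```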
* changed chart version
* bumped version to 0.3.0
* added conditional extraPythonPackages variable to clearml_serving_triton deployment
* added conditional extraPythonPackages to all the relevant deployments
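A plausible shape for the conditional (the env-var name and values path are assumptions; clearml-serving images read extra pip packages from an environment variable):

```yaml
# deployment env fragment, rendered only when extraPythonPackages is set
{{- if .Values.clearml_serving_triton.extraPythonPackages }}
- name: CLEARML_EXTRA_PYTHON_PACKAGES   # assumed env-var name
  value: "{{ join " " .Values.clearml_serving_triton.extraPythonPackages }}"
{{- end }}
```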