[ML] Update trained model inference endpoint (#556)

The infer trained model deployment API has been deprecated, so I changed the code to use the new infer trained model endpoint.
Valeriy Khakhutskyy 2023-07-11 10:55:11 +02:00 committed by GitHub
parent f38de0ed05
commit 77781b90ff
2 changed files with 2 additions and 2 deletions
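For context, the change boils down to dropping the `deployment` segment from the inference URL. A minimal before/after sketch using the Python client's low-level `perform_request` call; the cluster URL, model ID, and document field below are placeholders and not part of this commit:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster URL
model_id = "my-trained-model"                # placeholder model ID
docs = [{"text_field": "example input to run inference on"}]

# Deprecated path (before this commit):
#   POST /_ml/trained_models/<model_id>/deployment/_infer
# Current path (what this commit switches to):
#   POST /_ml/trained_models/<model_id>/_infer
response = es.perform_request(
    "POST",
    f"/_ml/trained_models/{model_id}/_infer",
    params={"timeout": "10s"},
    headers={"accept": "application/json", "content-type": "application/json"},
    body={"docs": docs},
)
print(response)
```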

@@ -245,7 +245,7 @@ The `--start` argument will deploy the model with one allocation and one
 thread per allocation, which will not offer good performance. When starting
 the model deployment using the ML UI in Kibana or the Elasticsearch
 [API](https://www.elastic.co/guide/en/elasticsearch/reference/current/start-trained-model-deployment.html)
-you will be able to set the threading options to make best use of your
+you will be able to set the threading options to make the best use of your
 hardware.
 ```python

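The paragraph above defers threading configuration to the start trained model deployment API. A rough sketch of what that can look like from Python, assuming an elasticsearch-py 8.x client where `ml.start_trained_model_deployment` accepts these keyword arguments; the model ID and sizing values are illustrative only:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster URL

# Start a deployment with explicit sizing instead of the --start defaults
# (one allocation with one thread per allocation).
es.ml.start_trained_model_deployment(
    model_id="my-trained-model",   # placeholder model ID
    number_of_allocations=2,       # parallel inference across available CPUs
    threads_per_allocation=4,      # threads used by each allocation
    wait_for="started",            # block until the deployment is usable
)
```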

@@ -134,7 +134,7 @@ class PyTorchModel:
         __body: Dict[str, Any] = {}
         __body["docs"] = docs
-        __path = f"/_ml/trained_models/{_quote(self.model_id)}/deployment/_infer"
+        __path = f"/_ml/trained_models/{_quote(self.model_id)}/_infer"
         __query: Dict[str, Any] = {}
         __query["timeout"] = timeout
         __headers = {"accept": "application/json", "content-type": "application/json"}
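For callers of eland itself nothing changes apart from the URL that is sent. A minimal usage sketch, assuming `PyTorchModel` is importable from `eland.ml.pytorch` and the model has already been imported and deployed; the model ID and input field name are placeholders:

```python
from elasticsearch import Elasticsearch
from eland.ml.pytorch import PyTorchModel

es = Elasticsearch("http://localhost:9200")  # placeholder cluster URL

# Wrap an already imported and deployed model; the model ID is a placeholder.
model = PyTorchModel(es, "my-trained-model")

# With this commit the call issues POST /_ml/trained_models/my-trained-model/_infer
# instead of the deprecated .../deployment/_infer path.
result = model.infer(docs=[{"text_field": "some text to run inference on"}])
print(result)
```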