Get trained models usage info | Elasticsearch API documentation
Path parameters
- model_id
string | array[string] Required
The unique identifier of the trained model or a model alias. It can be a comma-separated list or a wildcard expression.
Query parameters
- allow_no_match
boolean
Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches.
- from
number
Skips the specified number of models.
- size
number
Specifies the maximum number of models to obtain.
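As a sketch of how the path and query parameters combine, assuming the standard `GET /_ml/trained_models/<model_id>/_stats` route (the model ID and parameter values below are illustrative; only the request URL is built, no client call is made):

```python
# Assemble the stats request path and query string.
# Endpoint shape: GET /_ml/trained_models/<model_id>/_stats
model_id = "my-model*"  # may be a comma-separated list or wildcard expression
params = {"allow_no_match": "true", "from": 0, "size": 100}

query = "&".join(f"{key}={value}" for key, value in params.items())
url = f"/_ml/trained_models/{model_id}/_stats?{query}"
print(url)
```

With the values above this yields `/_ml/trained_models/my-model*/_stats?allow_no_match=true&from=0&size=100`.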
Responses
- 200 application/json
- count
number Required
The total number of trained model statistics that matched the requested ID patterns. Could be higher than the number of items in the trained_model_stats array as the size of the array is restricted by the supplied size parameter.
- trained_model_stats
array[object] Required
An array of trained model statistics, which are sorted by the model_id value in ascending order.
* deployment_stats
object
* adaptive_allocations
object
* enabled
boolean
If true, adaptive_allocations is enabled.
* min_number_of_allocations
number
Specifies the minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.
* max_number_of_allocations
number
Specifies the maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.
* allocation_status
object
* allocation_count
number Required
The current number of nodes where the model is allocated.
* state
string Required
Values are started, starting, or fully_allocated.
* target_allocation_count
number Required
The desired number of nodes for model allocation.
* cache_size
number | string
* deployment_id
string Required
* error_count
number
The sum of error_count for all nodes in the deployment.
* inference_count
number
The sum of inference_count for all nodes in the deployment.
* nodes
array[object] Required
The deployment stats for each node that currently has the model allocated. In serverless, stats are reported for a single unnamed virtual node.
* average_inference_time_ms
number
Time unit for milliseconds
* average_inference_time_ms_last_minute
number
Time unit for milliseconds
* average_inference_time_ms_excluding_cache_hits
number
Time unit for milliseconds
* error_count
number
The number of errors when evaluating the trained model.
* inference_count
number
The total number of inference calls made against this node for this model.
* inference_cache_hit_count
number
* inference_cache_hit_count_last_minute
number
* last_access
number
Time unit for milliseconds
* number_of_allocations
number
The number of allocations assigned to this node.
* number_of_pending_requests
number
The number of inference requests queued to be processed.
* peak_throughput_per_minute
number Required
* rejected_execution_count
number
The number of inference requests that were not processed because the queue was full.
* routing_state
object Required
* start_time
number
Time unit for milliseconds
* threads_per_allocation
number
The number of threads used by each allocation during inference.
* throughput_last_minute
number Required
* timeout_count
number
The number of inference requests that timed out before being processed.
* number_of_allocations
number
The number of allocations requested.
* peak_throughput_per_minute
number Required
* priority
string
Values are normal or low.
* queue_capacity
number
The number of inference requests that can be queued before new requests are rejected.
* rejected_execution_count
number
The sum of rejected_execution_count for all nodes in the deployment. Individual nodes reject an inference request if the inference queue is full. The queue size is controlled by the queue_capacity setting in the start trained model deployment API.
* reason
string
The reason for the current deployment state. Usually only populated when the model is not deployed to a node.
* start_time
number Required
Time unit for milliseconds
* state
string
Values are started, starting, stopping, or failed.
* threads_per_allocation
number
The number of threads used by each allocation during inference.
* timeout_count
number
The sum of timeout_count for all nodes in the deployment.
* inference_stats
object
* cache_miss_count
number Required
The number of times the model was loaded for inference and was not retrieved from the cache. If this number is close to the inference_count, the cache is not being appropriately used. This can be solved by increasing the cache size or its time-to-live (TTL). Refer to general machine learning settings for the appropriate settings.
* failure_count
number Required
The number of failures when using the model for inference.
* inference_count
number Required
The total number of times the model has been called for inference. This is across all inference contexts, including all pipelines.
* missing_all_fields_count
number Required
The number of inference calls where all the training features for the model were missing.
* timestamp
number
Time unit for milliseconds
* ingest
object
A collection of ingest stats for the model across all nodes. The values are summations of the individual node statistics. The format matches the ingest section in the nodes stats API.
* *
object Additional properties
* model_size_stats
object Required
* model_size_bytes
number | string Required
* required_native_memory_bytes
number | string Required
* pipeline_count
number Required
The number of ingest pipelines that currently refer to the model.
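As an end-to-end sketch of consuming a response shaped like the attributes above (the response here is truncated and the sample values are illustrative, not real output), the cache_miss_count guidance can be applied per model by comparing misses to inference_count:

```python
# Summarize a truncated, illustrative stats response.
response = {
    "count": 2,
    "trained_model_stats": [
        {
            "model_id": "model-a",
            "pipeline_count": 3,
            "inference_stats": {
                "cache_miss_count": 950,
                "failure_count": 5,
                "inference_count": 1000,
                "missing_all_fields_count": 0,
            },
        },
        {
            "model_id": "model-b",
            "pipeline_count": 0,
            "inference_stats": {
                "cache_miss_count": 10,
                "failure_count": 0,
                "inference_count": 400,
                "missing_all_fields_count": 2,
            },
        },
    ],
}

for stats in response["trained_model_stats"]:
    inf = stats["inference_stats"]
    # A miss ratio near 1.0 suggests the cache size or TTL is too small.
    miss_ratio = inf["cache_miss_count"] / max(inf["inference_count"], 1)
    print(f'{stats["model_id"]}: miss ratio {miss_ratio:.3f}, '
          f'{stats["pipeline_count"]} pipelines')
```

Here model-a's miss ratio of 0.95 is close to 1, so per the cache_miss_count description above it would benefit from a larger cache size or TTL, while model-b's cache is working well.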