LLMRunMetadataColumnNames

Ingest metadata about your LLM inferences

Arize class to map up to 4 columns: total_token_count_column_name, prompt_token_count_column_name, response_token_count_column_name, and response_latency_ms_column_name.

class LLMRunMetadataColumnNames:
    total_token_count_column_name: Optional[str] = None
    prompt_token_count_column_name: Optional[str] = None
    response_token_count_column_name: Optional[str] = None
    response_latency_ms_column_name: Optional[str] = None
Parameters

total_token_count_column_name (str)
Expected type in column: integers
Column name for the total number of tokens used in the inference, both in the prompt sent to the LLM and in its response.

prompt_token_count_column_name (str)
Expected type in column: integers
Column name for the number of tokens used in the prompt sent to the LLM.

response_token_count_column_name (str)
Expected type in column: integers
Column name for the number of tokens used in the response returned by the LLM.

response_latency_ms_column_name (str)
Expected type in column: integers or floats
Column name for the latency (in milliseconds) experienced during the LLM run.

Code Example

Index  total_token_count  prompt_token_count  response_token_count  response_latency
0      4325               2325                2000                  20000
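
This example data could be held in a pandas DataFrame like the one below (an illustrative sketch; the DataFrame itself is your own data and not part of the Arize API):

import pandas as pd

# One example inference whose columns match the names mapped below
dataframe = pd.DataFrame(
    {
        "total_token_count": [4325],
        "prompt_token_count": [2325],
        "response_token_count": [2000],
        "response_latency": [20000],
    }
)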

from arize.utils.types import LLMRunMetadataColumnNames

# Declare LLM run metadata columns
llm_run_metadata = LLMRunMetadataColumnNames(
    total_token_count_column_name="total_token_count",        # column containing the number of tokens in the prompt and response
    prompt_token_count_column_name="prompt_token_count",      # column containing the number of tokens in the prompt
    response_token_count_column_name="response_token_count",  # column containing the number of tokens in the response
    response_latency_ms_column_name="response_latency",       # column containing the latency of the LLM run in milliseconds
)
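
The declared object is then attached to the model schema when logging the DataFrame to Arize. The sketch below is a minimal example assuming the arize.pandas SDK's Client and Schema, a Schema field named llm_run_metadata_column_names, and placeholder column names, model ID, and credentials; verify the exact field names against the SDK version you are using.

from arize.pandas.logger import Client
from arize.utils.types import Environments, ModelTypes, Schema

# Assumed schema field for attaching the run metadata mapping
schema = Schema(
    prediction_id_column_name="prediction_id",   # hypothetical ID column in your DataFrame
    prompt_column_names="prompt",                # hypothetical prompt column
    response_column_names="response",            # hypothetical response column
    llm_run_metadata_column_names=llm_run_metadata,
)

# Placeholder credentials; replace with your own space and API keys
arize_client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")

response = arize_client.log(
    dataframe=dataframe,
    schema=schema,
    model_id="my-llm-model",                     # hypothetical model ID
    model_version="1.0",
    model_type=ModelTypes.GENERATIVE_LLM,
    environment=Environments.PRODUCTION,
)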
