Hi,
I am deploying a model with the following code:
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import NotFound
from databricks.sdk.service.serving import (
    AiGatewayConfig,
    AiGatewayInferenceTableConfig,
    EndpointCoreConfigInput,
    ServedEntityInput,
)

w = WorkspaceClient()

# Served model: CPU, small workload, scale-to-zero enabled
model_cfg = {
    "entity_name": uc_model,
    "entity_version": str(version),
    "workload_type": "CPU",
    "workload_size": "Small",
    "scale_to_zero_enabled": True,
}

# AI Gateway config enabling the inference table for request/response logging
ai_gateway_config = AiGatewayConfig(
    inference_table_config=AiGatewayInferenceTableConfig(
        catalog_name=catalog,
        schema_name=schema,
        enabled=True,
    )
)

served_model = ServedEntityInput.from_dict(model_cfg)
endpoint_config = EndpointCoreConfigInput(served_entities=[served_model])

try:
    # Endpoint already exists: update its served entities
    w.serving_endpoints.get(endpoint_name)
    w.serving_endpoints.update_config(
        name=endpoint_name,
        served_entities=[served_model],
    )
except NotFound:
    # Endpoint does not exist yet: create it with the AI Gateway config
    w.serving_endpoints.create(
        name=endpoint_name,
        ai_gateway=ai_gateway_config,
        config=endpoint_config,
        route_optimized=True,
    )
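For reference, this is roughly how I test the endpoint and then look for logged rows afterwards (run in a notebook; the payload table name is my assumption based on the default "<endpoint_name>_payload" naming, and the input record is just a placeholder for my model's actual features):

# Query the endpoint, then check the inference table for logged rows.
response = w.serving_endpoints.query(
    name=endpoint_name,
    dataframe_records=[{"feature_1": 1.0, "feature_2": "abc"}],  # placeholder input
)
print(response.predictions)

# Assumed default table name: <catalog>.<schema>.<endpoint_name>_payload
payload_table = f"{catalog}.{schema}.{endpoint_name}_payload"
print(spark.table(payload_table).count())  # stays at 0 even after successful queries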
It creates the inference table and the endpoint returns predictions, but no requests or responses are logged to the table. Could someone point out what I am missing? Thank you.