01-29-2026 05:08 AM
I am getting an error message:
org.apache.spark.SparkException: [INTERNAL_ERROR] Cannot get workspace url and credential from data plane.
Workspace URL: Some(url here) SQLSTATE: XX000.
When I try to run the ai_query command against my personally created model serving endpoint, Copilot is convinced the error comes from ai_query(), because the SQL function fails exclusively in Delta Live Tables (DLT) serverless pipelines during the SETTING_UP_TABLES phase with credential access errors. The same ai_query() function works correctly in SQL Warehouses and on serverless compute.
Are you able to provide me a way to test that ai_query works? The code was working before, and nothing has changed except the Databricks Runtime moving from 16.4.15 to 17.3.*. By the way, I am running my code from VS Code via Databricks Asset Bundles.
01-29-2026 08:03 AM
Hi, what DBR version are you using?
The current Lakeflow Spark Declarative Pipelines warehouse channel does not use the latest Databricks Runtime version that supports ai_query(). Set pipelines.channel to 'PREVIEW' in the table properties to use ai_query():
CREATE OR REPLACE MATERIALIZED VIEW ai_query_mv
TBLPROPERTIES ('pipelines.channel' = 'PREVIEW') AS
SELECT
  ai_query('databricks-meta-llama-3-3-70b-instruct', text) AS response
FROM
  messages
LIMIT 10;
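Since the original poster deploys via Databricks Asset Bundles, the channel can also be set for the whole pipeline rather than per table. A minimal sketch of a bundle pipeline resource, assuming a hypothetical resource key of my_pipeline (field names per the bundle and pipeline settings schema; verify against your CLI version):

```yaml
# databricks.yml (fragment) -- hypothetical resource key and pipeline name
resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      channel: PREVIEW   # use the preview runtime channel for the whole pipeline
      serverless: true
```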
01-29-2026 08:07 AM
Hi there
I am using dlt:17.3.3
01-29-2026 08:13 AM
dlt:17.3.3-delta-pipelines-aarch64-dlt-release-dp-2026 is a more specific answer
01-29-2026 11:19 PM
I have the same problem starting today. My Lakeflow Spark Declarative Pipeline calls the ai_query function and runs every day.
Starting today, it stopped working. I tried:
channel: preview → dlt:17.3.3-delta-pipelines-aarch64-dlt-release-dp-20260115-rc0-commit-afb2d8f-image-307dbfc
channel: current → dlt:16.4.15-delta-pipelines-aarch64-dlt-release-dp-20260115-rc0-commit-afb2d8f-image-845d492
It always returns the same error.
On 21/01/2026 I encountered the same issue and solved it by changing the channel from preview to current, but today neither of them works anymore. It looks like an update in DLT is preventing ai_query from working properly.
2 weeks ago
Hi @Kyu-007,
The error you are seeing, "Cannot get workspace url and credential from data plane," indicates that the Lakeflow Spark Declarative Pipelines (SDP) serverless compute environment is unable to resolve workspace credentials needed by ai_query() to reach your model serving endpoint. This is a known behavior that can surface when the pipeline runtime version changes or when the credential bootstrapping during the SETTING_UP_TABLES phase does not complete before ai_query() attempts to connect.
Here are several things to check and try:
1. SET THE PIPELINE CHANNEL TO PREVIEW
The ai_query() function requires a runtime version that supports it within SDP. The "current" channel may lag behind in feature support. As @saurabh18cs mentioned, explicitly set the channel in your table properties:
CREATE OR REPLACE MATERIALIZED VIEW my_view
TBLPROPERTIES ('pipelines.channel' = 'PREVIEW') AS
SELECT
  ai_query('your-endpoint', input_col) AS response
FROM
  source_table;
Since @TheSmike reported that neither current nor preview channels resolved the issue recently, there may have been a transient runtime rollout in progress at the time. If you tried this before and it did not work, it is worth trying again with the latest available preview runtime.
2. VERIFY MODEL SERVING ENDPOINT PERMISSIONS
The service principal or identity running the SDP pipeline must have CAN QUERY permission on the model serving endpoint. In a serverless pipeline, the pipeline runs under the identity of the pipeline owner (or the configured service principal). Confirm that this identity has the right permissions:
- Go to Serving in the left sidebar
- Click on your endpoint
- Go to the Permissions tab
- Ensure the pipeline owner or service principal has "Can Query"
3. CHECK FOR NETWORK / PRIVATELINK CONFIGURATION
If your workspace uses AWS PrivateLink, there are additional requirements for ai_query() to reach model serving endpoints. The serverless compute environment must be able to route to the serving infrastructure. Review the network connectivity documentation for your workspace:
https://docs.databricks.com/aws/en/security/network/classic/privatelink.html
If you recently moved to serverless pipelines from classic compute, network routing differences could explain why the same code stopped working.
4. RUNTIME VERSION CHANGE FROM 16.4 TO 17.3
You mentioned the code was working before and the only change was from runtime 16.4.15 to 17.3.*. This is a significant jump. A few things could have changed:
- The credential propagation mechanism in serverless SDP pipelines may behave differently in 17.x runtimes
- If you previously ran on classic (non-serverless) compute, the credential flow would have been different
To isolate the issue, try running your pipeline on non-serverless compute temporarily. In the pipeline settings, disable serverless and attach a cluster. If ai_query() works on classic compute but not serverless, that confirms the issue is specific to the serverless credential bootstrapping.
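For the isolation test above, a bundle-managed pipeline can be switched to classic compute by disabling serverless and declaring a small cluster. A minimal sketch (hypothetical resource key; cluster sizing is illustrative only, and field names should be checked against your bundle schema):

```yaml
# databricks.yml (fragment) -- hypothetical values
resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      serverless: false   # run on classic compute for this test
      clusters:
        - label: default
          num_workers: 1  # a small cluster is enough for a smoke test
```

If ai_query() succeeds with this configuration but fails once serverless is re-enabled, that localizes the problem to serverless credential bootstrapping.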
5. TEST ai_query() INDEPENDENTLY
To verify that ai_query() itself works with your endpoint, run a simple test from a SQL Warehouse or notebook on serverless compute:
SELECT ai_query('your-endpoint-name', 'test input') AS result;
You mentioned this already works on SQL Warehouses and serverless notebook compute. This confirms the endpoint is fine and the issue is isolated to the SDP serverless pipeline environment.
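If your runtime's ai_query() supports the optional failOnError named argument (check the ai_query documentation for your exact runtime version), you can also surface per-row errors as data instead of failing the whole statement, which makes credential or endpoint problems easier to read. A hedged sketch, assuming a hypothetical endpoint name and the documented response struct fields:

```sql
-- With failOnError => false, errors are returned per row in the response
-- struct rather than aborting the statement.
SELECT
  response.result       AS result,
  response.errorMessage AS error_message
FROM (
  SELECT ai_query('your-endpoint-name', 'test input', failOnError => false) AS response
);
```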
6. RETRY OR PIPELINE RESTART
The SETTING_UP_TABLES phase is an early initialization step where the pipeline resolves table schemas and dependencies before processing data. Credential provisioning for serverless compute sometimes has transient delays. If you have not already, try:
- Doing a full refresh of the pipeline (not just a restart)
- Running the pipeline update again after a short wait
If the issue is intermittent, this points to a timing issue with credential availability during initialization.
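The full refresh in step 6 can also be triggered from the CLI, which is convenient when working from VS Code with Asset Bundles. A sketch, assuming a configured Databricks CLI; the pipeline ID and bundle resource key are hypothetical, and flag names may vary across CLI versions:

```shell
# Full refresh via the Pipelines API wrapper (pipeline ID is hypothetical)
databricks pipelines start-update 1234-abcd-5678 --full-refresh

# Or, with Databricks Asset Bundles, run the bundle resource with a full refresh
databricks bundle run my_pipeline --full-refresh-all
```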
7. CONTACT SUPPORT IF THE ISSUE PERSISTS
Since @TheSmike also reported encountering the same problem starting on a specific date, and switching channels did not help, this may be related to a serverless infrastructure update. If the issue persists after trying the steps above, I recommend opening a support ticket with Databricks. Include:
- The full error message and stack trace
- The exact pipeline runtime version (e.g., dlt:17.3.3-delta-pipelines-aarch64-dlt-release-dp-2026...)
- Whether the pipeline is serverless or classic
- Whether the same ai_query() call works from a SQL Warehouse
- The dates when it started failing
This will help the support team investigate whether a specific runtime release introduced a regression.
REFERENCES
- ai_query() function documentation:
https://docs.databricks.com/aws/en/sql/language-manual/functions/ai_query.html
- AI Functions overview (supported compute environments):
https://docs.databricks.com/aws/en/large-language-models/ai-functions.html
- Lakeflow Spark Declarative Pipelines configuration (channel settings):
https://docs.databricks.com/aws/en/delta-live-tables/configure-pipeline.html
- Model serving endpoint permissions:
https://docs.databricks.com/aws/en/machine-learning/model-serving/model-serving-permissions.html
Hope this helps you and @TheSmike get things working again. Let us know what you find.
* This reply was drafted with an agent system I built, which researches and drafts responses from the documentation available to me and from previous memory. I personally review each draft for obvious issues and to monitor system reliability, and I update it when I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.
2 weeks ago
Hi Steve
Thank you for the reply.
It worked again after a couple of days, without any code change; I suspect it had something to do with the serverless runtime compute rollout. All is resolved, and I will watch for similar issues in the future so I can troubleshoot them more efficiently.