
Can't query delta tables, token missing required scope

Miro_ta
New Contributor III

Hello,
I've set up a stream from Kinesis correctly, but I can't read anything from my Delta table.

I'm actually reproducing the demo from Frank Munz: https://github.com/fmunz/delta-live-tables-notebooks/tree/main/motion-demo

and I'm running the following script, which is named M-Magn SSS.py:

 

# Databricks notebook source
# MAGIC %md
# MAGIC ![Name of the image](https://raw.githubusercontent.com/fmunz/motion/master/img/magn.png)

# COMMAND ----------

# MAGIC %sql
# MAGIC USE demo_frank.motion;
# MAGIC DESCRIBE TABLE sensor;

# COMMAND ----------

from pyspark.sql.functions import window

# Lower shuffle partitions from the default of 200 to roughly match
# the number of cores on this cluster
spark.conf.set("spark.sql.shuffle.partitions", 30)

# Stream from the Delta table, allow events up to 3 seconds late via the
# watermark on the time column, and average magn over 1-second windows
display(
    spark.readStream.format("delta")
    .table("sensor")
    .withWatermark("time", "3 seconds")
    .groupBy(window("time", "1 second"))
    .avg("magn")
    .orderBy("window", ascending=False)
    .limit(30)
)

 

I've created the pipeline successfully and I'm now trying to look at my Delta table, but Databricks raises the following error when I run this:

%sql
USE bigdatalab.motion;
DESCRIBE TABLE sensor;

Error in SQL statement: ExecutionException: org.apache.spark.sql.AnalysisException: 403: Your token is missing the required scopes for this endpoint.

I don't know what the cause of the problem could be; I've tried to set up all the authorizations, but it doesn't work.

Hope you can help me; I'm using the Databricks on AWS 14-day trial.

 


9 REPLIES

Miro_ta
New Contributor III

If I try to query any other table that is not Delta, I don't get any error.

trejas
New Contributor II

Did you figure out an answer to this? I am seeing the same thing on a Delta streaming table.

Miro_ta
New Contributor III

Not really, but in my case I solved it by using the Hive metastore instead of Unity Catalog; maybe it's because I'm on a trial version.
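
In case it helps anyone following along, a minimal sketch of that workaround, assuming the pipeline's target schema ends up in the Hive metastore (the names below mirror this thread and are illustrative):

# With the DLT pipeline's target in the Hive metastore rather than a Unity
# Catalog catalog, the table is readable via the three-level namespace.
df = spark.read.table("hive_metastore.motion.sensor")
display(df.limit(10))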

Kaniz
Community Manager

Hi @Miro_ta, it seems you're encountering an authorization issue while accessing your Databricks resources. Let's troubleshoot this together.

The error message you're seeing, "403: Your token is missing the required scopes for this endpoint," indicates that your service principal might not have the necessary permissions to access the Databricks API.

Here are some steps to address this (note that the portal steps below assume an Azure workspace; on AWS the equivalent permissions are managed through the Databricks account console):

Grant Roles to the Service Principal:

  • Ensure that you have granted roles to the service principal associated with your Databricks workspace. You can do this via the Azure Portal:
    • Go to Azure Portal > Azure Databricks > Azure Databricks Service > Access control (IAM) > Add a role assignment.
    • Select the role you want to grant (e.g., Contributor) and find your service principal.
    • Save the changes.

Check Workspace Access:

  • Make sure you have either the Contributor or Owner role on the Databricks workspace resource in Azure.
  • If not, request access from your admin.

Add Service Principal as a User:

  • If you wish to access Databricks endpoints using just the access token (as in CI/CD workflows), add the service principal as a user in the Databricks workspace. In this case, only the access token is needed.

Remember that Databricks endpoints may require different scopes and permissions, so ensure that your service principal has the appropriate access. If you encounter any further issues, feel free to ask for more assistance! 🚀
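
One quick sanity check before changing any role assignments, offered as a hedged sketch (the workspace URL and token below are placeholders, not values from this thread): call a Unity Catalog REST endpoint directly with the same token and see whether it returns the same 403.

import requests

# Placeholders: substitute your workspace URL and a personal access token.
WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

# Listing Unity Catalog catalogs exercises the same authorization path;
# a 403 here mirrors the "missing required scopes" error from the notebook.
resp = requests.get(
    f"{WORKSPACE_URL}/api/2.1/unity-catalog/catalogs",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code)
print(resp.text[:500])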

jlb0001
New Contributor III

I'm running into a similar situation: DESCRIBE <table> on a table in UC works fine, but running DESCRIBE EXTENDED <table> on that same table gives me the same error from a Databricks notebook.

Strangely, I don't get this error if I switch the notebook to a Serverless SQL Warehouse instead of a personal compute cluster.

So, I personally definitely have whatever access is needed, but somehow this compute instance does not have access to just the extended properties?

ExecutionException: org.apache.spark.sql.AnalysisException: 403: Your token is missing the required scopes for this endpoint.

What permissions must the compute instance have to access extended properties?
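
For reference, a minimal repro of the asymmetry described above (the table name is a placeholder):

# Placeholder three-level name; substitute a UC table you can access.
tbl = "main.default.my_table"

# Succeeds on the personal (assigned) cluster:
spark.sql(f"DESCRIBE {tbl}").show(truncate=False)

# Raises the 403 "missing required scopes" error on the same cluster,
# but succeeds when the notebook is attached to a Serverless SQL Warehouse:
spark.sql(f"DESCRIBE EXTENDED {tbl}").show(truncate=False)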

udi_azulay
New Contributor II

I am having a similar issue. Any update on this one?

nag_kanchan
New Contributor III

I am facing the same issue, but I cannot move my data to hive_metastore. Does anyone have a solution for Unity Catalog?

@Kaniz 

Brosenberg
New Contributor III

At this time, Delta Live Tables in Unity Catalog can only be read using Shared compute or a SQL Warehouse (support for reading from Assigned compute is on the roadmap).

To read the table using Assigned compute (e.g., Personal Compute), you will first need to make a copy of the table using Shared compute or a SQL Warehouse (i.e., not in a DLT pipeline). You can then use Assigned compute to read from the copy.
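
A minimal sketch of that copy step, using the catalog and schema names from this thread (the sensor_copy name is made up); run it once from Shared compute or a SQL Warehouse:

# Run from Shared compute or a SQL Warehouse, outside the DLT pipeline.
# CTAS materializes a plain Delta table that Assigned compute can read.
spark.sql("""
    CREATE OR REPLACE TABLE bigdatalab.motion.sensor_copy
    AS SELECT * FROM bigdatalab.motion.sensor
""")

Note that the copy does not stay in sync with the DLT table, so re-run (or schedule) the statement whenever you need fresh data.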

holly
New Contributor III

Hello, I also had this issue. It was because I was trying to read a DLT table with a Machine Learning Runtime. At the time of writing, Machine Learning Runtimes are not compatible with shared access mode, so I ended up setting up two clusters (one with an ML runtime in assigned access mode, one with a normal runtime in shared access mode) and then played a fun game of dependency jenga.

I was only setting up a demo, so no big deal. If you're looking at something in production, I'm not sure I could recommend this approach unless it's a last resort.
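
If you go down the two-cluster road, the relevant knob when creating the shared cluster is the access mode. A hedged sketch via the Clusters REST API (all field values below are illustrative, not from this thread):

import requests

# Placeholders: substitute your workspace URL and token.
WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

# "USER_ISOLATION" is the API value for shared access mode;
# "SINGLE_USER" corresponds to assigned access mode.
payload = {
    "cluster_name": "shared-dlt-reader",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 1,
    "data_security_mode": "USER_ISOLATION",
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
print(resp.status_code, resp.text)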
