11-04-2023 10:58 AM - edited 11-04-2023 11:03 AM
Hello,
I've correctly set up a stream from kinesis, but I can't read anything from my delta table
I'm actually reproducing the demo from Frank Munz: https://github.com/fmunz/delta-live-tables-notebooks/tree/main/motion-demo
and I'm running the following script which is named M-Magn SSS.py
# Databricks notebook source
# MAGIC %md
# MAGIC ![Name of the image](https://raw.githubusercontent.com/fmunz/motion/master/img/magn.png)
# COMMAND ----------
# MAGIC %sql
# MAGIC USE demo_frank.motion;
# MAGIC DESCRIBE table sensor;
# COMMAND ----------
from pyspark.sql.functions import window

# set a 3-second watermark on the time column
# spark.sql.shuffle.partitions (default 200, match number of cores)
spark.conf.set("spark.sql.shuffle.partitions", 30)

display(
    spark.readStream.format("delta").table("sensor")
    .withWatermark("time", "3 seconds")
    .groupBy(window("time", "1 second"))
    .avg("magn")
    .orderBy("window", ascending=False)
    .limit(30)
)
I've created the pipeline successfully and I'm now trying to look at my Delta table, but Databricks raises the following error when I run this:
%sql
USE bigdatalab.motion;
DESCRIBE table sensor;
Error in SQL statement: ExecutionException: org.apache.spark.sql.AnalysisException: 403: Your token is missing the required scopes for this endpoint.
I don't know what the cause of the problem could be; I've tried to set up all the authorizations, but it doesn't work.
Hope you can help me. I'm using the Databricks on AWS 14-day trial.
11-04-2023 11:01 AM
If I try to query any other table that is not Delta, I don't get any error.
11-08-2023 05:47 PM
Did you figure out an answer to this? I am seeing the same thing on a delta streaming table.
11-10-2023 11:27 AM
Not really, but in my case I solved it by using the Hive metastore instead of Unity Catalog; maybe it's because I'm on a trial version.
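For reference, this is roughly what the query looks like once the pipeline writes to the Hive metastore; the motion schema name is just carried over from the demo, so adjust it to your pipeline's target:
%sql
USE hive_metastore.motion;
DESCRIBE TABLE sensor;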
01-20-2024 10:55 AM
I'm running into a similar situation: DESCRIBE <table> on a table in UC works fine, but running DESCRIBE EXTENDED <table> on that same table gives me that same error from a Databricks notebook.
Strangely, I don't get this error if I switch the notebook to use a Serverless SQL Warehouse instead of a personal cluster.
So I personally definitely have whatever access is needed, but somehow this compute instance does not have access to just the extended properties?
ExecutionException: org.apache.spark.sql.AnalysisException: 403: Your token is missing the required scopes for this endpoint.
01-20-2024 09:35 PM
I am having a similar issue, any update on this one?
01-04-2024 10:38 AM
I am facing the same issue, but I cannot move my data to hive_metastore. Does anyone have a solution for Unity Catalog?
02-21-2024 05:13 AM
At this time, Delta Live Tables in Unity Catalog can only be read using Shared compute or a SQL Warehouse (support for reading from Assigned compute is on the roadmap).
To read the table using Assigned compute (e.g. Personal Compute), you will first need to make a copy of the table using Shared compute or a SQL Warehouse (i.e. not in a DLT pipeline). You can then use Assigned compute to read from the copy.
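For anyone else hitting this, here is a minimal sketch of that copy step, assuming the catalog and schema names from the original post; the sensor_copy name is just an example:
-- Run on a Shared access mode cluster or a SQL Warehouse, not inside a DLT pipeline
CREATE OR REPLACE TABLE bigdatalab.motion.sensor_copy AS
SELECT * FROM bigdatalab.motion.sensor;

-- The copy is a plain Delta table, so Assigned (e.g. Personal) compute can read it
DESCRIBE TABLE bigdatalab.motion.sensor_copy;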
03-08-2024 02:36 AM
Hello, I also had this issue. It was because I was trying to read a DLT table with a Machine Learning Runtime. At the time of writing, Machine Learning Runtimes are not compatible with shared access mode, so I ended up setting up two clusters, one MLR with assigned access mode and one normal runtime with shared access mode, and then played a fun game of dependency Jenga.
I was only setting up a demo, so it was no big deal. If you're looking at something in production, I'm not sure I could recommend this approach unless it's a last resort.