03-04-2025 11:20 AM
Hello,
When I run DML statements in SQL using the view "the_source_view" directly as a source, everything works fine: I can insert into another Delta table based on the data coming out of this view.
It also works from a Job: Workflow -> Task -> Notebook containing the SQL statements.
Now if I try to execute the same statements via PySpark, the way I would run dynamic SQL in Oracle, I get the following error:
"
[INSUFFICIENT_PERMISSIONS] Insufficient privileges:
User does not have SELECT on Table 'mycatalog.mychema.__materialization_mat_951a1902_a8ff_43e1_aa6a_4ca05a4e3b19_the_source_view_1'. SQLSTATE: 42501
"
What is missing there in terms of privileges? When I look at the permissions, I don't see any difference from other tables or views.
The view "the_source_view" was created by a colleague on another team, not by me, so I don't have all the details on what is behind it, so to speak.
For the PySpark execution I'm using an All Purpose compute.
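For reference, a minimal sketch of the PySpark route I mean (the target table name is hypothetical; the catalog, schema, and view names follow the error message above):

```python
# Sketch only: build the same DML as a dynamic SQL string and run it through
# spark.sql(), the way one would run dynamic SQL in Oracle.
# "my_target_table" is a hypothetical name; catalog/schema/view names are
# taken from the error message in this thread.
catalog, schema = "mycatalog", "mychema"
stmt = (
    f"INSERT INTO {catalog}.{schema}.my_target_table "
    f"SELECT * FROM {catalog}.{schema}.the_source_view"
)
# In a notebook on an All Purpose cluster this would be executed as:
# spark.sql(stmt)
```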
Thanks for the help !
Vinc
Accepted Solutions
a month ago
Hello,
Things are working as soon as I use a compute with Access Mode = Standard.
Vinc
03-06-2025 07:54 AM
Hello,
To add a bit more context to this question:
. Is there a difference in the privileges needed when executing SQL code in a Notebook that targets a view like "the_source_view" in my example, versus executing PySpark code that uses the same view?
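One way to compare the two, assuming Unity Catalog: list the grants on the view and on the hidden materialization table named in the error message, and diff the results (I believe `SHOW GRANTS ON TABLE` also accepts views; the names below are taken from the error in this thread):

```python
# Sketch: build SHOW GRANTS statements for both the view and the hidden
# __materialization_... backing table from the error message, so the two
# sets of privileges can be compared side by side.
view_name = "mycatalog.mychema.the_source_view"
mat_table = (
    "mycatalog.mychema."
    "__materialization_mat_951a1902_a8ff_43e1_aa6a_4ca05a4e3b19_the_source_view_1"
)
for securable in (view_name, mat_table):
    stmt = f"SHOW GRANTS ON TABLE {securable}"
    # In a notebook: spark.sql(stmt).show(truncate=False)
    print(stmt)
```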
Thanks!

