Delete row from table is not working.

kjoth
Contributor II

I have created an external table using Spark via the command below (using the Data Science & Engineering workspace).

df.write.mode("overwrite").format("parquet").saveAsTable(name=f'{db_name}.{table_name}', path="dbfs:/reports/testing")

I have tried to delete a row based on a filter condition using a SQL endpoint (using SQL):

DELETE FROM testing.mobile_number_table
WHERE x1 == 9940062964

I am getting the error message below:

Spark 3.0 Plans are not fully supported on table acl or credential passthrough clusters: DeleteFromTable (x1#6341L = 9940062964)

1 ACCEPTED SOLUTION

Hubert-Dudek
Esteemed Contributor III

Try using:

.format("delta")

If that does not help, I would check the DBFS mount.
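For completeness, a minimal sketch of that fix, reusing the df, db_name, table_name, and example path from the question. This is an assumption-laden sketch, not a tested recipe: because the format changes from Parquet to Delta, the old table and path may need to be cleared first.

# Hedged sketch: rewrite the table as Delta so that row-level DELETE works.
# Assumes df, db_name, table_name, and the path from the question.
spark.sql(f"DROP TABLE IF EXISTS {db_name}.{table_name}")
dbutils.fs.rm("dbfs:/reports/testing", recurse=True)  # clear old Parquet files

df.write.mode("overwrite").format("delta").saveAsTable(
    name=f"{db_name}.{table_name}", path="dbfs:/reports/testing"
)

# With a Delta-backed table, the delete from the question should succeed:
spark.sql(f"DELETE FROM {db_name}.{table_name} WHERE x1 = 9940062964")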


9 REPLIES


kjoth
Contributor II

Hi @Hubert Dudek, is Delta the only way to update and delete records?

Hubert-Dudek
Esteemed Contributor III

Using the Delta file format is the only ("real") way to delete something from a file, because Delta is transactional: it writes a commit recording that the row was deleted, a bit like SQL or Git.

With other data file formats, every delete requires rewriting everything.
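To illustrate the cost, here is a hedged sketch of what a "delete" amounts to on a plain Parquet table (the staging path is hypothetical): with no transaction log, the whole dataset is read back, filtered, and rewritten.

src = "dbfs:/reports/testing"      # Parquet path from the question
tmp = "dbfs:/reports/testing_tmp"  # hypothetical staging path

# Keep only the rows that should survive the "delete".
kept = spark.read.parquet(src).filter("x1 <> 9940062964")

# Spark refuses to overwrite a path it is currently reading from,
# so stage the result elsewhere, then rewrite the original location.
kept.write.mode("overwrite").parquet(tmp)
spark.read.parquet(tmp).write.mode("overwrite").parquet(src)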

kjoth
Contributor II

Thank you.

Hubert-Dudek
Esteemed Contributor III

If this helped, you can mark my answer as the best one 🙂

jose_gonzalez
Moderator

Hi @karthick J,

It seems like the error is coming from your table permissions. Are you using a high-concurrency cluster? If so, check whether table ACLs are enabled. Also try to test it on a standard cluster.
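As a quick check from a notebook (a sketch, assuming the cluster exposes its Spark confs as usual): spark.databricks.acl.dfAclsEnabled is the cluster conf that turns table access control on.

# Hedged: "true" means table ACLs are on, which is what blocks
# DeleteFromTable on this cluster; "false" or unset means they are off.
print(spark.conf.get("spark.databricks.acl.dfAclsEnabled", "false"))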

kjoth
Contributor II

Hi @Jose Gonzalez, yes, table access control has been enabled. Is this what you are referring to?

[screenshot: cluster configuration with table access control enabled]

jose_gonzalez
Moderator

Hi @karthick J,

Can you try to delete the row and execute your command on a non-high-concurrency cluster? The reason I am asking is that we first need to isolate the error message and understand why it is happening in order to find the best solution. Is this issue still blocking you, or have you been able to mitigate or solve it?

kjoth
Contributor II

Hi @Jose Gonzalez,

I have created the table via Spark on a non-high-concurrency cluster, without writing it in Delta format. I then tried to delete the row and got the same error both ways: 1. via Spark notebook SQL, and 2. via a SQL query.

df.write.mode("overwrite").format("parquet").saveAsTable(name=f'{db_name}.{table_name}', path="dbfs:/reports/testing")

[screenshots: the error from the Spark notebook and from the SQL query]
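As a side note, a hedged way to confirm the table's underlying format from either environment (reusing db_name and table_name from above): the Provider row should read parquet here, which is why the DELETE keeps failing.

# Hedged: DESCRIBE TABLE EXTENDED lists table metadata, including the
# "Provider" row (parquet vs. delta).
spark.sql(f"DESCRIBE TABLE EXTENDED {db_name}.{table_name}").show(100, truncate=False)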
