11-07-2022 08:11 PM
I tried to use Spark as much as possible but experienced some performance regressions. I'm hoping to get some direction on how to use it correctly.
I've created a Databricks table using spark.sql:
spark.sql('select * from example_view ') \
.write \
.mode('overwrite') \
.saveAsTable('example_table')
and then I need to patch some values:
%sql
update example_table set create_date = '2022-02-16' where id = '123';
update example_table set create_date = '2022-02-17' where id = '124';
update example_table set create_date = '2022-02-18' where id = '125';
update example_table set create_date = '2022-02-19' where id = '126';
However, I found this awfully slow since it created hundreds of Spark jobs.
Why is Spark doing this, and how can I improve my code? The last thing I want to do is convert the table back to pandas and update the cell values individually. Any suggestion is appreciated.
- Labels:
  - Databricks table
  - Slow
  - Spark
  - Sparkdataframe
Accepted Solutions
11-08-2022 12:00 AM
Hi @Vincent Doe,
Updates are available in Delta tables, but under the hood you are updating Parquet files. Each UPDATE statement has to find the files where the matching records are stored, rewrite those files to a new version, and make the new files the current version, so running many small UPDATEs is expensive.
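One way to cut that cost, assuming example_table is a Delta table, is to batch all four corrections into a single UPDATE so each affected file is rewritten once rather than once per statement. A minimal sketch using the ids and dates from your question:
%sql
UPDATE example_table
SET create_date = CASE id
  WHEN '123' THEN '2022-02-16'
  WHEN '124' THEN '2022-02-17'
  WHEN '125' THEN '2022-02-18'
  WHEN '126' THEN '2022-02-19'
END
-- restrict to the affected rows so nothing else is rewritten
WHERE id IN ('123', '124', '125', '126');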
Alternatively, you could rebuild the table in one pass with the corrected values, for example:
spark.sql("""
    select
        col1,
        col2,
        col3,
        case
            when id = '123' then '2022-02-16'
            when id = '124' then '2022-02-17'
            else create_date  -- keep the existing value for all other rows
        end as create_date
        ...
    from example_view
""") \
    .write \
    .mode('overwrite') \
    .saveAsTable('example_table')
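If you would rather not rewrite the whole table, a MERGE touches only the files that contain the matching rows. A sketch under the same assumptions (Delta table, the four ids and dates from your question):
%sql
MERGE INTO example_table AS t
USING (
  -- inline table of corrections; extend with more (id, date) pairs as needed
  SELECT * FROM VALUES
    ('123', '2022-02-16'),
    ('124', '2022-02-17'),
    ('125', '2022-02-18'),
    ('126', '2022-02-19')
    AS fixes(id, create_date)
) AS f
ON t.id = f.id
WHEN MATCHED THEN UPDATE SET t.create_date = f.create_date;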
11-08-2022 06:26 PM
@Pat Sienkiewicz, those are good tips. Thanks.

