06-23-2022 10:38 AM
A Delta Lake table was created with an identity column, and I'm not able to load data into it in parallel from four processes; I'm getting a metadata exception error.
I don't want to load the data into a temp table first. I need to load it directly, and in parallel, into the Delta table.
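For context, a minimal PySpark sketch of the setup being described; the table and column names here are made up, and the exact failure mode can vary by runtime version:

```python
# Hypothetical repro: a Delta table with an identity column (names are illustrative).
spark.sql("""
    CREATE TABLE IF NOT EXISTS events (
        id BIGINT GENERATED ALWAYS AS IDENTITY,
        payload STRING
    ) USING DELTA
""")

# Each of the four processes/notebooks effectively runs an append like this.
# When several of them commit at the same time, the losing writers can fail
# with MetadataChangedException (see the error quoted below).
df = spark.createDataFrame([("some-data",)], ["payload"])
df.write.format("delta").mode("append").saveAsTable("events")
```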
06-23-2022 10:55 AM
MetadataChangedException: The metadata of the Delta table has been changed by a concurrent update. Please try the operation again
06-23-2022 11:21 AM
No ALTER TABLE operations are carried out. Just loading data from four notebooks running in parallel into the same Delta Lake table, which has ID as an identity column, causes the issue.
Loading the data into a temp table first and then inserting it into the target table with the identity column doesn't cause any issues.
But for some reason I need to load the data in parallel into the table that has the identity column.
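A sketch of the two patterns being contrasted here, assuming illustrative table names; the staging variant works because the per-writer tables have no identity column, so the identity metadata is only touched once, in a single commit:

```python
# Direct parallel append (fails intermittently): four notebooks each run
#   df.write.format("delta").mode("append").saveAsTable("events")

# Staging workaround: each writer appends to its own identity-free table...
df.write.format("delta").mode("overwrite").saveAsTable("staging_batch_1")

# ...and one serialized step moves the rows into the target, generating
# all identity values in a single commit.
spark.sql("""
    INSERT INTO events (payload)
    SELECT payload FROM staging_batch_1
""")
```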
06-23-2022 11:43 AM
@Gokul K, the identity state is stored in the table schema (which is an awful solution). That's why concurrent inserts are not supported.
I even recorded a video about that problem: Delta Identity Column with Databricks 10.4 - crash test - YouTube
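To make this concrete: the identity state (start, step, and the current high-water mark) is carried as column metadata inside the table schema, so every id-generating append must commit a metadata change. A sketch for inspecting it; the key names below follow the Delta protocol documentation and should be treated as illustrative:

```python
# Inspect the column metadata of a Delta table with an identity column.
for field in spark.table("events").schema.fields:
    meta = field.metadata  # column metadata exposed as a plain dict
    if "delta.identity.highWaterMark" in meta:
        print(field.name, meta)
        # e.g. {'delta.identity.start': 1,
        #       'delta.identity.step': 1,
        #       'delta.identity.highWaterMark': 42, ...}

# Concurrent appends all try to advance the high-water mark in this same
# schema metadata, which is what MetadataChangedException reports.
```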
06-13-2023 12:45 AM
@Hubert Dudek @Kaniz Fatma
I am experiencing the same issue. Now that I understand the reason behind it, I would appreciate your assistance in finding a solution for generating a sequence for the table. Multiple concurrent jobs will be performing insertions and updates on the same table. To address the concurrent update issue, I have partitioned the table. However, I am struggling to determine the best approach for generating the Id values. I would greatly appreciate any suggestions you can provide.
11-08-2023 09:03 AM
Even with a retry or try/except approach, there is no guarantee that the load of another parallel process has completed, especially for large-volume tables. So in such cases, even if you retry the write in the exception handler, it can fail again. What is the best possible solution for this? Is there any other way to generate an auto-incrementing ID column without using the GENERATED clause in the DDL?
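For reference, a minimal sketch of the retry pattern this post refers to (the function name and backoff policy are illustrative); as noted above, it is best-effort only, since a writer can keep losing the race under sustained concurrent load:

```python
import time

def append_with_retry(df, table, max_attempts=5):
    """Append to a Delta table, retrying on concurrent metadata conflicts."""
    for attempt in range(1, max_attempts + 1):
        try:
            df.write.format("delta").mode("append").saveAsTable(table)
            return
        except Exception as e:
            # A real implementation would catch the specific Delta
            # concurrency exception types rather than matching strings.
            if "MetadataChangedException" not in str(e) or attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # exponential backoff before retrying
```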
06-23-2022 12:01 PM
Thanks @Hubert Dudek
08-06-2024 09:09 PM
I recently ran into this MetadataChangedException. Watching the video @Hubert Dudek posted, it's pretty clear what is going on: this was built by object-storage folks who don't think like people who build relational database engines. That's to be expected. Databricks is wonderful in many ways, but in SQL and relational-engine features like sequences, it's evolving slowly.
I switched to a serial write to get around the problem because of a deadline, but we really should open a ticket with Databricks to get some clarity on an issue (parallel sequence updates) that relational databases solved 50+ years ago. As the video says, it's a bad idea to store the identity information in the schema. It needs to be something like a separate file with a thread-safe approach to updates.
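A sketch of the serial-write fallback mentioned here, with hypothetical source paths: the loads run one after another in a single loop, so only one identity update is committed at a time:

```python
# Sequential loads instead of concurrent jobs (paths are illustrative).
sources = ["/mnt/raw/part1", "/mnt/raw/part2", "/mnt/raw/part3", "/mnt/raw/part4"]

for path in sources:
    (spark.read.format("parquet").load(path)
         .write.format("delta").mode("append").saveAsTable("events"))
```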
2 weeks ago
I'm having the same issue: I need to load a large amount of data from separate files into a Delta table, and I want to do it with a for-each loop so I don't have to run it sequentially, which would take days. There should be a way to handle this.