Databricks has provided ACID guarantees since the inception of the Delta format. To ensure the C (Consistency) is addressed, it limits concurrent workflows from performing updates at the same time, like other ACID-compliant SQL engines. The key difference is that some SQL engines limit concurrency at the beginning of a merge by taking a lock. Delta, by contrast, uses optimistic concurrency: each writer notes the version of the Delta transaction log when it starts, and assumes no other process is updating the table concurrently. When one process finishes writing its files and commits a new log entry, the other process's commit fails, because the log version it started from is no longer current (surfaced as a ConcurrentModificationException, most commonly ConcurrentAppendException). That's how it works under the hood, but what is the solution?
1. You can restart the failed process, either at the workflow level (job/task retries) or with Pythonic exception handling; see the retry sketch after this list.
2. Or, ensure that these processes write to different partitions of the Delta table, so the files they touch don't overlap with the files created by the conflicting process; the second sketch below makes the partition separation explicit in the merge condition.
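For option 1, here's a minimal retry sketch, assuming a delta-spark environment; `merge_with_retry`, the table path, and the `id` join key are hypothetical names, and depending on the operation you may need to catch other subclasses such as ConcurrentDeleteReadException:

```python
from delta.exceptions import ConcurrentAppendException
from delta.tables import DeltaTable

def merge_with_retry(spark, updates_df, table_path, max_attempts=3):
    """Run a Delta merge, retrying if a concurrent writer wins the commit race."""
    for attempt in range(1, max_attempts + 1):
        try:
            # Re-resolve the table each attempt so we read the latest log version.
            target = DeltaTable.forPath(spark, table_path)
            (target.alias("t")
                   .merge(updates_df.alias("s"), "t.id = s.id")
                   .whenMatchedUpdateAll()
                   .whenNotMatchedInsertAll()
                   .execute())
            return
        except ConcurrentAppendException:
            # Another process committed first, so our snapshot is stale;
            # loop around, re-read the table, and try the merge again.
            if attempt == max_attempts:
                raise
```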
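For option 2, the idea is to put the partition predicate directly in the merge condition, so Delta can verify that concurrent merges operate on disjoint sets of files. A sketch assuming the table is partitioned by a hypothetical `region` column:

```python
from delta.tables import DeltaTable

def merge_one_partition(spark, updates_df, table_path, region):
    """Merge updates into a single partition; concurrent merges on other
    partitions won't conflict because the predicate pins the file set."""
    target = DeltaTable.forPath(spark, table_path)
    (target.alias("t")
           .merge(updates_df.alias("s"),
                  # Pinning t.region lets Delta prove this merge only reads
                  # and writes one partition, so a concurrent merge on a
                  # different region touches disjoint files and both commit.
                  f"t.region = '{region}' AND t.id = s.id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
```

With the predicate pinned like this, two jobs merging region='US' and region='EU' can commit concurrently without tripping the log-version check.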
Hope this helps...