<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic TASK_WRITE_FAILED when trying to write on the table, Databricks (Scala) in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/task-write-failed-when-trying-to-write-on-the-table-databricks/m-p/63702#M6800</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I have code on Databricks (Scala) that builds a DataFrame and then writes it to a database table. It works fine for almost all of the tables, but one table fails with &lt;STRONG&gt;No module named 'delta.connect' - TASK_WRITE_FAILED&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;In the penultimate cell I have:&lt;/P&gt;&lt;PRE&gt;display(data)&lt;/PRE&gt;&lt;P&gt;and the output looks fine (no formatting problems):&lt;/P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chemajar_2-1710422710695.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/6653i1B975F27DCC4104B/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400" role="button" title="chemajar_2-1710422710695.png" alt="chemajar_2-1710422710695.png" /&gt;&lt;/span&gt;&lt;P&gt;Then, in the last cell I have:&lt;/P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chemajar_5-1710422843604.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/6654i010F15B0FAE5BEDD/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400" role="button" title="chemajar_5-1710422843604.png" alt="chemajar_5-1710422843604.png" /&gt;&lt;/span&gt;&lt;P&gt;The full error text is the following:&lt;/P&gt;&lt;P&gt;ModuleNotFoundError: No module named 'delta.connect'&lt;/P&gt;&lt;PRE&gt;---------------------------------------------------------------------------
_MultiThreadedRendezvous                  Traceback (most recent call last)
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1414, in SparkConnectClient._execute_and_fetch_as_iterator(self, req)
   1411 generator = ExecutePlanResponseReattachableIterator(
   1412     req, self._stub, self._retry_policy, self._builder.metadata()
   1413 )
-&amp;gt; 1414 for b in generator:
   1415     yield from handle_response(b)

File /usr/lib/python3.10/_collections_abc.py:330, in Generator.__next__(self)
    327 """Return the next item from the generator.
    328 When exhausted, raise StopIteration.
    329 """
--&amp;gt; 330 return self.send(None)

File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:131, in ExecutePlanResponseReattachableIterator.send(self, value)
    129 def send(self, value: Any) -&amp;gt; pb2.ExecutePlanResponse:
    130     # will trigger reattach in case the stream completed without result_complete
--&amp;gt; 131     if not self._has_next():
    132         raise StopIteration()

File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:188, in ExecutePlanResponseReattachableIterator._has_next(self)
    187     self._release_all()
--&amp;gt; 188     raise e
    189 return False

File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:160, in ExecutePlanResponseReattachableIterator._has_next(self)
    159 try:
--&amp;gt; 160     self._current = self._call_iter(
    161         lambda: next(self._iterator)  # type: ignore[arg-type]
    162     )
    163 except StopIteration:

File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:285, in ExecutePlanResponseReattachableIterator._call_iter(self, iter_fun)
    284     self._iterator = None
--&amp;gt; 285     raise e
    286 except Exception as e:
    287     # Remove the iterator, so that a new one will be created after retry.

File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:267, in ExecutePlanResponseReattachableIterator._call_iter(self, iter_fun)
    266 try:
--&amp;gt; 267     return iter_fun()
    268 except grpc.RpcError as e:

File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:161, in ExecutePlanResponseReattachableIterator._has_next.&amp;lt;locals&amp;gt;.&amp;lt;lambda&amp;gt;()
    159 try:
    160     self._current = self._call_iter(
--&amp;gt; 161         lambda: next(self._iterator)  # type: ignore[arg-type]
    162     )
    163 except StopIteration:

File /databricks/python/lib/python3.10/site-packages/grpc/_channel.py:426, in _Rendezvous.__next__(self)
    425 def __next__(self):
--&amp;gt; 426     return self._next()

File /databricks/python/lib/python3.10/site-packages/grpc/_channel.py:826, in _MultiThreadedRendezvous._next(self)
    825 elif self._state.code is not None:
--&amp;gt; 826     raise self

_MultiThreadedRendezvous: &amp;lt;_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.INTERNAL
    details = "Job aborted due to stage failure: Task 0 in stage 3406.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3406.0 (TID 3637) (172.26.0.11 executor 0): org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows to abfss://ucatalogwemetastore@azwesadbricksunitycatmst.dfs.core.windows.net/metastore/cddf9e6b-f8c2-4735-a50e-cabc613c06db/tables/58535ee1-b3c7-43d1-afbc-21efde623438. at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:927) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:551) at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:116) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:934) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:934) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60) at org.apache.spark.rdd.RDD.$anonfun$computeOrReadCheckpoint$1(RDD.scala:410) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:407) at org.apache.spark.rdd.RDD.iterator(RDD.scala:374) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82) at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:196) at org.apache.spark.scheduler.Task.doRunTask(Task.scala:181) at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:146) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41) at com.databricks.unity.HandleImpl.runWith(UCSHandle..."
    debug_error_string = "UNKNOWN:Error received from peer unix:/databricks/sparkconnect/grpc.sock {created_time:"2024-03-14T13:18:45.417841112+00:00", grpc_status:13, grpc_message:"Job aborted due to stage failure: Task 0 in stage 3406.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3406.0 (TID 3637) (172.26.0.11 executor 0): org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows to abfss://ucatalogwemetastore@azwesadbricksunitycatmst.dfs.core.windows.net/metastore/cddf9e6b-f8c2-4735-a50e-cabc613c06db/tables/58535ee1-b3c7-43d1-afbc-21efde623438.\n\tat org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:927)\n\tat org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:551)\n\tat org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:116)\n\tat org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:934)\n\tat org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:934)\n\tat org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)\n\tat org.apache.spark.rdd.RDD.$anonfun$computeOrReadCheckpoint$1(RDD.scala:410)\n\tat com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)\n\tat org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:407)\n\tat org.apache.spark.rdd.RDD.iterator(RDD.scala:374)\n\tat org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82)\n\tat com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)\n\tat org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82)\n\tat com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)\n\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)\n\tat org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:196)\n\tat org.apache.spark.scheduler.Task.doRunTask(Task.scala:181)\n\tat org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:146)\n\tat com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41)\n\tat com.databricks.unity.HandleImpl.runWith(UCSHandle..."}"
&amp;gt;

During handling of the above exception, another exception occurred:

ModuleNotFoundError                       Traceback (most recent call last)
File &amp;lt;command-2290610805725404&amp;gt;, line 5
      1 (
      2     data.write
      3     .format("delta")
      4     .mode("append")
----&amp;gt; 5     .insertInto(targetTable)
      6 )

File /databricks/spark/python/pyspark/sql/connect/readwriter.py:660, in DataFrameWriter.insertInto(self, tableName, overwrite)
    658 self._write.table_name = tableName
    659 self._write.table_save_method = "insert_into"
--&amp;gt; 660 self._spark.client.execute_command(self._write.command(self._spark.client))

File /databricks/spark/python/pyspark/sql/connect/client/core.py:1079, in SparkConnectClient.execute_command(self, command)
   1077 req.user_context.user_id = self._user_id
   1078 req.plan.command.CopyFrom(command)
-&amp;gt; 1079 data, _, _, _, properties = self._execute_and_fetch(req)
   1080 if data is not None:
   1081     return (data.to_pandas(), properties)

File /databricks/spark/python/pyspark/sql/connect/client/core.py:1441, in SparkConnectClient._execute_and_fetch(self, req, self_destruct)
   1438 schema: Optional[StructType] = None
   1439 properties: Dict[str, Any] = {}
-&amp;gt; 1441 for response in self._execute_and_fetch_as_iterator(req):
   1442     if isinstance(response, StructType):
   1443         schema = response

File /databricks/spark/python/pyspark/sql/connect/client/core.py:1422, in SparkConnectClient._execute_and_fetch_as_iterator(self, req)
   1420     yield from handle_response(b)
   1421 except Exception as error:
-&amp;gt; 1422     self._handle_error(error)

File /databricks/spark/python/pyspark/sql/connect/client/core.py:1706, in SparkConnectClient._handle_error(self, error)
   1693 """
   1694 Handle errors that occur during RPC calls.
   1695 (...)
   1703 Throws the appropriate internal Python exception.
   1704 """
   1705 if isinstance(error, grpc.RpcError):
-&amp;gt; 1706     self._handle_rpc_error(error)
   1707 elif isinstance(error, ValueError):
   1708     if "Cannot invoke RPC" in str(error) and "closed" in str(error):

File /databricks/spark/python/pyspark/sql/connect/client/core.py:1742, in SparkConnectClient._handle_rpc_error(self, rpc_error)
   1740 info = error_details_pb2.ErrorInfo()
   1741 d.Unpack(info)
-&amp;gt; 1742 raise convert_exception(info, status.message) from None
   1744 raise SparkConnectGrpcException(status.message) from None
   1745 else:

File /databricks/spark/python/pyspark/errors/exceptions/connect.py:90, in convert_exception(info, message)
     84 return PythonException(
     85     "\n An exception was thrown from the Python worker. "
     86     "Please see the stack trace below.\n%s" % message
     87 )
     89 # BEGIN-EDGE
---&amp;gt; 90 from delta.connect.exceptions import _convert_delta_exception
     92 delta_exception = _convert_delta_exception(info, message)
     93 if delta_exception is not None:

ModuleNotFoundError: No module named 'delta.connect'&lt;/PRE&gt;&lt;P&gt;I am the owner of the table, so I don't think it is a permissions error. Maybe it is the capacity of the server, since it says TASK_WRITE_FAILED?&lt;/P&gt;&lt;P&gt;When constructing the data DataFrame, the code connects to MongoDB to obtain the configuration of the table (delimiters, etc.) and the names of the fields. (I don't think this matters.)&lt;/P&gt;&lt;P&gt;If it helps, the original CSV has 116 columns and 3900 rows (the rest of the tables don't have that many columns).&lt;BR /&gt;Maybe the number of columns is a problem?&lt;/P&gt;&lt;P&gt;Any help or suggestion would be appreciated.&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;</description>
    <pubDate>Thu, 14 Mar 2024 13:32:43 GMT</pubDate>
    <dc:creator>chemajar</dc:creator>
    <dc:date>2024-03-14T13:32:43Z</dc:date>
    <item>
      <title>TASK_WRITE_FAILED when trying to write on the table, Databricks (Scala)</title>
      <link>https://community.databricks.com/t5/get-started-discussions/task-write-failed-when-trying-to-write-on-the-table-databricks/m-p/63702#M6800</link>
      <description>&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;BR /&gt;Hello,&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;P&gt;I have code on Databricks (Scala) that builds a DataFrame and then writes it to a database table. It works fine for almost all of the tables, but one table fails. It says&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;No module named 'delta.connect' - TASK_WRITE_FAILED.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;In the penultimate cell I have:&lt;/P&gt;&lt;PRE&gt;display(data)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;And the output looks fine (no formatting problems):&lt;/P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chemajar_2-1710422710695.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/6653i1B975F27DCC4104B/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400" role="button" title="chemajar_2-1710422710695.png" alt="chemajar_2-1710422710695.png" /&gt;&lt;/span&gt;&lt;P&gt;Then, in the last cell I have:&lt;/P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="chemajar_5-1710422843604.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/6654i010F15B0FAE5BEDD/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400" role="button" title="chemajar_5-1710422843604.png" alt="chemajar_5-1710422843604.png" /&gt;&lt;/span&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The full error text is the following:&lt;/P&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;ModuleNotFoundError: &lt;/SPAN&gt;No module named 'delta.connect'&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN 
class=""&gt;_MultiThreadedRendezvous&lt;/SPAN&gt; Traceback (most recent call last)&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;PRE&gt;File /databricks/spark/python/pyspark/sql/connect/client/core.py:1414, in SparkConnectClient._execute_and_fetch_as_iterator(self, req)
-&amp;gt; 1414 for b in generator:
File /usr/lib/python3.10/_collections_abc.py:330, in Generator.__next__(self)
File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:131, in ExecutePlanResponseReattachableIterator.send(self, value)
File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:188, in ExecutePlanResponseReattachableIterator._has_next(self)
-&amp;gt; 188 raise e
File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:160, in ExecutePlanResponseReattachableIterator._has_next(self)
File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:285, in ExecutePlanResponseReattachableIterator._call_iter(self, iter_fun)
-&amp;gt; 285 raise e
File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:267, in ExecutePlanResponseReattachableIterator._call_iter(self, iter_fun)
-&amp;gt; 267 return iter_fun()
File /databricks/spark/python/pyspark/sql/connect/client/reattach.py:161, in ExecutePlanResponseReattachableIterator._has_next.&amp;lt;locals&amp;gt;.&amp;lt;lambda&amp;gt;()
File /databricks/python/lib/python3.10/site-packages/grpc/_channel.py:426, in _Rendezvous.__next__(self)
File /databricks/python/lib/python3.10/site-packages/grpc/_channel.py:826, in _MultiThreadedRendezvous._next(self)
-&amp;gt; 826 raise self

_MultiThreadedRendezvous: &amp;lt;_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.INTERNAL
    details = "Job aborted due to stage failure: Task 0 in stage 3406.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3406.0 (TID 3637) (172.26.0.11 executor 0): org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows to abfss://ucatalogwemetastore@azwesadbricksunitycatmst.dfs.core.windows.net/metastore/cddf9e6b-f8c2-4735-a50e-cabc613c06db/tables/58535ee1-b3c7-43d1-afbc-21efde623438.
        at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:927)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:551)
        at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:116)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:934)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:934)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
        at org.apache.spark.rdd.RDD.$anonfun$computeOrReadCheckpoint$1(RDD.scala:410)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:407)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:374)
        at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:82)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:82)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:196)
        at org.apache.spark.scheduler.Task.doRunTask(Task.scala:181)
        at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:146)
        at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41)
        at com.databricks.unity.HandleImpl.runWith(UCSHandle..."
&amp;gt;

During handling of the above exception, another exception occurred:

ModuleNotFoundError                       Traceback (most recent call last)
File &amp;lt;command-2290610805725404&amp;gt;, line 5
      1 (
      2     data.write
      3     .format("delta")
      4     .mode("append")
----&amp;gt; 5     .insertInto(targetTable)
      6 )
File /databricks/spark/python/pyspark/sql/connect/readwriter.py:660, in DataFrameWriter.insertInto(self, tableName, overwrite)
-&amp;gt; 660 self._spark.client.execute_command(self._write.command(self._spark.client))
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1079, in SparkConnectClient.execute_command(self, command)
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1441, in SparkConnectClient._execute_and_fetch(self, req, self_destruct)
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1422, in SparkConnectClient._execute_and_fetch_as_iterator(self, req)
-&amp;gt; 1422 self._handle_error(error)
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1706, in SparkConnectClient._handle_error(self, error)
File /databricks/spark/python/pyspark/sql/connect/client/core.py:1742, in SparkConnectClient._handle_rpc_error(self, rpc_error)
-&amp;gt; 1742 raise convert_exception(info, status.message) from None
File /databricks/spark/python/pyspark/errors/exceptions/connect.py:90, in convert_exception(info, message)
     89 # BEGIN-EDGE
---&amp;gt; 90 from delta.connect.exceptions import _convert_delta_exception

ModuleNotFoundError: No module named 'delta.connect'&lt;/PRE&gt;&lt;P&gt;I am the owner of the table, so I don't think it is a permissions error. Could it be a capacity problem on the cluster, given that the error says TASK_WRITE_FAILED?&lt;/P&gt;&lt;P&gt;When constructing the dataframe, the code connects to MongoDB to obtain the table's configuration (delimiters, etc.) and the field names, but I don't think that is relevant here.&lt;/P&gt;&lt;P&gt;If it helps, the original CSV has 116 columns and about 3,900 rows; the other tables don't have nearly as many columns.&lt;BR /&gt;Could the number of columns be the problem?&lt;/P&gt;&lt;P&gt;Any help or suggestion would be appreciated.&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Thu, 14 Mar 2024 13:32:43 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/task-write-failed-when-trying-to-write-on-the-table-databricks/m-p/63702#M6800</guid>
      <dc:creator>chemajar</dc:creator>
      <dc:date>2024-03-14T13:32:43Z</dc:date>
    </item>
  </channel>
</rss>