Py4JJavaError: An error occurred while calling o236.sql.
: org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:201)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:192)
	at ...
Caused by: java.util.concurrent.ExecutionException: org.apache.spark.SparkException: The query is not executed because it tries to launch 390944 tasks in a single stage, while the maximum allowed tasks one query can launch is 100000; this limit can be modified with configuration parameter "spark.databricks.queryWatchdog.maxQueryTasks".
Above is a summary of the error stack trace. Could you please let us know how to resolve this issue?
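The error message itself points to the Query Watchdog limit and says it can be changed via spark.databricks.queryWatchdog.maxQueryTasks. Below is a minimal sketch of what we assume that change would look like in a PySpark session; the value 500000 is only a placeholder, not a recommendation, and we are unsure whether raising the limit is preferable to reducing the number of tasks (e.g. by repartitioning the input) in the first place.

```python
# Sketch only: raise the Query Watchdog task limit named in the error.
# Assumes an existing SparkSession `spark` on a Databricks cluster;
# 500000 is a placeholder value, not a recommendation.
spark.conf.set("spark.databricks.queryWatchdog.maxQueryTasks", 500000)
```

Is adjusting this setting at the session (or cluster) level the right approach here, or is there a better way to avoid launching 390944 tasks in a single stage?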