- 4890 Views
- 16 replies
- 5 kudos
Using DBR 10.0. When calling toPandas(), the worker fails with IndexOutOfBoundsException. It seems that ArrowWriter.sizeInBytes (which looks like a proprietary method, since I can't find it in OSS) calls Arrow's getBufferSizeFor, which fails with this err...
Latest Reply
I am also facing the same issue. I have applied the config `spark.sql.execution.arrow.pyspark.enabled` set to `false`, but I'm still facing the same issue. Any idea what's going on? Please help me out. ...org.apache.spark.SparkException: Job aborted ...
15 More Replies
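The config mentioned in the reply above is usually applied like this. A minimal sketch, assuming a Databricks/PySpark environment where a `SparkSession` named `spark` already exists and `df` is the DataFrame being converted; paths through the problem are either disabling Arrow entirely or enabling Arrow's automatic fallback (note that the replier above reports the error persisting even with Arrow disabled, so this may not resolve every case):

```python
# Sketch: the two usual configuration knobs for Arrow-related
# toPandas() failures. Assumes an existing SparkSession `spark`
# (as on Databricks) and an arbitrary Spark DataFrame `df`.

# Option 1: disable Arrow-based conversion entirely (slower, but
# avoids the Arrow writer path that raised the exception).
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "false")

# Option 2: keep Arrow enabled, but let Spark fall back to the
# non-Arrow conversion path automatically if Arrow fails.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
spark.conf.set("spark.sql.execution.arrow.pyspark.fallback.enabled", "true")

# The conf must be set before the conversion runs.
pdf = df.toPandas()
```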
- 2092 Views
- 5 replies
- 8 kudos
I work with Spark-Scala and I receive data in different formats (.csv/.xlsx/.txt, etc.). When I try to read/write this data from different sources to any database, many records get rejected due to various issues like special characters, data type ...
Latest Reply
Or maybe schema evolution on Delta Lake is enough, in combination with Hubert's answer.
4 More Replies
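A minimal sketch of the two pieces the reply alludes to: quarantining unparseable records at read time, and Delta Lake schema evolution at write time. Shown in PySpark (the Scala DataFrame API is analogous); all paths and the `header` option are illustrative assumptions, and `badRecordsPath` is a Databricks-specific reader option:

```python
# Sketch: tolerant ingestion into Delta Lake on Databricks.
# Assumes an existing SparkSession `spark`; paths are illustrative.

# Databricks-specific: rows that fail to parse are written to the
# badRecordsPath location instead of failing or rejecting the job.
df = (spark.read
      .format("csv")
      .option("header", "true")
      .option("badRecordsPath", "/mnt/ingest/bad_records")
      .load("/mnt/ingest/raw"))

# Delta Lake schema evolution: mergeSchema lets columns that appear
# in later files be added to the target table's schema, rather than
# having the write rejected on a schema mismatch.
(df.write
   .format("delta")
   .option("mergeSchema", "true")
   .mode("append")
   .save("/mnt/delta/target"))
```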