03-31-2022 09:38 AM
Hi,
I get the error: Py4JJavaError: An error occurred while calling o5082.csv.
: org.apache.spark.SparkException: Job aborted. when writing to csv.
Screenshot with the detailed error is below.
Any idea how to solve it?
Thanks!
05-13-2022 07:52 AM
Please try output.coalesce(1).write.option("header", "true").format("csv").save("path")
It looks like the same issue as https://community.databricks.com/s/topic/0TO3f000000CjVqGAK/py4jjavaerror
04-07-2022 11:06 AM
I am also facing the same issue.
04-26-2022 05:41 AM
Hello @Laura Blancarte, @Rahul Rathore
Would you mind sharing sample data from the input DataFrame that produces this error on save?
07-01-2025 08:01 AM
Traceback (most recent call last):
File "C:\Users\Administrator\Documents\practice code\spark_pract\read_merge_print copy.py", line 49, in <module>
merged_df.write.mode("overwrite").option("header", True).csv("output")
py4j.protocol.Py4JJavaError: ... ExitCodeException exitCode=-1073741515
I tried your code and still get the same error.
I also checked hadoop.dll, winutils.exe, and the Spark home environment variable; all exist, and the error persists.
I have the latest Spark version, Hadoop 3.3.0, and JDK 17.
Please help me if you can.
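One note on that traceback: exit code -1073741515 is 0xC0000135 in hexadecimal, which is Windows' STATUS_DLL_NOT_FOUND, so a native DLL Spark/Hadoop needs (often hadoop.dll in the wrong location, or a missing Visual C++ runtime) is a likely cause rather than the write itself. A small stand-alone script like the one below (a hypothetical helper, not from this thread) can sanity-check the usual Windows setup points before retrying the job:

```python
import os

def check_hadoop_setup(hadoop_home: str) -> list:
    """Return a list of problems found with a Windows Hadoop/Spark setup.

    Checks that HADOOP_HOME points at a real directory and that the
    native helpers Spark needs on Windows (winutils.exe, hadoop.dll)
    exist under its bin/ directory. An empty list means no problems found.
    """
    problems = []
    if not os.path.isdir(hadoop_home):
        problems.append("HADOOP_HOME does not exist: %s" % hadoop_home)
        return problems  # no point checking bin/ if the root is missing
    bin_dir = os.path.join(hadoop_home, "bin")
    for name in ("winutils.exe", "hadoop.dll"):
        if not os.path.isfile(os.path.join(bin_dir, name)):
            problems.append("missing %s in %s" % (name, bin_dir))
    return problems

if __name__ == "__main__":
    # Typical usage: check the directory HADOOP_HOME points to.
    for problem in check_hadoop_setup(os.environ.get("HADOOP_HOME", "")):
        print(problem)
```

If this reports no problems, the next thing worth checking (per common reports of 0xC0000135 with winutils builds) is whether the Microsoft Visual C++ Redistributable matching your winutils build is installed, and whether hadoop.dll is also reachable on the system PATH.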