For more context, please use the following code to reproduce the error:
# Create a Python list containing JSON objects
json_data = [
    {"id": 1, "name": "John", "age": 25},
    {"id": 2, "name": "Jane", "age": 30},
    {"id": 3, "name": "Mike", "age": 35}
]

# Create a DataFrame from the JSON data
# (assumes an existing `spark` session, e.g. a Databricks notebook or the PySpark shell)
df = spark.createDataFrame(json_data)

# Save the DataFrame to S3 with LZO compression
df.write.format('json').save('s3://path', compression='com.hadoop.compression.lzo.LzopCodec')
Make sure the lzo-codec is installed on your cluster.
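In case it helps with reproducing, this is roughly how the codec gets registered once the hadoop-lzo jar and native LZO libraries are present on the nodes; the builder config below is only a sketch based on the standard hadoop-lzo properties (io.compression.codecs and io.compression.codec.lzo.class), not the exact configuration of the failing cluster:

from pyspark.sql import SparkSession

# Sketch only: assumes the hadoop-lzo jar and the native LZO libraries are already
# installed on every node of the cluster.
spark = (
    SparkSession.builder
    .appName("lzo-write-repro")
    # Register the LZO codecs with Hadoop (standard hadoop-lzo properties)
    .config("spark.hadoop.io.compression.codecs",
            "com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec")
    .config("spark.hadoop.io.compression.codec.lzo.class",
            "com.hadoop.compression.lzo.LzoCodec")
    .getOrCreate()
)

With a session configured like this, the df.write call above should pick up com.hadoop.compression.lzo.LzopCodec as the output codec.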
I tried this with both R-class instances and Graviton C-class instances, and it consistently fails on the Graviton instances.