01-19-2024 04:34 AM
If the file is named file_name.sv.gz (lower-case extension), everything works fine. If it is named file_name.sv.GZ (upper-case extension), the data comes back corrupted, meaning Spark simply reads the compressed bytes as-is without decompressing them.
01-19-2024 06:33 AM
I don't think .GZ (upper case) is a valid file extension. Most systems I have seen compress files with the .gz (lower case) extension.
01-20-2024 11:05 PM
I would argue that if a .gz file is deliberately renamed to .GZ, it should still be treated as a valid gzip file, because the file still contains valid compressed data.
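This is easy to verify: gzip identifies its format by the magic bytes at the start of the file, not by the filename, so the extension's case does not affect the data itself. A minimal sketch using Python's standard gzip module (filename is illustrative):

```python
import gzip
import tempfile
from pathlib import Path

# Write gzip-compressed data to a file with an upper-case .GZ extension.
payload = b"id,value\n1,hello\n2,world\n"
tmpdir = Path(tempfile.mkdtemp())
upper = tmpdir / "file_name.sv.GZ"
upper.write_bytes(gzip.compress(payload))

# The gzip format is identified by its magic bytes (0x1f 0x8b),
# so decompression succeeds regardless of the extension's case.
assert upper.read_bytes()[:2] == b"\x1f\x8b"
assert gzip.decompress(upper.read_bytes()) == payload
```

So the bytes are perfectly valid gzip; the problem is purely in how the reader chooses a codec from the filename.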
01-22-2024 07:56 AM - edited 01-22-2024 07:57 AM
Agreed, but Spark infers the compression codec from the filename, and it cannot infer gzip from a .GZ suffix. You can read more about this in the article below:
https://aws.plainenglish.io/demystifying-apache-spark-quirks-2c91ba2d3978
01-22-2024 08:27 AM
Yup, Spark does infer it from the filename; I have been through the Spark code on GitHub.
The article also refers to the internal code of the Spark library.
I assume we could add an exception to handle .GZ files as gzip by tweaking the Spark libraries.
01-22-2024 08:34 AM
Yes, we could, but is it worth doing? This is something you could raise in a Jira ticket.
01-22-2024 08:44 AM
I think it is worth handling, since a filename or extension should not be a constraint on processing data.
We know it is a gzip file, so we should be able to pass a parameter to read it as gzip.
Thanks a lot for your responses @Lakshay.
01-22-2024 08:48 AM
Happy to help!
08-10-2024 03:47 AM
I recently started looking at a solution for this issue again, and found that we could add a few exceptions to allow "GZ" in the Hadoop library, since GzipCodec is invoked from there (Hadoop's codec factory matches file suffixes case-sensitively, which is why .GZ is not recognised).
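Until a change like that lands upstream, a simple workaround is to stage copies of the files with a lower-case extension before pointing Spark at them. A minimal sketch in plain Python (directory and file names are illustrative, and `stage_with_lowercase_ext` is a hypothetical helper, not a Spark or Hadoop API):

```python
import gzip
import shutil
import tempfile
from pathlib import Path

def stage_with_lowercase_ext(src_dir: Path, staging_dir: Path) -> list[Path]:
    """Copy every *.GZ file into staging_dir with a lower-case .gz suffix,
    so downstream readers that infer the codec from the filename
    (e.g. Spark via Hadoop's codec factory) recognise them as gzip."""
    staging_dir.mkdir(parents=True, exist_ok=True)
    staged = []
    for src in src_dir.glob("*.GZ"):
        # file_name.sv.GZ -> file_name.sv.gz (stem drops only the last suffix)
        dst = staging_dir / (src.stem + ".gz")
        shutil.copy2(src, dst)
        staged.append(dst)
    return staged

# Demo with a throwaway directory containing one upper-case .GZ file.
src_dir = Path(tempfile.mkdtemp())
(src_dir / "file_name.sv.GZ").write_bytes(gzip.compress(b"a,b\n1,2\n"))
staged = stage_with_lowercase_ext(src_dir, src_dir / "staged")
print([p.name for p in staged])  # ['file_name.sv.gz']
```

An alternative, if copying is too expensive, is to read the raw bytes yourself (e.g. via Spark's binaryFiles) and decompress them explicitly, bypassing filename-based codec inference altogether.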