Data Governance
Join discussions on data governance practices, compliance, and security within the Databricks Community. Exchange strategies and insights to ensure data integrity and regulatory compliance.

I have to read a zipped CSV file using Spark without unzipping it. Can anyone please provide PySpark/Spark SQL code for that?

New Contributor II

Zipped CSV files are arriving in the S3 raw layer.


Not applicable

Why can't you unzip it? You cannot read zipped files with Spark directly, since ZIP is not a compression codec Spark supports. There are instructions available on how to unzip them and then read them.

Additionally, if you don't want to, or can't, unzip the whole archive, you can list the contents of the archive and unzip only a selected file.
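Listing an archive and pulling out a single member can be done with Python's standard `zipfile` module. A minimal sketch (the member names and contents here are illustrative, not from the thread; the demo builds an in-memory archive so it runs anywhere):

```python
import io
import zipfile

# Build a small in-memory archive with two members, just for the demo.
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode="w") as zf:
    zf.writestr("orders.csv", "id,amount\n1,10\n")
    zf.writestr("readme.txt", "not needed")

# List the contents, then read only the member we care about,
# without extracting the whole archive.
with zipfile.ZipFile(buf, mode="r") as zf:
    names = zf.namelist()
    csv_bytes = zf.read("orders.csv")

print(names)
print(csv_bytes.decode())
```

On Databricks you would point `zipfile.ZipFile` at a `/dbfs/...` path instead of the in-memory buffer.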

Still, as @Joseph Kambourakis​ asked - why can't you just unzip it? What's blocking you?

New Contributor II

We encountered a similar issue, but with gzip files. If you can convert your files to gzip instead of ZIP, it is as easy as the following (in PySpark):

df = spark.read.option("header", "true").csv(PATH + "/*.csv.gz")

As best as I can tell, this is not possible with ZIP files, but if you have somewhere to write the output, writing a Python or Scala script to unzip and then gzip the files should not be too hard [if keeping them compressed is required; otherwise do what @Joseph Kambourakis​ said and just unzip them 🙂 ]
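That unzip-then-gzip step fits in a few lines of standard-library Python. A sketch (all paths and file names are illustrative; the demo writes to a temp directory so it is self-contained):

```python
import gzip
import os
import shutil
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
zip_path = os.path.join(workdir, "data.zip")

# Stand-in for an archive arriving in the raw layer.
with zipfile.ZipFile(zip_path, mode="w") as zf:
    zf.writestr("events.csv", "ts,user\n1,alice\n")

# Unzip each member and re-compress it as .gz, which Spark reads natively.
with zipfile.ZipFile(zip_path, mode="r") as zf:
    for name in zf.namelist():
        gz_path = os.path.join(workdir, name + ".gz")
        with zf.open(name) as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)

# The gzipped copy round-trips to the original content.
with gzip.open(os.path.join(workdir, "events.csv.gz"), "rt") as f:
    restored = f.read()
print(restored)
```

`shutil.copyfileobj` streams the member through in chunks, so large files don't need to fit in memory.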

Good point from @Ben Elbert​ that Spark can read compressed files (the `compression` property mentioned in the docs). Still, it won't work with a .zip archive.


One more solution: you can read a .zip with the good old pandas `read_csv` method:

import pandas as pd
simple_csv_zipped = pd.read_csv("/dbfs/FileStore/")

Still, there is one disclaimer from the pandas docs: "If using 'zip' or 'tar', the ZIP file must contain only one data file to be read in."

And there is also an obvious trade-off: using pandas means no distribution, no scalability, and exposure to OOM errors - but maybe in your specific case that is acceptable.
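If the archive does hold more than one data file, the single-file restriction can be worked around by opening the member yourself and handing the file object to pandas. A sketch (member names are illustrative; the demo uses an in-memory archive so it is self-contained):

```python
import io
import zipfile

import pandas as pd

# An in-memory archive with two CSV members, which a plain
# pd.read_csv("archive.zip") would reject.
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode="w") as zf:
    zf.writestr("a.csv", "x,y\n1,2\n")
    zf.writestr("b.csv", "x,y\n3,4\n")

# Open the archive and pass one member's file object to read_csv.
with zipfile.ZipFile(buf) as zf:
    with zf.open("a.csv") as f:
        df = pd.read_csv(f)

print(df.shape)
```

The same pattern works against a `/dbfs/...` path on Databricks, with the same pandas caveats about scale.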

Honored Contributor

@Jog Giri​  I also recently encountered a similar scenario; the code below solved my purpose without any issues.

import zipfile
for i in dbutils.fs.ls('/mnt/zipfilespath/'):
  with zipfile.ZipFile(i.path.replace('dbfs:', '/dbfs'), mode="r") as zip_ref:
    zip_ref.extractall('/dbfs/mnt/unzipped/')  # illustrative destination; the original post omitted the extraction target
where I mounted an ADLS Gen2 container that contains several zipped .csv files. Please let me know if you face any further issues, happy to help!!
