12-16-2022 01:51 PM
Hi community,
I don't know what is happening, TBH.
I have a use case where data is written to the location "dbfs:/mnt/...". Don't ask me why it's mounted, it's just a side project. I believe the data is stored in ADLS2.
I've been trying to read the data after it's written, but when I try to read data from the folder:
df = spark.read.format("parquet").load("dbfs:/mnt/table/")
or
df = spark.read.format("parquet").load("dbfs:/mnt/table/date=2022-12-16")
I get: AnalysisException: Unable to infer schema for Parquet. It must be specified manually.
When I provide the schema, the count is 0 (zero):
df.count()
but when I provide the full path to a parquet file, it works:
df = spark.read.format("parquet").load("dbfs:/mnt/table/date=2022-12-16/some-spark-file.snappy.parquet")
df.count()
It returns 700 rows.
Any ideas? 🙂
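One check that might help narrow this down (just a sketch, using the same partition path as above; nothing here is specific to my data) is listing the partition folder, since a folder containing only zero-byte data files or only _started/_committed metadata files can also fail schema inference:
# List the partition folder; zero-byte data files (or only metadata files)
# can trigger "Unable to infer schema for Parquet".
for f in dbutils.fs.ls("dbfs:/mnt/table/date=2022-12-16"):
    print(f.name, f.size)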
12-16-2022 02:57 PM
I am still not sure what happened, but I've re-run the job on a smaller dataset and it seems to work. Maybe corrupted data?
12-16-2022 06:22 PM
Yes, maybe the data of a particular partition or file got corrupted. For me it is working fine with sample parquet data; I am able to read it without any issues.
12-17-2022 10:08 PM
This is really interesting, I have never faced this type of situation. @Pat Sienkiewicz can you please share the whole code so that we can test and debug this on our system?
Thanks
Aviral
12-18-2022 11:35 PM
Hi @Aviral Bhardwaj,
I will try to reproduce this. I think that at least one of the files is corrupted, but I would expect a different error in that case, not a long-running job that fails with `Unable to infer schema for Parquet. It must be specified manually.`
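To find the bad file I will probably just try reading each file in the partition on its own, something like this sketch (same path as in my first post):
# Read each parquet data file individually to isolate one that fails
# or returns zero rows.
for f in dbutils.fs.ls("dbfs:/mnt/table/date=2022-12-16"):
    if f.name.endswith(".parquet"):
        try:
            print(f.name, spark.read.format("parquet").load(f.path).count())
        except Exception as e:
            print(f.name, "failed:", e)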
12-19-2022 05:45 PM
Thanks for sharing, I hope it will work.