Normalizing data from autoloader

Rishitha
New Contributor III

I have data on S3 and I'm using Auto Loader to load it. My JSON docs have fields that are arrays of structs.

When I don't specify a schema, everything is stored as strings — even the arrays of structs end up as one blob of string, making it difficult to process with a PySpark DataFrame.

When I do specify a schema in Auto Loader, the whole table comes back null.

Did anyone face any similar issues?

artsheiko
Databricks Employee

Let's try the following (note the straight quotes around "json" — curly quotes will raise a syntax error):

cloudfile_options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "<path_to_schema_checkpoints_location>",
    "cloudFiles.inferColumnTypes": "true"
}
spark.readStream.format("cloudFiles").options(**cloudfile_options).load("<path_to_source_data>")


Anonymous
Not applicable

Hi @Rishitha Reddy

Hope everything is going great.

Just wanted to check in to see whether you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please let us know so we can help you.

Cheers!