
Autoloader and JSON

193801
New Contributor

Hello,

I am looking for help with Auto Loader; I have a few questions. My goal is to read the files in an S3 location and land the filename, fileDate, and file content in one table. In a second table, I want to parse the file content into a JSON struct, read it one or two levels deep, and save the keys and values. Is this achievable with the Auto Loader option?

I tried DLT, but it saves to metadata, which is not acceptable in our project.

Thank you for your help

2 REPLIES

Anonymous
Not applicable

@Neeharika Andavarapu:

Yes, this is achievable with Databricks Auto Loader. You can create an Auto Loader stream that reads the files in the S3 location, parses the file content, and writes the extracted information into two separate tables.

To do this, define a schema for your input files and read them with Spark's readStream, then use the from_json function to parse the file content into a JSON struct and extract the desired fields.

Here is sample code that demonstrates the approach:

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, IntegerType
 
# Define the schema of the incoming records
schema = StructType([
  StructField("filename", StringType()),
  StructField("fileDate", TimestampType()),
  StructField("fileContent", StringType())
])
 
# Read the input files from S3 using Auto Loader
inputDF = (
  spark.readStream.format("cloudFiles")
  .option("cloudFiles.format", "json")
  .option("cloudFiles.includeExistingFiles", "true")
  .schema(schema)
  .load("s3://path/to/input/files")
)
 
# Table 1: filename, fileDate, and raw file content
outputDF1 = inputDF.select("filename", "fileDate", "fileContent")
outputQuery1 = (
  outputDF1.writeStream.format("delta")
  .option("checkpointLocation", "s3://path/to/checkpoint/location1")
  .toTable("outputTable1")  # toTable() starts the streaming write into a Delta table
)
 
# Table 2: parse the file content as JSON and flatten it two levels deep
jsonSchema = StructType([
  StructField("key1", StringType()),
  StructField("key2", StructType([
    StructField("subkey1", IntegerType()),
    StructField("subkey2", StringType())
  ]))
])
outputDF2 = inputDF.select("filename", from_json("fileContent", jsonSchema).alias("jsonContent"))
outputDF2 = outputDF2.select(
  "filename",
  col("jsonContent.key1").alias("key1"),
  col("jsonContent.key2.subkey1").alias("subkey1"),
  col("jsonContent.key2.subkey2").alias("subkey2")
)
outputQuery2 = (
  outputDF2.writeStream.format("delta")
  .option("checkpointLocation", "s3://path/to/checkpoint/location2")
  .toTable("outputTable2")
)

In this example, inputDF represents the input files read with Auto Loader using the specified schema. outputDF1 extracts the filename, fileDate, and file content into one table, while outputDF2 parses the file content as JSON and extracts the desired fields into another. outputQuery1 and outputQuery2 are the streaming queries that write the data to the Delta tables.
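
One caveat: the code above assumes filename and fileDate are fields inside the JSON records themselves. If you instead want them to come from the files on S3, one option is to read each file as a single text record and pull the file name and modification time from the hidden _metadata column. This is only a sketch, assuming a recent Databricks Runtime where Auto Loader exposes _metadata, with placeholder paths:

from pyspark.sql.functions import col
 
# Sketch: capture file-level metadata instead of fields inside the JSON.
# Assumes a runtime that supports the _metadata column for Auto Loader.
rawDF = (
  spark.readStream.format("cloudFiles")
  .option("cloudFiles.format", "text")  # read files as raw text
  .option("wholeText", "true")          # one row per file; content lands in `value`
  .load("s3://path/to/input/files")
  .select(
    col("_metadata.file_name").alias("filename"),
    col("_metadata.file_modification_time").alias("fileDate"),
    col("value").alias("fileContent")
  )
)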

Note that the JSON schema should match the structure of your input files, so you may need to adjust it to your requirements. Also provide appropriate values for the checkpointLocation option and the target table name when writing the data to Delta tables.
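
If you prefer not to hand-write the nested schema, you can derive it from a sample record with schema_of_json and pass the resulting DDL string straight to from_json. A sketch, where the sample string is hypothetical:

from pyspark.sql.functions import schema_of_json, lit
 
# Hypothetical sample record copied from one of the input files
sample = '{"key1": "a", "key2": {"subkey1": 1, "subkey2": "b"}}'
 
# Infer a DDL schema string from the sample; from_json also accepts DDL strings
ddl = spark.range(1).select(schema_of_json(lit(sample))).first()[0]
print(ddl)  # e.g. STRUCT<key1: STRING, key2: STRUCT<subkey1: BIGINT, subkey2: STRING>>

Keep in mind that inference may pick wider types than you expect (BIGINT instead of INT, for example), so review the result before relying on it.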

I hope this helps! Let me know if you have any further questions.

Anonymous
Not applicable

Hi @Neeharika Andavarapu

Thank you for posting your question in our community! We are happy to assist you.

To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance! 
