Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Using cloudFiles.inferColumnTypes with inferSchema and without defining schema checkpoint

BF7
Contributor

Two Issues:

1. What is the behavior of cloudFiles.inferColumnTypes with and without cloudFiles.inferSchema? Why would you use both?

2. When can cloudFiles.inferColumnTypes be used without a schema checkpoint?  How does that affect the behavior of cloudFiles.inferColumnTypes?

Discussion:

1. I see example notebooks from Databricks that use inferColumnTypes both WITH inferSchema: delta-live-tables-notebooks/dms-dlt-cdc-demo/resources/dlt/dms-mysql-cdc-demo.py at main · databrick...    and WITHOUT inferSchema: delta-live-tables-notebooks/dms-dlt-cdc-demo/resources/dlt/dms-mysql-cdc-demo.py at main · databrick...

What is the use case for using both, or only one of them? I would think that using both together is redundant and just creates unnecessary compute overhead, except that I find that's not necessarily true from my exploration of how these options behave.

2. Schema checkpoints: are they necessary or not?

All the documentation I find on cloudFiles.inferColumnTypes says that when using it, you must also define a schema checkpoint: Configure schema inference and evolution in Auto Loader - Azure Databricks | Microsoft Learn

However, I see some example notebooks from Databricks that depict using cloudFiles.inferColumnTypes = True without ever defining a schema checkpoint:

- delta-live-tables-notebooks/dms-dlt-cdc-demo/resources/dlt/dms-mysql-cdc-demo.py at main · databrick...

- delta-live-tables-notebooks/change-data-capture-example/notebooks/2-Retail_DLT_CDC_Python.py at ma...

 

1 ACCEPTED SOLUTION


Louis_Frolio
Databricks Employee
  1. Behavior of cloudFiles.inferColumnTypes with and without cloudFiles.inferSchema:
    When cloudFiles.inferColumnTypes is enabled, Auto Loader attempts to identify the appropriate data types for columns instead of defaulting everything to strings, which is the default behavior for file formats like JSON, CSV, and XML.
    Without enabling cloudFiles.inferSchema, Auto Loader does not perform automatic schema inference. Instead, users must provide a schema explicitly or use schema hints. When both cloudFiles.inferColumnTypes and cloudFiles.inferSchema are enabled together, Auto Loader performs full schema inference on the incoming data, determining appropriate column data types based on the sampled data. This is especially useful for file formats lacking inherent type encoding (e.g., CSV, JSON).
    Why use both: The combination is beneficial when you want Auto Loader to infer both the schema structure (new columns, changes) and column data types dynamically, reducing manual intervention in managing schema during ingestion.
  2. Using cloudFiles.inferColumnTypes without a schema checkpoint and its behavior:
    The cloudFiles.inferColumnTypes option can technically be enabled without specifying a schema checkpoint (cloudFiles.schemaLocation), but this setup is not recommended. Without a schema checkpoint, inferred schema changes cannot be tracked or persisted across runs, leading to potential issues when new data arrives with schema alterations.
    The schema checkpoint enables Auto Loader to persist schema evolution information and manage additions like new columns or changes in the data structure across micro-batches. Without a schema checkpoint, the behavior of cloudFiles.inferColumnTypes is limited to inferring column types for the current batch or sample scope, and schema consistency is the user’s responsibility.
    Using both cloudFiles.inferColumnTypes and a schema checkpoint allows seamless management of schema evolution while ensuring column types are accurately inferred and tracked. Missing checkpoint information may result in redundant inference and susceptibility to runtime errors if data evolves unexpectedly.
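As a concrete sketch of the recommended combination above, the options could be set as follows (the volume paths are illustrative placeholders, not from the original post):

```python
# Hypothetical Auto Loader option set: infer column types in addition to the
# schema structure, and persist the evolving schema to a checkpoint location.
# The paths are illustrative placeholders.
autoloader_options = {
    "cloudFiles.format": "json",
    "cloudFiles.inferColumnTypes": "true",
    "cloudFiles.schemaLocation": "/Volumes/workspace/default/sys/_schemas",
}

# On Databricks this would typically be applied as:
# df = (spark.readStream
#       .format("cloudFiles")
#       .options(**autoloader_options)
#       .load("/Volumes/workspace/dev/input/"))
```

With cloudFiles.schemaLocation set, the inferred schema survives across runs, so new columns are tracked instead of being re-inferred from scratch.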

 

Hope this helps. BigRoux.




Yes! This is exactly what I needed! Thank you so much!

mits1
New Contributor III

Hi @Louis_Frolio ,

I explored something related and interesting (or confusing).

This seems to conflict with the following statement in the Databricks documentation:

"By default, Auto Loader schema inference seeks to avoid schema evolution issues due to type mismatches. For formats that don't encode data types (JSON, CSV, and XML), Auto Loader infers all columns as strings (including nested fields in JSON files)."

What I experienced recently is that Auto Loader DOES infer the schema for the JSON file IN the SCHEMA FILE it creates at the schema location. Also, it infers all columns as string IN THE DATAFRAME ONLY.

Below is my observation.

Input Data :

{"Name":"Alfred","geneder":"M","Age":14}
{"Name":"John","geneder":"M","Age":12}

Scenario 1 : Without cloudFiles.inferColumnTypes

df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/workspace/default/sys/schema4")
    .load("/Volumes/workspace/dev/input/"))
 
DF schema -> 
Age:string
Name:string
geneder:string
_rescued_data:string
 
Autoloader schema file contents ->
v1
{"dataSchemaJson":"{\"type\":\"struct\",\"fields\":[{\"name\":\"Age\",\"type\":\"long\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"geneder\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}]}","partitionSchemaJson":"{\"type\":\"struct\",\"fields\":[]}"}
 
Note: It looks like Auto Loader still infers the types in the schema file, but the DataFrame does not reflect them.
---
Scenario 2 : With cloudFiles.inferColumnTypes
Input data : 
{"Name":"Mits","geneder":"F","Age":35}
 
df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.inferColumnTypes", "true")
    .option("cloudFiles.schemaLocation", "/Volumes/workspace/default/sys/schema4")
    .load("/Volumes/workspace/dev/input/"))
 
DF Schema ->
 
Age:long
Name:string
geneder:string
_rescued_data:string
 
Autoloader schema file contents -> No change!!
 
v1
{"dataSchemaJson":"{\"type\":\"struct\",\"fields\":[{\"name\":\"Age\",\"type\":\"long\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"geneder\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}]}","partitionSchemaJson":"{\"type\":\"struct\",\"fields\":[]}"}
 
Note: The DataFrame's schema changes but the Auto Loader schema file does not (obviously, because there is no change in the source data's types).
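As an aside, the schema file contents posted above can be unwrapped with plain Python, since the dataSchemaJson field is a JSON-encoded Spark StructType nested inside the outer JSON payload (a sketch using the exact file contents shown):

```python
import json

# The schema file shown above: a version marker line ("v1") followed by a JSON
# payload whose "dataSchemaJson" field is itself a JSON-encoded StructType.
schema_file_text = r'''v1
{"dataSchemaJson":"{\"type\":\"struct\",\"fields\":[{\"name\":\"Age\",\"type\":\"long\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"geneder\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}]}","partitionSchemaJson":"{\"type\":\"struct\",\"fields\":[]}"}'''

# Drop the version line, then unwrap the doubly encoded data schema.
_, payload = schema_file_text.split("\n", 1)
data_schema = json.loads(json.loads(payload)["dataSchemaJson"])

field_types = {f["name"]: f["type"] for f in data_schema["fields"]}
print(field_types)
# {'Age': 'long', 'Name': 'string', 'geneder': 'string'}
# Age is recorded as long even though the DataFrame showed it as string.
```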
 
It would be very helpful for my understanding of schema inference behaviour if you could clarify this.
Thanks in advance!!

Louis_Frolio
Databricks Employee

@mits1 

Great observation. I can see why this feels like it contradicts the docs. It doesn’t, but I agree the documentation could be clearer about what’s happening under the hood. Let me walk through it.

Auto Loader’s schema inference actually operates in two layers, and that’s the key to what you’re seeing.

First, the schema file (stored in _schemas at your schemaLocation) always captures the actual detected types from sampling your data. That’s why Age shows up as a long there. Auto Loader needs those real types to track schema evolution over time, detect type changes, and decide what should land in _rescued_data when something doesn’t match.

Second, the DataFrame schema is where cloudFiles.inferColumnTypes comes into play. When that option is false, which is the default for JSON, CSV, and XML, Auto Loader takes the inferred schema and casts everything to strings before exposing it in the DataFrame. That’s the “safe default” the docs are referring to. When you flip it to true, the DataFrame reflects the actual detected types from the schema file instead of flattening everything to strings.

So when the docs say “all columns are inferred as strings,” they’re really talking about the DataFrame output, not the schema file itself.

You can see this clearly in your scenarios. In Scenario 1, the schema file correctly records Age as a long, but the DataFrame shows it as a string. In Scenario 2, once you enable inferColumnTypes, the DataFrame starts reflecting the real types. The schema file doesn’t change because it already had the correct types from the start.

Here’s the clean way to think about it:

- Schema file: always stores the true detected types, regardless of inferColumnTypes. This is by design for schema evolution.
- DataFrame: controlled by inferColumnTypes.
  - False means everything is presented as strings.
  - True means you get the actual detected types.

Your instinct was spot on. Auto Loader is still inferring the data types, it just doesn’t always surface them in the DataFrame unless you tell it to.
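The two-layer behavior described above can be illustrated with a toy simulation (this is NOT Auto Loader itself, just a hedged sketch of the idea: types are always detected, but only surfaced when the flag is on):

```python
import json

def detect_types(json_lines):
    """Toy 'schema file' layer: infer a column -> type mapping from sample
    JSON records (integers become 'long', everything else 'string')."""
    detected = {}
    for line in json_lines:
        for name, value in json.loads(line).items():
            detected[name] = "long" if isinstance(value, int) else "string"
    return detected

def dataframe_schema(detected, infer_column_types):
    """Toy 'DataFrame' layer: surface the detected types, or flatten
    everything to string when inferColumnTypes is disabled."""
    if infer_column_types:
        return dict(detected)
    return {name: "string" for name in detected}

sample = ['{"Name":"Alfred","geneder":"M","Age":14}']
detected = detect_types(sample)           # the "schema file" always has real types
print(dataframe_schema(detected, False))  # {'Name': 'string', 'geneder': 'string', 'Age': 'string'}
print(dataframe_schema(detected, True))   # {'Name': 'string', 'geneder': 'string', 'Age': 'long'}
```

This mirrors the two scenarios posted earlier: the detected types never change, only whether the DataFrame exposes them.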

Hope this helps, Lou.

mits1
New Contributor III

Hi @Louis_Frolio ,

After reading your reply I breathed a sigh of relief. I spent hours just making sure my experiment aligns with the Databricks documentation.

Thank you so much for your attention to this issue!!

Great job 😄

Just FYI, I am struggling with one more issue (below is the link). If you can help me understand it, that would be helpful.

https://community.databricks.com/t5/data-engineering/autoloader-inserts-null-rows-in-delta-table-whi...

 

Louis_Frolio
Databricks Employee

@mits1 , if you are happy with the answer please click on "Accept as Solution." It will give confidence to others.  Cheers, Lou.

 

mits1
New Contributor III

@Louis_Frolio of course. Where can I find this "Accept as Solution" option?