Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

autoloader documentation does not work

chanansh
Contributor

I am trying to follow the documentation here:

https://learn.microsoft.com/en-us/azure/databricks/getting-started/etl-quick-start

My code looks like:

from pyspark.sql.functions import current_timestamp, input_file_name

(spark.readStream
  .format("cloudFiles")
  .option("header", "true")
  #.option("cloudFiles.partitionColumns", "date, hour")
  .option("cloudFiles.format", "csv")
  .option("cloudFiles.maxBytesPerTrigger", "10m")
  .option("cloudFiles.schemaHints", SCHEMA_HINT)
  .option("cloudFiles.schemaLocation", checkpoint_path)
  .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
  .load(file_path)
  .withColumn('source_file', input_file_name())
  .withColumn('processing_time', current_timestamp())
  .withColumnRenamed("date","timestamp")
  .withColumnRenamed("FW_Version","fw_version_1")
  .withColumnRenamed('fw_version','fw_version_2') # https://kb.databricks.com/en_US/sql/dupe-column-in-metadata
  .withColumnRenamed('Time_since_last_clear_[Min]', 'Time_since_last_clear_min') # delta does not like column names with brackets
  .writeStream
  .format("delta")
  .option("checkpointLocation", checkpoint_path)
  .option("path", delta_path)
  .trigger(availableNow=True)
  .toTable(table_name))

(I have commented out the partition option because one of my original columns has the same name as the partition column, so it gets overwritten; I could not find a workaround.)

However, it does not work.

I get the following error:

AnalysisException: Incompatible format detected.
 
You are trying to write to `s3://nbu-ml/projects/rca/msft/dsm09collectx/delta` using Databricks Delta, but there is no
transaction log present. Check the upstream job to make sure that it is writing
using format("delta") and that you are trying to write to the table base path.
 
To disable this check, SET spark.databricks.delta.formatCheck.enabled=false
To learn more about Delta, see https://docs.databricks.com/delta/index.html

1 ACCEPTED SOLUTION


Murthy1
Contributor II

Hi,

It seems like you are writing to a path that is not empty and contains some non-Delta format files.
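
One way to confirm what is at that path is to list it and check for a Delta transaction log. A minimal sketch, assuming the path from the error message and that dbutils is available in the notebook:

target_path = "s3://nbu-ml/projects/rca/msft/dsm09collectx/delta"

# List whatever is already stored at the target path
files = dbutils.fs.ls(target_path)
print([f.name for f in files])

# A Delta table keeps its transaction log in a _delta_log/ directory;
# if it is missing but other files exist, the path holds non-Delta data
has_delta_log = any(f.name.rstrip("/") == "_delta_log" for f in files)
print("Delta transaction log present:", has_delta_log)

If the path holds stray non-Delta files, clearing it (or pointing the stream at an empty path) should let the Delta writer proceed.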

Also, can you confirm whether the path mentioned in the error message, `s3://nbu-ml/projects/rca/msft/dsm09collectx/delta`, is the path you are writing to or reading from? I faced a similar error, but that was when I read a Delta table path through .option("cloudFiles.format", "parquet"). I overcame the error by adding spark.databricks.delta.formatCheck.enabled=false to the Spark config.
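
For reference, that setting can also be applied per session from the notebook instead of the cluster-level Spark config; a minimal sketch:

# Disable Delta's format check for the current Spark session only;
# setting it in the cluster's Spark config applies it to every session
spark.conf.set("spark.databricks.delta.formatCheck.enabled", "false")

Note that this only suppresses the check; if the path really does contain non-Delta data, writing to an empty location is usually the safer fix.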

