Data Engineering

Passing multiple paths to .load in autoloader

TimB
New Contributor III

I am trying to use autoloader to load data from two different blob containers within the same storage account, so that Spark discovers the data asynchronously. However, when I try this, it doesn't work and I get the error outlined below. Can anyone point out where I am going wrong, or suggest an alternative method to achieve this?

To use 'cloudFiles' as a streaming source, please provide the file format with the option 'cloudFiles.format', and use .load() to create your DataFrame.
df = (spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "csv")
  .option("cloudFiles.backfillInterval", "1 day")
  .load("wasbs://customer1@container_name.*csv", "wasbs://customer2@container_name.*csv")

9 REPLIES

Corbin
Databricks Employee

Hello Tim, as you will note from the Spark Streaming docs, the load function only accepts a single string for the path argument. This means that all files need to be discoverable from the same base path if you wish to do this in a single stream. You can then use glob patterns to pick up the files under that base path.

You have 2 options here:

  1. [Recommended] Change the upstream to land in ADLS Gen2 so that you have a hierarchical namespace (also, Microsoft has deprecated the Windows Azure Storage Blob driver (WASB) for Azure Blob Storage in favor of the Azure Blob Filesystem driver (ABFS), so you should move off it anyway -- docs). Then use one container in one ADLS Gen2 storage account, with a subdirectory per customer. You can then easily glob all customers together with a single path string from the same base path (see the sketch after this list).
  2. Your only other option is to create a separate stream for each customer, which wouldn't scale as well as one stream for all (though it is still a workable solution).
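
A minimal sketch of option 1, assuming a hypothetical container named landing in a hypothetical ADLS Gen2 account named mystorageaccount, with one subdirectory per customer; the names, checkpoint location, and directory layout are illustrative only, not from this thread:

# Single Auto Loader stream over one ADLS Gen2 base path; the glob pattern
# matches every customer subdirectory under the container root.
base_path = "abfss://landing@mystorageaccount.dfs.core.windows.net"

df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/tmp/checkpoints/all_customers/_schema")
    .load(f"{base_path}/*/*.csv")  # one path string, globbing across all customer folders
)

Because everything shares one base path, a single stream with a single checkpoint covers every customer folder that matches the glob.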

TimB
New Contributor III

Thanks for your feedback, Corbin. I am aware of the deprecation, but the type of blob storage is beyond my control in this case, I'm afraid. Further to this, the business logic dictates that there be a single blob storage per customer, so this also cannot be changed.

Therefore it looks like option 2 would be my only choice. Is there a way to create separate streams dynamically from a list of customers and still utilise the asynchronous nature of spark?

Corbin
Databricks Employee

What do you mean by "asynchronous nature of spark"? What behavior are you trying to maintain?

TimB
New Contributor III

Can I create multiple streams that run at the same time? Or do I have to wait for one stream to finish before starting another? So if I have 10 different storage containers, can I create 10 streams that run at the same time?

Corbin
Databricks Employee

Yes, the streams run independently of one another, and they can all run at the same time.
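
For example, a minimal sketch with two hypothetical customer containers; each call to toTable() starts its own streaming query immediately, so the streams run side by side (the account name, checkpoint locations, and table names below are illustrative only):

def start_customer_stream(customer):
    # Each .toTable() call starts its own streaming query right away;
    # the next call does not wait for the previous one to finish.
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.schemaLocation", f"/tmp/checkpoints/{customer}/_schema")
        .load(f"wasbs://{customer}@account.blob.core.windows.net/*/*.csv")
        .writeStream
        .option("checkpointLocation", f"/tmp/checkpoints/{customer}")
        .toTable(f"bronze.{customer}_data")
    )

queries = [start_customer_stream(c) for c in ["customer1", "customer2"]]

# Both queries are now running in parallel; keep the driver alive until one stops.
spark.streams.awaitAnyTermination()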

TimB
New Contributor III

That's good news. So would this be the correct sort of setup, or should I be creating all the streams first before writing the streams to a table?

customer_list = ['customer1', 'customer2', 'customer3', ...]

table_name = "bronze.customer_data_table"

for customer in customer_list:
    file_path = f"wasbs://{customer}@conatiner.blob.core.windows.net/*/*.csv"

    checkpoint_ path = f"/tmp/checkpoints/{customer}/_checkpoints"

    cloudFile = {
        "cloudFiles.format": "csv",
        "cloudFiles.backfillInterval": "1 day",
        "cloudFiles.schemaLocation": checkpoint_path,
        "cloudFiles.schemaEvolutionMode": "rescue",
    }

    df = (
        spark.readStream
        .format("cloudFiles")
        .options(**cloudFile)
        .load(file_path)
    )

    streamQuery = (
        df.writeStream.format("delta")
        .outputMode("append")
        .option("checkpointLocation", checkpoint_path)
        .trigger(once=True)
        .toTable(table_name)
    )

Corbin
Databricks Employee

LGTM, but use .trigger(availableNow=True) instead of .trigger(once=True), since once is now deprecated.
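
A minimal sketch of the loop with that change applied; collecting the query handles and waiting on them after the loop is an assumption about how the job is driven rather than something required, but it lets the per-customer backfills run concurrently and keeps the job alive until they all drain:

# customer_list and table_name as defined above
queries = []

for customer in customer_list:
    file_path = f"wasbs://{customer}@container.blob.core.windows.net/*/*.csv"
    checkpoint_path = f"/tmp/checkpoints/{customer}/_checkpoints"

    df = (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.backfillInterval", "1 day")
        .option("cloudFiles.schemaLocation", checkpoint_path)
        .option("cloudFiles.schemaEvolutionMode", "rescue")
        .load(file_path)
    )

    streamQuery = (
        df.writeStream.format("delta")
        .outputMode("append")
        .option("checkpointLocation", checkpoint_path)
        .trigger(availableNow=True)  # replaces the deprecated trigger(once=True)
        .toTable(table_name)
    )
    queries.append(streamQuery)

# Wait for every per-customer backfill to finish before the job ends;
# until then the queries all run concurrently.
for q in queries:
    q.awaitTermination()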

I'm trying to do something similar, either in a DLT pipeline or a standard streaming-query autoloader job. This thread implies there is a way to pass multiple source directories via options to the readStream query while maintaining state through one checkpoint. That would greatly simplify my process if true, but I have been unable to get it to work. I've also considered your approach, which is to parameterize my file_path using subfolders as sources.

TimB
New Contributor III

If we were to upgrade to ADLS Gen2 but retain the same structure, would there be scope for the method above to be improved (besides moving to notification mode)?
