Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Passing multiple paths to .load in autoloader

TimB
New Contributor III

I am trying to use Auto Loader to load data from two different blobs within the same account so that Spark will discover the data asynchronously. However, when I try this it doesn't work, and I get the error outlined below. Can anyone point out where I am going wrong, or suggest an alternative method to achieve this?

 

To use 'cloudFiles' as a streaming source, please provide the file format with the option 'cloudFiles.format', and use .load() to create your DataFrame.
df = (spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "csv")
  .option("cloudFiles.backfillInterval", "1 day")
  .load("wasbs://customer1@container_name.*csv", "wasbs://customer2@container_name.*csv")

 

 

 

 

8 REPLIES 8

Corbin
New Contributor III

Hello Tim, as you will note from the Spark Structured Streaming docs, the load function only accepts a single string for the path argument. This means that all files need to be discoverable from the same base path if you want to do this in a single stream; you can then use glob patterns to pick up the individual files under that base path.

You have 2 options here:

  1. [Recommended] Move the upstream storage to ADLS Gen2 so that you have a hierarchical namespace (also, Microsoft has deprecated the Windows Azure Storage Blob driver (WASB) for Azure Blob Storage in favor of the Azure Blob Filesystem driver (ABFS), so you should move off it anyway -- docs). Then use one container in one ADLS Gen2 storage account, with a subdirectory for each customer. Now you can easily glob all customers together with a single path string from the same base path (see the sketch after this list).
  2. Your only other option is to create a separate stream for each customer, which wouldn't scale as well as one stream for all customers (though it is still a workable solution).
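
For illustration, here is a minimal sketch of option 1. The storage account name (mystorageaccount), container name (landing), and per-customer subdirectory layout are assumptions for the example, not taken from your setup:

# Assumed layout: abfss://landing@mystorageaccount.dfs.core.windows.net/<customer>/<files>.csv
base_path = "abfss://landing@mystorageaccount.dfs.core.windows.net"

df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/tmp/checkpoints/customers/_schema")  # Auto Loader needs a schema location for CSV
    .load(f"{base_path}/*/*.csv")  # one base path, glob across every customer subdirectory
)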

TimB
New Contributor III

Thanks for your feedback, Corbin. I am aware of the deprecation, but the type of blob storage is beyond my control in this case, I'm afraid. Further to this, the business logic dictates that there be a single blob storage per customer, so this also cannot be changed.

Therefore it looks like option 2 would be my only choice. Is there a way to create separate streams dynamically from a list of customers and still utilise the asynchronous nature of Spark?

Corbin
New Contributor III

What do you mean by "asynchronous nature of spark"? What behavior are you trying to maintain?

TimB
New Contributor III

Can I create multiple streams that run at the same time? Or do I have to wait for one stream to finish before starting another? So if I have 10 different storage containers, can I create 10 streams that run at the same time?

Corbin
New Contributor III

Yes, the streams run independently of one another, and they can all run at the same time.
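
For illustration, a minimal sketch (with made-up paths and table names) of two queries started from the same SparkSession running side by side:

q1 = (spark.readStream.format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", "/tmp/checkpoints/customer1")
      .load("wasbs://customer1@account.blob.core.windows.net/*.csv")
      .writeStream
      .option("checkpointLocation", "/tmp/checkpoints/customer1")
      .toTable("bronze.customer1_data"))

q2 = (spark.readStream.format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", "/tmp/checkpoints/customer2")
      .load("wasbs://customer2@account.blob.core.windows.net/*.csv")
      .writeStream
      .option("checkpointLocation", "/tmp/checkpoints/customer2")
      .toTable("bronze.customer2_data"))

print(len(spark.streams.active))  # both queries show up as active at the same time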

TimB
New Contributor III

That's good news. So would this be the correct sort of setup, or should I be creating all the streams first before writing the streams to a table?

customer_list = ['customer1', 'customer2', 'customer3', ...]

table_name = "bronze.customer_data_table"

for customer in customer_list:
    file_path = f"wasbs://{customer}@container.blob.core.windows.net/*/*.csv"
    checkpoint_path = f"/tmp/checkpoints/{customer}/_checkpoints"

    cloudFile = {
        "cloudFiles.format": "csv",
        "cloudFiles.backfillInterval": "1 day",
        "cloudFiles.schemaLocation": checkpoint_path,
        "cloudFiles.schemaEvolutionMode": "rescue",
    }

    df = (
        spark.readStream
        .format("cloudFiles")
        .options(**cloudFile)
        .load(file_path)
    )

    streamQuery = (
        df.writeStream
        .format("delta")
        .outputMode("append")
        .option("checkpointLocation", checkpoint_path)
        .trigger(once=True)
        .toTable(table_name)
    )

Corbin
New Contributor III

LGTM, but use .trigger(availableNow=True) instead of .trigger(once=True), since the once trigger is now deprecated.
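
Applied to the loop above, the write would then read something like:

    streamQuery = (
        df.writeStream
        .format("delta")
        .outputMode("append")
        .option("checkpointLocation", checkpoint_path)
        .trigger(availableNow=True)  # replaces the deprecated once=True trigger
        .toTable(table_name)
    )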

TimB
New Contributor III

If we were to upgrade to ADLS Gen2 but retain the same structure, would there be scope for the method above to be improved (besides moving to file notification mode)?
