I am trying to read a stream from Azure Blob Storage with Databricks Auto Loader:
(spark.readStream
.format("cloudFiles")
.option("cloudFiles.clientId", CLIENT_ID)
.option("cloudFiles.clientSecret", CLIENT_SECRET)
.option("cloudFiles.tenantId", TENANT_ID)
.option("header", "true")
.option("cloudFiles.format", "csv")
.option("cloudFiles.schemaLocation", CHECKPOINT_PATH)
.load(f"wasbs://{CONTAINER}@{ACCOUNT_NAME}.blob.core.windows.net/{AZURE_PATH}")
)
yet I get:
Py4JJavaError: An error occurred while calling o9451.load.
: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Container <container> in account <account>.blob.core.windows.net not found, and we can't create it using anoynomous credentials, and no credentials found for them in the configuration.
I know the location exists, so it seems the provided credentials are being ignored. How do I set the credentials correctly?
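For reference, my understanding is that the `cloudFiles.clientId`/`clientSecret`/`tenantId` options relate to Auto Loader's file-notification setup, so the underlying `wasbs` filesystem may still need its own credential. This is the kind of configuration I expected to work, as a sketch assuming an account-key credential (`ACCOUNT_KEY` is a placeholder; a SAS token would be configured similarly):

```python
# Sketch (assumption): hand the storage-account key to the underlying wasbs
# filesystem directly, since the cloudFiles.* auth options did not seem to be
# used for file access. ACCOUNT_KEY is a placeholder for the real secret.
spark.conf.set(
    f"fs.azure.account.key.{ACCOUNT_NAME}.blob.core.windows.net",
    ACCOUNT_KEY,
)

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("header", "true")
      .option("cloudFiles.schemaLocation", CHECKPOINT_PATH)
      .load(f"wasbs://{CONTAINER}@{ACCOUNT_NAME}.blob.core.windows.net/{AZURE_PATH}")
)
```

Is `spark.conf.set` the right place for this, or does it have to go in the cluster's Spark config?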