Hi,
There is a set of .csv/.txt files in a storage container (Azure Blob Storage / Azure Data Lake Storage Gen2). I would like to ingest these files into Databricks. Datasets and linked services were created on both ends, and an all-purpose cluster was created in Databricks. Can we ingest the data from Blob to Databricks using ADF as the ETL tool?
I tried creating and debugging the pipeline, but it failed with the following error message: "ErrorCode=AzureDatabricksCommandError,Hit an error when running the command in Azure Databricks. Error details: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Unable to access container csv-file-storage in account storageaccountfordev.blob.core.windows.net using anonymous credentials, and no credentials found for them in the configuration.
Caused by: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Unable to access container csv-file-storage in account storageaccountfordev.blob.core.windows.net using anonymous credentials, and no credentials found for them in the configuration.
Caused by: hadoop_azure_shaded.com.microsoft.azure.storage.StorageException: Public access is not permitted on this storage account.. "
I have limited experience with Databricks. Can anyone give me some insight into this? Basically, I want to understand how ADF works in this scenario.
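From the error, my understanding is that the cluster is trying to reach the container anonymously and needs the storage account credentials configured instead. Here is a minimal sketch of what I think the Databricks side needs (the secret scope and key names below are placeholders, not my actual setup; `spark` and `dbutils` come predefined in a Databricks notebook):

```python
# Give the cluster credentials for the storage account, instead of the
# anonymous access that is currently failing. The secret scope "my-scope"
# and key "storage-account-key" are hypothetical names for illustration.
spark.conf.set(
    "fs.azure.account.key.storageaccountfordev.blob.core.windows.net",
    dbutils.secrets.get(scope="my-scope", key="storage-account-key"),
)

# Read the .csv/.txt files from the container named in the error message.
df = (
    spark.read
    .option("header", "true")
    .csv("wasbs://csv-file-storage@storageaccountfordev.blob.core.windows.net/")
)
display(df)
```

Is setting this in the notebook the right approach, or should the storage credentials come from the ADF linked service instead?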
Thanks,