Data Engineering

[SQL_CONF_NOT_FOUND] The SQL config "/Volumes/xxx...." cannot be found. Please verify that the config exists.

DataGeek_JT
New Contributor II

I am getting the below error when trying to stream data from an Azure Storage path to a Delta Live Table ([PATH] is the path to my files, which I have redacted here):

[SQL_CONF_NOT_FOUND] The SQL config "/Volumes/[PATH]" cannot be found. Please verify that the config exists. SQLSTATE: 42K0I

I can read data from the volume in Unity Catalog using spark.read.csv("/Volumes/[PATH]"), but it seems there is an issue when I load it this way.

The notebook code is:

 
import dlt
from pyspark.sql import functions as F

data_source_path = spark.conf.get("/Volumes/[PATH]")

@dlt.table
def ticks():
    return (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("header", True)
        .load(data_source_path)
        .select("*", F.col("_metadata.file_path").alias("source_file_name"), F.split(F.col("_metadata.file_path"), "/")[8].alias("Symbol"))
    )

Any idea what is wrong please? 


Kaniz_Fatma
Community Manager

Hi @DataGeek_JT,

  • Ensure that the path you’ve provided is correct and points to the right location in your Azure Storage account.
  • If you’ve redacted the actual path, replace “[PATH]” with the actual path to your files.
  • When working with Delta Live Tables, consider using Auto Loader for data ingestion tasks from cloud object storage. Auto Loader is designed to incrementally and idempotently load ever-growing data as it arrives in cloud storage.
  • If you’re using Unity Catalog, make sure it’s properly configured with your Delta Live Tables pipeline. Unity Catalog enables you to manage metadata and data lineage for your tables.
  • Another option for accessing data in cloud object storage is to mount it on DBFS (Databricks File System) under the /mnt folder. This lets you access the mounted storage as a directory within your Databricks workspace.
  • You can mount Azure Blob Storage, ADLS Gen2, or other cloud storage services. Refer to the official documentation for details on mounting cloud storage.
  • When using Auto Loader in a Unity Catalog-enabled pipeline, ensure that you use external locations for loading files. This means specifying the full path to the files, including the container and storage account details.
  • Instead of using “/Volumes/[PATH]”, include the pattern directly in the path, for example: "abfss://<container>@<storage_account>.dfs.core.windows.net/path/to/folder/*file_1*" (see the sketch after this list).
  • If you’re mixing SQL and Python notebooks in your Delta Live Tables pipeline, consider using SQL for operations beyond ingestion. You can also manage dependencies using libraries not packaged in Delta Live Tables by default.
  • If you’re using Auto Loader with file notifications and perform a full refresh for your pipeline or streaming table, remember to manually clean up resources. You can use the CloudFilesResourceManager in a notebook to perform cleanup.
  • If you encounter any further issues, feel free to ask for additional assistance! 🚀
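
For what it’s worth, the SQL_CONF_NOT_FOUND error most likely comes from the line data_source_path = spark.conf.get("/Volumes/[PATH]"): spark.conf.get expects a Spark configuration key, not a file path, so Spark looks for a config named after the path and cannot find one. Below is a minimal sketch of the same pipeline that reads the path from a pipeline configuration key instead; source_path is a hypothetical key name you would set in the pipeline’s configuration, and the fallback volume path is only a placeholder, not the poster’s actual location.

import dlt
from pyspark.sql import functions as F

# Hypothetical key set in the DLT pipeline's configuration settings; the second
# argument is a placeholder fallback, not a real path.
data_source_path = spark.conf.get("source_path", "/Volumes/<catalog>/<schema>/<volume>/ticks")

@dlt.table
def ticks():
    return (
        spark.readStream
        .format("cloudFiles")                 # Auto Loader
        .option("cloudFiles.format", "csv")
        .option("header", True)
        .load(data_source_path)
        .select(
            "*",
            # keep the source file name and derive the symbol from a path segment
            F.col("_metadata.file_path").alias("source_file_name"),
            F.split(F.col("_metadata.file_path"), "/")[8].alias("Symbol"),
        )
    )

Assigning the volume path directly to data_source_path as a plain string works just as well; the key point is not to pass the path itself to spark.conf.get.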
