01-07-2025 11:45 PM
How can I reference external locations in a Python notebook?
I found the docs for referencing them in SQL: https://docs.databricks.com/en/sql/language-manual/sql-ref-external-locations.html.
But how do I do it in Python? I am not able to understand. Do we have to pass the adls:// path directly in the Python notebook, or is there another way?
One more question - all our Python notebooks are handled by Azure DevOps for multiple dev and prod environments.
So the storage container paths are different for dev and prod.
Let's say I have to pass the adls:// paths to reference the external locations; then for dev I have to pass the dev storage account, and for prod the prod storage account. We are using a single Azure DevOps pipeline with multiple parameters. So would the best method be to create a variable that holds the storage account per environment?
Like, if prod, the parameter has the prod storage account path, and if dev, the dev storage account path.
So I will be referencing it like this: adls://path/{storage_container}?
01-08-2025 01:21 AM
Referencing external locations in a Databricks Python notebook, particularly when Azure DevOps deploys to separate development (dev) and production (prod) environments with different storage paths, is best managed with parameterized variables. Here's a detailed explanation and recommended approach:
In Databricks Python notebooks, you can reference external locations (such as Azure Data Lake Storage or other cloud storage) by passing the storage path directly or using environment-specific parameters. Below is a step-by-step explanation:
If you want to reference an ADLS path directly, pass it as a string in the Python notebook (note that the URI scheme for ADLS Gen2 is abfss://, not adls://):
# Direct reference to a folder in an ADLS Gen2 external location
path = "abfss://container@storageaccount.dfs.core.windows.net/folder"
# Read Parquet data from that path
df = spark.read.format("parquet").load(path)
df.show()
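Before reading, it can help to verify that the cluster can actually reach that path (i.e., that the external location or storage credential covering it is already configured in the workspace). A minimal sketch using dbutils, assuming the same example path:
# List the folder to confirm the external location is reachable
# (assumes access to this path is already granted in the workspace)
files = dbutils.fs.ls("abfss://container@storageaccount.dfs.core.windows.net/folder")
for f in files:
    print(f.path)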
For managing different environments (e.g., dev, prod), using parameterized variables is the best practice. This ensures flexibility and maintainability. You can set these parameters dynamically based on the environment being executed.
Example:
# Define environment-specific parameters
env = dbutils.widgets.get("env") # Set this widget value via Azure DevOps or manually
storage_account = "devstorage" if env == "dev" else "prodstorage"
container = "mycontainer"
# Construct the path dynamically
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/folder"
# Use the path
df = spark.read.format("parquet").load(path)
df.show()
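The same environment-driven path can be reused for writes; a short follow-up sketch (the "output" subfolder is only an illustration):
# Write results back to the same environment-specific location
# (the 'output' subfolder name is an assumption for this sketch)
df.write.format("parquet").mode("overwrite").save(f"{path}/output")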
To handle dev and prod storage paths dynamically in Azure DevOps:
parameters:
  - name: env
    type: string
    default: dev

steps:
  - task: DatabricksRunNotebook@2
    inputs:
      notebookPath: /path/to/notebook
      parameters: '{"env": "$(env)"}'
In your Python notebook, use the passed env parameter to decide the storage account dynamically, as shown in the Python example above.
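If you also run the notebook interactively, declare the widget with a default so dbutils.widgets.get does not fail when no parameter is passed. A minimal sketch, assuming the widget is named env as above and that the pipeline passes its value as a notebook parameter (which overrides the default):
# Declare the widget with a default value for interactive runs;
# a value passed from the Azure DevOps run (e.g. {"env": "prod"}) overrides it
dbutils.widgets.text("env", "dev")
env = dbutils.widgets.get("env")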
You can use a structured approach where the storage account name is a function of the environment.
For example:
# Define environment and construct the path
env = dbutils.widgets.get("env") # 'dev' or 'prod'
storage_accounts = {
    "dev": "devstorageaccount",
    "prod": "prodstorageaccount"
}
container = "mycontainer"
# Get storage account based on the environment
storage_account = storage_accounts.get(env, "defaultstorageaccount")
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/folder"
# Load data
df = spark.read.format("parquet").load(path)
df.show()
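Since the documentation you linked describes Unity Catalog external locations in SQL, you can also run that SQL from Python with spark.sql and read the registered URL instead of hard-coding the storage account. A hedged sketch, assuming external locations named my_location_dev / my_location_prod exist in your metastore and that the DESCRIBE result exposes a url column:
# Resolve the base URL from a Unity Catalog external location
# (the location names and the 'url' column are assumptions for this sketch)
env = dbutils.widgets.get("env")  # 'dev' or 'prod'
location_name = f"my_location_{env}"
row = spark.sql(f"DESCRIBE EXTERNAL LOCATION `{location_name}`").collect()[0]
base_path = row["url"]
df = spark.read.format("parquet").load(f"{base_path}/folder")
df.show()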
For more detailed information, refer to the official Databricks external locations documentation: https://docs.databricks.com/en/sql/language-manual/sql-ref-external-locations.html