@ashraf1395 ,
Referencing external locations in a Databricks Python notebook, particularly in environments like Azure DevOps where development (dev) and production (prod) use different paths, can be managed effectively with parameterized variables. Here's a detailed explanation and recommended approach:
Referencing External Locations in a Python Notebook
In Databricks Python notebooks, you can reference external locations (such as Azure Data Lake Storage or other cloud storage) by passing the storage path directly or using environment-specific parameters. Below is a step-by-step explanation:
1. Direct Reference with Path
If you want to directly reference an ADLS path, you can use it as a string in the Python notebook:
path = "abfss://container@storageaccount.dfs.core.windows.net/folder"
df = spark.read.format("parquet").load(path)
df.show()
2. Using Parameters for Environment Handling
For managing different environments (e.g., dev and prod), parameterized variables are the best practice: they keep the notebook flexible and maintainable, and the values can be set dynamically based on the environment the notebook is running in.
Example:
- Define the environment (e.g., dev or prod) in Azure DevOps pipeline parameters or notebook widgets.
- Use the environment variable to construct the storage path.
# Define environment-specific parameters
env = dbutils.widgets.get("env") # Set this widget value via Azure DevOps or manually
storage_account = "devstorage" if env == "dev" else "prodstorage"
container = "mycontainer"
# Construct the path dynamically
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/folder"
# Use the path
df = spark.read.format("parquet").load(path)
df.show()
Steps to Handle Environment-Specific Paths with Azure DevOps
To handle dev and prod storage paths dynamically in Azure DevOps:
1. Pass Environment as a Parameter
- In your Azure DevOps pipeline, pass the environment as a parameter (env: dev or env: prod).
- Inject the parameter into your notebook using Databricks CLI or API when running the notebook.
parameters:
  - name: env
    type: string
    default: dev

steps:
  - task: DatabricksRunNotebook@2
    inputs:
      notebookPath: /path/to/notebook
      parameters: '{"env": "$(env)"}'
2. Use Environment Variables
In your Python notebook, use the passed env parameter to decide the storage account dynamically, as shown in the Python example above.
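Note that dbutils.widgets.get("env") errors out if the env widget has not been defined, e.g., when the notebook is opened interactively rather than triggered by the pipeline. A minimal sketch that defines the widget with a dev default (a parameter passed from the pipeline run overrides this default), reusing the storage account names from the example above:
# Define the widget with a default so the notebook also works interactively;
# a parameter passed by the Azure DevOps pipeline run overrides this default.
dbutils.widgets.text("env", "dev")

env = dbutils.widgets.get("env")
storage_account = "devstorage" if env == "dev" else "prodstorage"
path = f"abfss://mycontainer@{storage_account}.dfs.core.windows.net/folder"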
Using a Single Variable for Storage Accounts
You can use a structured approach where the storage account name is a function of the environment.
For example:
# Define environment and construct the path
env = dbutils.widgets.get("env") # 'dev' or 'prod'
storage_accounts = {
    "dev": "devstorageaccount",
    "prod": "prodstorageaccount"
}
container = "mycontainer"
# Get storage account based on the environment
storage_account = storage_accounts.get(env, "defaultstorageaccount")
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/folder"
# Load data
df = spark.read.format("parquet").load(path)
df.show()
Best Practices for Managing External Location References
- Parameterize the Environment: Always use parameters to pass environment-specific values.
- Environment Mapping: Maintain a mapping of environments to storage accounts and paths in a configuration file or dictionary in the notebook.
- Secure Configuration: Use Azure Key Vault for storing sensitive information like storage account keys or connection strings (see the sketch after this list).
- Test Across Environments: Validate that both dev and prod configurations work seamlessly in the pipeline.
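As a rough sketch of the Secure Configuration point, assuming one Key Vault-backed secret scope per environment (the scope names kv-scope-dev/kv-scope-prod and the key name storage-account-key are hypothetical placeholders):
# Hypothetical scope and key names; adjust to your Key Vault-backed secret scope.
env = dbutils.widgets.get("env")  # 'dev' or 'prod'
storage_account = "devstorage" if env == "dev" else "prodstorage"

# Pull the storage account key from the secret scope instead of hard-coding it
account_key = dbutils.secrets.get(scope=f"kv-scope-{env}", key="storage-account-key")

# Configure Spark to authenticate to ADLS Gen2 with the retrieved key
spark.conf.set(f"fs.azure.account.key.{storage_account}.dfs.core.windows.net", account_key)
This keeps account keys out of the notebook source and out of version control.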
For more detailed information, refer to the official Databricks External Locations Documentation.