
Referencing external locations in Python notebooks

ashraf1395
Valued Contributor

How can I reference external locations in a Python notebook?

I found the docs for referencing them in SQL: https://docs.databricks.com/en/sql/language-manual/sql-ref-external-locations.html.

But how do I do it in Python? Do we have to pass the adls:// path directly in the Python notebook, or is there another way?

One more question: all our Python notebooks are handled through Azure DevOps for multiple dev and prod environments,
so the storage container paths are different for dev and prod.

Let's say I have to pass the adls:// paths to reference the external locations; then for dev I have to pass the dev storage account and for prod the prod storage account. We are using a single Azure DevOps pipeline with multiple parameters. So would the best method be to create a variable that holds the storage account per environment,
i.e. if prod then the parameter holds the prod storage account path, and if dev the dev storage account path?
So I would be referencing it like this: adls://path/{storage_container} ?




fmadeiro
New Contributor II

@ashraf1395 ,

Referencing external locations in a Databricks Python notebook, particularly for environments like Azure DevOps with different paths for development (dev) and production (prod), can be effectively managed using parameterized variables. Here’s a detailed explanation and recommended approach:

Referencing External Locations in a Python Notebook

In Databricks Python notebooks, you can reference external locations (such as Azure Data Lake Storage or other cloud storage) by passing the storage path directly or using environment-specific parameters. Below is a step-by-step explanation:

1. Direct Reference with Path

If you want to directly reference an ADLS path, you can use it as a string in the Python notebook:

 

path = "abfss://container@storageaccount.dfs.core.windows.net/folder"
df = spark.read.format("parquet").load(path)
df.show()
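
Alternatively, if the path is registered as a Unity Catalog external location, a minimal sketch (assuming an external location named my_ext_location, and that the DESCRIBE output includes a url column) is to resolve the URL at runtime instead of hard-coding it:

# Resolve the base URL of a Unity Catalog external location from Python.
# "my_ext_location" is a placeholder name used only for illustration.
loc = spark.sql("DESCRIBE EXTERNAL LOCATION my_ext_location").first()
base_path = loc["url"].rstrip("/")  # e.g. abfss://container@storageaccount.dfs.core.windows.net

df = spark.read.format("parquet").load(f"{base_path}/folder")
df.show()

This keeps literal storage-account names out of the notebook, since the environment-specific URL lives in the external location definition.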

 

2. Using Parameters for Environment Handling

For managing different environments (e.g., dev, prod), using parameterized variables is the best practice. This ensures flexibility and maintainability. You can set these parameters dynamically based on the environment being executed.

Example:

  • Define the environment (e.g., dev or prod) in Azure DevOps pipeline parameters or notebook widgets.
  • Use the environment variable to construct the storage path.

 

# Define environment-specific parameters
env = dbutils.widgets.get("env")  # Set this widget value via Azure DevOps or manually
storage_account = "devstorage" if env == "dev" else "prodstorage"
container = "mycontainer"

# Construct the path dynamically
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/folder"

# Use the path
df = spark.read.format("parquet").load(path)
df.show()

 

 

Steps to Handle Environment-Specific Paths with Azure DevOps

To handle dev and prod storage paths dynamically in Azure DevOps:

1. Pass Environment as a Parameter

  • In your Azure DevOps pipeline, pass the environment as a parameter (env: dev or env: prod).
  • Inject the parameter into your notebook using Databricks CLI or API when running the notebook.

 

parameters:
  - name: env
    type: string
    default: dev

steps:
  - task: DatabricksRunNotebook@2
    inputs:
      notebookPath: /path/to/notebook
      parameters: '{"env": "$(env)"}'

 

2. Use Environment Variables

In your Python notebook, use the passed env parameter to decide the storage account dynamically, as shown in the Python example above.
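
As a small illustrative sketch (the widget name env comes from the example above; the default value here is an assumption), defining the widget with a default lets the same notebook run interactively as well as from the pipeline:

# Create the widget with a default so the notebook also runs outside the pipeline;
# a run triggered from Azure DevOps overrides this value via the job parameters.
dbutils.widgets.text("env", "dev")
env = dbutils.widgets.get("env")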

Using a Single Variable for Storage Accounts

You can use a structured approach where the storage account name is a function of the environment.

For example:

 

# Define environment and construct the path
env = dbutils.widgets.get("env")  # 'dev' or 'prod'
storage_accounts = {
    "dev": "devstorageaccount",
    "prod": "prodstorageaccount"
}
container = "mycontainer"

# Get storage account based on the environment
storage_account = storage_accounts.get(env, "defaultstorageaccount")
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/folder"

# Load data
df = spark.read.format("parquet").load(path)
df.show()

 

Best Practices for Managing External Location References

  1. Parameterize the Environment: Always use parameters to pass environment-specific values.
  2. Environment Mapping: Maintain a mapping of environments to storage accounts and paths in a configuration file or dictionary in the notebook.
  3. Secure Configuration: Use Azure Key Vault for storing sensitive information like storage account keys or connection strings (see the sketch after this list).
  4. Test Across Environments: Validate that both dev and prod configurations work seamlessly in the pipeline.
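
For point 3, a minimal sketch, assuming a Key Vault-backed secret scope named kv-scope and a hypothetical key name; this is only needed when you access storage by account key rather than through a Unity Catalog storage credential:

# Fetch the storage account key from an Azure Key Vault-backed secret scope
# (the scope and key names are placeholders) and configure Spark to use it.
storage_account = "devstorageaccount"  # or resolve it from the env mapping above
account_key = dbutils.secrets.get(scope="kv-scope", key=f"{storage_account}-key")
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)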

For more detailed information, refer to the official Databricks External Locations Documentation.
