import * from ../my/relative/path

GCera
New Contributor II

I have the following repository structure:
/Repos/main/MyRepo/
 -> run (folder)
 -> setup (folder)
 -> src (folder)
 -> main (notebook)

Now, starting from a notebook in the "src" folder, I need to run all the notebooks stored in "src/etl_notebooks" (e.g. /Repos/main/MyRepo/src/etl_notebooks/bronze_table_x) and import all the variables they define.

How can I do that programmatically (i.e. without typing each notebook's full path into the notebook myself)?

2 REPLIES

Kaniz
Community Manager

Hi @GCera ,

  • One way to run all notebooks stored in a directory and pick up the variables they define is the %run magic command in Databricks. %run executes another notebook inline, so the functions and variables it defines become available in the calling notebook. For example, to run all notebooks in the src/etl_notebooks folder from your main notebook, you could try something like this:

%run ./src/etl_notebooks/*

This would run all notebooks in the src/etl_notebooks folder, with the path resolved relative to the current notebook; an absolute path works as well. Note that %run must be in a cell by itself, because it runs the entire target notebook inline.
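
For reference, a single %run with an explicit path, taken from the example path in the question and issued from a notebook inside the src folder, would look like this (bronze_table_x is the notebook name from the question):

%run ./etl_notebooks/bronze_table_x

After that cell runs, everything bronze_table_x defines is available in the calling notebook's namespace.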

 

  • Another way to run all notebooks stored in a directory is the dbutils.notebook.run() method in Databricks. Unlike %run, this starts the target notebook as a separate job, so it does not share variables with the caller; instead, it lets you pass parameters in and return values out, which is useful for building workflows and pipelines with dependencies. For example, to run all notebooks in the src/etl_notebooks folder, you could try something like this:

dbutils.notebook.run("./src/etl_notebooks/*", 300)

This would start a new job for the notebooks under src/etl_notebooks, with the path resolved relative to the current notebook. The first argument is the notebook path and the second is the timeout in seconds; you can also pass a dictionary of parameters as an optional third argument.
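
Because a wildcard may not be accepted in the path, a more programmatic sketch is to list the folder yourself and run each notebook in a loop. This assumes a recent runtime where repo files are visible under /Workspace, and it reuses the repo path from the question; adjust both if your workspace differs:

import os

# Assumption: repo contents are browsable under /Workspace/Repos/... on recent runtimes
folder = "/Workspace/Repos/main/MyRepo/src/etl_notebooks"

for name in sorted(os.listdir(folder)):
    # Drop any source-file extension so we pass a notebook path, not a file path
    notebook = os.path.splitext(name)[0]
    # Each notebook runs as its own job; its variables are NOT imported here
    dbutils.notebook.run(f"/Repos/main/MyRepo/src/etl_notebooks/{notebook}", 300)

If you need the child notebooks' variables in the caller, %run is the mechanism for that; dbutils.notebook.run only exchanges explicit parameters and return values.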

 

  • A third way is to use files or libraries. Files let you modularize your code by putting supporting functions or data into separate .py files that can be imported into any notebook. Libraries let you package your code into reusable modules that can be installed on any Databricks cluster. For example, suppose you have a file called reverse.py that defines a function called reverse, and another file called bronze_table_x.py that defines some variables related to bronze table x, and both files are in the same directory as your calling notebook (or somewhere on sys.path, e.g. src/etl_notebooks/bronze_table_x.py). You can then import them into your main notebook like this:

import reverse
import bronze_table_x

This makes both the function and the variables available in your main notebook. To create files or libraries from Python code, see Modularize your code using files or Create Databricks libraries from Python libraries.
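
To make the file-based route programmatic as well, here is a minimal sketch that imports every module in the folder and copies its public names into the notebook's namespace. It assumes src/etl_notebooks holds plain .py files and is importable as a package; the repo path is the example one from the question:

import sys
import importlib
import pkgutil

# Assumption: make the src folder importable (in Repos the repo root is typically
# on sys.path, but etl_notebooks lives one level down, under src)
sys.path.append("/Workspace/Repos/main/MyRepo/src")

import etl_notebooks  # the folder, treated as a (namespace) package

for mod_info in pkgutil.iter_modules(etl_notebooks.__path__):
    module = importlib.import_module(f"etl_notebooks.{mod_info.name}")
    # Mimic `from module import *`: copy public names into this notebook's globals
    globals().update({k: v for k, v in vars(module).items() if not k.startswith("_")})

This only works for .py files, not notebook-format sources, but it removes the need to type each path by hand.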

I hope this helps! Let me know if you have any other questions. 😊

GCera
New Contributor II

Hi Kaniz! Thank you for your reply - I appreciate your time and effort.

I tried the solution you suggested but ran into the "Notebook not found" error (please see the attached image) because, I think, it is not possible to use * in the notebook path.

Again, from a notebook in the "src" folder (e.g. /src/etl_sql_query), I am trying to automatically import all notebooks stored in the "/src/etl_notebooks/" folder without typing each and every one of their paths.
