Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Latest pushed code is not taken into account by Notebook

pgagliardi
New Contributor II

Hello,

I cloned a repo my_repo into the Databricks Repos space.

Inside my_repo, I created a notebook new_experiment where I can import functions from my_repo, which is really handy.

When I want to modify a function in my_repo, I open my local IDE, make the changes, push them to GitHub, and then pull them in Databricks. The problem is that when I import the function in new_experiment, the changes are not picked up.
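
For illustration, the import looks something like this (utils and clean_data are placeholder names, assuming a utils.py at the root of my_repo):

# Placeholder names: my_repo is assumed to contain utils.py defining clean_data()
from utils import clean_data

df = spark.range(10).toDF("id")   # spark session is provided by the Databricks notebook
df_clean = clean_data(df)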

I tried:

%load_ext autoreload
%autoreload 2

But the only thing that seems to work is detaching and reattaching the cluster, which is a pain because I have to rerun everything from scratch. Is there a better solution?

1 REPLY

Jnguyen
Databricks Employee

Use 

%reload_ext autoreload

instead; it will give you the behavior you expect.
You only need to run it once, just like %load_ext autoreload followed by %autoreload 2.
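
A minimal sketch of what that cell could look like (utils and clean_data are the placeholder names from the example above):

%reload_ext autoreload
%autoreload 2

# With autoreload active, later calls to clean_data should pick up
# the changes pulled into the repo, without detaching the cluster.
from utils import clean_data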
