Hi @johnp,
When running multiple notebooks on the same Databricks cluster, each notebook executes in its own isolated environment. This means that variable names and their values in one notebook do not interfere with those in another notebook, so, in principle, this isolation prevents race conditions or conflicts caused by variable-name overlap.
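For example (a minimal sketch, assuming two notebooks attached to the same cluster):

```python
# Notebook A (attached to the shared cluster)
x = "defined in notebook A"

# Notebook B (same cluster, separate session) -- the `x` from notebook A
# is NOT visible here; referencing it before assignment raises a NameError:
# print(x)  # NameError: name 'x' is not defined
x = "notebook B's own x"  # same name, no conflict with notebook A
```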
Every notebook attached to a cluster has a pre-defined variable named spark that represents a SparkSession. The SparkSession is the entry point for using the Spark APIs as well as for setting runtime configurations, and Spark session isolation is enabled by default, so each notebook gets its own session.
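For instance, a runtime setting made through spark in one notebook stays local to that notebook's session (a minimal sketch; the config key is just an example):

```python
# Run in a notebook cell; `spark` is pre-defined on Databricks.
# With session isolation enabled, this setting applies only to this
# notebook's SparkSession and is not seen by other notebooks attached
# to the same cluster.
spark.conf.set("spark.sql.shuffle.partitions", "64")
print(spark.conf.get("spark.sql.shuffle.partitions"))  # -> 64
```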
But there is one caveat: if those notebooks are run from some kind of main notebook via the %run command, they will share the same session, because %run executes the called notebook inline in the caller's context. So the question is: how do you run your notebooks? That may be where the problem lies. See the sketch below for the difference between %run and dbutils.notebook.run.
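To illustrate the difference (a sketch; ./child_notebook is a hypothetical notebook path):

```python
# %run executes the child inline, in THIS notebook's session, so its
# variables and spark.conf changes become visible here. %run must be
# alone in its own cell, hence it is shown as a comment:
#
#   %run ./child_notebook
#
# dbutils.notebook.run, by contrast, starts the child in its OWN
# session: nothing leaks back except the string the child returns
# via dbutils.notebook.exit().
result = dbutils.notebook.run("./child_notebook", 600)  # 600 s timeout
print(result)
```

If the notebooks are orchestrated with dbutils.notebook.run (or as separate job tasks), they keep their isolation; if they are chained with %run, shared state is the likely cause of your conflicts.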
https://learn.microsoft.com/en-us/azure/databricks/notebooks/notebook-isolation