The ability to import .py files into notebooks looked like a clean, easy way to reuse code and to ensure every notebook uses the same version of that code. However, two issues remain unclear after scouring the documentation and forums.
Are these the right (or best) solutions to these problems, or should we revert to %run and notebooks instead of .py files? Thanks!
Issue 1: Code within the .py file does not have access to the Spark session by default.
Outcome: NameError: name 'spark' is not defined
Solution: add the following to the .py file:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
Are there any implications to this?
Does the notebook code and .py code share the same session or does this cause separate sessions?
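For what it's worth, our understanding is that SparkSession.builder.getOrCreate() returns the already-active session when one exists, so the notebook and the .py file should share a single session rather than creating two. A plain-Python sketch of that singleton behavior (a toy Session class for illustration only, not Spark's actual implementation):

```python
# Toy illustration of the getOrCreate pattern: the first caller creates
# the instance, every later caller gets that same instance back.
class Session:
    _active = None  # module-level "active session", analogous to Spark's

    @classmethod
    def get_or_create(cls):
        if cls._active is None:
            cls._active = cls()
        return cls._active

# The notebook and the imported .py file both end up holding the same object.
notebook_session = Session.get_or_create()  # first call: creates it
module_session = Session.get_or_create()    # later call: reuses it
assert notebook_session is module_session
```

If that reading of getOrCreate is right, defining spark this way in the .py file is harmless duplication rather than a second session.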
Issue 2: The display() and displayHTML() functions are not available to the .py code by default.
Outcome: NameError: name 'displayHTML' is not defined when displayHTML() is called from within the .py file
Solution: add the following to the .py file and call display(HTML(...)) instead of displayHTML():
from IPython.display import display, HTML  # usage: display(HTML("your content"))
(IPython.display is the documented public path; importing these names from IPython.core.display is deprecated in recent IPython versions.)
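As a minimal sketch of how we wrapped this in the .py file (build_alert_html and render_alert are hypothetical helper names, not Databricks APIs; the lazy import assumes the module runs inside an IPython/Databricks kernel):

```python
import html


def build_alert_html(message, color="orange"):
    # Pure string-building, so it can be unit-tested outside a notebook.
    # html.escape keeps markup in `message` from breaking the page.
    return f'<p style="color:{color};">{html.escape(message)}</p>'


def render_alert(message, color="orange"):
    # Imported lazily so importing this module never requires IPython.
    from IPython.display import display, HTML
    display(HTML(build_alert_html(message, color)))
```

Keeping the HTML construction separate from the display call also means the same module can be imported by non-notebook code without pulling in IPython.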
Is there a better way to get displayHTML() working inside the .py file?
What about all of the other Databricks-specific functions?