Hi @Rjdudley,
Thanks for your question! You can create regular .py files in your workspace and use the %run magic command to include them in your notebooks. This approach is straightforward and well suited to development and testing:
```
%run /path/to/your/custom_datasource_file
```
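If it helps, here's roughly what such a file could contain. This is a minimal sketch assuming the PySpark Python Data Source API (Spark 4.0+ / Databricks Runtime 15.2+); the class names and the "fake" format name are just placeholders, not anything from your code:

```python
# Minimal sketch of a custom data source using the PySpark Python Data
# Source API. All names here (FakeDataSource, "fake") are hypothetical.
from pyspark.sql.datasource import DataSource, DataSourceReader
from pyspark.sql.types import StructType


class FakeDataSourceReader(DataSourceReader):
    def __init__(self, schema, options):
        self.schema = schema
        self.options = options

    def read(self, partition):
        # Yield rows as tuples matching the declared schema.
        yield ("example", 1)


class FakeDataSource(DataSource):
    @classmethod
    def name(cls):
        # Short name used with spark.read.format(...)
        return "fake"

    def schema(self):
        return "name string, value int"

    def reader(self, schema: StructType):
        return FakeDataSourceReader(schema, self.options)
```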
For a more production-ready approach, you can build your custom data source implementation into a wheel file and upload it to your cluster or workspace. This method is preferred for sharing across multiple notebooks or jobs:
• Package your code into a wheel file (a minimal setup.py sketch follows this list)
• Upload the wheel file to your Databricks workspace or an accessible location (e.g., DBFS)
• Install the wheel file on your cluster using init scripts or pip install commands
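For the packaging step, something like this minimal setup.py would do (the package name and version are placeholder values, not anything Databricks-specific):

```python
# setup.py - minimal sketch for building the custom data source into a wheel.
# "my-custom-datasource" is a hypothetical package name.
from setuptools import setup, find_packages

setup(
    name="my-custom-datasource",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[],  # runtime deps, if any; pyspark is provided by the cluster
)
```

Running `python -m build` (or the older `python setup.py bdist_wheel`) then drops the .whl into dist/, ready to upload.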
You can also package your custom data source as a library and install it directly on your cluster.
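Whichever install route you take, usage in the notebook ends up the same. Assuming the hypothetical package and class names from the sketches above:

```python
# After the wheel is installed on the cluster (e.g. as a cluster library, or
# per-notebook with: %pip install /dbfs/path/to/my_custom_datasource-0.1.0-py3-none-any.whl)
from my_custom_datasource import FakeDataSource  # hypothetical names

# Register the data source once per session, then read via its short name.
spark.dataSource.register(FakeDataSource)
df = spark.read.format("fake").load()
df.show()
```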