Databricks Notebooks aren’t just for running code — a few built-in features can make your work a lot smoother:
- Modularize with %run to reuse common functions across notebooks — the target notebook executes inline, so its definitions become available in the caller.
- Add parameters with widgets so notebooks become interactive and easy to run in different scenarios.
- Mix SQL and Python cells to query and transform data side by side, without switching tools.
- Use Repos integration to connect with GitHub, GitLab, or Azure DevOps for version control and collaboration.
- Manage notebook dependencies with the Environment panel when using serverless compute.
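To make the widgets tip concrete, here's a minimal sketch of parameterizing a notebook. It assumes the Databricks runtime, where `dbutils` is predefined; a tiny local stub (my own addition, not part of Databricks) is included so the sketch also runs outside a notebook:

```python
# Hedged sketch: declaring and reading notebook widgets.
# On Databricks, `dbutils` already exists; the stub below is only a
# stand-in so this snippet can run locally for illustration.
try:
    dbutils  # noqa: F821 — provided by the Databricks runtime
except NameError:
    class _WidgetsStub:
        def __init__(self):
            self._vals = {}
        def text(self, name, default, label=None):
            self._vals.setdefault(name, default)
        def dropdown(self, name, default, choices, label=None):
            self._vals.setdefault(name, default)
        def get(self, name):
            return self._vals[name]
    class _DbutilsStub:
        widgets = _WidgetsStub()
    dbutils = _DbutilsStub()

# Declare parameters once; in a notebook they render as input fields at the top.
dbutils.widgets.text("run_date", "2024-01-01", "Run date")
dbutils.widgets.dropdown("env", "dev", ["dev", "staging", "prod"], "Environment")

# Read them anywhere in the notebook (widget values are always strings).
run_date = dbutils.widgets.get("run_date")
env = dbutils.widgets.get("env")
print(f"Processing {run_date} in {env}")
```

The same notebook can then be run against dev, staging, or prod just by changing the widget values, or by passing parameters from a job.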
These are some of my favorites — but everyone has their own little tricks.
What’s your go-to notebook hack in Databricks?