How to export/clone Databricks Notebook without results via web UI?
05-19-2024 03:07 PM - edited 05-19-2024 03:32 PM
When a Databricks notebook exceeds the maximum size limit, it suggests that you `clone/export without results`.
This is exactly what I want to do, but the current web UI does not offer an option to bypass/skip the results in either the `clone` or `export` context menu. Additional note: I do not want to clear the outputs of the current notebook.
Screenshots are provided for visual context. I am not sure whether there are other places I may have overlooked.
Fact-finding performed so far:
- Consulted Export and import Databricks notebooks - Azure Databricks | Microsoft Learn.
Thank you in advance for any insights you can share on achieving this goal.
Labels: Workflows
05-19-2024 09:18 PM
@dataslicer good day!
When you export a notebook as HTML, IPython notebook (.ipynb), or archive (DBC), and you have not cleared the command outputs, the outputs are included in the export.
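If you need an export that omits the outputs, the `SOURCE` format carries only the notebook code. Below is a minimal sketch against the Workspace export REST API; the notebook path is a placeholder, and the workspace URL and personal access token are assumed to be in the `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables:

```python
import base64
import os

import requests

# Assumed environment variables; substitute your workspace URL and token.
HOST = os.environ["DATABRICKS_HOST"]   # e.g. https://adb-1234567890.0.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]

# Export in SOURCE format: the response contains only the notebook code,
# so command outputs are not included.
resp = requests.get(
    f"{HOST}/api/2.0/workspace/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"path": "/Users/me@example.com/my_notebook", "format": "SOURCE"},
)
resp.raise_for_status()

# The API returns the file content base64-encoded.
code = base64.b64decode(resp.json()["content"]).decode("utf-8")
with open("my_notebook.py", "w") as f:
    f.write(code)
```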
Kind regards,
Yesh
05-21-2024 03:40 PM
Thank you @Yeshwanth for the response.
I am looking for a way to do this without clearing the current outputs.
This is necessary because I want to preserve the existing outputs, fork off another notebook instance to run with a few parameter changes, and then come back and compare the results. That way the forked notebook is not restricted by the notebook maximum size limit.
Thank you
10-15-2024 02:08 PM
I have a similar use case, but I couldn't find any way to export a notebook without its outputs as a DBC or IPython (.ipynb) file.
What worked for me is exporting the notebook as a source file, which produces a .py file. I then import the .py file back into Databricks, where it is interpreted as a regular notebook, and fork off my experimentation from this new notebook.
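If you do this often, the same source-format round trip can be scripted against the Workspace REST API instead of going through the UI download/upload. A sketch under the same assumptions as the earlier snippet (placeholder notebook paths, host and token read from environment variables); the original notebook and its outputs are left untouched:

```python
import os

import requests

HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

src = "/Users/me@example.com/my_notebook"       # hypothetical source notebook
dst = "/Users/me@example.com/my_notebook_fork"  # hypothetical clone path

# 1. Export only the source code (SOURCE format carries no cell outputs).
resp = requests.get(
    f"{HOST}/api/2.0/workspace/export",
    headers=HEADERS,
    params={"path": src, "format": "SOURCE"},
)
resp.raise_for_status()
content = resp.json()["content"]  # already base64-encoded

# 2. Import it under a new path, leaving the original notebook intact.
resp = requests.post(
    f"{HOST}/api/2.0/workspace/import",
    headers=HEADERS,
    json={
        "path": dst,
        "format": "SOURCE",
        "language": "PYTHON",  # adjust if the notebook is SQL, Scala, or R
        "content": content,
        "overwrite": False,
    },
)
resp.raise_for_status()
```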

