Hi @ManojkMohan
1. Verify the exact notebook location in the workspace
In Databricks, open the Workspace browser.
Navigate manually to where you think CreateRawData lives.
Right-click on the notebook and select Copy Path — this gives you the exact absolute path Databricks expects, for example:
/Users/manojdatabricks73@gmail.com/includes/CreateRawData
Paste this exact path into your %run command or into the notebook task of your job configuration.
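For example, a %run cell using the copied path would simply be (this is the path from your post; substitute whatever Copy Path gives you, and keep %run alone in its own cell):

%run /Users/manojdatabricks73@gmail.com/includes/CreateRawData

A notebook task in a job takes the same absolute string as its path.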
2. Check case sensitivity & spelling
Notebook paths in Databricks are case-sensitive (CreateRawData ≠ createrawdata).
Make sure there are no hidden spaces in the name (copied names sometimes carry trailing spaces).
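A quick way to spot hidden whitespace is to print the repr() of the path in a Python cell; a minimal sketch, assuming you paste the copied path into a string:

# Paste the copied path between the quotes
path = "/Users/manojdatabricks73@gmail.com/includes/CreateRawData"
print(repr(path))          # repr() makes a trailing space or tab visible
clean_path = path.strip()  # stripped version that is safe to reuse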
3. Confirm you’re in the right workspace
If you have multiple workspaces, the notebook might exist in another one.
Jobs run in the workspace/environment they are scheduled in — so a job in Workspace A won’t see a notebook stored in Workspace B.
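To confirm which workspace the current cluster actually belongs to, you can print the workspace URL from a notebook cell; a minimal sketch, assuming the cluster exposes the spark.databricks.workspaceUrl config (recent runtimes do):

# 'spark' is the SparkSession Databricks creates for every notebook
print(spark.conf.get("spark.databricks.workspaceUrl"))

If the URL printed here is not the workspace where CreateRawData lives, that explains the error.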
4. If using relative paths
%run ./includes/CreateRawData works only if:
- The includes folder is in the same directory as the current notebook.
- You’re using the correct relative reference (../ for one level up).
If the folder is not in the same hierarchy, use the absolute path instead.
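As a concrete illustration (MainNotebook and OtherNotebook are hypothetical callers; each %run goes in its own cell):

/Users/manojdatabricks73@gmail.com/MainNotebook
/Users/manojdatabricks73@gmail.com/includes/CreateRawData
/Users/manojdatabricks73@gmail.com/includes/helpers/OtherNotebook

From MainNotebook (includes sits next to it): %run ./includes/CreateRawData
From OtherNotebook (inside includes/helpers): %run ../CreateRawData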
5. Check Repo vs Workspace paths
If your code is in a Databricks Repo, the path changes:
/Repos/<username>/<repo-name>/includes/CreateRawData
Jobs referencing a notebook from a repo must point to the repo path, not /Users/....
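For example (placeholders kept; copy the real repo path from the Repos tree exactly as in step 1):

%run /Repos/<username>/<repo-name>/includes/CreateRawData

The job's notebook task would use that same /Repos/... path.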
6. Version / rename issues
- If the notebook was renamed, deleted, or moved, update the reference.
- Check workspace Revision History in case you need to restore it.
Always copy the path from the UI after locating the notebook — it prevents typos and works for both jobs and %run calls.
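If you want to verify the path programmatically before wiring it into a job, here is a sketch using the Databricks Python SDK (assumes the databricks-sdk package is available on the cluster, which it is on recent runtimes):

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up the notebook's ambient authentication on Databricks
path = "/Users/manojdatabricks73@gmail.com/includes/CreateRawData"
try:
    info = w.workspace.get_status(path)           # raises if the object does not exist
    print("Found:", info.path, info.object_type)
except Exception as e:
    print("Not found:", e)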
LR