Get Started Discussions

Suddenly unable to load csv in Community Edition

whatthefee
New Contributor

Hi there, 

I have been experimenting with some notebooks in the Community Edition workspace. I have a notebook that loads a .csv file from the same location where the notebook is saved. This was working before in a separate notebook, but as of today I'm no longer able to load the .csv file from the workspace.

Nothing in my configuration has changed. The code and files are in the same location; I'm just rerunning code that previously ran without any problem. Python recognizes that the files are there, but the load no longer works.

(Screenshots attached: whatthefee_0-1770522803156.png, whatthefee_0-1770523075333.png)

1 ACCEPTED SOLUTION


Louis_Frolio
Databricks Employee

Hey @whatthefee, quick clarification here. You’re on Free Edition, not Community Edition (Community Edition has been retired).

Can you confirm whether those screenshots are from Free Edition, or from the legacy Community Edition environment (which was taken offline on January 1, 2026)?

If you’re on Free Edition, you should be able to import directly from the Workspace path, for example:

df = spark.read.csv("/Workspace/Users/username/path/to/file.csv", header=True)
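On Free Edition, files under /Workspace are also visible to ordinary Python file I/O on the driver, so the standard-library csv module can serve as a fallback if spark.read misbehaves. A minimal sketch (the workspace path in the usage line is illustrative — substitute your own):

```python
import csv

def read_csv_rows(path):
    """Read a CSV file into a list of dicts using the stdlib csv module."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Usage (hypothetical path): rows = read_csv_rows("/Workspace/Users/username/path/to/file.csv")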

TL;DR

Use Unity Catalog volumes (the /Volumes path) or workspace-backed tables/files. You may still see remnants of DBFS via dbutils, but most of that functionality doesn’t exist the way it did in Community Edition. DBFS is also slated to be removed entirely in the future.
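To make that concrete, here is a tiny hypothetical helper (not a Databricks API) that classifies a path by the storage root it targets — the distinction that matters when porting Community Edition notebooks:

```python
def storage_root(path: str) -> str:
    """Classify a Databricks path: Unity Catalog volume, workspace file, or legacy DBFS."""
    if path.startswith("/Volumes/"):
        return "volume"     # Unity Catalog volume -- the recommended location
    if path.startswith("/Workspace/"):
        return "workspace"  # workspace file, readable via spark.read or plain open()
    return "dbfs"           # anything else treated as legacy DBFS, slated for removal

print(storage_root("/Volumes/main/default/raw/file.csv"))  # volume
```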

Hope this helps, Louis
