Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How do you read files from the DBFS with OS and Pandas Python libraries?

MattPython
New Contributor

I created translations for decoded values and want to save the dictionary object to the DBFS for mapping. However, I am unable to access the DBFS without using dbutils or the PySpark library.

Is there a way to access the DBFS with the os and pandas Python libraries? At work, we can directly use the same path as the PySpark functions to write/read from the DBFS without issue.

  1. Confirm the files exist within DBFS (screenshot)
  2. Confirm the file can be read with PySpark (screenshot)
  3. Error 1 - the message recommends using "/dbfs" instead of "dbfs:" (screenshot)
  4. Updated to "/dbfs" - error persists (screenshot)
  5. Removed the DBFS prefix entirely (screenshot)
  6. ...and one last shot (screenshot)
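For context on the path change in step 3: on clusters where the DBFS FUSE mount is enabled, the `dbfs:/` scheme that Spark uses corresponds to the local path `/dbfs/` that os and pandas see. A minimal sketch of that translation (the helper name and file path are hypothetical, for illustration):

```python
def dbfs_to_local(path: str) -> str:
    """Translate a Spark-style dbfs:/ URI to the local FUSE mount path."""
    if path.startswith("dbfs:/"):
        return "/dbfs/" + path[len("dbfs:/"):].lstrip("/")
    return path

# Spark reads dbfs:/FileStore/tables/mapping.json;
# os and pandas need the FUSE path instead:
local = dbfs_to_local("dbfs:/FileStore/tables/mapping.json")
print(local)  # /dbfs/FileStore/tables/mapping.json
```

So `pd.read_csv("/dbfs/FileStore/tables/file.csv")` works where `pd.read_csv("dbfs:/FileStore/tables/file.csv")` does not, provided the FUSE mount is available on the cluster.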

Thank you!

4 REPLIES

Chaitanya_Raju
Honored Contributor

Hi @Matthew LIbonati​ ,

Can you please check again? I tried the exact same way and initially faced the error mentioned in point 3; after changing the path as in point 4, I was able to see the data without any issues.

Happy Learning!!

Anonymous
Not applicable

Hi @Matthew LIbonati​ 

Hope everything is going great.

Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you. 

Cheers!

Johny
New Contributor III

Hi @Vidula Khanna​ ,

I am having the same issue (using Community Edition). I am aware that in CE, DBFS is not mounted to the /dbfs root directory. Is this the cause? If so, what is the alternative?
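If the /dbfs FUSE mount is indeed unavailable (as on Community Edition), a common workaround is to copy the file from DBFS to the driver's local disk with dbutils first, and then read it with plain pandas. A sketch, with hypothetical file names; the local write below stands in for the dbutils copy so the example is self-contained:

```python
import pandas as pd

# Inside Databricks, the copy step would be (hypothetical paths):
#
#   dbutils.fs.cp("dbfs:/FileStore/tables/mapping.csv", "file:/tmp/mapping.csv")
#
# Here we stand in for that copy by writing the file locally ourselves:
with open("/tmp/mapping.csv", "w") as f:
    f.write("code,label\n1,alpha\n2,beta\n")

# Once the file is on the driver's local disk, plain pandas works:
df = pd.read_csv("/tmp/mapping.csv")
print(df.shape)  # (2, 2)
```

The same pattern works in reverse: write with pandas to a local path, then `dbutils.fs.cp` the result back into DBFS.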

Thank you

User16789202230
New Contributor II
# Read the CSV directly with Spark using the file:/// scheme for workspace files
db_path = 'file:///Workspace/Users/l<xxxxx>@databricks.com/TITANIC_DEMO/tested.csv'
df = spark.read.csv(db_path, header=True, inferSchema=True)