06-12-2025 07:40 AM
I want to read a CSV file using the pandas library in Python in Databricks Notebooks. I uploaded my CSV file (employee_data) to DBFS, but it still says no such file exists. Can anyone help me with this?
06-12-2025 09:15 AM
Hi @vivek_purbey, can you try reading with the path below? It takes '/dbfs' by default, so you do not need to pass it explicitly.
pd.read_csv('/FileStore/tables/employee_data.csv')
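One thing worth checking if that path still fails: plain pandas reads through the driver's local filesystem, and on Databricks the DBFS root is typically mounted locally at `/dbfs`, so the local-path form would be `pd.read_csv('/dbfs/FileStore/tables/employee_data.csv')`. A minimal, self-contained sketch of the pandas call itself, using a temporary CSV in place of the uploaded file (the column names here are made up for illustration):

```python
import os
import tempfile

import pandas as pd

# On Databricks, plain pandas sees the local filesystem, so a DBFS file
# such as '/FileStore/tables/employee_data.csv' would be read via the
# '/dbfs' mount point:
#   pd.read_csv('/dbfs/FileStore/tables/employee_data.csv')
#
# Stand-in demonstration with a temporary CSV (hypothetical columns):
tmp_dir = tempfile.mkdtemp()
csv_path = os.path.join(tmp_dir, "employee_data.csv")
pd.DataFrame(
    {"employee_id": [1, 2], "name": ["Asha", "Ravi"]}
).to_csv(csv_path, index=False)

df = pd.read_csv(csv_path)
print(df.shape)  # (2, 2)
```

If the `/dbfs` mount is disabled on the cluster (some workspace configurations restrict it), reading through Spark, as suggested later in this thread, is the reliable fallback.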
06-12-2025 09:46 AM
Still had the same issue.
06-12-2025 10:07 AM
Can you try uploading this file outside of FileStore? Maybe create a new folder in DBFS and upload the file there.
06-12-2025 10:17 AM
Can you please guide me on how I can do that?
06-12-2025 10:33 AM
You can create a new folder like below:
dbutils.fs.mkdirs("/folder_name/")
Then you can upload the file there manually, the same way you were uploading to FileStore.
And then you can read the file using the code below:
pd.read_csv('/folder_name/employee_data.csv')
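Before reading, it can also help to confirm the file actually landed where you expect. In a notebook that would be `dbutils.fs.ls('/folder_name/')` for the DBFS view, or a plain-Python existence check against the `/dbfs` local mount. A sketch of the plain-Python check, run here against a temporary directory since the DBFS path is an assumption:

```python
import os
import tempfile

# In a Databricks notebook the equivalent checks would be:
#   dbutils.fs.ls('/folder_name/')                          # DBFS view
#   os.path.exists('/dbfs/folder_name/employee_data.csv')   # local view
#
# Demonstrated here against a temporary directory standing in for the folder:
folder = tempfile.mkdtemp()
target = os.path.join(folder, "employee_data.csv")
with open(target, "w") as f:
    f.write("employee_id,name\n1,Asha\n")

print(os.path.exists(target))  # True
print(os.listdir(folder))      # ['employee_data.csv']
```

If the listing comes back empty or the path does not exist, the upload went somewhere else and no `read_csv` variant will find it.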
06-12-2025 12:40 PM
I think you are using a Databricks course cluster, which might have some limitations.
pandas reads only from local files, so try reading the file using Spark instead of pandas:
df = spark.read.csv('dbfs:/FileStore/tables/employee_data.csv')
display(df)
You can then convert it to a pandas DataFrame, if absolutely needed, using toPandas().
06-12-2025 11:43 PM
Sure, thank you.
06-13-2025 01:18 AM
Load it using PySpark and then create a pandas DataFrame. Here is how to do it after uploading the data:
file_path = "/FileStore/tables/your_file_name.csv"
# Load the CSV as a Spark DataFrame, using the first row as the header
df_spark = spark.read.option("header", "true").option("inferSchema", "true").csv(file_path)
# Convert to a pandas DataFrame
df_pandas = df_spark.toPandas()