Hi, you can try:

my_df = spark.read.format("csv")
    .option("inferSchema", "true")   # to get the types from your data
    .option("sep", ",")              # if your file is using "," as separator
    .option("header", "true")        # if you...
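Since the snippet above is cut off, a complete, runnable sketch could look like this; the file path and the final .load() call are my assumptions, not part of the original answer:

# Minimal sketch; the path is an assumption, point it at your own file.
my_df = (spark.read.format("csv")
         .option("inferSchema", "true")   # infer column types from the data
         .option("sep", ",")              # "," as the field separator
         .option("header", "true")        # first row contains column names
         .load("/FileStore/tables/my_file.csv"))  # assumed path

my_df.show(5)  # quick check that the data loaded as expected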
Oops, I didn't see the other answers. Anyway, here is how to use the %fs magic to do the same thing as the dbutils.fs.ls() utility. Just before creating the Spark DataFrame, check that the file exists at the mentioned path. You can use the %fs magic like this:
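For example, in a notebook cell (the /FileStore/tables/ folder is just an assumption, use the folder where your file actually lives):

%fs ls /FileStore/tables/

or the equivalent call from Python:

dbutils.fs.ls("/FileStore/tables/")  # lists the files in that folder so you can confirm yours is there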
Hi Rodrigo, if you're using Azure Databricks you can also try Auto Loader; it will load your data into a Spark DataFrame, or directly into a pandas DataFrame. Be aware that you'll need to set up your workspace with the necessary permissions. Here is a link wi...
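A minimal Auto Loader sketch, in case it helps; the paths, the csv format and the target table name are assumptions on my side (Auto Loader picks up new files from a folder as a stream and loads them into a Spark DataFrame):

# Read new CSV files from a landing folder with Auto Loader (cloudFiles source)
df = (spark.readStream.format("cloudFiles")
      .option("cloudFiles.format", "csv")                        # format of the incoming files
      .option("cloudFiles.schemaLocation", "/mnt/schemas/demo")  # assumed path to store the inferred schema
      .load("/mnt/landing/demo/"))                               # assumed input folder

# Write what has arrived so far into a table, then stop
(df.writeStream
   .option("checkpointLocation", "/mnt/checkpoints/demo")        # assumed checkpoint path
   .trigger(availableNow=True)                                   # process available files, then stop
   .toTable("demo_table"))                                       # assumed target table name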