Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

File found with %fs ls but not with spark.read

joseroca99
New Contributor II
Code:
 
wikipediaDF = (spark.read
  .option("header", True)
  .option("inferSchema", True)
  .option("sep", "\t")  # pageviews_by_second.tsv is tab-separated
  .csv("/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv")
)

display(wikipediaDF)
 
Error: 
Failed to store the result. Try rerunning the command.
Failed to upload command result to DBFS. Error message: PUT request to create file error HttpResponseProxy{HTTP/1.1 404 The specified filesystem does not exist. [Content-Length: 175, Content-Type: application/json;charset=utf-8, Server: Windows-Azure-HDFS/1.0 Microsoft-HTTPAPI/2.0, x-ms-error-code: FilesystemNotFound, x-ms-request-id: 614c7044-901f-004d-1bd4-d3b66f000000, x-ms-version: 2021-04-10, Date: Thu, 11 Jul 2024 20:52:49 GMT] ResponseEntityProxy{[Content-Type: application/json;charset=utf-8,Content-Length: 175,Chunked: false]}}
 
These are open datasets shared by Databricks. I was always able to open them, but now I can't.
1 ACCEPTED SOLUTION


Slash
New Contributor III

I think there is some kind of problem with networking/permissions to the storage account created in the managed resource group by Databricks. By default, when you run a notebook interactively by clicking Run in the notebook:

  • If the results are small, they are stored in the Azure Databricks control plane, along with the notebook’s command contents and metadata.
  • Larger results are stored in the workspace storage account in your Azure subscription. Azure Databricks automatically creates the workspace storage account and uses it for workspace system data and your workspace’s DBFS root. Notebook results are stored in workspace system data storage, which is not accessible by users.

So in your case, when you limit the result set it works because small results are stored in the Azure Databricks control plane.
But when you try to display the whole dataframe without limiting it, Databricks will try to save the result in the workspace storage account. Look at the cluster logs and see if there are any errors related to the root storage account.
Maybe you have a firewall that prevents Databricks from connecting to the storage account.
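The routing described above can be sketched as a toy model (my assumption, not a Databricks API; the real size threshold is internal to Databricks, and the 1 MB cutoff here is arbitrary):

```python
# Illustrative model of where an interactive command result is stored.
# Small results go to the control plane; larger results are uploaded to the
# workspace storage account (the DBFS root), which is the upload that fails
# with FilesystemNotFound in this thread.
def result_destination(result_bytes: int, threshold_bytes: int = 1_000_000) -> str:
    if result_bytes <= threshold_bytes:
        return "control plane"          # works even if the storage account is unreachable
    return "workspace storage account"  # requires connectivity to the DBFS root
```

This is consistent with the symptoms reported later in the thread: `show()` and `display(df.limit(n))` keep the result small enough to stay in the control plane, while `display(df)` on the full dataframe triggers the upload to the workspace storage account.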


5 REPLIES

Slash
New Contributor III

Hi @joseroca99 ,

Try adding the filesystem scheme to your path, something like this: dbfs:/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv

L

 

p4pratikjain
Contributor

Depending on where you found the file with %fs, you should use the appropriate filesystem prefix.
If it's in DBFS, use dbfs:/YOUR_PATH
If it's in the local file system, try file:/YOUR_PATH

Pratik Jain
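The prefix rules above can be sketched as a small helper (hypothetical: `with_dbfs_scheme` is not a Databricks API, just an illustration of the path forms involved):

```python
def with_dbfs_scheme(path: str) -> str:
    """Hypothetical helper (not a Databricks API): make the scheme explicit."""
    if path.startswith(("dbfs:", "file:", "abfss:", "wasbs:", "s3:")):
        return path                           # already has an explicit scheme
    if path.startswith("/dbfs/"):
        return "dbfs:" + path[len("/dbfs"):]  # FUSE-mount form -> dbfs: URI
    if not path.startswith("/"):
        path = "/" + path
    return "dbfs:" + path                     # bare workspace path -> DBFS
```

For example, `spark.read.csv(with_dbfs_scheme("/databricks-datasets/..."))` would read the same file that `%fs ls /databricks-datasets/...` lists, since both resolve against the DBFS root.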

I tried writing dbfs: and /dbfs before the path; it's still not working.

joseroca99
New Contributor II

Update 1: Apparently the problem shows up when using display(); using show() or display(df.limit(n)) works fine. I also started using the Premium pricing tier; I'm going to see what happens if I use the free 14-day trial pricing tier.

Update 2: I tried using the dbfs: and /dbfs prefixes, still not working. I also tried using a table I got from the Marketplace with spark.read.table(), and the problem persists.

