Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

File found with %fs ls but not with spark.read

joseroca99
New Contributor II
Code:
 
wikipediaDF = (spark.read
  .option("header", True)
  .option("inferSchema", True)
  .option("sep", "\t")  # the file is tab-separated
  .csv("/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv")
)

display(wikipediaDF)
 
Error: 
Failed to store the result. Try rerunning the command.
Failed to upload command result to DBFS. Error message: PUT request to create file error HttpResponseProxy{HTTP/1.1 404 The specified filesystem does not exist. [Content-Length: 175, Content-Type: application/json;charset=utf-8, Server: Windows-Azure-HDFS/1.0 Microsoft-HTTPAPI/2.0, x-ms-error-code: FilesystemNotFound, x-ms-request-id: 614c7044-901f-004d-1bd4-d3b66f000000, x-ms-version: 2021-04-10, Date: Thu, 11 Jul 2024 20:52:49 GMT] ResponseEntityProxy{[Content-Type: application/json;charset=utf-8,Content-Length: 175,Chunked: false]}}
 
The files are open datasets shared by Databricks. I was always able to open them, but now I can't.
1 ACCEPTED SOLUTION


I think there is some kind of problem with networking/permissions to the storage account created in the managed resource group by Databricks. By default, when you run a notebook interactively by clicking Run in the notebook:

  • If the results are small, they are stored in the Azure Databricks control plane, along with the notebook's command contents and metadata.
  • Larger results are stored in the workspace storage account in your Azure subscription. Azure Databricks creates this storage account automatically and uses it for workspace system data and your workspace's DBFS root. Notebook results are stored in workspace system data storage, which is not accessible by users.

So in your case, when you limit the result set it works because small results are stored in the Azure Databricks control plane.
But when you try to display the whole dataframe without limiting it, Databricks will try to save the result in the workspace storage account. Look at the cluster logs and see if there are any errors related to the root storage account.
Maybe you have a firewall that prevents Databricks from connecting to the storage account.
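While the storage connectivity is being investigated, a workaround is to keep the rendered result small so it goes through the control-plane path instead of the workspace storage account. A minimal notebook sketch (Databricks-only; `display` and the preconfigured `spark` session are not available outside a Databricks notebook):

```python
# Workaround sketch: keep the rendered result small so it is stored in the
# control plane rather than uploaded to the workspace storage account.
# Runs only in a Databricks notebook (display() is Databricks-specific).

wikipediaDF = (spark.read
  .option("header", True)
  .option("sep", "\t")
  .csv("/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv")
)

# Small result set -> stored in the control plane, avoids the failing upload:
display(wikipediaDF.limit(1000))

# Alternatively, render to the driver output instead of the results store:
wikipediaDF.show(20, truncate=False)
```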


5 REPLIES

szymon_dybczak
Contributor III

Hi @joseroca99 ,

Try adding the filesystem scheme to your path. Something like this: dbfs:/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv


 

p4pratikjain
Contributor

Depending on where you found the file with %fs, use the appropriate filesystem prefix.
If it's in DBFS, use dbfs:/YOUR_PATH
If it's in the local file system, try file:/YOUR_PATH
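The prefix rules above can be sketched as a small helper. This is hypothetical (`normalize_path` is not a Databricks API); it only illustrates the dbfs:/ vs file:/ scheme logic:

```python
def normalize_path(path: str, local: bool = False) -> str:
    """Prepend the filesystem scheme Spark expects.

    Hypothetical helper, not a Databricks API: it only illustrates the
    dbfs:/ vs file:/ prefix rules described above.
    """
    if path.startswith(("dbfs:/", "file:/")):
        return path  # already has an explicit scheme
    if path.startswith("/dbfs/"):
        # The /dbfs FUSE mount corresponds to dbfs:/ for Spark readers.
        return "dbfs:/" + path[len("/dbfs/"):]
    scheme = "file:" if local else "dbfs:"
    return scheme + path

# Example:
normalize_path("/databricks-datasets/README.md")
# -> "dbfs:/databricks-datasets/README.md"
```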

Pratik Jain

I tried writing dbfs: and /dbfs before the path; it's still not working.

joseroca99
New Contributor II

Update 1: Apparently the problem shows up when using display(); using show() or display(df.limit(...)) works fine. I also started using the premium pricing tier, so I'm going to see what happens if I use the 14-day free trial pricing tier.

Update 2: I tried the dbfs: and /dbfs prefixes; still not working. I also tried a table I got from the Marketplace with spark.read.table(), and the problem persists.

