02-15-2022 03:24 PM
I'm trying to read a file from a Google Cloud Storage bucket. The filename starts with a period, so Spark assumes the file is hidden and won't let me read it.
My code is similar to this:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.read.format("text").load("gs://<bucket>/.myfile", wholetext=True)
df.show()
The resulting DataFrame is empty (as in, it has no rows).
When I run this on my laptop, I get the following warning:
22/02/15 16:40:58 WARN DataSource: All paths were ignored:
gs://<bucket>/.myfile
I've noticed that this applies to files starting with an underscore as well.
How can I get around this?
- Labels: Spark job
Accepted Solutions
05-04-2022 09:19 AM
I don't think there is an easy way to do this. Getting around these constraints would also break very basic functionality, such as reading Delta tables (whose underscore-prefixed _delta_log metadata relies on being hidden from data reads). I suggest you run a rename job and then read the file.
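For example, here is a minimal sketch of that approach using the google-cloud-storage client (the bucket name is a placeholder; rename_blob copies the object to the new name and deletes the original):

from google.cloud import storage
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Copy the dot-prefixed object to a name Spark's hidden-file filter accepts.
client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name
bucket.rename_blob(bucket.blob(".myfile"), "myfile")

df = spark.read.format("text").load("gs://my-bucket/myfile", wholetext=True)
df.show()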
02-16-2022 08:00 AM
Spark uses the Hadoop input API to read files, and that API ignores every file whose name starts with an underscore or a period.
I did not find a solution for this, as the hiddenFileFilter is always active.
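For reference, the check is equivalent to this small Python sketch (the function name is mine; Hadoop applies the same test to the final component of each path):

def is_visible_to_spark(name):
    # Hadoop's FileInputFormat.hiddenFileFilter rejects any name that
    # starts with "_" or "." -- this is also what keeps Delta's
    # _delta_log directory out of ordinary data reads.
    return not name.startswith("_") and not name.startswith(".")

print(is_visible_to_spark(".myfile"))  # False -- Spark skips the file
print(is_visible_to_spark("myfile"))   # True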
02-16-2022 09:13 AM
Is there any way to work around this?

02-16-2022 08:18 AM
Hi there, @Lincoln Bergeson! My name is Piper, and I'm a moderator for Databricks. Thank you for your question and welcome to the community. We'll give your peers a chance to respond and then we'll circle back if we need to.
Thanks in advance for your patience. 🙂
02-16-2022 09:13 AM
Looking forward to the answers. From my research, this looks like something that needs a special configuration or workaround, which I'm hoping Databricks can provide.
03-15-2022 09:34 PM
@Lincoln Bergeson GCS object names are very liberal: only \r and \n are invalid; everything else is valid, including the NUL character. I'm still not sure whether that helps you, though. We really do need to hack around this on the Spark side!
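As a sanity check, the object itself reads fine outside Spark; a minimal sketch with the google-cloud-storage client (placeholder bucket name):

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob(".myfile")  # placeholder bucket
print(blob.download_as_text())  # succeeds; only Spark's Hadoop layer skips it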
04-11-2022 11:51 AM
Hi @Lincoln Bergeson ,
Just a friendly follow-up. Did any of the previous responses help you to resolve your issue? Please let us know if you still need help.
04-12-2022 11:49 AM
Hi @Jose Gonzalez , none of these answers helped me, unfortunately. I'm still hoping to find a good solution to this issue.