<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Not Able To Access GCP storage bucket from Databricks in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115200#M45045</link>
    <description>&lt;P&gt;Here is an example of a properly formatted and delimited PKCS#8 private key in PEM format. This format includes the required headers and footers:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;-----BEGIN PRIVATE KEY-----
MIIBVgIBADANBgkqhkiG9w0BAQEFAASCAUAwggE8AgEAAkEAq7BFUpkGp3+LQmlQ
Yx2eqzDV+xeG8kx/sQFV18S5JhzGeIJNA72wSeukEPojtqUyX2J0CciPBh7eqclQ
2zpAswIDAQABAkAgisq4+zRdrzkwH1ITV1vpytnkO/NiHcnePQiOW0VUybPyHoGM
/jf75C5xET7ZQpBe5kx5VHsPZj0CBb3b+wSRAiEA2mPWCBytosIU/ODRfq6EiV04
lt6waE7I2uSPqIC20LcCIQDJQYIHQII+3YaPqyhGgqMexuuuGx+lDKD6/Fu/JwPb
5QIhAKthiYcYKlL9h8bjDsQhZDUACPasjzdsDEdq8inDyLOFAiEAmCr/tZwA3qeA
ZoBzI10DGPIuoKXBd3nk/eBxPkaxlEECIQCNymjsoI7GldtujVnr1qT+3yedLfHK
srDVjIT3LsvTqw==
-----END PRIVATE KEY-----&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Explanation:&lt;BR /&gt;- Headers and Footers: The key begins with &lt;CODE&gt;-----BEGIN PRIVATE KEY-----&lt;/CODE&gt; and ends with &lt;CODE&gt;-----END PRIVATE KEY-----&lt;/CODE&gt;. These delimiters are mandatory in PEM format.&lt;BR /&gt;- Base64 Encoding: The content between the delimiters is the Base64-encoded representation of the private key data.&lt;BR /&gt;- Line Breaks: The encoded data is split into lines of 64 characters for readability, though this is not strictly required by all tools.&lt;/P&gt;
&lt;P&gt;This format is widely used for storing private keys in PKCS#8 syntax, which supports various cryptographic algorithms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Further, if you are still encountering problems, I would suggest using Databricks secret scopes. That way you don't have to expose the key directly, which is a security anti-pattern.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cheers, Louis.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 10 Apr 2025 18:52:21 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2025-04-10T18:52:21Z</dc:date>
    <item>
      <title>Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/114944#M44996</link>
      <description>&lt;P&gt;While running :&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;df &lt;/SPAN&gt;&lt;SPAN&gt;=&lt;/SPAN&gt;&lt;SPAN&gt; spark.read.&lt;/SPAN&gt;&lt;SPAN&gt;format&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;"csv"&lt;/SPAN&gt;&lt;SPAN&gt;) \&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; .&lt;/SPAN&gt;&lt;SPAN&gt;option&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;"header"&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;"true"&lt;/SPAN&gt;&lt;SPAN&gt;) \&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; .&lt;/SPAN&gt;&lt;SPAN&gt;option&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;"inferSchema"&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;"true"&lt;/SPAN&gt;&lt;SPAN&gt;) \&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; .&lt;/SPAN&gt;&lt;SPAN&gt;load&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;'path'&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;df.&lt;/SPAN&gt;&lt;SPAN&gt;show&lt;/SPAN&gt;&lt;SPAN&gt;()&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;P&gt;Getting error :&amp;nbsp;&lt;SPAN&gt;java.io.IOException: Invalid PKCS8 data.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Cluster Spark Config : &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;spark.hadoop.fs.gs.auth.service.account.private.key.id {{secrets/newscope/gsaprivatekeyid}} spark.hadoop.fs.gs.auth.service.account.private.key {{secrets/newscope/gsaprivatekeynew}} spark.hadoop.google.cloud.auth.service.account.enable true spark.hadoop.fs.gs.project.id &amp;lt;projectid&amp;gt; spark.hadoop.fs.gs.auth.service.account.email &amp;lt;email id&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Have followed the document :&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/connect/storage/gcs" target="_blank"&gt;Connect to Google Cloud Storage - Azure Databricks | Microsoft Learn&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Please 
Help&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Apr 2025 10:41:43 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/114944#M44996</guid>
      <dc:creator>ShivangiB</dc:creator>
      <dc:date>2025-04-09T10:41:43Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/114955#M45000</link>
      <description>&lt;H4&gt;Troubleshooting and Resolution for &lt;CODE&gt;java.io.IOException: Invalid PKCS8 data&lt;/CODE&gt;&lt;/H4&gt;
&lt;DIV class="paragraph"&gt;The error &lt;CODE&gt;java.io.IOException: Invalid PKCS8 data&lt;/CODE&gt; typically occurs when there is an issue with the private key format or its storage in Databricks secrets. Based on the provided cluster Spark configurations and the referenced document, here are the potential causes and their resolutions:&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Step 1: Validate the Private Key&lt;/STRONG&gt; - Ensure the private key stored in the secret is in the correct PKCS8 format. - Sometimes, when copying or storing the key, additional spaces, newlines, or formatting issues can occur. Verify that the key matches the exact format listed in the JSON key file downloaded from Google Cloud Platform (GCP). - Example of a correct private key format in the JSON file:&lt;/DIV&gt;
&lt;PRE&gt;&lt;CODE class="markdown-code-json"&gt;"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQI...\n-----END PRIVATE KEY-----\n"&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Step 2: Check Databricks Secret Configuration&lt;/STRONG&gt; - Ensure the private key and private key ID are properly stored in the Databricks secret. - Verify the secrets by running the following in a Databricks notebook:&lt;/DIV&gt;
&lt;PRE&gt;&lt;CODE class="markdown-code-python"&gt;dbutils.secrets.get(scope="newscope", key="gsaprivatekeynew")
dbutils.secrets.get(scope="newscope", key="gsaprivatekeyid")&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV class="paragraph"&gt;- The secrets should retrieve the stored values without additional whitespace or errors.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Step 3: Confirm Spark Configuration&lt;/STRONG&gt; - Double-check that the cluster Spark configuration matches the setup described in the document: - &lt;EM&gt;Service Account Email:&lt;/EM&gt; Ensure this matches the client_email value from your GCP service account JSON. - &lt;EM&gt;Project ID:&lt;/EM&gt; Verify the project ID is correct and matches your GCP project.&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Here is the corrected Spark configuration example:&lt;/DIV&gt;
&lt;PRE&gt;&lt;CODE class="markdown-code-properties"&gt;spark.hadoop.google.cloud.auth.service.account.enable true
spark.hadoop.fs.gs.auth.service.account.email &amp;lt;service-account-email&amp;gt;
spark.hadoop.fs.gs.project.id &amp;lt;project-id&amp;gt;
spark.hadoop.fs.gs.auth.service.account.private.key {{secrets/newscope/gsaprivatekeynew}}
spark.hadoop.fs.gs.auth.service.account.private.key.id {{secrets/newscope/gsaprivatekeyid}}&lt;/CODE&gt;&lt;/PRE&gt;
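On the whitespace point from Step 2: a common copy-paste artifact is a key pasted with literal backslash-n sequences instead of real newlines, or with stray surrounding spaces. Here is a small hypothetical helper (plain Python, the function name is mine, not a Databricks API) you could run before storing the value in the secret scope:

```python
def normalize_pem(raw: str) -> str:
    # Strip surrounding whitespace and turn literal "\n" sequences
    # (a frequent copy-paste artifact) into real newlines.
    key = raw.strip().replace("\\n", "\n")
    # PEM bodies conventionally end with a trailing newline.
    if not key.endswith("\n"):
        key += "\n"
    return key

# Dummy key pasted with escaped newlines and extra spaces:
pasted = '  -----BEGIN PRIVATE KEY-----\\nMIIEvQI\\n-----END PRIVATE KEY-----  '
print(normalize_pem(pasted))
```

This is only a sketch of the normalization idea; the actual secret should still be the full key exactly as it appears in the GCP JSON file.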
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Step 4: Test with Minimal Configuration&lt;/STRONG&gt; - Create a basic test with only the required Spark configuration to isolate the issue:&lt;/DIV&gt;
&lt;PRE&gt;&lt;CODE class="markdown-code-python"&gt;df = spark.read.format("csv").option("header", "true").load("gs://&amp;lt;bucket-name&amp;gt;/&amp;lt;path&amp;gt;")
df.show()&lt;/CODE&gt;&lt;/PRE&gt;
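Relatedly, on the Step 1 key-format point: when Python reads the downloaded service-account JSON, `json.loads` already converts the `\n` escapes in the `private_key` field into real newlines, so the decoded value has the multi-line PEM layout and can be stored as-is. A minimal sketch with dummy key material:

```python
import json

# Dummy service-account JSON fragment; real files have many more fields.
raw = '{"private_key": "-----BEGIN PRIVATE KEY-----\\nMIIEvQI\\n-----END PRIVATE KEY-----\\n"}'
pem = json.loads(raw)["private_key"]

# The "\n" escapes are now real newlines, not backslash-n text.
assert pem.startswith("-----BEGIN PRIVATE KEY-----\n")
assert pem.endswith("-----END PRIVATE KEY-----\n")
```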
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Wed, 09 Apr 2025 11:50:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/114955#M45000</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-04-09T11:50:18Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115054#M45018</link>
      <description>&lt;P&gt;what value should i store in private key, just the part between begin and end. As I am saving that only still getting error.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Apr 2025 18:14:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115054#M45018</guid>
      <dc:creator>ShivangiB</dc:creator>
      <dc:date>2025-04-09T18:14:52Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115166#M45036</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34815"&gt;@Louis_Frolio&lt;/a&gt;&amp;nbsp; can you please suggest .&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;what value should i store in private key, just the part between begin and end. As I am saving that only still getting error.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 10 Apr 2025 14:25:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115166#M45036</guid>
      <dc:creator>ShivangiB</dc:creator>
      <dc:date>2025-04-10T14:25:56Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115179#M45040</link>
      <description>&lt;P&gt;What is the error you are getting? More context is needed here.&lt;/P&gt;</description>
      <pubDate>Thu, 10 Apr 2025 15:41:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115179#M45040</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-04-10T15:41:00Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115184#M45043</link>
      <description>&lt;P&gt;same error&amp;nbsp;&lt;SPAN&gt;java.io.IOException: Invalid PKCS8 data.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;"private_key": "-----BEGIN PRIVATE KEY-----\n --have stored this value present between these two--\n-----END PRIVATE KEY-----\n",&lt;/P&gt;</description>
      <pubDate>Thu, 10 Apr 2025 16:14:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115184#M45043</guid>
      <dc:creator>ShivangiB</dc:creator>
      <dc:date>2025-04-10T16:14:44Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115200#M45045</link>
      <description>&lt;P&gt;Here is an example of a properly formatted and delimited PKCS#8 private key in PEM format. This format includes the required headers and footers:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;-----BEGIN PRIVATE KEY-----
MIIBVgIBADANBgkqhkiG9w0BAQEFAASCAUAwggE8AgEAAkEAq7BFUpkGp3+LQmlQ
Yx2eqzDV+xeG8kx/sQFV18S5JhzGeIJNA72wSeukEPojtqUyX2J0CciPBh7eqclQ
2zpAswIDAQABAkAgisq4+zRdrzkwH1ITV1vpytnkO/NiHcnePQiOW0VUybPyHoGM
/jf75C5xET7ZQpBe5kx5VHsPZj0CBb3b+wSRAiEA2mPWCBytosIU/ODRfq6EiV04
lt6waE7I2uSPqIC20LcCIQDJQYIHQII+3YaPqyhGgqMexuuuGx+lDKD6/Fu/JwPb
5QIhAKthiYcYKlL9h8bjDsQhZDUACPasjzdsDEdq8inDyLOFAiEAmCr/tZwA3qeA
ZoBzI10DGPIuoKXBd3nk/eBxPkaxlEECIQCNymjsoI7GldtujVnr1qT+3yedLfHK
srDVjIT3LsvTqw==
-----END PRIVATE KEY-----&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Explanation:&lt;BR /&gt;- Headers and Footers: The key begins with &lt;CODE&gt;-----BEGIN PRIVATE KEY-----&lt;/CODE&gt; and ends with &lt;CODE&gt;-----END PRIVATE KEY-----&lt;/CODE&gt;. These delimiters are mandatory in PEM format.&lt;BR /&gt;- Base64 Encoding: The content between the delimiters is the Base64-encoded representation of the private key data.&lt;BR /&gt;- Line Breaks: The encoded data is split into lines of 64 characters for readability, though this is not strictly required by all tools.&lt;/P&gt;
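To make the delimiter and Base64 points concrete, here is a minimal dependency-free sketch (the function name is mine, not a standard API) that checks a string for the PEM delimiters and a Base64-decodable body. It does not verify the key cryptographically:

```python
import base64
import re

def looks_like_pkcs8_pem(key: str) -> bool:
    # Structural check only: correct BEGIN/END delimiters and a body
    # that decodes as Base64 once whitespace is removed.
    header = "-----BEGIN PRIVATE KEY-----"
    footer = "-----END PRIVATE KEY-----"
    key = key.strip()
    if not (key.startswith(header) and key.endswith(footer)):
        return False
    body = re.sub(r"\s+", "", key[len(header):-len(footer)])
    try:
        base64.b64decode(body, validate=True)
    except ValueError:
        return False
    return True
```

A key stored as only the middle Base64 body, without the delimiters, would fail this check, which matches the Invalid PKCS8 data symptom discussed above.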
&lt;P&gt;This format is widely used for storing private keys in PKCS#8 syntax, which supports various cryptographic algorithms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Further, if you are still encountering problems, I would suggest using Databricks secret scopes. That way you don't have to expose the key directly, which is a security anti-pattern.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cheers, Louis.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 10 Apr 2025 18:52:21 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115200#M45045</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-04-10T18:52:21Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115271#M45055</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34815"&gt;@Louis_Frolio&lt;/a&gt;&amp;nbsp;after updating the key we are getting different error:&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;java.io.IOException: Error accessing gs://gcp-storage/FlatFiles/test_data.csv&lt;/DIV&gt;&lt;DIV&gt;---------------------------------------------------------------------------&lt;/DIV&gt;&lt;DIV&gt;Py4JJavaError&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Traceback (most recent call last)&lt;/DIV&gt;&lt;DIV&gt;File &amp;lt;command-5419098352410353&amp;gt;, line 4&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 1 df = spark.read.format("csv") \&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 2&amp;nbsp; &amp;nbsp; &amp;nbsp;.option("header", "true") \&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 3&amp;nbsp; &amp;nbsp; &amp;nbsp;.option("inferSchema", "true") \&lt;/DIV&gt;&lt;DIV&gt;----&amp;gt; 4&amp;nbsp; &amp;nbsp; &amp;nbsp;.load('gs://gcp-storage/FlatFiles/test_data.csv')&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 6 df.show()&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Py4JJavaError: An error occurred while calling o407.load.&lt;/DIV&gt;&lt;DIV&gt;: java.io.IOException: Error accessing gs://gcp-storage/FlatFiles/test_data.csv&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getObject(GoogleCloudStorageImpl.java:2140)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;Caused by: shaded.databricks.com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request&lt;/DIV&gt;&lt;DIV&gt;POST &lt;A href="https://oauth2.googleapis.com/token" target="_blank"&gt;https://oauth2.googleapis.com/token&lt;/A&gt;&lt;/DIV&gt;&lt;DIV&gt;{&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; "error" : "invalid_grant",&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp; "error_description" : "Invalid grant: account not found"&lt;/DIV&gt;&lt;DIV&gt;}&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:103)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:308)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:324)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.cloud.hadoop.util.CredentialFactory$GoogleCredentialWithRetry.executeRefreshToken(CredentialFactory.java:170)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:470)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.auth.oauth2.Credential.intercept(Credential.java:201)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.cloud.hadoop.util.ChainingHttpRequestInitializer$2.intercept(ChainingHttpRequestInitializer.java:98)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:880)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:514)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at
shaded.databricks.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:455)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:565)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;at shaded.databricks.com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getObject(GoogleCloudStorageImpl.java:2134)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Fri, 11 Apr 2025 11:03:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115271#M45055</guid>
      <dc:creator>ShivangiB</dc:creator>
      <dc:date>2025-04-11T11:03:59Z</dc:date>
    </item>
    <item>
      <title>Re: Not Able To Access GCP storage bucket from Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115279#M45057</link>
      <description>&lt;P&gt;At this point it is outside my area of knowledge and I don't have any further suggestions. You may want to consider contacting Databricks Support if you have a support contract.&lt;/P&gt;</description>
      <pubDate>Fri, 11 Apr 2025 12:33:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/not-able-to-access-gcp-storage-bucket-from-databricks/m-p/115279#M45057</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-04-11T12:33:44Z</dc:date>
    </item>
  </channel>
</rss>

