02-12-2025 09:29 AM
I'm running Databricks on Azure and trying to read a CSV file from a Google Cloud Storage (GCS) bucket using Spark. However, despite configuring Spark with a Google service account key, I'm encountering the following error:
Error getting access token from metadata server at: http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token
I've configured Spark with these settings, following this document, to ensure it uses the service account for authentication: https://docs.databricks.com/en/connect/storage/gcs.html
spark.conf.set("spark.hadoop.google.cloud.auth.service.account.enable", "true")
spark.conf.set("spark.hadoop.fs.gs.auth.service.account.email", client_email)
spark.conf.set("spark.hadoop.fs.gs.project.id", project_id)
spark.conf.set("spark.hadoop.fs.gs.auth.service.account.private.key", private_key)
spark.conf.set("spark.hadoop.fs.gs.auth.service.account.private.key.id", private_key_id)
spark.conf.set("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
spark.conf.set("spark.hadoop.fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
Attempting to read a test CSV file from my GCS bucket:
gcs_path = "gs://ddfsdfts/events/31dfsdfs4_2025_02_01_000000000000.csv"
df = spark.read.format("csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .load(gcs_path)
df.show()
The error occurs when I call df.show().
I've seen a few other questions like this, but no straightforward answers. Why is it trying to reach the metadata server for a token?
Accepted Solutions
02-12-2025 10:36 AM
I figured it out! I hope this helps someone else: I had to update the Spark configuration on the cluster I was using, following this section of the docs:
https://docs.databricks.com/en/connect/storage/gcs.html#global-configuration
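For anyone who lands here, the cluster-level ("global") configuration that page describes goes into the cluster's Spark config (Advanced options), not into session-level spark.conf.set calls. A rough sketch is below; the secret scope and key names are placeholders you'd replace with your own:

```
spark.hadoop.google.cloud.auth.service.account.enable true
spark.hadoop.fs.gs.auth.service.account.email <client-email>
spark.hadoop.fs.gs.project.id <project-id>
spark.hadoop.fs.gs.auth.service.account.private.key {{secrets/<scope>/<private-key-secret>}}
spark.hadoop.fs.gs.auth.service.account.private.key.id {{secrets/<scope>/<private-key-id-secret>}}
```

Setting these at cluster level means the GCS connector picks up the service account credentials at filesystem initialization, instead of falling back to the GCE metadata endpoint (169.254.169.254), which doesn't exist on Azure VMs.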

