Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

URGENT: Delta writes to S3 fail after workspace migrated to Premium

DBU100725
New Contributor II

Delta writes to S3 fail after workspace migrated to Premium (401 “Credential was not sent or unsupported type”)

Summary

After our Databricks workspace migrated from Standard to Premium, all Delta writes to S3 started failing with:

com.databricks.s3commit.DeltaCommitRejectException: ... 401: Credential was not sent or was of an unsupported type for this API.

The same code ran for years on Standard. Reads from Delta work, and read/write as CSV/Parquet/TXT still work. The failure occurs only during the Delta commit phase.

Environment

  • Cloud: AWS
  • Workspace tier: Premium (recently migrated from Standard)
  • Compute: classic (non-serverless) all-purpose/job clusters
  • Auth to S3: static S3A keys set in notebook at runtime
  • Buckets involved: s3://<bucket-name>/... (Delta target), s3://<bucket-name>/... (staging)
  • We also have an IAM Role (cluster role) with S3 permissions (see “IAM details”)

Exact error (top of stack)

DeltaCommitRejectException: rejected by server 26 times, most recent error: 401: Credential was not sent or was of an unsupported type for this API.

    at com.databricks.s3commit.DeltaCommitClient.commitWithRetry...

    at com.databricks.tahoe.store.EnhancedS3AFileSystem.putIfAbsent...

    ...

Minimal repro

from pyspark.sql.functions import lit

# Static keys (legacy); `config` holds the credentials and is loaded earlier in the notebook
sc._jsc.hadoopConfiguration().set('fs.s3a.awsAccessKeyId',     config['AWS_ACCESS_KEY'])
sc._jsc.hadoopConfiguration().set('fs.s3a.awsSecretAccessKey', config['AWS_SECRET_KEY'])

# (no session token)

# Simple delta write
(spark.range(1).withColumn("date", lit("1970-01-01"))
 .write.format("delta")
 .mode("overwrite").partitionBy("date")
 .save("s3://<bucket-name>/path/ops/_tmp_delta_write_check"))

# Result: fails with 401 during Delta commit

Control tests

  • spark.read.format("delta").load("s3://<bucket-name>/path/...") → works
  • .write.mode("overwrite").parquet("s3a://<bucket-name>/path/staging/...") → works
  • .write.csv(...) / .write.text(...) → work
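
For completeness, here is roughly the cell we used for the control tests; the bucket name, path prefixes, and the _tmp_* suffixes below are redacted placeholders:

from pyspark.sql.functions import lit

delta_src = "s3://<bucket-name>/path/..."        # existing Delta table (read-only check)
staging   = "s3a://<bucket-name>/path/staging"   # staging prefix for the write checks

# 1) Delta read -> works
spark.read.format("delta").load(delta_src).limit(5).show()

# 2) Parquet write -> works
tiny = spark.range(1).withColumn("date", lit("1970-01-01"))
tiny.write.mode("overwrite").parquet(f"{staging}/_tmp_parquet_check")

# 3) CSV / text writes -> work
tiny.write.mode("overwrite").csv(f"{staging}/_tmp_csv_check")
tiny.selectExpr("cast(id as string) AS value").write.mode("overwrite").text(f"{staging}/_tmp_text_check")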

What changed

Only the workspace tier (Standard → Premium). Code, buckets, and IAM remained the same.

IAM details (high level)

- Cluster role has S3 permissions (Get/Put/Delete on object prefixes; ListBucket on buckets).

- CSV/Parquet I/O proves general S3 access is OK.

- Happy to share a redacted IAM policy if needed.
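
While I pull together the redacted policy, here is a quick sanity check we can run from a notebook to confirm the cluster role itself can list the Delta target bucket (the bucket name and prefix are placeholders; boto3 picks up the instance-profile credentials, no static keys involved):

import boto3

s3 = boto3.client("s3")  # uses the cluster's instance-profile credentials
resp = s3.list_objects_v2(Bucket="<bucket-name>", Prefix="path/ops/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"])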

Spark/cluster config (relevant bits)

- We set static keys in the first cell via fs.s3a.awsAccessKeyId/SecretAccessKey.

- We have set the IAM cluster/instance profile for the workspace.
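
To isolate whether the static keys or the instance profile is what the Delta commit path ends up using, this is the next check we plan to run: the same tiny write as the repro, but with the legacy static S3A keys cleared so only the instance profile is in play (the _role_only path suffix is made up):

from pyspark.sql.functions import lit

# Clear the legacy static keys so the write relies on the instance profile only
hconf = sc._jsc.hadoopConfiguration()
hconf.unset("fs.s3a.awsAccessKeyId")
hconf.unset("fs.s3a.awsSecretAccessKey")

(spark.range(1).withColumn("date", lit("1970-01-01"))
 .write.format("delta")
 .mode("overwrite").partitionBy("date")
 .save("s3://<bucket-name>/path/ops/_tmp_delta_write_check_role_only"))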

1 REPLY

DBU100725
New Contributor II

The update/append to Delta on S3 fails with both Databricks Runtime 13.3 and 15.4.
