<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic URGENT: Delta writes to S3 fail after workspace migrated to Premium in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/134067#M50008</link>
    <description>&lt;H1&gt;&lt;FONT size="4"&gt;Delta writes to S3 fail after workspace migrated to Premium (401 “Credential was not sent or unsupported type”)&lt;/FONT&gt;&lt;/H1&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Summary&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;After our Databricks workspace migrated from Standard to Premium, all Delta writes to S3 started failing with:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;com.databricks.s3commit.DeltaCommitRejectException: ... 401: Credential was not sent or was of an unsupported type for this API.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;The same code ran for years on Standard. Reads from Delta work, and read/write as CSV/Parquet/TXT still work. The failure occurs only during the Delta commit phase.&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Environment&lt;/FONT&gt;&lt;/H2&gt;&lt;UL&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Cloud: AWS&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Workspace tier: Premium (recently migrated from Standard)&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Compute: classic (non-serverless) all-purpose/job clusters&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Auth to S3: static S3A keys set in notebook at runtime&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Buckets involved: s3://&amp;lt;bucket-name&amp;gt;/... (Delta target), s3://&amp;lt;bucket-name&amp;gt;/... 
(staging)&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;We also have an IAM Role (cluster role) with S3 permissions (see “IAM details”)&lt;/FONT&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Exact error (top of stack)&lt;/FONT&gt;&lt;/H2&gt;&lt;PRE&gt;DeltaCommitRejectException: rejected by server 26 times, most recent error: 401: Credential was not sent or was of an unsupported type for this API.&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; at com.databricks.s3commit.DeltaCommitClient.commitWithRetry...&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; at com.databricks.tahoe.store.EnhancedS3AFileSystem.putIfAbsent...&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; ...&lt;/PRE&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Minimal repro&lt;/FONT&gt;&lt;/H2&gt;&lt;PRE&gt;from pyspark.sql.functions import lit&lt;BR /&gt;&lt;BR /&gt;# Static keys (legacy)&lt;BR /&gt;sc._jsc.hadoopConfiguration().set('fs.s3a.awsAccessKeyId',&amp;nbsp;&amp;nbsp; config['AWS_ACCESS_KEY'])&lt;BR /&gt;sc._jsc.hadoopConfiguration().set('fs.s3a.awsSecretAccessKey', config['AWS_SECRET_KEY'])&lt;BR /&gt;&lt;BR /&gt;# (no session token)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;# Simple Delta write&lt;BR /&gt;(spark.range(1).withColumn("date", lit("1970-01-01"))&lt;BR /&gt;&amp;nbsp;.write.format("delta")&lt;BR /&gt;&amp;nbsp;.mode("overwrite").partitionBy("date")&lt;BR /&gt;&amp;nbsp;.save("s3://&amp;lt;bucket-name&amp;gt;/path/ops/_tmp_delta_write_check"))&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# Result: fails with 401 during Delta commit&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Control tests&lt;/FONT&gt;&lt;/H2&gt;&lt;UL&gt;&lt;LI&gt;spark.read.format("delta").load("s3://&amp;lt;bucket-name&amp;gt;/path/...") → works&lt;/LI&gt;&lt;LI&gt;.write.mode("overwrite").parquet("s3a://&amp;lt;bucket-name&amp;gt;/path/staging/...") → works&lt;/LI&gt;&lt;LI&gt;.write.csv(...) / .write.text(...) 
→ work&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;&lt;FONT size="4"&gt;What changed&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;Only the workspace tier (Standard → Premium). Code, buckets, and IAM remained the same.&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;IAM details (high level)&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;- Cluster role has S3 permissions (Get/Put/Delete on object prefixes; ListBucket on buckets).&lt;/P&gt;&lt;P&gt;- CSV/Parquet I/O proves general S3 access is OK.&lt;/P&gt;&lt;P&gt;- Happy to share a redacted IAM policy if needed.&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Spark/cluster config (relevant bits)&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;- We set static keys in the first cell via fs.s3a.awsAccessKeyId/SecretAccessKey.&lt;/P&gt;&lt;P&gt;- We have set the IAM cluster/instance profile for the workspace.&lt;/P&gt;</description>
    <pubDate>Tue, 07 Oct 2025 13:27:05 GMT</pubDate>
    <dc:creator>DBU100725</dc:creator>
    <dc:date>2025-10-07T13:27:05Z</dc:date>
    <item>
      <title>URGENT: Delta writes to S3 fail after workspace migrated to Premium</title>
      <link>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/134067#M50008</link>
      <description>&lt;H1&gt;&lt;FONT size="4"&gt;Delta writes to S3 fail after workspace migrated to Premium (401 “Credential was not sent or unsupported type”)&lt;/FONT&gt;&lt;/H1&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Summary&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;After our Databricks workspace migrated from Standard to Premium, all Delta writes to S3 started failing with:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;com.databricks.s3commit.DeltaCommitRejectException: ... 401: Credential was not sent or was of an unsupported type for this API.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;The same code ran for years on Standard. Reads from Delta work, and read/write as CSV/Parquet/TXT still work. The failure occurs only during the Delta commit phase.&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Environment&lt;/FONT&gt;&lt;/H2&gt;&lt;UL&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Cloud: AWS&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Workspace tier: Premium (recently migrated from Standard)&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Compute: classic (non-serverless) all-purpose/job clusters&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Auth to S3: static S3A keys set in notebook at runtime&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;Buckets involved: s3://&amp;lt;bucket-name&amp;gt;/... (Delta target), s3://&amp;lt;bucket-name&amp;gt;/... 
(staging)&lt;/FONT&gt;&lt;/LI&gt;&lt;LI&gt;&lt;FONT size="2"&gt;We also have an IAM Role (cluster role) with S3 permissions (see “IAM details”)&lt;/FONT&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Exact error (top of stack)&lt;/FONT&gt;&lt;/H2&gt;&lt;PRE&gt;DeltaCommitRejectException: rejected by server 26 times, most recent error: 401: Credential was not sent or was of an unsupported type for this API.&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; at com.databricks.s3commit.DeltaCommitClient.commitWithRetry...&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; at com.databricks.tahoe.store.EnhancedS3AFileSystem.putIfAbsent...&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; ...&lt;/PRE&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Minimal repro&lt;/FONT&gt;&lt;/H2&gt;&lt;PRE&gt;from pyspark.sql.functions import lit&lt;BR /&gt;&lt;BR /&gt;# Static keys (legacy)&lt;BR /&gt;sc._jsc.hadoopConfiguration().set('fs.s3a.awsAccessKeyId',&amp;nbsp;&amp;nbsp; config['AWS_ACCESS_KEY'])&lt;BR /&gt;sc._jsc.hadoopConfiguration().set('fs.s3a.awsSecretAccessKey', config['AWS_SECRET_KEY'])&lt;BR /&gt;&lt;BR /&gt;# (no session token)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;# Simple Delta write&lt;BR /&gt;(spark.range(1).withColumn("date", lit("1970-01-01"))&lt;BR /&gt;&amp;nbsp;.write.format("delta")&lt;BR /&gt;&amp;nbsp;.mode("overwrite").partitionBy("date")&lt;BR /&gt;&amp;nbsp;.save("s3://&amp;lt;bucket-name&amp;gt;/path/ops/_tmp_delta_write_check"))&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# Result: fails with 401 during Delta commit&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Control tests&lt;/FONT&gt;&lt;/H2&gt;&lt;UL&gt;&lt;LI&gt;spark.read.format("delta").load("s3://&amp;lt;bucket-name&amp;gt;/path/...") → works&lt;/LI&gt;&lt;LI&gt;.write.mode("overwrite").parquet("s3a://&amp;lt;bucket-name&amp;gt;/path/staging/...") → works&lt;/LI&gt;&lt;LI&gt;.write.csv(...) / .write.text(...) 
→ work&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;&lt;FONT size="4"&gt;What changed&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;Only the workspace tier (Standard → Premium). Code, buckets, and IAM remained the same.&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;IAM details (high level)&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;- Cluster role has S3 permissions (Get/Put/Delete on object prefixes; ListBucket on buckets).&lt;/P&gt;&lt;P&gt;- CSV/Parquet I/O proves general S3 access is OK.&lt;/P&gt;&lt;P&gt;- Happy to share a redacted IAM policy if needed.&lt;/P&gt;&lt;H2&gt;&lt;FONT size="4"&gt;Spark/cluster config (relevant bits)&lt;/FONT&gt;&lt;/H2&gt;&lt;P&gt;- We set static keys in the first cell via fs.s3a.awsAccessKeyId/SecretAccessKey.&lt;/P&gt;&lt;P&gt;- We have set the IAM cluster/instance profile for the workspace.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Oct 2025 13:27:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/134067#M50008</guid>
      <dc:creator>DBU100725</dc:creator>
      <dc:date>2025-10-07T13:27:05Z</dc:date>
    </item>
    <item>
      <title>Re: URGENT: Delta writes to S3 fail after workspace migrated to Premium</title>
      <link>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/134089#M50016</link>
      <description>&lt;P&gt;The update/append to Delta on S3 fails with both Databricks Runtime 13.3 and 15.4.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Oct 2025 15:05:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/134089#M50016</guid>
      <dc:creator>DBU100725</dc:creator>
      <dc:date>2025-10-07T15:05:41Z</dc:date>
    </item>
    <item>
      <title>Re: URGENT: Delta writes to S3 fail after workspace migrated to Premium</title>
      <link>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/135571#M50384</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/32140"&gt;@DBU100725&lt;/a&gt;&amp;nbsp;- Are you using a No isolation shared cluster? Can you check if &lt;A href="https://docs.databricks.com/aws/en/admin/account-settings/no-isolation-shared#gsc.tab=0" target="_self"&gt;this&lt;/A&gt; was turned ON in your account?&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="dkushari_0-1761072746919.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/20897i41A831910103F329/image-size/medium?v=v2&amp;amp;px=400" role="button" title="dkushari_0-1761072746919.png" alt="dkushari_0-1761072746919.png" /&gt;&lt;/span&gt;&lt;/P&gt;
</description>
      <pubDate>Tue, 21 Oct 2025 18:52:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/urgent-delta-writes-to-s3-fail-after-workspace-migrated-to/m-p/135571#M50384</guid>
      <dc:creator>dkushari</dc:creator>
      <dc:date>2025-10-21T18:52:47Z</dc:date>
    </item>
  </channel>
</rss>

