<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic &quot;AmazonS3Exception: The bucket is in this region&quot; error in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/quot-amazons3exception-the-bucket-is-in-this-region-quot-error/m-p/28909#M20674</link>
    <description>&lt;P&gt;I have read access to an S3 bucket in an AWS account that is not mine. For more than a year I've had a job successfully reading from that bucket using dbutils.fs.mount(...) and sqlContext.read.json(...). Recently the job started failing on the sqlContext.read.json() command with the exception: "com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this region: us-east-1. Please use this region to retry the request." My Databricks Cloud platform is in us-west-2, and the bucket may have been moved, but as far as I understand from this question that shouldn't be a problem: &lt;A href="https://forums.databricks.com/questions/416/does-my-s3-data-need-to-be-in-the-same-aws-region.html" target="_blank"&gt;https://forums.databricks.com/questions/416/does-my-s3-data-need-to-be-in-the-same-aws-region.html&lt;/A&gt;. I have no problem accessing the bucket with boto3 (with no need to specify a region). I've also tried setting the cluster's 'spark.hadoop.fs.s3a.endpoint' to 's3.us-east-1.amazonaws.com'. Surprisingly, this resulted in the same error on the mount() command, this time saying that the bucket is in the 'us-west-2' region.&lt;/P&gt;
&lt;P&gt;I'm a bit confused about the possible causes of this error and would appreciate some pointers.&lt;/P&gt;
&lt;P&gt;Thanks!&lt;/P&gt;</description>
    <pubDate>Sun, 18 Feb 2018 11:57:10 GMT</pubDate>
    <dc:creator>DanielAnderson</dc:creator>
    <dc:date>2018-02-18T11:57:10Z</dc:date>
    <item>
      <title>"AmazonS3Exception: The bucket is in this region" error</title>
      <link>https://community.databricks.com/t5/data-engineering/quot-amazons3exception-the-bucket-is-in-this-region-quot-error/m-p/28909#M20674</link>
      <description>&lt;P&gt;I have read access to an S3 bucket in an AWS account that is not mine. For more than a year I've had a job successfully reading from that bucket using dbutils.fs.mount(...) and sqlContext.read.json(...). Recently the job started failing on the sqlContext.read.json() command with the exception: "com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this region: us-east-1. Please use this region to retry the request." My Databricks Cloud platform is in us-west-2, and the bucket may have been moved, but as far as I understand from this question that shouldn't be a problem: &lt;A href="https://forums.databricks.com/questions/416/does-my-s3-data-need-to-be-in-the-same-aws-region.html" target="_blank"&gt;https://forums.databricks.com/questions/416/does-my-s3-data-need-to-be-in-the-same-aws-region.html&lt;/A&gt;. I have no problem accessing the bucket with boto3 (with no need to specify a region). I've also tried setting the cluster's 'spark.hadoop.fs.s3a.endpoint' to 's3.us-east-1.amazonaws.com'. Surprisingly, this resulted in the same error on the mount() command, this time saying that the bucket is in the 'us-west-2' region.&lt;/P&gt;
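For concreteness, the endpoint override I tried amounts to this (a sketch only; the regional endpoint host pattern is the standard S3 one, and the session-level spark.conf.set line is my assumed equivalent of the cluster-level Spark config I actually used):

```python
def s3_endpoint(region):
    # Standard S3 regional endpoint host, e.g. "s3.us-east-1.amazonaws.com".
    return "s3.{}.amazonaws.com".format(region)

# What I set on the cluster, expressed at session level (assumed equivalent):
# spark.conf.set("spark.hadoop.fs.s3a.endpoint", s3_endpoint("us-east-1"))
```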
&lt;P&gt;I'm a bit confused about the possible causes of this error and would appreciate some pointers.&lt;/P&gt;
&lt;P&gt;Thanks!&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 18 Feb 2018 11:57:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/quot-amazons3exception-the-bucket-is-in-this-region-quot-error/m-p/28909#M20674</guid>
      <dc:creator>DanielAnderson</dc:creator>
      <dc:date>2018-02-18T11:57:10Z</dc:date>
    </item>
    <item>
      <title>Re: "AmazonS3Exception: The bucket is in this region" error</title>
      <link>https://community.databricks.com/t5/data-engineering/quot-amazons3exception-the-bucket-is-in-this-region-quot-error/m-p/28910#M20675</link>
      <description>&lt;P&gt;@andersource&lt;/P&gt;
&lt;P&gt;Looks like the bucket is in &lt;CODE&gt;us-east-1&lt;/CODE&gt; but your &lt;CODE&gt;AmazonS3&lt;/CODE&gt; client is configured for &lt;CODE&gt;us-west-2&lt;/CODE&gt;. Can you try configuring the client to use &lt;CODE&gt;us-east-1&lt;/CODE&gt;?&lt;/P&gt;
&lt;P&gt;I hope that works for you. Thank you!&lt;/P&gt;
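A minimal sketch of how you might resolve the bucket's actual region and pin a client to it (boto3 assumed; note the GetBucketLocation quirk where a us-east-1 bucket comes back with a null LocationConstraint, and legacy eu-west-1 buckets report "EU"):

```python
def normalize_bucket_region(location_constraint):
    # GetBucketLocation returns None (or an empty string) for buckets
    # in us-east-1, and the legacy alias "EU" for eu-west-1.
    if not location_constraint:
        return "us-east-1"
    if location_constraint == "EU":
        return "eu-west-1"
    return location_constraint

def client_for_bucket(bucket_name):
    # Ask S3 where the bucket lives, then build a client pinned there.
    import boto3  # requires AWS credentials with s3:GetBucketLocation
    probe = boto3.client("s3")
    loc = probe.get_bucket_location(Bucket=bucket_name)["LocationConstraint"]
    return boto3.client("s3", region_name=normalize_bucket_region(loc))
```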
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 08 Jun 2019 06:59:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/quot-amazons3exception-the-bucket-is-in-this-region-quot-error/m-p/28910#M20675</guid>
      <dc:creator>Chandan</dc:creator>
      <dc:date>2019-06-08T06:59:23Z</dc:date>
    </item>
  </channel>
</rss>

