
Connecting to Serverless Redshift from a Databricks Notebook

arunak
New Contributor

Hello Experts,

A new Databricks user here. I am trying to access a Redshift serverless table from a Databricks notebook. Here is what happens when I try the code below:

 
# Read a Redshift serverless table with the Redshift connector
df = (spark.read.format("redshift")
    .option("dbtable", "public.customer")            # source table
    .option("tempdir", "s3://BLAH/rs-temp/")         # S3 staging directory used by the connector
    .option("url", "jdbc:redshift://BLAH:5439/dev")  # Redshift JDBC endpoint
    .option("user", "user")
    .option("password", "password")
    .load())
df.show(10, False)

It fails with the error below:

IllegalArgumentException:
requirement failed: You must specify a method for authenticating Redshift's connection to S3 (aws_iam_role, forward_spark_s3_credentials, or temporary_aws_*. For a discussion of the differences between these options, please see the README.

If I change the format to "jdbc" (sketched below), it works with no issue. I am on 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12).
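
For reference, the plain JDBC read looks roughly like this (a sketch reusing the placeholders from the snippet above; no driver option is shown since Databricks Runtime bundles a Redshift JDBC driver):

# Plain JDBC read: goes straight over JDBC, no S3 tempdir involved
df_jdbc = (spark.read.format("jdbc")
    .option("url", "jdbc:redshift://BLAH:5439/dev")
    .option("dbtable", "public.customer")
    .option("user", "user")
    .option("password", "password")
    .load())
df_jdbc.show(10, False)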

I don't have an instance profile role. Why doesn't format("redshift") use the provided username and password to connect to Redshift? What config should I be using?


1 REPLY

shan_chandra
Esteemed Contributor

@arunak - you need to set forward_spark_s3_credentials to true during the read. With this option, the connector detects the credentials Spark is already using to authenticate to the S3 bucket and forwards them to Redshift, which uses them to read from and write to the tempdir.
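
Something like this (a sketch based on the snippet in the question; host, bucket, and credentials are placeholders):

# Forward the S3 credentials Spark already holds to Redshift,
# so Redshift can stage data in the tempdir
df = (spark.read.format("redshift")
    .option("url", "jdbc:redshift://BLAH:5439/dev")
    .option("dbtable", "public.customer")
    .option("tempdir", "s3://BLAH/rs-temp/")
    .option("user", "user")
    .option("password", "password")
    .option("forward_spark_s3_credentials", "true")
    .load())
df.show(10, False)

As the error message notes, aws_iam_role or temporary_aws_* keys are the other ways to authenticate Redshift's connection to S3.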
