Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Databricks cross-platform data access

Phani1
Valued Contributor II

 

Hi Team,

We have a requirement where our data is stored on S3, while our Databricks workspace is hosted on Azure.
Our objective is to access the data from the S3 location.
Could you kindly suggest the most suitable approach for this scenario, e.g., external tables, Delta Sharing, or another option?

Regards,

Janga

1 REPLY

Kaniz_Fatma
Community Manager
Hi @Phani1, to access data in an S3 location from Azure Databricks, you have a few options:
  1. Using AWS Keys and Secret Scopes:

    • Configure Spark properties to supply your AWS keys, stored in secret scopes, to the cluster (a credential-setup sketch follows the snippet below).
    • Create a secret scope to store the credentials securely.
    • Grant users, service principals, and groups access to read the secret scope.
    • Example code snippet:
      # Read data directly from S3 (assumes AWS credentials are already configured)
      aws_bucket_name = "my-s3-bucket"
      df = spark.read.load(f"s3a://{aws_bucket_name}/flowers/delta/")
      display(df)
      # List the bucket contents to verify access
      dbutils.fs.ls(f"s3a://{aws_bucket_name}/")
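    As a minimal sketch of the credential setup in a notebook, assuming a secret scope named aws-keys with keys access-key-id and secret-access-key (hypothetical names; adjust to your own scope):
      # Hypothetical scope and key names; replace with your own secret scope.
      access_key = dbutils.secrets.get(scope="aws-keys", key="access-key-id")
      secret_key = dbutils.secrets.get(scope="aws-keys", key="secret-access-key")
      # Pass the credentials to the S3A filesystem for this cluster session.
      sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", access_key)
      sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", secret_key)
      # s3a:// paths such as the snippet above should now resolve.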
      
  2. Open-Source Hadoop Options:

    • Databricks Runtime supports configuring the S3A filesystem using open-source Hadoop options.
    • You can set global properties and per-bucket properties.
    • Example global configuration:
      spark.hadoop.fs.s3a.aws.credentials.provider <aws-credentials-provider-class>
      spark.hadoop.fs.s3a.endpoint <aws-endpoint>
      spark.hadoop.fs.s3a.server-side-encryption-algorithm SSE-KMS
      
    • Per-bucket configuration allows you to set up individual buckets with different credentials, endpoints, and other options (see the sketch below).
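    For example, per-bucket properties follow the pattern fs.s3a.bucket.<bucket-name>.<option>, so a hypothetical bucket named my-s3-bucket could be configured as:
      spark.hadoop.fs.s3a.bucket.my-s3-bucket.aws.credentials.provider <aws-credentials-provider-class>
      spark.hadoop.fs.s3a.bucket.my-s3-bucket.endpoint <aws-endpoint>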
  3. Mounting an S3 Bucket:

    • You can mount an S3 bucket to Databricks using the Databricks File System (DBFS).
    • Example code snippet:
      # Read from a mount point created earlier (see the mount sketch below)
      mount_name = "my-s3-mount"
      df = spark.read.format("text").load(f"/mnt/{mount_name}/...")
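    Creating the mount itself is not shown above; here is a minimal sketch using the legacy dbutils.fs.mount API, again with hypothetical scope, key, and bucket names:
      # Hypothetical names; the secret key is URL-encoded because it may contain "/".
      access_key = dbutils.secrets.get(scope="aws-keys", key="access-key-id")
      secret_key = dbutils.secrets.get(scope="aws-keys", key="secret-access-key").replace("/", "%2F")
      aws_bucket_name = "my-s3-bucket"
      mount_name = "my-s3-mount"
      dbutils.fs.mount(
          source=f"s3a://{access_key}:{secret_key}@{aws_bucket_name}",
          mount_point=f"/mnt/{mount_name}",
      )
      # Verify the mount by listing its contents.
      display(dbutils.fs.ls(f"/mnt/{mount_name}/"))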
      

Choose the approach that best fits your requirements and security considerations.

If you need further assistance, feel free to ask! 😊

 
