Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Hubert-Dudek
by Esteemed Contributor III
  • 2281 Views
  • 2 replies
  • 13 kudos

Resolved! Something like AWS Macie to perform scans on Azure Data Lake

Does anyone know an alternative to AWS Macie in Azure? AWS Macie scans S3 buckets for files containing sensitive data (personal addresses, credit card numbers, etc.). I would like to use a similar ready-made scanner for Azure Data Lake.

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 13 kudos

Thank you, I checked, and yes, it is definitely the way to go.

1 More Replies
brickster_2018
by Databricks Employee
  • 1886 Views
  • 1 reply
  • 0 kudos

Resolved! Getting file permission issues even though I have the right IAM role attached

I am reading data from S3 on a Databricks cluster, and the read operation intermittently fails with 403 permission errors. Restarting the cluster fixes the issue.

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

The main reason for this behavior is that AWS keys are used in addition to the IAM role. Using global init scripts to set the AWS keys can cause this behavior. The IAM role has the required permission to access the S3 data, but AWS keys are set in the Sp...

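
A quick way to check for this is to look for statically configured AWS credentials that would shadow the instance-profile role. A minimal diagnostic sketch to run in a notebook cell; the environment-variable and S3A config names are standard ones, not details taken from this thread:

    import os

    # Static credentials, if present, are preferred by the S3A connector
    # over the IAM role attached to the cluster.
    for k in ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]:
        print(f"{k} set in environment: {k in os.environ}")

    hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
    for k in ["fs.s3a.access.key", "fs.s3a.secret.key"]:
        print(f"{k} set in Hadoop conf: {hadoop_conf.get(k) is not None}")

If any of these report True, removing the keys from the init script or cluster Spark config should let the IAM role take effect consistently.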
Srikanth_Gupta_
by Valued Contributor
  • 1426 Views
  • 1 reply
  • 0 kudos

Resolved! Does the size of optimized files after running OPTIMIZE vary between cloud providers (S3, Blob, and GCS)?

Are there any other parameters to consider when running OPTIMIZE, depending on the cloud vendor?

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 0 kudos

OPTIMIZE is not dependent on the cloud provider whatsoever; it will produce the same results regardless of the underlying storage. It is also idempotent, meaning that if it is run twice on the same dataset, the second execution has no effect.

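
For reference, the command itself is identical on every cloud; resulting file sizes are governed by Delta's own tuning rather than the storage layer. A minimal sketch (the table name is hypothetical, and the file-size setting is an assumption about the relevant knob, not from this thread):

    # Compact small files; ZORDER additionally co-locates related rows.
    # Behavior is the same on S3, ADLS, and GCS.
    spark.sql("OPTIMIZE events ZORDER BY (event_date)")

    # Target file size comes from Delta tuning, not the cloud provider
    # (assumed setting name; check your runtime's documentation).
    spark.conf.set("spark.databricks.delta.optimize.maxFileSize", str(256 * 1024 * 1024))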
akj2784
by New Contributor II
  • 8233 Views
  • 5 replies
  • 0 kudos

How to create a DataFrame from the files in an S3 bucket

I have connected my S3 bucket from Databricks using the following command:

    import urllib
    import urllib.parse
    ACCESS_KEY = "Test"
    SECRET_KEY = "Test"
    ENCODED_SECRET_KEY = urllib.parse.quote(SECRET_KEY, "")
    AWS_BUCKET_NAME = "Test"
    MOUNT_NAME = "...

Latest Reply
shyam_9
Databricks Employee
  • 0 kudos

Hi @akj2784, please go through the Databricks documentation on working with files in S3: https://docs.databricks.com/spark/latest/data-sources/aws/amazon-s3.html#mount-s3-buckets-with-dbfs

4 More Replies
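
Following the linked docs, the usual pattern is to mount the bucket with DBFS and then read the mounted path into a DataFrame. A hedged sketch; the bucket name, mount name, path, and file format are placeholders, not details from the thread:

    import urllib.parse

    ACCESS_KEY = "..."   # placeholder; an instance profile avoids embedding keys
    SECRET_KEY = "..."
    ENCODED_SECRET_KEY = urllib.parse.quote(SECRET_KEY, "")
    AWS_BUCKET_NAME = "my-bucket"
    MOUNT_NAME = "my-mount"

    # Mount the bucket under /mnt so it is visible to clusters in the workspace.
    dbutils.fs.mount(
        f"s3a://{ACCESS_KEY}:{ENCODED_SECRET_KEY}@{AWS_BUCKET_NAME}",
        f"/mnt/{MOUNT_NAME}",
    )

    # Read the mounted files into a DataFrame (swap .json for .csv/.parquet as needed).
    df = spark.read.json(f"/mnt/{MOUNT_NAME}/path/to/data")
    display(df)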
kali_tummala
by New Contributor II
  • 9125 Views
  • 5 replies
  • 0 kudos

Why is Databricks Spark faster than AWS EMR Spark?

https://databricks.com/blog/2017/07/12/benchmarking-big-data-sql-platforms-in-the-cloud.html Hi all, just wondering why Databricks Spark is a lot faster on S3 compared with AWS EMR Spark when both systems are on Spark version 2.4. Does Databricks have ...

Latest Reply
RafiKurlansik
Databricks Employee
  • 0 kudos

I think you can get some pretty good insight into the optimizations on Databricks here: https://docs.databricks.com/delta/delta-on-databricks.html Specifically, check out the sections on caching, z-ordering, and join optimization. There's also a grea...

4 More Replies
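
Two of the optimizations that reply points to can be illustrated briefly. A hedged sketch (the table and column names are hypothetical; the cache flag is the documented Databricks disk-cache setting as I understand it):

    # Enable the Databricks disk (Delta) cache so repeated S3 reads
    # are served from local SSDs instead of object storage.
    spark.conf.set("spark.databricks.io.cache.enabled", "true")

    # Z-order a Delta table on a frequently filtered/joined key so
    # file-level statistics can skip irrelevant data at read time.
    spark.sql("OPTIMIZE sales ZORDER BY (customer_id)")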
DanielAnderson
by New Contributor
  • 6000 Views
  • 1 reply
  • 0 kudos

"AmazonS3Exception: The bucket is in this region" error

I have read access to an S3 bucket in an AWS account that is not mine. For more than a year I've had a job successfully reading from that bucket using dbutils.fs.mount(...) and sqlContext.read.json(...). Recently the job started failing with the exc...

Latest Reply
Chandan
New Contributor II
  • 0 kudos

@andersource Looks like the bucket is in us-east-1, but you've configured your Amazon S3 client for us-west-2. Can you try configuring the client to use us-east-1? I hope it will work for you. Thank you.

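
If the client region is indeed the problem, one fix is to point the S3A connector at the bucket's regional endpoint before reading. A minimal sketch, assuming the standard Hadoop fs.s3a.endpoint property and a hypothetical bucket name:

    # Direct S3A at the bucket's home region, then read as usual.
    spark.sparkContext._jsc.hadoopConfiguration().set(
        "fs.s3a.endpoint", "s3.us-east-1.amazonaws.com"
    )
    df = spark.read.json("s3a://the-shared-bucket/path/to/data/")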
WajdiFATHALLAH
by New Contributor
  • 15089 Views
  • 4 replies
  • 0 kudos

Writing a large Parquet file (500 million rows / 1,000 columns) to S3 takes too much time

Hello community, first let me introduce my use case. I receive about 500 million rows daily, like so:

ID | Categories
1  | cat1, cat2, cat3, ..., catn
2  | cat1, catx, caty, ..., anothercategory

Input data: 50 compressed CSV files, each file is 250 MB ...

Latest Reply
EliasHaydar
New Contributor II
  • 0 kudos

So you are basically creating an inverted index?

3 More Replies
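
For a workload shaped like this, one common pattern is to explode the category list into (ID, category) pairs (the inverted-index shape the reply alludes to) and control partitioning before writing Parquet. A hedged sketch; the paths, column names, and partition count are assumptions, not details from the thread:

    from pyspark.sql import functions as F

    # Read the daily CSV drop and split the comma-separated category list.
    df = spark.read.csv("s3a://input-bucket/daily/", header=True)
    pairs = df.withColumn("category", F.explode(F.split("Categories", r",\s*")))

    # Repartition before the write: too few partitions means huge slow tasks,
    # too many means thousands of tiny S3 objects. Tune to the cluster size.
    (pairs
        .select("ID", "category")
        .repartition(200)
        .write.mode("overwrite")
        .parquet("s3a://output-bucket/inverted-index/"))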