Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

sarvesh
by Contributor III
  • 33127 Views
  • 18 replies
  • 6 kudos

Resolved! java.lang.OutOfMemoryError: GC overhead limit exceeded. [ solved ]

Solution: I didn't need to add any executor or driver memory. All I had to do in my case was add this: .option("maxRowsInMemory", 1000). Before, I couldn't even read a 9 MB file; now I can read a 50 MB file without any error. { val df = spark.read.f...
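A minimal PySpark sketch of that fix, assuming the file is read through the spark-excel connector (which is where maxRowsInMemory applies); the path and header option are placeholders:

```python
# Hedged sketch: read a large Excel file in bounded chunks so the whole
# sheet is never held in driver memory at once. Connector and path are assumptions.
df = (
    spark.read.format("com.crealytics.spark.excel")
    .option("header", "true")
    .option("maxRowsInMemory", 1000)  # stream the sheet instead of loading it whole
    .load("/mnt/raw/large_workbook.xlsx")
)
df.count()
```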

[Attachments: Spark UI screenshots]
Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 6 kudos

Can you try without .set("spark.driver.memory", "4g") and .set("spark.executor.memory", "6g")? It clearly shows that there is no 4 GB free on the driver and no 6 GB free on the executors (you can also share the cluster hardware details). You also cannot allocate 100% for ...
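If it helps, a small hypothetical check of what the cluster is actually configured with, before hard-coding memory values (standard Spark conf keys; the fallback strings are just labels):

```python
# Hedged sketch: read back the configured memory settings instead of
# assuming 4g/6g are available on this cluster.
print(spark.conf.get("spark.driver.memory", "cluster default"))
print(spark.conf.get("spark.executor.memory", "cluster default"))
```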

17 More Replies
dataEngineer3
by New Contributor II
  • 4892 Views
  • 8 replies
  • 0 kudos

Trying to read a CSV file from the data lake and load it into a SQL table using COPY INTO

Hi All, I am trying to read a CSV file from the data lake and load the data into a SQL table using COPY INTO, but I am facing an issue. I created a table with 6 columns, the same as the data in the CSV file, but I am unable to load the data. Can anyone help me with this?

Latest Reply
dataEngineer3
New Contributor II
  • 0 kudos

Thanks Werners for your reply. How do I pass a schema (column names and types) for the CSV file?
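One possible way, sketched with made-up column names and a placeholder path: define the schema explicitly and pass it to the CSV reader instead of relying on inference.

```python
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

# Hypothetical 6-column layout; replace names/types with the ones from the target SQL table.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("city", StringType(), True),
    StructField("country", StringType(), True),
    StructField("amount", IntegerType(), True),
    StructField("status", StringType(), True),
])

df = (
    spark.read.format("csv")
    .option("header", "true")
    .schema(schema)                   # explicit column names and types
    .load("/mnt/datalake/input/")     # placeholder data lake path
)
```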

7 More Replies
Erik
by Valued Contributor III
  • 3576 Views
  • 4 replies
  • 2 kudos

Resolved! Does Z-ordering speed up reading of a single file?

Situation: we have one partition per date, and it just so happens that each partition ends up (after OPTIMIZE) as *a single* 128 MB file. We partition on date and Z-order on userid, and our query is something like "find max value of column A where useri...

Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

Z-Ordering will make sure that, in case you need to read multiple files, these files are co-located. For a single file this does not matter, as a single file is always local to itself. If you are certain that your Spark program will only read a single file,...
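For reference, a hedged sketch of the layout and query pattern being discussed; the table, partition value, and column names are invented:

```python
# Compact the date partition and co-locate rows by userid. On a partition that
# already fits in one file this mainly pays off once the partition spans many files.
spark.sql("OPTIMIZE events WHERE date = '2022-01-01' ZORDER BY (userid)")

# The query shape from the question: partition filter plus Z-ordered column filter.
spark.sql(
    "SELECT max(A) AS max_a FROM events WHERE date = '2022-01-01' AND userid = 42"
).show()
```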

3 More Replies
Jreco
by Contributor
  • 4338 Views
  • 2 replies
  • 5 kudos

Resolved! Reference py file from a notebook

Hi All, I'm trying to reference a .py file from a notebook following this documentation: Files in Repos. I downloaded and added the files to my repo, and when I try to run the notebook, the module is not recognized. Any idea why this is happening? Thanks ...

Latest Reply
-werners-
Esteemed Contributor III
  • 5 kudos

In this topic you can find some more info: https://community.databricks.com/s/question/0D53f00001Pp5EhCAJ (the docs are not that clear).

1 More Replies
Frankooo
by New Contributor III
  • 7175 Views
  • 8 replies
  • 7 kudos

How to optimize exporting dataframe to delta file?

Scenario: I have a dataframe that has 5 billion records/rows and 100+ columns. Is there a way to write this in Delta format efficiently? I have tried to export it but cancelled it after 2 hours (the write didn't finish), as this processing time is not ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 7 kudos

Hi @Franco Sia​, I would recommend avoiding repartition(50); instead, enable optimized writes on your Delta table. You can find more details here. Enable optimized writes and auto compaction on your Delta table. Use AQE (docs here) to have eno...
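A hedged sketch of those suggestions, with a hypothetical table name and a stand-in DataFrame: enable optimized writes and auto compaction on the Delta table and let AQE size the shuffle instead of forcing repartition(50).

```python
df = spark.range(1000)  # stand-in for the real 5-billion-row DataFrame
df.write.format("delta").mode("overwrite").saveAsTable("big_table")

# Let Delta size the output files for you on subsequent writes.
spark.sql("""
    ALTER TABLE big_table SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact'   = 'true'
    )
""")

# AQE (on by default in recent runtimes) right-sizes shuffle partitions,
# so an explicit repartition(50) is usually unnecessary.
spark.conf.set("spark.sql.adaptive.enabled", "true")
```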

7 More Replies
nlee
by New Contributor
  • 3197 Views
  • 1 reply
  • 1 kudos

Resolved! How to create a temporary file with sql

What are the commands to create a temporary file with SQL?

Latest Reply
mathan_pillai
Databricks Employee
  • 1 kudos

In Spark SQL, you could use commands like "insert overwrite directory", which indirectly creates a temporary file with the data: https://docs.databricks.com/spark/latest/spark-sql/language-manual/sql-ref-syntax-dml-insert-overwrite-directory.html#example...
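A hedged example of that approach; the directory path and source table are placeholders:

```python
# Write the query result to a directory; the files it produces serve as the
# "temporary file" holding the data.
spark.sql("""
    INSERT OVERWRITE DIRECTORY '/tmp/temp_export'
    USING parquet
    SELECT * FROM some_table
""")
```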

gbrueckl
by Contributor II
  • 6297 Views
  • 8 replies
  • 9 kudos

Slow performance of VACUUM on Azure Data Lake Store Gen2

We need to run VACUUM on one of our biggest tables to free up storage. According to our analysis using VACUUM bigtable DRY RUN, this affects 30M+ files that need to be deleted. If we run the final VACUUM, the file listing takes up to 2h (which is OK) ...

Latest Reply
Deepak_Bhutada
Contributor III
  • 9 kudos

@Gerhard Brueckl​ we have seen nearly 80k-120k file deletions per hour in Azure while doing a VACUUM on Delta tables; it's just that VACUUM is slower on Azure and S3. It might take some time, as you said, when deleting the files from the Delta path. ...
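For illustration, a hedged sketch of how such a VACUUM is typically run (table name and retention period are placeholders); enabling parallel deletes can speed up the delete phase on ADLS/S3, though the per-hour rates mentioned above still apply:

```python
# Delete files in parallel across the cluster instead of sequentially.
spark.conf.set("spark.databricks.delta.vacuum.parallelDelete.enabled", "true")

# Gauge the scope first, then run the real thing.
spark.sql("VACUUM bigtable RETAIN 168 HOURS DRY RUN")
spark.sql("VACUUM bigtable RETAIN 168 HOURS")
```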

7 More Replies
snoeprol
by New Contributor II
  • 5081 Views
  • 3 replies
  • 2 kudos

Resolved! Unable to open files with python, but filesystem shows files exist

Dear community, I have the following problem: I have uploaded a file of an ML model and have transferred it to the directory with %fs mv '/FileStore/Tree_point_classification-1.dlpk' '/dbfs/mnt/group22/Tree_point_classification-1.dlpk'. When I now check ...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 2 kudos

There is dbfs:/dbfs/ displayed; maybe the file is in the /dbfs/dbfs directory? Please check it and try to open it with open('/dbfs/dbfs... You can also use "Data" from the left menu to check what is in the DBFS file system more easily.
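To illustrate the path difference (the mount path is an assumption; the file name comes from the question): dbutils and %fs use the dbfs:/ scheme, while plain Python open() needs the /dbfs/ local prefix.

```python
# List the directory through the DBFS API (Databricks notebook context assumed)...
display(dbutils.fs.ls("dbfs:/mnt/group22"))

# ...and open the same file with plain Python via the /dbfs/ mount point.
with open("/dbfs/mnt/group22/Tree_point_classification-1.dlpk", "rb") as f:
    print(f.read(16))
```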

2 More Replies
William_Scardua
by Valued Contributor
  • 3806 Views
  • 4 replies
  • 4 kudos

Resolved! Small/big file problem, how do you fix it ?

How do you work on fixing the small/big file problem? What do you suggest?

Latest Reply
-werners-
Esteemed Contributor III
  • 4 kudos

What Jose said. If you cannot use Delta or do not want to: the use of coalesce and repartition/partitioning is the way to define the file size. There is no one ideal file size; it all depends on the use case, available cluster size, data flow downstrea...
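A small sketch of that approach, with an arbitrary file count and placeholder path: pick a partition count that gives roughly the file size your downstream readers want.

```python
df = spark.range(1_000_000)  # stand-in DataFrame

# repartition(n) shuffles into exactly n output files; coalesce(n) only merges
# existing partitions and is cheaper when you are reducing the count.
(
    df.repartition(64)
      .write.mode("overwrite")
      .parquet("/mnt/output/sized_files")
)
```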

3 More Replies
William_Scardua
by Valued Contributor
  • 8201 Views
  • 5 replies
  • 3 kudos

Resolved! Read just the new file ???

Hi guys, how can I read just the new files in a batch process? Can you help me, please? Thank you.

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 3 kudos

What type of file? Is the file stored in a storage account? Typically, you would read and write data with something like the following code: # read a parquet file df = spark.read.format("parquet").load("/path/to/file")   # write the data as a file df...
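For reference, a cleaned-up version of that generic pattern with placeholder paths; reading only the new files on each batch run needs something on top of this, such as tracking processed paths or Auto Loader.

```python
# Read a parquet file (or a folder of parquet files)...
df = spark.read.format("parquet").load("/mnt/landing/input/")

# ...and write the result back out as parquet.
df.write.format("parquet").mode("overwrite").save("/mnt/curated/output/")
```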

4 More Replies
brickster_2018
by Databricks Employee
  • 2427 Views
  • 1 reply
  • 0 kudos

Resolved! Can auto-loader process the overwritten file

I have a directory where I get files with the same name multiple times. Will Auto Loader process all the files, or will it process the first one and ignore the rest?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

Auto Loader has an option, "cloudFiles.allowOverwrites", which determines whether input directory file changes are allowed to overwrite existing data. This option is available in Databricks Runtime 7.6 and above.
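A hedged Auto Loader sketch with that option set; the paths, file format, and schema/checkpoint locations are placeholders:

```python
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.allowOverwrites", "true")   # reprocess files that were overwritten
    .option("cloudFiles.schemaLocation", "/mnt/_checkpoints/events_schema")
    .load("/mnt/landing/events/")
)

(
    df.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/_checkpoints/events")
    .start("/mnt/bronze/events")
)
```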

User16826987838
by Contributor
  • 1547 Views
  • 2 replies
  • 1 kudos

Prevent file downloads from /files/ URL

I would like to prevent file downloads via the /files/ URL. For example: https://customer.databricks.com/files/some-file-in-the-filestore.txt. Is there a way to do this?

Latest Reply
Mooune_DBU
Valued Contributor
  • 1 kudos

Unfortunately, this is not possible from the platform. You can, however, use an external Web Application Firewall (e.g. Akamai) to filter all web traffic to your workspaces. This can block web access used to download root bucket data.

1 More Replies