Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Nazar
by New Contributor II
  • 5644 Views
  • 3 replies
  • 4 kudos

Resolved! Incremental write

Hi All, I have a daily Spark job that reads and joins 3-4 source tables and writes the df in parquet format. This data frame consists of 100+ columns. As this job runs daily, our deduplication logic identifies the latest record from each of the source t...

Latest Reply
Nazar
New Contributor II
  • 4 kudos

Thanks werners

  • 4 kudos
rami1
by New Contributor II
  • 853 Views
  • 0 replies
  • 0 kudos

Databricks Write Performance

I have a requirement to replay ingestion from landing data and build a silver table. I am trying to write Delta files from raw Avro files in the landing zone. The raw files are located in folders based on date. I am currently using streaming to read d...
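For replaying date-partitioned Avro from a landing zone into a Delta silver table, a minimal Structured Streaming configuration sketch might look like the following. The paths, table name, and schema here are all hypothetical placeholders; note that streaming file sources require an explicit schema, and the Avro data source needs the `spark-avro` package on the cluster (it is preinstalled on Databricks):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Hypothetical layout -- adjust to your landing/silver paths.
landing = "/mnt/landing/events/*/"            # date-based folders under the root
checkpoint = "/mnt/checkpoints/events_silver"

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
])

stream = (
    spark.readStream
         .format("avro")                      # or "cloudFiles" with Auto Loader
         .schema(event_schema)                # streaming sources need a fixed schema
         .load(landing)
)

query = (
    stream.writeStream
          .format("delta")
          .option("checkpointLocation", checkpoint)
          .outputMode("append")
          .trigger(availableNow=True)         # drain the existing backlog, then stop
)
# query.toTable("silver.events")  # uncomment to start the replay
```

The `availableNow` trigger (Spark 3.3+) is a good fit for replay: it processes everything currently in the landing zone in incremental batches and then shuts down, while the checkpoint ensures each file is ingested exactly once across reruns.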

prakharjain
by New Contributor
  • 19720 Views
  • 2 replies
  • 0 kudos

Resolved! I need to edit my parquet files, and change field name, replacing space by underscore

Hello, I am facing trouble as mentioned in following topics in stackoverflow, https://stackoverflow.com/questions/45804534/pyspark-org-apache-spark-sql-analysisexception-attribute-name-contains-inv https://stackoverflow.com/questions/38191157/spark-...

Latest Reply
DimitriBlyumin
New Contributor III
  • 0 kudos

One option is to use something other than Spark to read the problematic file, e.g. Pandas, if your file is small enough to fit on the driver node (Pandas will only run on the driver). If you have multiple files, you can loop through them and fix on...

  • 0 kudos
1stcommander
by New Contributor II
  • 8479 Views
  • 2 replies
  • 0 kudos

Parquet partitionBy - date column to nested folders

Hi, when writing a DataFrame to parquet using partitionBy(<date column>), the resulting folder structure looks like this:
root
|-- day1
|-- day2
|-- day3
Is it possible to create a structure like the foll...

Latest Reply
Saphira
New Contributor II
  • 0 kudos

Hey @1stcommander You'll have to create those columns yourself. If it's something you will have to do often, you could always write a function. In any case, imho it's not that much work. I'm not sure what your problem is with the partition pruning. It...

  • 0 kudos