Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

bhargavi1
by New Contributor II
  • 1415 Views
  • 1 reply
  • 1 kudos
Latest Reply
jose_gonzalez
Databricks Employee
  • 1 kudos

Hi, could you share more details on what you have tried?

Abeeya
by New Contributor II
  • 5326 Views
  • 1 reply
  • 5 kudos

Resolved! How to overwrite using PySpark's JDBC without losing constraints on table columns

Hello, my table has a primary key constraint on a particular column. I'm losing the primary key constraint on that column each time I overwrite the table. What can I do to preserve it? Any heads up would be appreciated. Tried below: df.write.option("truncate", ...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 5 kudos

@Abeeya, mode "truncate" is correct to preserve the table. However, when you want to add a new column (mismatched schema), it will drop the table anyway.

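For readers landing here, a minimal sketch of the truncate-based overwrite the reply describes, assuming a JDBC target. The URL, table name, and credentials are placeholders, not values from the thread:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined in Databricks notebooks

df = spark.table("source_table")  # hypothetical source DataFrame

(df.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://host:5432/db")  # placeholder JDBC URL
    .option("dbtable", "target_table")                # placeholder target table
    .option("user", "user")                           # placeholder credentials
    .option("password", "password")
    # With truncate=true, overwrite issues TRUNCATE TABLE instead of
    # DROP/CREATE, so column constraints such as the primary key survive.
    .option("truncate", "true")
    .mode("overwrite")
    .save())

As the reply notes, this only holds while the DataFrame schema matches the target table; a schema change forces a drop regardless.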
Braxx
by Contributor II
  • 2683 Views
  • 2 replies
  • 3 kudos

Resolved! Issue with rounding selected columns in a "for in" loop

This must be trivial, but I must have missed something. I have a dataframe (test1) and want to round all the columns listed in a list of columns (col_list). Here is the code I am running: col_list = ['measure1', 'measure2', 'measure3'] for i in col_list: ...

Latest Reply
Braxx
Contributor II
  • 3 kudos

You're absolutely right. Thanks!

1 More Replies
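The thread's screenshots didn't survive extraction, so the accepted fix isn't visible above. A common cause of this symptom is that withColumn returns a new DataFrame rather than mutating the old one, so the result must be assigned back inside the loop. A minimal sketch, assuming test1 is the DataFrame from the post and rounding to 2 decimals (an assumption):

from pyspark.sql.functions import col, round as spark_round  # alias avoids shadowing Python's round

col_list = ['measure1', 'measure2', 'measure3']

for i in col_list:
    # DataFrames are immutable: withColumn returns a new DataFrame,
    # so the result has to be reassigned to test1 on each iteration.
    test1 = test1.withColumn(i, spark_round(col(i), 2))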
irfanaziz
by Contributor II
  • 8137 Views
  • 3 replies
  • 2 kudos

Resolved! Issue reading a parquet file in PySpark on Databricks

One of the source systems generates, from time to time, a parquet file which is only 220 KB in size, but reading it fails: "java.io.IOException: Could not read or convert schema for file: 1-2022-00-51-56.parquet Caused by: org.apache.spark.sql.AnalysisExce...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

@nafri A - Howdy! My name is Piper, and I'm a community moderator for Databricks. Would you be happy to mark @Hubert Dudek's answer as best if it solved the problem? That will help other members find the answer more quickly. Thanks!

2 More Replies
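The accepted answer is truncated above, so the following is only a common workaround for schema-inference failures on odd parquet files, not necessarily what resolved this thread: supply the expected schema explicitly instead of letting Spark infer it from the file footer. Column names and types here are hypothetical, and the path is a placeholder wrapped around the file name quoted in the post:

from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical schema; replace with the columns the source system actually emits.
expected_schema = StructType([
    StructField("id", StringType(), True),
    StructField("amount", DoubleType(), True),
])

df = (spark.read                       # `spark` is predefined in Databricks notebooks
      .schema(expected_schema)         # bypasses per-file schema inference
      .parquet("/mnt/source/1-2022-00-51-56.parquet"))  # placeholder path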
Anonymous
by Not applicable
  • 22471 Views
  • 4 replies
  • 4 kudos

Resolved! Spark is not able to resolve the columns correctly when joining data frames

Hello all, I'm using PySpark (Python 3.8) over Spark 3.0 on Databricks. When running this DataFrame join: next_df = days_currencies_matrix.alias('a').join(data_to_merge.alias('b'), [days_currencies_matrix.dt == data_to_merge.RATE_DATE, days...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

@Alessio Palma - Howdy! My name is Piper, and I'm a moderator for the community. Would you be happy to mark whichever answer solved your issue so other members may find the solution more quickly?

3 More Replies
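For reference, the usual fix for this kind of ambiguity: once the DataFrames are aliased, refer to the join columns through the aliases (F.col('a.dt')) rather than through the original DataFrame objects, so Spark resolves them against the aliased plans. A minimal sketch using the names from the post; the full condition list and join type are truncated above, so the 'left' below is an assumption:

from pyspark.sql import functions as F

next_df = (days_currencies_matrix.alias('a')
           .join(data_to_merge.alias('b'),
                 # Alias-qualified references avoid ambiguous resolution
                 # against the pre-alias DataFrames.
                 [F.col('a.dt') == F.col('b.RATE_DATE')],
                 'left'))  # join type assumed; the original condition list is truncated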