Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

francisix
by New Contributor II
  • 4416 Views
  • 5 replies
  • 1 kudos

Resolved! I haven't received badge for completion

Hi, today I completed the test for Lakehouse Fundamentals with a score of 85%, but I still haven't received the badge through my email francis@intellectyx.com. Kindly let me know, please! -Francis

Latest Reply
sureshrocks1984
New Contributor II
  • 1 kudos

Hi, I completed the test for Databricks Certified Data Engineer Associate on 17 December 2024, but I still haven't received the badge through my email sureshrocks.1984@hotmail.com. Kindly let me know, please! SURESHK

4 More Replies
lzha174
by Contributor
  • 5416 Views
  • 3 replies
  • 3 kudos

Resolved! ipywidgets stopped displaying today

Everything was working yesterday, but today it stopped working as shown below. The example from the DB website does not work either, with the same error. The page source says  This is affecting my work ~~~ a bit annoying; are DB people going to look into this ...

Latest Reply
lzha174
Contributor
  • 3 kudos

Today it's back to work! I got a pop-up window; that should be the reason it was broken.

2 More Replies
Michael_Galli
by Contributor III
  • 4966 Views
  • 3 replies
  • 2 kudos

Resolved! Spark Streaming - only process new files in streaming path?

In our streaming jobs, we currently run streaming (cloudFiles format) on a directory where sales transactions arrive every 5 minutes. In this directory, the transactions are organized in the following format: <streaming-checkpoint-root>/<transaction_date>...

Latest Reply
Michael_Galli
Contributor III
  • 2 kudos

Update: Seems that maxFileAge was not a good idea. The following, with the option "includeExistingFiles" = False, solved my problem:

streaming_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", extension)
    .option("...
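As a sketch of that fix (not taken verbatim from the thread), the Auto Loader options can be assembled as a plain dict before being passed to the reader. The helper name, file format, and paths below are hypothetical; `cloudFiles.includeExistingFiles = "false"` is the documented Auto Loader option that skips files already present when the stream first starts:

```python
def autoloader_options(file_format: str, schema_location: str) -> dict:
    """Build cloudFiles (Auto Loader) options that ignore pre-existing files.

    Setting includeExistingFiles to "false" makes the stream process only
    files that arrive *after* the stream starts, which is what solved the
    problem above (rather than maxFileAge).
    """
    return {
        "cloudFiles.format": file_format,
        "cloudFiles.schemaLocation": schema_location,
        "cloudFiles.includeExistingFiles": "false",  # only new arrivals
    }

opts = autoloader_options("json", "/tmp/schema")

# In a Databricks notebook this would then be used as:
# streaming_df = (
#     spark.readStream.format("cloudFiles")
#     .options(**opts)
#     .load("/data/sales")
# )
```

Note that `includeExistingFiles` only takes effect on the first start of a stream against a fresh checkpoint; once a checkpoint exists, the set of already-processed files is tracked there.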

  • 2 kudos
2 More Replies