Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Hi, today I completed the test for Lakehouse Fundamentals with a score of 85%, but I still haven't received the badge at my email francis@intellectyx.com. Kindly let me know, please! -Francis
Hi, I completed the test for Databricks Certified Data Engineer Associate on 17 December 2024, but I still haven't received the badge at my email sureshrocks.1984@hotmail.com. Kindly let me know, please! SURESHK
Everything was working yesterday, but today it stopped working as shown below. The example from the DB website does not work either, failing with the same error. The page source says ... This is affecting my work, which is a bit annoying. Are the DB people going to look into this ...
In our streaming jobs, we currently run streaming (cloudFiles format) on a directory with sales transactions arriving every 5 minutes. In this directory, the transactions are organized in the following format: <streaming-checkpoint-root>/<transaction_date>...
Update: It seems that maxFileAge was not a good idea. The following, with the option "includeExistingFiles" = False, solved my problem: streaming_df = ( spark.readStream.format("cloudFiles") .option("cloudFiles.format", extension) .option("...
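For reference, here is a minimal sketch of the stream the update describes, since the original post is truncated. The file format, paths, and target table names below are hypothetical placeholders (the post only shows a variable called extension); the key piece is cloudFiles.includeExistingFiles, a real Auto Loader option that, when set to false, skips files already present in the directory when the stream first starts and processes only files that arrive afterwards.

```python
# Sketch of an Auto Loader (cloudFiles) stream that ignores pre-existing files.
# Assumes a CSV source; all paths and the table name are hypothetical.
streaming_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")                    # the post's `extension` variable
    .option("cloudFiles.includeExistingFiles", "false")    # only pick up files arriving after stream start
    .option("cloudFiles.schemaLocation", "/tmp/schema")    # hypothetical schema tracking path
    .load("/mnt/sales/transactions")                       # hypothetical input directory
)

query = (
    streaming_df.writeStream
    .option("checkpointLocation", "/tmp/checkpoint")       # hypothetical checkpoint path
    .trigger(availableNow=True)
    .toTable("sales_transactions")                         # hypothetical target table
)
```

One design note, as I understand the documented behavior: cloudFiles.includeExistingFiles is evaluated only when the stream is first started; changing it after a checkpoint exists has no effect, so the option has to be set before the initial run.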