This post shows how to set up Databricks workflow jobs with CI/CD.
Below are the two essential components needed for a complete CI/CD setup of workflow jobs:
1. Databricks Asset Bundles (DABs)
2. Azure DevOps pipeline
Databricks Asset Bundle (from local terminal)...
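As a rough sketch of the DAB side (not from the original post; the bundle name, host, and paths are placeholders), a minimal databricks.yml could look like this:

bundle:
  name: workflow_ci_demo

targets:
  dev:
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net  # placeholder workspace URL

resources:
  jobs:
    nightly_job:
      name: nightly_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./notebooks/main.py

Running databricks bundle deploy -t dev from the local terminal then deploys the job, and the Azure DevOps pipeline can run the same command in CI.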
Hi Team, my name is Ram, based out of Singapore. I am new to this community. Recently I completed my Databricks certifications, starting from Data Analyst, then Data Engineering and Gen AI. Looking forward to getting connected in serving the Data and AI...
Hi All, I came across a very good set of practice tests for the Databricks Professional Data Engineer certification. For the time being, the instructor is offering them for free as a promotional activity. Enroll if you are planning to go for the certification: http...
Definitely configure job timeouts, and configure notifications.
This will help you identify slowness due to various factors.
It is also crucial to investigate and fix the underlying issue causing the slowness.
The first step is to identify the problem. This ...
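For reference, here is a minimal sketch of setting both with the Databricks Python SDK; the job ID and email address are placeholders:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()
# Add a 1-hour timeout and a failure notification to an existing job
w.jobs.update(
    job_id=123,  # placeholder job ID
    new_settings=jobs.JobSettings(
        timeout_seconds=3600,
        email_notifications=jobs.JobEmailNotifications(on_failure=["alerts@example.com"]),
    ),
)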
If you have a streaming job, you need to check the batch metrics to understand the stream's progress.
However, here are some other suggestions we can use to monitor a streaming job that may be stuck in a "hung" state.
Streaming Listeners sp...
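As a minimal sketch, a PySpark (3.4+) listener that logs per-batch progress, so a hung stream shows up as batches with zero input rows:

from pyspark.sql.streaming import StreamingQueryListener

class ProgressLogger(StreamingQueryListener):
    def onQueryStarted(self, event):
        print(f"Query started: {event.id}")

    def onQueryProgress(self, event):
        # numInputRows stuck at 0 across many batches can indicate a stalled source
        p = event.progress
        print(f"Batch {p.batchId}: {p.numInputRows} input rows")

    def onQueryTerminated(self, event):
        print(f"Query terminated: {event.id}")

spark.streams.addListener(ProgressLogger())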
If you use Databricks Jobs for your workloads, you might have run into a situation where you find your jobs in a "hung" state.
Before cancelling the job, it is important to collect a thread dump, as I described here, to be able to f...
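One way to grab a driver-side thread dump from a notebook before cancelling is via Spark's py4j gateway; this goes through the internal _jvm handle, so treat it as a sketch:

# java.lang.Thread.getAllStackTraces() is a standard JVM API, reached via py4j
traces = spark.sparkContext._jvm.java.lang.Thread.getAllStackTraces()
for thread in traces.keySet().toArray():
    print(thread.getName())
    for frame in traces.get(thread):
        print("    " + frame.toString())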
I just wanted to share a tool I built called spark-column-analyzer. It's a Python package that helps you dig into your Spark DataFrames with ease. Ever spend ages figuring out what's going on in your columns? Like how many null values there are, or h...
An example added to the README on GitHub:
Doing analysis for column Postcode
JSON formatted output:
{"Postcode": {"exists": true, "num_rows": 93348, "data_type": "string", "null_count": 21921, "null_percentage": 23.48, "distinct_count": 38726, "distinct_percentage"...
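If you want the same stats by hand, a hypothetical PySpark equivalent (profile_column is my own helper name, not the package's API) could look like:

from pyspark.sql import functions as F

def profile_column(df, col_name):
    # Computes the same stats as the README output above
    if col_name not in df.columns:
        return {col_name: {"exists": False}}
    total = df.count()
    nulls = df.filter(F.col(col_name).isNull()).count()
    distinct = df.select(col_name).distinct().count()
    return {col_name: {
        "exists": True,
        "num_rows": total,
        "null_count": nulls,
        "null_percentage": round(100.0 * nulls / total, 2),
        "distinct_count": distinct,
        "distinct_percentage": round(100.0 * distinct / total, 2),
    }}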
You can now add the WITH SCHEMA EVOLUTION clause to a SQL merge statement to enable schema evolution for the operation.
For more information: https://docs.databricks.com/en/delta/update-schema.html#sql-evo
#Databricks
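For example (table and column names are placeholders), a merge with schema evolution enabled looks like:

spark.sql("""
    MERGE WITH SCHEMA EVOLUTION INTO target AS t
    USING source AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")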
In Spark 4.0, there are no more data type mismatches when converting dynamic JSON: the new VariantType data type comes with a new function for parsing JSON. Stay tuned for the 4.0 release.
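A small sketch of what that enables, assuming Spark 4.0's parse_json and variant_get functions:

from pyspark.sql import functions as F

df = spark.createDataFrame([('{"a": 1, "b": {"c": "x"}}',)], ["raw"])
# parse_json produces a VARIANT value; variant_get extracts a typed field by path
v = df.select(F.parse_json("raw").alias("v"))
v.select(F.variant_get("v", "$.b.c", "string").alias("c")).show()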
You can now enable type widening on tables backed by Delta Lake. Tables with type widening enabled allow changing the type of columns to a wider data type without rewriting underlying data files.
For more information: https://docs.databricks.co...
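As a quick sketch based on the linked docs (table and column names are placeholders), widening is a table property plus a type change:

spark.sql("ALTER TABLE my_table SET TBLPROPERTIES ('delta.enableTypeWidening' = 'true')")
# Widen an INT column to BIGINT without rewriting the underlying data files
spark.sql("ALTER TABLE my_table ALTER COLUMN id TYPE BIGINT")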
Hello members of the Databricks community, I am currently working on a project where we collect data from machines; that data is in .txt format. The data is currently in an Azure container. I need to clean the files and convert them to Delta tables. How ...
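One possible starting point; the path, delimiter, and cleaning step below are all assumptions about your data:

# Hypothetical container path and options: adjust to your layout
raw = (spark.read
       .option("header", "true")
       .option("delimiter", "\t")
       .csv("abfss://machines@mystorageaccount.dfs.core.windows.net/raw/*.txt"))

cleaned = raw.dropna(how="all")  # example cleaning step
cleaned.write.format("delta").mode("overwrite").saveAsTable("machine_data")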
Now you can keep the state of stateful streaming queries in RocksDB. For example, retrieving keys to check for duplicate records inside the watermark is now faster. #databricks
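A sketch of that pattern, assuming an events streaming DataFrame with event_time and event_id columns (dropDuplicatesWithinWatermark needs Spark 3.5+):

# Databricks class name; on OSS Spark use
# org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider
spark.conf.set(
    "spark.sql.streaming.stateStore.providerClass",
    "com.databricks.sql.streaming.state.RocksDBStateStoreProvider",
)
# Dedup keys inside the watermark are kept in RocksDB state
deduped = (events
           .withWatermark("event_time", "10 minutes")
           .dropDuplicatesWithinWatermark(["event_id"]))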
I have two buckets with the same configurations and labels. One is named my-bucket and the other is my_bucket. I am able to mount my-bucket but get an opaque error message when trying to mount my_bucket. Is this known/expected behavior? Are underscore...
Hi @legobricks ,
Curious about the error you are getting. However, for GCS, https://cloud.google.com/storage/docs/buckets#naming does show that underscores are allowed, but there is also a note below:
You can use a bucket name in a DNS record as part of ...
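For reference, a minimal repro of what I assume is being attempted; bucket and mount names are placeholders:

dbutils.fs.mount("gs://my-bucket", "/mnt/my-bucket")  # hyphenated name mounts fine
dbutils.fs.mount("gs://my_bucket", "/mnt/my_bucket")  # underscored name reportedly fails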