Community Articles
Dive into a collaborative space where members like YOU can exchange knowledge, tips, and best practices. Join the conversation today and unlock a wealth of collective wisdom to enhance your experience and drive success.

Forum Posts

rathorer
by New Contributor III
  • 8093 Views
  • 3 replies
  • 5 kudos

API Consumption on Databricks

In this blog, I will be talking about building the architecture to serve API consumption on the Databricks Platform. I will be using the Lakebase approach for this. It will be useful for this kind of API requirement. API Requirement: Performance: Curre...

Latest Reply
smithsonian
New Contributor II

Great post @rathorer. Can you explain your Lakebase implementation? I understand Lakebase is the managed Postgres implementation for OLTP (from the Neon acquisition), but I'm not clear on how Photon fits in with Lakebase. Thanks, Venkat

2 More Replies
Ajay-Pandey
by Esteemed Contributor III
  • 4874 Views
  • 6 replies
  • 5 kudos

Cross-filtering for AI/BI dashboards

AI/BI dashboards now support cross-filtering, which allows you to click on an element in one chart to filter and update related data in other charts. Cross-filtering allows users to interactively explore relationships and patterns across multiple visu...

Community Articles
AI BI
Cross-filtering
Dashboards
reports
Latest Reply
SFDataEng
New Contributor III

There does appear to now be a list of capsules indicating the applied filters along the top of Databricks AI/BI Dashboards. The capsules appear to include filter selectors and also cross-filters added by clicking charts. Also, there is now a "Reset t...

5 More Replies
MandyR
by Databricks Employee
  • 214 Views
  • 3 replies
  • 11 kudos

Community Fellows: Shout Out to our Bricksters!

At Databricks, our Community members deserve to get a great experience in our forums, with quality answers from the experts. Who better to help out our customers than Databricks employees aka Bricksters! To work towards this goal, we created the Comm...

Latest Reply
bidek56
Contributor

Kudos to the DB team for keeping up with the community, but can you please work on your product as well? We are experiencing a lot of issues with your paid product: failures, crashes, slow starts, slow performance, and the list goes on. Community wo...

2 More Replies
Coffee77
by Contributor
  • 83 Views
  • 1 reply
  • 1 kudos

How to create clusters in Databricks step by step | All-Purpose, Jobs Compute, SQL Warehouses, and Pools

Recently, having some fun with Databricks, I created a series of videos in Spanish that I'd like to share here. I hope some of them could be interesting for the Spanish or LATAM community. Not sure if this is the most appropriate board to share, or if there is ano...

Latest Reply
Coffee77
Contributor

Added a new video on creating serverless clusters for notebooks, jobs, and DLTs: https://youtu.be/RQvkssryjyQ?si=BkYI831mUK1vBE20

TejeshS
by Contributor
  • 1944 Views
  • 3 replies
  • 7 kudos

Building a Metadata Table-Driven Framework Using LakeFlow Declarative (Formerly DLT) Pipelines

Introduction: Scaling data pipelines across an organization can be challenging, particularly when data sources, requirements, and transformation rules are always changing. A metadata table-driven framework using LakeFlow Declarative (formerly DLT) enab...
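
A minimal sketch of the metadata-driven pattern the post describes, assuming a hypothetical config table ops.pipeline_metadata with source_path, target_table, and file_format columns; the Auto Loader options and all names are illustrative, not the author's actual framework.

import dlt
from pyspark.sql import functions as F

# Read the (hypothetical) metadata table that drives table generation.
metadata_rows = spark.read.table("ops.pipeline_metadata").collect()

def make_table(source_path: str, target_table: str, file_format: str):
    # Each metadata row becomes one declared streaming table in the pipeline.
    @dlt.table(name=target_table, comment=f"Generated from metadata for {source_path}")
    def _load():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", file_format)
            .load(source_path)
            .withColumn("_ingested_at", F.current_timestamp())
        )

for row in metadata_rows:
    make_table(row["source_path"], row["target_table"], row["file_format"])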

Latest Reply
NageshPatil
New Contributor II

Helpful article @TejeshS. I have a question: if I want to pass parameters from my workflow to the pipeline, is that possible? If yes, what would be the best approach?

2 More Replies
BS_THE_ANALYST
by Esteemed Contributor III
  • 2151 Views
  • 17 replies
  • 29 kudos

(Episode 1: Getting Data In) - Learning Databricks one brick at a time, using the Free Edition

Episode 1: Getting Data In. Learning Databricks one brick at a time, using the Free Edition. Project Intro: Welcome to everyone reading. My name’s Ben, a.k.a. BS_THE_ANALYST, and I’m going to share my experiences as I learn the world of Databricks. My obje...

Latest Reply
Coffee77
Contributor

Really interesting post @BS_THE_ANALYST. Catching up with Databricks stuff again.

16 More Replies
jsdmatrix
by Databricks Employee
  • 90 Views
  • 0 replies
  • 1 kudos

SQL Scripting in Apache Spark™ 4.0

Apache Spark™ 4.0 introduces a new feature for SQL developers and data engineers: SQL Scripting. This feature enhances the power and extends the flexibility of Spark SQL, enabling users to write procedural code within SQL queries, with t...
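
The teaser does not show the syntax, so here is a minimal, hedged sketch of a SQL Scripting compound statement (BEGIN ... END with DECLARE and a WHILE loop). It assumes a Spark 4.0+ runtime where spark.sql() accepts a scripting block; the demo.* table names are placeholders.

# Minimal sketch, assuming spark.sql() can run a SQL Scripting block on Spark 4.0+.
script = """
BEGIN
  DECLARE n INT DEFAULT 0;
  WHILE n < 3 DO
    INSERT INTO demo.daily_snapshot
      SELECT *, current_date() - n AS snapshot_date FROM demo.source_table;
    SET n = n + 1;
  END WHILE;
END
"""
spark.sql(script)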

BS_THE_ANALYST
by Esteemed Contributor III
  • 954 Views
  • 6 replies
  • 14 kudos

(Episode 3: Hands-on API Project) - Learning Databricks one brick at a time, using the Free Edition

Episode 3: APIs. Learning Databricks one brick at a time, using the Free Edition. Project Intro: Welcome to everyone reading. My name’s Ben, a.k.a. BS_THE_ANALYST, and I’m going to share my experiences as I learn the world of Databricks. My objective is to...

Latest Reply
JoyO
New Contributor II

This is great, thanks for sharing, Ben. I'll share it with my data community.

5 More Replies
BS_THE_ANALYST
by Esteemed Contributor III
  • 700 Views
  • 3 replies
  • 16 kudos

(Episode 2: Reading Excel Files) - Learning Databricks one brick at a time, using the Free Edition

Episode 2: Reading Excel Files. Learning Databricks one brick at a time, using the Free Edition. You can download the accompanying Notebook and Excel files used in the demonstration over on my GitHub: Excel Files & Notebook: https://github.com/BSanalyst...
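
As a rough companion to the episode, a minimal sketch of one common way to read an Excel file on the Free Edition: load it with pandas (openpyxl engine) and convert to a Spark DataFrame. The volume path and sheet name are placeholders, not the files from Ben's GitHub repo.

import pandas as pd

# Hypothetical Unity Catalog volume path and sheet name.
pdf = pd.read_excel(
    "/Volumes/main/default/raw/sales.xlsx",
    sheet_name="Sheet1",
    engine="openpyxl",
)

# Convert to Spark for downstream transformations and display in the notebook.
df = spark.createDataFrame(pdf)
display(df)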

Latest Reply
SHIFTY
Contributor II

Thanks for this, @BS_THE_ANALYST.  Hugely beneficial.

2 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 237 Views
  • 0 replies
  • 1 kudos

Migrate External Tables to Managed

With managed tables, you can reduce your storage and compute costs thanks to predictive optimization or file list caching. Now it is really time to migrate external tables to managed ones, thanks to ALTER SET MANAGED functionality. Read more: - h...
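
For readers who want the shape of the command before following the link, a minimal sketch; the table name is a placeholder, and the linked docs should be checked for prerequisites and the exact syntax supported on your runtime.

# Convert a hypothetical external Unity Catalog table to a managed table.
spark.sql("ALTER TABLE main.sales.orders SET MANAGED")

# Confirm the table type afterwards; the Type row should now report MANAGED.
spark.sql("DESCRIBE EXTENDED main.sales.orders").where("col_name = 'Type'").show()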

Brahmareddy
by Esteemed Contributor
  • 323 Views
  • 1 reply
  • 4 kudos

I Tried Teaching Databricks About Itself — Here’s What Happened

Hi All, how are you doing today? I wanted to share something interesting from my recent Databricks work: I’ve been playing around with an idea I call “Real-Time Metadata Intelligence.” Most of us focus on optimizing data pipelines, query performance,...

Latest Reply
ruicarvalho_de
New Contributor II

I like the core idea. You are mining signals the platform already emits. I would start rules-first: track the small-files ratio and average file size trend, watch skew per partition and shuffle bytes per input gigabyte. Compare job time to input size to c...
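
To make the rules-first suggestion concrete, a minimal sketch of one such check: flag Delta tables whose average file size drops below a threshold, using DESCRIBE DETAIL. The table list and the 32 MB threshold are illustrative assumptions, not values from this thread.

SMALL_FILE_THRESHOLD = 32 * 1024 * 1024  # 32 MB, an arbitrary example threshold
tables = ["main.sales.orders", "main.sales.events"]  # hypothetical tables to monitor

for t in tables:
    # DESCRIBE DETAIL reports numFiles and sizeInBytes for Delta tables.
    detail = spark.sql(f"DESCRIBE DETAIL {t}").collect()[0]
    if detail["numFiles"]:
        avg = detail["sizeInBytes"] / detail["numFiles"]
        if avg < SMALL_FILE_THRESHOLD:
            print(f"{t}: avg file size {avg / 1e6:.1f} MB over {detail['numFiles']} files; consider OPTIMIZE")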

Senga98
by New Contributor II
  • 309 Views
  • 1 reply
  • 4 kudos

Hadoop Walked So Databricks Could Run

Are you familiar with this scenario: your data team spends 80% of their time fixing infrastructure issues instead of extracting insights. In today’s data-driven world, organisations are drowning in data but starving for actionable insights. Traditiona...

Latest Reply
Khaja_Zaffer
Contributor III

Great one!! @Senga98  All the best!

VamsiDatabricks
by New Contributor II
  • 226 Views
  • 0 replies
  • 1 kudos

Validating pointer-based Delta comparison architecture using flatMapGroupsWithState in Structured St

Hi everyone, I’m leading an implementation where we’re comparing events from two real-time streams (a Source and a Target) in Databricks Structured Streaming (Scala). Our goal is to identify and emit “delta” differences between corresponding records ...

DavidOBrien
by New Contributor
  • 11290 Views
  • 6 replies
  • 3 kudos

Editing value of widget parameter within notebook code

I have a notebook with a text widget where I want to be able to edit the value of the widget within the notebook and then reference it in SQL code. For example, assuming there is a text widget named Var1 that has input value "Hello", I would want to ...

Latest Reply
Ville_Leinonen
New Contributor II

It seems that the only way to use parameters in a SQL code block is to use dbutils.widgets, and you cannot change those parameters without removing the widget and setting it up again in code.
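
A minimal sketch of that remove-and-recreate workaround, using the widget name from the question. The recreate step typically needs to run in a different cell from the remove, and the parameter-marker call assumes a runtime where spark.sql() supports named arguments.

# Cell 1: widget created with its initial value
dbutils.widgets.text("Var1", "Hello")

# Cell 2: remove it so the value can be redefined
dbutils.widgets.remove("Var1")

# Cell 3: recreate with the new value and reference it from SQL
dbutils.widgets.text("Var1", "Goodbye")
spark.sql("SELECT :v AS greeting", args={"v": dbutils.widgets.get("Var1")}).show()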

5 More Replies

Join Us as a Local Community Builder!

Passionate about hosting events and connecting people? Help us grow a vibrant local community—sign up today to get started!
