- 8093 Views
- 3 replies
- 5 kudos
API Consumption on Databricks
In this blog, I will talk about building an architecture to serve API consumption on the Databricks Platform, using the Lakebase approach, which is well suited to this kind of API requirement. API Requirement: Performance: Curre...
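The serving pattern the post describes can be sketched as an indexed point-read path in front of an OLTP store. A minimal sketch, not the author's implementation: sqlite3 stands in for the Lakebase (managed Postgres) connection, where you would use a Postgres driver such as psycopg2, and the table and column names are made up.

```python
import sqlite3

# Precomputed results land in an OLTP store (Lakebase / managed Postgres on
# Databricks); the API layer then does indexed point reads. sqlite3 is a
# stand-in so this sketch runs anywhere; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_features (customer_id TEXT PRIMARY KEY, score REAL)")
conn.execute("INSERT INTO api_features VALUES ('c1', 0.87), ('c2', 0.42)")

def get_score(customer_id):
    # Indexed point lookup: the shape of a low-latency API read path.
    row = conn.execute(
        "SELECT score FROM api_features WHERE customer_id = ?", (customer_id,)
    ).fetchone()
    return row[0] if row else None

print(get_score("c1"))  # -> 0.87
```

The design choice is the usual one for API serving: the heavy computation happens upstream in the lakehouse, and the API only ever touches a primary-key index.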
Great post @rathorer. Can you explain your Lakebase implementation? I understand Lakebase is the managed Postgres implementation for OLTP (from the Neon acquisition), but I'm not clear on how Photon relates to Lakebase. Thanks, Venkat
- 4874 Views
- 6 replies
- 5 kudos
Cross-filtering for AI/BI dashboards
AI/BI dashboards now support cross-filtering, which allows you to click on an element in one chart to filter and update related data in other charts. Cross-filtering allows users to interactively explore relationships and patterns across multiple visu...
There does now appear to be a list of capsules along the top of Databricks AI/BI Dashboards indicating the filters applied. The capsules include filter selectors as well as cross-filters added by clicking charts. Also, there is now a "Reset t...
- 144 Views
- 0 replies
- 2 kudos
Another BrickTalks! Let's talk about bringing data intelligence from your Lakehouse into every app!
You asked, we delivered! Another BrickTalk is scheduled for Thursday, Nov 13 @ 9 AM PT with Pranav Aurora on how to bring data intelligence from your Lakehouse into every app and user, seamlessly and in real time. What you’ll learn: Use Lakebase (Po...
- 214 Views
- 3 replies
- 11 kudos
Community Fellows: Shout Out to our Bricksters!
At Databricks, our Community members deserve to get a great experience in our forums, with quality answers from the experts. Who better to help out our customers than Databricks employees aka Bricksters! To work towards this goal, we created the Comm...
Kudos to the DB team for keeping up with the community, but can you please work on your product as well? We are experiencing a lot of issues with your paid product: failures, crashes, slow starts, slow performance, and the list goes on. Community wo...
- 83 Views
- 1 reply
- 1 kudos
How to create clusters in Databricks step by step | All-Purpose, Jobs Compute, SQL Warehouses and Pools
Recently, having some fun with Databricks, I created a series of videos in Spanish that I'd like to share here. I hope some of them are interesting for the Spanish-speaking or LATAM community. Not sure if this is the most proper board to share, or if there is ano...
Added a new video on creating serverless clusters for notebooks, jobs, and DLTs: https://youtu.be/RQvkssryjyQ?si=BkYI831mUK1vBE20
- 1944 Views
- 3 replies
- 7 kudos
Building a Metadata Table-Driven Framework Using LakeFlow Declarative (Formerly DLT) Pipelines
Introduction: Scaling data pipelines across an organization can be challenging, particularly when data sources, requirements, and transformation rules are always changing. A metadata table-driven framework using LakeFlow Declarative (formerly DLT) enab...
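The core of the metadata-driven pattern is one generic builder that generates many table definitions from rows of a metadata table. A minimal sketch under stated assumptions: in a real LakeFlow Declarative (DLT) pipeline the inner function would be decorated with `@dlt.table`; here a plain dict registry stands in so the sketch runs anywhere, and the metadata fields and source names are made up.

```python
# Rows of a hypothetical metadata table: target table name, source, and a
# row-level filter. In practice these would come from a Delta table.
metadata = [
    {"name": "bronze_orders", "source": "orders_raw", "filter": lambda r: True},
    {"name": "bronze_paid",   "source": "orders_raw", "filter": lambda r: r["paid"]},
]

sources = {"orders_raw": [{"id": 1, "paid": True}, {"id": 2, "paid": False}]}
pipeline = {}  # stand-in for the DLT graph

def register(meta):
    # The closure captures this row's config, like a generated @dlt.table
    # function; one call per metadata row yields one table definition.
    def build():
        return [r for r in sources[meta["source"]] if meta["filter"](r)]
    pipeline[meta["name"]] = build

for m in metadata:
    register(m)

print(pipeline["bronze_paid"]())  # -> [{'id': 1, 'paid': True}]
```

The closure-per-row step matters: generating decorated functions in a loop without capturing the current metadata row is the classic late-binding bug in this pattern.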
Helpful article @TejeshS. I have a question: if I want to pass parameters from my workflow to the pipeline, is that possible? If yes, what would be the best approach?
- 2151 Views
- 17 replies
- 29 kudos
(Episode 1: Getting Data In) - Learning Databricks one brick at a time, using the Free Edition
Episode 1: Getting Data In. Learning Databricks one brick at a time, using the Free Edition. Project Intro: Welcome to everyone reading. My name’s Ben, a.k.a. BS_THE_ANALYST, and I’m going to share my experiences as I learn the world of Databricks. My obje...
Really interesting post @BS_THE_ANALYST. Catching up with Databricks stuff again.
- 90 Views
- 0 replies
- 1 kudos
SQL Scripting in Apache Spark™ 4.0
Apache Spark™ 4.0 introduces a new feature for SQL developers and data engineers: SQL Scripting. This feature enhances the power and extends the flexibility of Spark SQL, enabling users to write procedural code within SQL queries, with t...
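A minimal compound statement in the new SQL scripting style, sketched from the feature description above; treat the exact keywords as an assumption and check the Spark 4.0 documentation for the definitive syntax:

```sql
BEGIN
  DECLARE total INT DEFAULT 0;
  DECLARE i INT DEFAULT 1;
  WHILE i <= 3 DO
    SET total = total + i;
    SET i = i + 1;
  END WHILE;
  SELECT total;  -- 1 + 2 + 3 = 6
END
```

The point of the feature is that this loop runs inside Spark SQL itself, with no notebook-side Python driving it.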
- 954 Views
- 6 replies
- 14 kudos
(Episode 3: Hands-on API Project) - Learning Databricks one brick at a time, using the Free Edition
Episode 3: APIs. Learning Databricks one brick at a time, using the Free Edition. Project Intro: Welcome to everyone reading. My name’s Ben, a.k.a. BS_THE_ANALYST, and I’m going to share my experiences as I learn the world of Databricks. My objective is to...
This is great, thanks for sharing Ben, will share with my data community.
- 700 Views
- 3 replies
- 16 kudos
(Episode 2: Reading Excel Files) - Learning Databricks one brick at a time, using the Free Edition
Episode 2: Reading Excel Files. Learning Databricks one brick at a time, using the Free Edition. You can download the accompanying Notebook and Excel files used in the demonstration over on my GitHub. Excel Files & Notebook: https://github.com/BSanalyst...
Thanks for this, @BS_THE_ANALYST. Hugely beneficial.
- 237 Views
- 0 replies
- 1 kudos
Migrate External Tables to Managed
With managed tables, you can reduce your storage and compute costs thanks to predictive optimization and file list caching. Now is the time to migrate external tables to managed ones, thanks to the ALTER TABLE ... SET MANAGED functionality. Read more: - h...
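The migration is one ALTER statement per table, so it scripts easily. A small sketch: `ALTER TABLE ... SET MANAGED` is the documented Databricks command, while the helper function and the table names are hypothetical.

```python
def set_managed_sql(tables):
    # Build one ALTER statement per fully qualified external table.
    # The statement syntax is the Databricks migration command; the helper
    # and the table names below are illustrative.
    return [f"ALTER TABLE {t} SET MANAGED;" for t in tables]

stmts = set_managed_sql(["main.sales.orders_ext", "main.sales.customers_ext"])
print(stmts[0])  # -> ALTER TABLE main.sales.orders_ext SET MANAGED;
```

In a notebook you would feed each generated statement to `spark.sql(...)`, table by table, rather than printing it.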
- 323 Views
- 1 reply
- 4 kudos
I Tried Teaching Databricks About Itself — Here’s What Happened
Hi All, how are you doing today? I wanted to share something interesting from my recent Databricks work: I’ve been playing around with an idea I call “Real-Time Metadata Intelligence.” Most of us focus on optimizing data pipelines, query performance,...
I like the core idea. You are mining signals the platform already emits. I would start rules-first: track the small-file ratio and average file size trend, watch skew per partition and shuffle bytes per input gigabyte. Compare job time to input size to c...
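The first rule the reply suggests is easy to make concrete. A minimal sketch, with an assumed 128 MB threshold and a hypothetical helper name; in practice the file sizes would come from table metadata (e.g. a Delta table's file listing) rather than a hard-coded list.

```python
def small_file_ratio(file_sizes_bytes, threshold=128 * 1024 * 1024):
    # Rule-of-thumb signal: fraction of data files smaller than a threshold
    # (128 MB here, an illustrative default). A high ratio trending upward
    # suggests the table needs compaction (e.g. OPTIMIZE).
    if not file_sizes_bytes:
        return 0.0
    small = sum(1 for s in file_sizes_bytes if s < threshold)
    return small / len(file_sizes_bytes)

sizes = [4 * 1024**2, 8 * 1024**2, 256 * 1024**2, 512 * 1024**2]
print(small_file_ratio(sizes))  # -> 0.5
```

Starting with transparent rules like this gives you labeled history for free, which is exactly what a later ML layer would train on.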
- 309 Views
- 1 reply
- 4 kudos
Hadoop Walked So Databricks Could Run
Are you familiar with this scenario: your data team spends 80% of their time fixing infrastructure issues instead of extracting insights. In today’s data-driven world, organisations are drowning in data but starving for actionable insights. Traditiona...
- 226 Views
- 0 replies
- 1 kudos
Validating pointer-based Delta comparison architecture using flatMapGroupsWithState in Structured St
Hi everyone, I’m leading an implementation where we’re comparing events from two real-time streams, a Source and a Target, in Databricks Structured Streaming (Scala). Our goal is to identify and emit “delta” differences between corresponding records ...
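The per-key comparison logic at the heart of this architecture can be sketched independently of the streaming machinery. A minimal sketch, not the poster's Scala implementation: in the real job, `flatMapGroupsWithState` would hold one side's record in state per key until the other side arrives, then apply a comparison like the one below; the field names are made up.

```python
def diff_events(source, target, key="id"):
    # Emit "delta" records: keys missing on either side, plus field-level
    # mismatches for keys present in both. This is only the comparison a
    # stateful streaming job would run per key group.
    src = {e[key]: e for e in source}
    tgt = {e[key]: e for e in target}
    deltas = []
    for k in sorted(src.keys() | tgt.keys()):
        if k not in tgt:
            deltas.append((k, "missing_in_target"))
        elif k not in src:
            deltas.append((k, "missing_in_source"))
        elif src[k] != tgt[k]:
            changed = [f for f in src[k] if src[k][f] != tgt[k].get(f)]
            deltas.append((k, "mismatch:" + ",".join(changed)))
    return deltas

src = [{"id": 1, "amt": 10}, {"id": 2, "amt": 5}]
tgt = [{"id": 1, "amt": 12}]
print(diff_events(src, tgt))  # -> [(1, 'mismatch:amt'), (2, 'missing_in_target')]
```

Keeping the comparison pure like this also makes it unit-testable outside the streaming job, which helps when validating the pointer-based architecture.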
- 11290 Views
- 6 replies
- 3 kudos
Editing value of widget parameter within notebook code
I have a notebook with a text widget where I want to be able to edit the value of the widget within the notebook and then reference it in SQL code. For example, assuming there is a text widget named Var1 that has input value "Hello", I would want to ...
It seems the only way to use parameters in a SQL code block is dbutils.widgets, and you cannot change those parameters without removing the widget and setting it up again in code.
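The remove-and-recreate workaround from the reply can be sketched as follows. `dbutils.widgets.text`, `get`, and `remove` are the real Databricks widget calls; the `WidgetsStub` class is a stand-in so the sketch runs outside a notebook, where you would pass `dbutils.widgets` instead.

```python
class WidgetsStub:
    # Minimal stand-in for dbutils.widgets, for running outside Databricks.
    def __init__(self):
        self._w = {}
    def text(self, name, default_value, label=None):
        self._w[name] = default_value
    def get(self, name):
        return self._w[name]
    def remove(self, name):
        del self._w[name]

def reset_widget(widgets, name, new_value):
    # Widgets keep their current input, so to change the value from code
    # you remove the widget and recreate it with the new default.
    try:
        widgets.remove(name)
    except KeyError:
        pass  # widget did not exist yet
    widgets.text(name, new_value)

widgets = WidgetsStub()        # in a notebook: widgets = dbutils.widgets
widgets.text("Var1", "Hello")  # original widget, as in the question
reset_widget(widgets, "Var1", "World")
print(widgets.get("Var1"))     # -> World
```

After the reset, SQL cells that reference the widget (e.g. via `:Var1`) would pick up the new value.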
Labels: Access Data (1), ADF Linked Service (1), ADF Pipeline (1), Advanced Data Engineering (3), AI Agents (1), AI Readiness (1), Apache spark (1), ApacheSpark (1), Associate Certification (1), Automation (1), AWSDatabricksCluster (1), Azure (1), Azure databricks (3), Azure devops integration (1), AzureDatabricks (2), Big data (1), Blog (1), Caching (2), CICDForDatabricksWorkflows (1), Cluster (1), Cluster Policies (1), Cluster Pools (1), Community Event (1), Cost Optimization Effort (1), custom compute policy (1), CustomLibrary (1), Data (1), Data Analysis with Databricks (1), Data Engineering (4), Data Governance (1), Data Mesh (1), Data Processing (1), Databricks Assistant (1), Databricks Community (1), Databricks Delta Table (1), Databricks Demo Center (1), Databricks Job (1), Databricks Migration (2), Databricks Mlflow (1), Databricks Notebooks (1), Databricks Support (1), Databricks Unity Catalog (2), Databricks Workflows (1), DatabricksML (1), DBR Versions (1), Declartive Pipelines (1), DeepLearning (1), Delta Live Table (1), Delta Live Tables (1), Delta Time Travel (1), Devops (1), DimensionTables (1), DLT (2), DLT Pipelines (3), DLT-Meta (1), Dns (1), Dynamic (1), Free Databricks (3), GenAI agent (1), GenAI and LLMs (2), GenAIGeneration AI (1), Generative AI (1), Genie (1), Governance (1), Hive metastore (1), Hubert Dudek (1), Lakeflow Pipelines (1), Lakehouse (1), Lakehouse Migration (1), Lazy Evaluation (1), Learning (1), Library Installation (1), Llama (1), Medallion Architecture (1), Migrations (1), MSExcel (2), Multiagent (1), Networking (2), Partner (1), Performance (1), Performance Tuning (1), Private Link (1), Pyspark (1), Pyspark Code (1), Pyspark Databricks (1), Pytest (1), Python (1), Reading-excel (1), Scala Code (1), Scripting (1), SDK (1), Serverless (2), Spark Caching (1), SparkSQL (1), SQL (1), SQL Serverless (1), Support Ticket (1), Sync (1), Tutorial (1), Unit Test (1), Unity Catalog (4), Unity Catlog (1), Warehousing (1), Workflow Jobs (1), Workflows (3)