Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

tarunnagar
by Contributor
  • 491 Views
  • 4 replies
  • 0 kudos

Best Development Strategies for Building Reusable Data Engineering Components in Databricks

I’m looking to gather insights from data engineers, architects, and developers who have experience building scalable pipelines in Databricks. Specifically, I want to understand how to design, implement, and manage reusable data engineering components...

Latest Reply
Davidwilliamkt
New Contributor II
  • 0 kudos

The best strategy is to build modular, parameterized, Delta-optimized functions and package them into reusable Python modules, while keeping Databricks notebooks only for orchestration. This creates consistent, scalable, and easily maintainable data ...

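A minimal sketch of what such a reusable component might look like, assuming a shared Python package imported by orchestration notebooks (module, function, and table names below are illustrative, not from the thread):

# shared_lib/transforms.py -- illustrative reusable, parameterized components
from pyspark.sql import DataFrame, functions as F

def add_audit_columns(df: DataFrame, source_name: str) -> DataFrame:
    """Tag every pipeline's output with the same lineage columns."""
    return (df
            .withColumn("ingested_at", F.current_timestamp())
            .withColumn("source_system", F.lit(source_name)))

def write_delta(df: DataFrame, target_table: str) -> None:
    """Standard Delta append so all jobs write tables the same way."""
    df.write.format("delta").mode("append").saveAsTable(target_table)

A notebook then only orchestrates, e.g. write_delta(add_audit_columns(raw_df, "crm"), "main.bronze.customers").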
3 More Replies
Poorva21
by New Contributor III
  • 488 Views
  • 2 replies
  • 3 kudos

How realistic is truly end-to-end LLMOps on Databricks?

Databricks is positioning the platform as a full stack for LLM development — from data ingestion → feature/embedding pipelines → fine-tuning (Mosaic AI) → evaluation → deployment (Model Serving) → monitoring (Lakehouse Monitoring). I’m curious about r...

Latest Reply
Poorva21
New Contributor III
  • 3 kudos

Thank you, @Gecofer, for taking the time to share such a clear, experience-backed breakdown of where Databricks shines and where real-world LLMOps architectures still need supporting components. Your explanation was incredibly practical and resonates ...

1 More Replies
DBXDeveloper111
by New Contributor III
  • 364 Views
  • 3 replies
  • 3 kudos

Resolved! Software engineering in Databricks

I'm a software engineer and a bit new to Databricks. My goal is to create a model serving endpoint that interfaces with several ML models. Traditionally this would look like: API --> Service --> Data. Now using Databricks, my understanding is that it w...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 3 kudos

Just register the model and then deploy a serving endpoint to serve it.

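A rough sketch of that flow with MLflow and the Databricks Python SDK (the model name, endpoint name, and run ID are placeholders):

# Illustrative: register a model in Unity Catalog, then serve it.
import mlflow
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import EndpointCoreConfigInput, ServedEntityInput

mlflow.set_registry_uri("databricks-uc")
mv = mlflow.register_model("runs:/<run_id>/model", "main.ml.my_model")

w = WorkspaceClient()
w.serving_endpoints.create(
    name="my-model-endpoint",
    config=EndpointCoreConfigInput(
        served_entities=[ServedEntityInput(
            entity_name="main.ml.my_model",
            entity_version=mv.version,
            workload_size="Small",
            scale_to_zero_enabled=True,
        )],
    ),
)

For several models behind one API, a single serving endpoint can also host multiple served entities with traffic routing, which covers much of the traditional service layer.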
2 More Replies
bianca_unifeye
by Contributor
  • 153 Views
  • 1 replies
  • 1 kudos

Webinar: From PoC to Production: Delivering with Confidence with Databricks

Our final webinar of December is here, and we are closing the year with a powerhouse session! Speakers: As many organisations still get stuck in the PoC phase, we’re bringing clarity, structure, and real delivery practices to help teams move from promi...

Latest Reply
Advika
Community Manager
  • 1 kudos

Appreciate you sharing this with the community, @bianca_unifeye!

tarunnagar
by Contributor
  • 467 Views
  • 3 replies
  • 2 kudos

Using Databricks for Real-Time App Data

I’m exploring how to handle real-time data for an application and I keep seeing Databricks recommended as a strong option — especially with its support for streaming pipelines, Delta Live Tables, and integrations with various event sources. That said...

Latest Reply
Suheb
Contributor
  • 2 kudos

Databricks is very effective for real-time app data because it supports streaming data processing using Apache Spark and Delta Lake. It helps handle large data volumes, provides low-latency analytics, and makes it easier to build scalable event-drive...

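To make that concrete, a minimal Structured Streaming sketch from Kafka into Delta (broker, topic, checkpoint path, and table names are placeholders; spark is the notebook's session):

# Illustrative: stream app events from Kafka into a Bronze Delta table.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "app-events")
          .load())

(events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
 .writeStream
 .option("checkpointLocation", "/Volumes/main/default/checkpoints/app_events")
 .toTable("main.bronze.app_events"))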
2 More Replies
bharathjs
by New Contributor II
  • 11773 Views
  • 7 replies
  • 2 kudos

Alter table to add/update multiple column comments

I was wondering if there's a way to alter a table and add/update comments for multiple columns at once using SQL or API calls. For instance: ALTER TABLE <table_name> CHANGE COLUMN <col1> COMMENT '<comment1>', CHANGE COLUMN <col2> COMMENT '<comment2>'; ...

Latest Reply
dxwell
New Contributor II
  • 2 kudos

The correct SQL syntax for this is:
ALTER TABLE your_table_name ALTER COLUMN
  col1 COMMENT 'comment1',
  col2 COMMENT 'comment2',
  col3 COMMENT 'comment3';

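If your runtime rejects the multi-column form (it is relatively recent), a small PySpark loop issuing one ALTER per column achieves the same result (table and comments are illustrative):

# Illustrative fallback: one ALTER statement per column.
comments = {
    "col1": "comment1",
    "col2": "comment2",
    "col3": "comment3",
}
for col, comment in comments.items():
    escaped = comment.replace("'", "''")  # escape single quotes for SQL
    spark.sql(f"ALTER TABLE your_table_name ALTER COLUMN {col} COMMENT '{escaped}'")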
6 More Replies
tarunnagar
by Contributor
  • 341 Views
  • 3 replies
  • 0 kudos

How to Connect Databricks with Web and Mobile Apps

Hi everyone, I’m exploring ways to leverage Databricks for building data-driven web and mobile applications and wanted to get some insights from this community. Databricks is great for processing large datasets, running analytics, and building machine...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 0 kudos

Check out Databricks Apps: you pass Databricks resources to the app and then use the databricks-sdk to interact with them.

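A rough sketch of that pattern inside a Databricks App, assuming a SQL warehouse resource is attached to the app (the env var name and query are illustrative):

# Illustrative: a Databricks App querying a warehouse via databricks-sdk.
import os
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # inside an App, auth is injected for the app's service principal
resp = w.statement_execution.execute_statement(
    warehouse_id=os.environ["DATABRICKS_WAREHOUSE_ID"],
    statement="SELECT * FROM main.analytics.daily_metrics LIMIT 100",
)
rows = resp.result.data_array if resp.result else []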
2 More Replies
boitumelodikoko
by Valued Contributor
  • 686 Views
  • 1 replies
  • 3 kudos

Data Engineering Lessons

Getting into the data space can feel overwhelming, with so many tools, terms, and technologies. But after years in... Expect failure. Design for it. Jobs will fail. The data will be late. Build systems that can recover gracefully, and continually monitor ...

Latest Reply
Gecofer
Contributor II
  • 3 kudos

Hi @boitumelodikoko, a few more principles I always share with people entering the data space: Observability is non-negotiable. If you can’t see what your pipelines are doing, you can’t fix what breaks. Good logging, metrics, and alerts save countless ho...

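On the logging point, even a minimal structured pattern in a job pays off quickly (a sketch; df stands in for whatever batch is being processed):

# Illustrative: structured logging around a pipeline step.
import logging, time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

start = time.time()
try:
    row_count = df.count()  # the step being observed
    log.info("step=bronze_load rows=%d duration_s=%.1f", row_count, time.time() - start)
except Exception:
    log.exception("step=bronze_load failed")  # alerts can key off ERROR-level logs
    raise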
n1399
by New Contributor II
  • 1189 Views
  • 2 replies
  • 0 kudos

On Demand Pool Configuration & Policy definition

I'm using a Job cluster and created compute policies for library management, and now I'm trying to use pools in Databricks. I'm getting an error like this: Cluster validation error: Validation failed for azure_attributes.spot_bid_max_price from pool, the ...

Latest Reply
Poorva21
New Contributor III
  • 0 kudos

This error occurs because instance pools require a concrete spot bid max price value, even if the cluster policy marks it as unlimited. Set an explicit value (e.g., 100) directly in the instance pool configuration, or switch the pool to on-demand nod...

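A sketch of setting that value explicitly when creating the pool with the Python SDK (pool name and node type are placeholders):

# Illustrative: create an Azure instance pool with a concrete spot bid max price.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import (
    InstancePoolAzureAttributes,
    InstancePoolAzureAttributesAvailability,
)

w = WorkspaceClient()
w.instance_pools.create(
    instance_pool_name="jobs-pool",
    node_type_id="Standard_DS3_v2",
    azure_attributes=InstancePoolAzureAttributes(
        availability=InstancePoolAzureAttributesAvailability.SPOT_AZURE,
        spot_bid_max_price=100.0,  # explicit value instead of -1 (unlimited)
    ),
)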
1 More Replies
mrstevegross
by Contributor III
  • 5839 Views
  • 2 replies
  • 2 kudos

How to resolve "cannot import name 'Iterable' from 'collections'" error?

I'm running a DBR/Spark job using a container. I've set docker_image.url to `docker.io/databricksruntime/standard:13.3-LTS`, as well as the Spark env var `DATABRICKS_RUNTIME_VERSION=13.3`. At runtime, however, I'm encountering this error: ImportError...

Latest Reply
Poorva21
New Contributor III
  • 2 kudos

Go to Compute → Your Cluster / Job Compute, change the Databricks Runtime to Databricks Runtime 13.3 LTS, and re-run your job with the same container.

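The underlying cause is that Python 3.10+ (which newer runtimes ship) removed the old collections aliases; if you control the failing code, this import pattern works on both sides of the change:

# 'Iterable' lives in collections.abc since Python 3.3; the alias in
# 'collections' was removed in Python 3.10.
try:
    from collections.abc import Iterable
except ImportError:  # only very old Pythons land here
    from collections import Iterable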
1 More Replies
steveKris
by New Contributor
  • 449 Views
  • 6 replies
  • 4 kudos

Resolved! Extract all users from Databricks Groups

Hey everyone, we are trying to get an overview of all users that we have in our Databricks groups. We have tried to do so with the REST API as well as with SQL queries (with normal developer accounts as well as workspace administrator accounts). The pr...

Latest Reply
Poorva21
New Contributor III
  • 4 kudos

Use the Databricks SQL system users table:
SELECT * FROM system.users
This only shows fully provisioned users; users pending invitation may not appear.

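If the system table doesn't cover your case, a sketch using the SCIM API through the Python SDK (requires permission to read groups; prints group name, member display name, and member ID):

# Illustrative: dump the members of every workspace group via databricks-sdk.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
for group in w.groups.list(attributes="displayName,members"):
    for member in group.members or []:
        print(group.display_name, member.display, member.value)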
5 More Replies
Gokkul007
by New Contributor
  • 252 Views
  • 2 replies
  • 1 kudos

How to create a Lakebase table?

Hi Databricks community, I want to create a Lakebase table that is synced with a Delta table, so whenever the Delta table is updated the changes should be available in the Lakebase table. Now I want to create a Databricks Streamlit application and ma...

Latest Reply
Poorva21
New Contributor III
  • 1 kudos

Yes, it’s possible to have a Lakehouse table synced with a Delta table in Unity Catalog. You have a few options:
  • Direct read: Register the Delta table in Unity Catalog and query it directly from your Streamlit app.
  • Delta Live Tables (DLT): Create a DLT...

1 More Replies
gokkul
by New Contributor II
  • 179 Views
  • 1 replies
  • 1 kudos

Help with a Databricks Streamlit application question

Hi Databricks community, I have a question regarding a Databricks Streamlit application. I have a Databricks Streamlit application that takes input values from the user through the Streamlit UI. Now I want to store these input values in a Delta table in U...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @gokkul, your app's service principal needs proper permissions to write to the UC table. You also need to use the Python databricks-sdk to interact with UC objects (i.e., read/save a table). You can get some inspiration from the following Databricks cookbo...

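A minimal sketch of that write path from inside the app, assuming a SQL warehouse resource and parameterized SQL (env var, table, and column names are illustrative):

# Illustrative: persist Streamlit form input to a UC Delta table.
import os
import streamlit as st
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.sql import StatementParameterListItem

name = st.text_input("Name")
if st.button("Save") and name:
    w = WorkspaceClient()  # auth comes from the app's service principal
    w.statement_execution.execute_statement(
        warehouse_id=os.environ["DATABRICKS_WAREHOUSE_ID"],
        statement="INSERT INTO main.app.user_inputs (name) VALUES (:name)",
        parameters=[StatementParameterListItem(name="name", value=name)],
    )
    st.success("Saved")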
tarunnagar
by Contributor
  • 909 Views
  • 7 replies
  • 4 kudos

Resolved! How to Optimize Data Pipeline Development on Databricks for Large-Scale Workloads?

Hi everyone, I’m working on building and optimizing data pipelines in Databricks, especially for large-scale workloads, and I want to learn from others who have hands-on experience with performance tuning, architecture decisions, and best practices. I’...

Latest Reply
jameswood32
Contributor
  • 4 kudos

Optimizing Databricks pipelines for large-scale workloads mostly comes down to smart architecture + efficient Spark practices. Key tips from real-world users:
  • Use Delta Lake for ACID transactions, incremental updates, and schema enforcement.
  • Partition...

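Two of those tips in code form, as a hedged sketch (table and column names are illustrative):

# Illustrative: partitioned Delta write plus compaction/clustering.
(df.write.format("delta")
 .mode("overwrite")
 .partitionBy("event_date")  # partition on a low-cardinality column
 .saveAsTable("main.silver.events"))

# Compact small files and cluster by a commonly filtered column.
spark.sql("OPTIMIZE main.silver.events ZORDER BY (user_id)")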
6 More Replies
AswinGovindan77
by New Contributor II
  • 1443 Views
  • 2 replies
  • 2 kudos

Databricks community group in Kerala

Calling All Data Enthusiasts in Kerala! Hey everyone, I'm excited about the idea of launching a Databricks Community Group here in Kerala! This group would be a hub for learning, sharing knowledge, and networking among data enthusiasts, analysts, a...

Latest Reply
sreekufeg
New Contributor II
  • 2 kudos

Great initiative! It's good to see the tech community growing here. I’m representing Fegno Technologies, a web and mobile app development company in Kochi. We are always keen to stay updated on the latest data engineering trends and cloud platforms.

1 More Replies

Join Us as a Local Community Builder!

Passionate about hosting events and connecting people? Help us grow a vibrant local community—sign up today to get started!
