Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Hubert-Dudek
by Esteemed Contributor III
  • 11 Views
  • 0 replies
  • 2 kudos

Databricks Advent Calendar 2025 #10

Databricks goes native on Excel. You can now ingest and query .xls/.xlsx files directly in Databricks (SQL and PySpark, batch and streaming), with automatic schema/type inference, sheet and cell-range targeting, and evaluated formulas. No extra libraries required.
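A minimal sketch of what this could look like from PySpark, assuming a notebook with a `spark` session. The format name "excel" and the option names are illustrative guesses rather than the confirmed API, and the file path is a placeholder:

    # Hypothetical sketch: read one sheet of an Excel file natively.
    # "excel", "sheetName", and "dataAddress" are assumed names; check
    # the release notes for the actual format and option names.
    df = (
        spark.read.format("excel")
        .option("sheetName", "Sales")       # target a specific sheet
        .option("dataAddress", "A1:F100")   # target a cell range
        .option("inferSchema", "true")      # automatic schema/type inference
        .load("/Volumes/main/raw/files/report.xlsx")
    )
    display(df)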

bianca_unifeye
by Contributor
  • 13 Views
  • 1 reply
  • 1 kudos

Webinar: From PoC to Production: Delivering with Confidence with Databricks

  Our final webinar of December is here and we are closing the year with a powerhouse session! Speakers: As many organisations still get stuck in the PoC phase, we’re bringing clarity, structure, and real delivery practices to help teams move from promi...

Latest Reply
Advika
Databricks Employee
  • 1 kudos

Appreciate you sharing this with the community, @bianca_unifeye!

lance-gliser
by New Contributor
  • 4320 Views
  • 8 replies
  • 0 kudos

Databricks apps - Volumes and Workspace - FileNotFound issues

I have a Databricks App I need to integrate with Volumes using local Python os functions. I've set up a simple test: def __init__(self, config: ObjectStoreConfig): self.config = config # Ensure our required paths are created ...
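For reference, a minimal standalone version of that test, assuming a Unity Catalog volume exposed at the standard /Volumes/<catalog>/<schema>/<volume> path. The catalog, schema, and volume names are placeholders, and the app also needs the volume attached as a resource:

    import os

    # Volumes appear to code as ordinary directories under /Volumes/...
    # Names below are placeholders.
    VOLUME_ROOT = "/Volumes/main/default/my_volume"

    # Ensure our required paths are created, then list what's there.
    os.makedirs(os.path.join(VOLUME_ROOT, "staging"), exist_ok=True)
    print(os.listdir(VOLUME_ROOT))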

Latest Reply
ksboli
New Contributor II
  • 0 kudos

I also cannot read from a Volume from a Databricks App and would be interested in a solution.

7 More Replies
tarunnagar
by Contributor
  • 93 Views
  • 3 replies
  • 2 kudos

Using Databricks for Real-Time App Data

I’m exploring how to handle real-time data for an application and I keep seeing Databricks recommended as a strong option — especially with its support for streaming pipelines, Delta Live Tables, and integrations with various event sources. That said...

Latest Reply
Suheb
New Contributor III
  • 2 kudos

Databricks is very effective for real-time app data because it supports streaming data processing using Apache Spark and Delta Lake. It helps handle large data volumes, provides low-latency analytics, and makes it easier to build scalable event-drive...
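As a rough illustration of that pattern, a minimal Structured Streaming sketch that reads from an event source and writes to a Delta table. The Kafka broker, topic, checkpoint path, and table name are all hypothetical:

    # Stream events from Kafka into a Delta table (names are placeholders).
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "app-events")
        .load()
    )

    query = (
        events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
        .writeStream.format("delta")
        .option("checkpointLocation", "/Volumes/main/default/checkpoints/app_events")
        .toTable("main.default.app_events")
    )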

2 More Replies
greengil
by Visitor
  • 22 Views
  • 1 reply
  • 1 kudos

Create function issue

Hello - I am following some online code to create a function as follows: CREATE OR REPLACE FUNCTION my_catalog.my_schema.insert_data_function(col1_value STRING, col2_value INT) RETURNS BOOLEAN COMMENT 'Inserts dat...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @greengil, you can't use functions to modify data. They're intended to return a scalar value or a table. If you need to modify the contents of a table, use a stored procedure instead.
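To make the distinction concrete, a minimal sketch of a valid scalar SQL UDF, run here through spark.sql; the catalog and schema names are placeholders. It computes and returns a value rather than inserting data:

    # SQL UDFs must return a scalar or a table; DML belongs elsewhere.
    spark.sql("""
        CREATE OR REPLACE FUNCTION my_catalog.my_schema.add_tax(price DOUBLE)
        RETURNS DOUBLE
        COMMENT 'Returns the price including a 20% tax'
        RETURN price * 1.2
    """)
    spark.sql("SELECT my_catalog.my_schema.add_tax(100.0) AS total").show()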

bharathjs
by New Contributor II
  • 11243 Views
  • 7 replies
  • 2 kudos

Alter table to add/update multiple column comments

I was wondering if there's a way to alter a table and add/update comments for multiple columns at once using SQL or API calls. For instance: ALTER TABLE <table_name> CHANGE COLUMN <col1> COMMENT '<comment1>', CHANGE COLUMN <col2> COMMENT 'comment2'; ...

Latest Reply
dxwell
New Contributor II
  • 2 kudos

The correct SQL syntax for this is: ALTER TABLE your_table_name ALTER COLUMN col1 COMMENT 'comment1', col2 COMMENT 'comment2', col3 COMMENT 'comment3';
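If that multi-column form isn't supported on your runtime, a per-column loop is a safe fallback, sketched here via spark.sql with placeholder table and column names:

    # Apply comments one column at a time; the single-column
    # ALTER COLUMN ... COMMENT form works across runtimes.
    comments = {
        "col1": "comment1",
        "col2": "comment2",
        "col3": "comment3",
    }
    for col, text in comments.items():
        spark.sql(
            f"ALTER TABLE main.default.my_table ALTER COLUMN {col} COMMENT '{text}'"
        )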

6 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 44 Views
  • 0 replies
  • 2 kudos

Databricks Advent Calendar 2025 #9

Tags, whether assigned manually or automatically by the “data classification” service, can be protected using policies. Column masking can automatically mask columns carrying a given tag for all users except those with elevated access.
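As a sketch of what such a policy effectively applies under the hood: a mask function attached to a column, revealing values only to an elevated group. Function, table, column, and group names are placeholders:

    # A column mask that shows real values only to an elevated group.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.default.mask_pii(val STRING)
        RETURN CASE
            WHEN is_account_group_member('pii_readers') THEN val
            ELSE '***'
        END
    """)
    spark.sql("""
        ALTER TABLE main.default.customers
        ALTER COLUMN email SET MASK main.default.mask_pii
    """)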

tarunnagar
by Contributor
  • 118 Views
  • 3 replies
  • 0 kudos

How to Connect Databricks with Web and Mobile Apps

Hi everyone,I’m exploring ways to leverage Databricks for building data-driven web and mobile applications and wanted to get some insights from this community. Databricks is great for processing large datasets, running analytics, and building machine...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 0 kudos

Check out Databricks Apps: you pass Databricks resources to the app and then use the databricks-sdk to interact with them.
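A minimal sketch of that pattern, assuming a SQL warehouse has been attached to the app as a resource; the warehouse ID is a placeholder:

    from databricks.sdk import WorkspaceClient

    # Inside a Databricks App, WorkspaceClient() picks up credentials
    # from the app environment automatically.
    w = WorkspaceClient()
    result = w.statement_execution.execute_statement(
        warehouse_id="<warehouse-id>",
        statement="SELECT 42 AS answer",
    )
    print(result.status)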

2 More Replies
boitumelodikoko
by Valued Contributor
  • 614 Views
  • 1 reply
  • 3 kudos

Data Engineering Lessons

Getting into the data space can feel overwhelming, with so many tools, terms, and technologies. But after years in... Expect failure. Design for it. Jobs will fail. The data will be late. Build systems that can recover gracefully, and continually monitor ...

Latest Reply
Gecofer
Contributor
  • 3 kudos

Hi @boitumelodikoko, a few more principles I always share with people entering the data space: Observability is non-negotiable. If you can’t see what your pipelines are doing, you can’t fix what breaks. Good logging, metrics, and alerts save countless ho...

Poorva21
by New Contributor
  • 108 Views
  • 1 reply
  • 2 kudos

How realistic is truly end-to-end LLMOps on Databricks?

Databricks is positioning the platform as a full stack for LLM development — from data ingestion → feature/embedding pipelines → fine-tuning (Mosaic AI) → evaluation → deployment (Model Serving) → monitoring (Lakehouse Monitoring).I’m curious about r...

Latest Reply
Gecofer
Contributor
  • 2 kudos

Hi @Poorva21 In several projects I’ve seen and worked on, Databricks gets you very close to a full end-to-end LLMOps platform, but not completely. It realistically covers most of the lifecycle, but in real production setups you still complement it wi...
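For one concrete slice of that lifecycle, a minimal sketch of registering a model in Unity Catalog with MLflow so it can be deployed to Model Serving. The trivial wrapper and the three-level model name are placeholders for a real fine-tuned model:

    import mlflow
    import mlflow.pyfunc

    # Placeholder model; a real setup would wrap a fine-tuned LLM.
    class EchoModel(mlflow.pyfunc.PythonModel):
        def predict(self, context, model_input):
            return model_input

    mlflow.set_registry_uri("databricks-uc")  # register in Unity Catalog
    with mlflow.start_run():
        mlflow.pyfunc.log_model(
            artifact_path="model",
            python_model=EchoModel(),
            input_example=["hello"],  # lets MLflow infer the required signature
            registered_model_name="main.default.echo_llm",
        )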

Hubert-Dudek
by Esteemed Contributor III
  • 76 Views
  • 0 replies
  • 2 kudos

Databricks Advent Calendar 2025 #7

Imagine that all a data engineer or analyst needs to do to read from a REST API is call spark.read(): no direct request calls, no manual JSON parsing, just spark.read. That’s the power of a custom Spark Data Source. Soon, we will see a surge of open-sour...
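For a taste of how that looks with the Python Data Source API available in recent Spark/DBR releases, here is a minimal sketch; the REST endpoint, schema, and field names are hypothetical:

    from pyspark.sql.datasource import DataSource, DataSourceReader

    class RestReader(DataSourceReader):
        def __init__(self, options):
            self.url = options.get("url")

        def read(self, partition):
            # Runs on executors; import locally so it serializes cleanly.
            import requests
            for item in requests.get(self.url).json():
                yield (item["id"], item["name"])

    class RestDataSource(DataSource):
        @classmethod
        def name(cls):
            return "rest"

        def schema(self):
            return "id INT, name STRING"

        def reader(self, schema):
            return RestReader(self.options)

    spark.dataSource.register(RestDataSource)
    df = spark.read.format("rest").option("url", "https://api.example.com/items").load()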

Hubert-Dudek
by Esteemed Contributor III
  • 99 Views
  • 0 replies
  • 2 kudos

Databricks Advent Calendar 2025 #6

DQX is one of the most crucial Databricks Labs projects this year, and we can expect more and more of its checks to be supported natively in Databricks. More about DQX at https://databrickslabs.github.io/dqx/
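A rough sketch of DQX usage, with the caveat that the class, method, and check-argument names here are recalled from the DQX docs and may differ between versions; the input DataFrame and column are placeholders:

    from databricks.labs.dqx.engine import DQEngine
    from databricks.sdk import WorkspaceClient

    # Declarative checks; split rows into valid and quarantined sets.
    checks = [{
        "criticality": "error",
        "check": {"function": "is_not_null", "arguments": {"col_name": "id"}},
    }]

    dq_engine = DQEngine(WorkspaceClient())
    valid_df, quarantine_df = dq_engine.apply_checks_by_metadata_and_split(
        input_df, checks
    )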

n1399
by New Contributor II
  • 1072 Views
  • 2 replies
  • 0 kudos

On Demand Pool Configuration & Policy definition

I'm using a job cluster and created compute policies for library management, and now I'm trying to use pools in Databricks. I'm getting an error like this: Cluster validation error: Validation failed for azure_attributes.spot_bid_max_price from pool, the ...

Latest Reply
Poorva21
New Contributor
  • 0 kudos

This error occurs because instance pools require a concrete spot bid max price value, even if the cluster policy marks it as unlimited. Set an explicit value (e.g., 100) directly in the instance pool configuration, or switch the pool to on-demand nod...
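A sketch of that fix through the SDK, with an illustrative pool name and node type; spot_bid_max_price is expressed as a percentage of the on-demand price:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.compute import InstancePoolAzureAttributes

    # Give the pool an explicit bid so policy validation has a concrete
    # value to compare against.
    w = WorkspaceClient()
    w.instance_pools.create(
        instance_pool_name="jobs-pool",
        node_type_id="Standard_DS3_v2",
        azure_attributes=InstancePoolAzureAttributes(spot_bid_max_price=100),
    )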

1 More Reply
mrstevegross
by Contributor III
  • 5125 Views
  • 2 replies
  • 2 kudos

How to resolve "cannot import name 'Iterable' from 'collections'" error?

I'm running a DBR/Spark job using a container. I've set docker_image.url to `docker.io/databricksruntime/standard:13.3-LTS`, as well as the Spark env var `DATABRICKS_RUNTIME_VERSION=13.3`. At runtime, however, I'm encountering this error: ImportError...

Latest Reply
Poorva21
New Contributor
  • 2 kudos

Go to Compute → Your Cluster / Job Compute. Change the Databricks Runtime to Databricks Runtime 13.3 LTS. Re-run your job with the same container.
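If the job is defined through the API rather than the UI, the equivalent fix is pinning spark_version in the cluster spec so it matches the container; node type and worker count below are illustrative:

    # Jobs API new_cluster spec pinned to 13.3 LTS to match the image.
    new_cluster = {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2,
        "docker_image": {"url": "docker.io/databricksruntime/standard:13.3-LTS"},
    }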

1 More Reply
