PSA: Community Edition retires on January 1, 2026. Move to the Free Edition today to keep your work.

Databricks Free Edition is the new home for personal learning and exploration on Databricks. It’s perpetually free and built on modern Databricks - the same Data Intelligence Platform used by professionals. Free Edition lets you learn professional da...

  • 1694 Views
  • 1 reply
  • 4 kudos
2 weeks ago
🎤 Call for Presentations: Data + AI Summit 2026 is Open!

June 15–18, 2026 Are you building the future with data and AI? Then this is your moment. The Call for Proposals for Data + AI Summit 2026 is officially open, and we want to hear from builders, practitioners, and innovators across the data and AI com...

  • 2112 Views
  • 4 replies
  • 6 kudos
2 weeks ago
Last Chance: Help Shape the 2026 Data + AI Summit | Win a Full Conference Pass

Your voice matters to us. We are planning the 2026 Data + AI Summit, and we’d love your input on what would make the experience even more valuable for you. Take a few minutes to share your feedback through our quick survey — your insights directly in...

  • 932 Views
  • 3 replies
  • 5 kudos
2 weeks ago
Level Up with Databricks Specialist Sessions

How to Register & Prepare If you're interested in advancing your skills with Databricks through a Specialist Session, here's a clear guide on how to register and what free courses you can take to prepare effectively. How to Begin Your Learning Path S...

  • 3392 Views
  • 2 replies
  • 9 kudos
10-02-2025
Celebrating Our First Brickster Champion: Louis Frolio

Our Champion program has always celebrated the customers who go above and beyond to engage, help others, and uplift the Community. Recently, we have seen remarkable participation from Bricksters as well—and their impact deserves recognition too. Begi...

  • 1533 Views
  • 7 replies
  • 14 kudos
11-21-2025
🌟 Community Pulse: Your Weekly Roundup! December 12 – 21, 2025

Learning doesn’t pause, and neither does the impact this Community continues to create! Across threads and time zones, the knowledge kept moving. Catch up on the highlights. Voices Shaping the Week: featuring the voices that brought clarity, ...

  • 496 Views
  • 1 reply
  • 1 kudos
a week ago

Community Activity

AyubkhanNazar
by New Contributor
  • 15 Views
  • 0 replies
  • 0 kudos

Clarification on DEA Certification Content Updates and Unity Catalog Requirement

Hello, I currently hold the Databricks Data Engineer Associate (DEA) certification, which I passed in September 2024. I am planning to revisit the material since I have not been working with Databricks for a long time. While reviewing the updated cours...

Hubert-Dudek
by Databricks MVP
  • 24 Views
  • 0 replies
  • 0 kudos

Goodbye community edition, Long live the free edition

I just logged in to the community edition for the last time and spun up the cluster for the last time. Today is the last day, but it's still there. Haven't logged in there for a while, as the free edition offers much more, but it is a place where man...

Brahmareddy
by Esteemed Contributor
  • 23 Views
  • 0 replies
  • 0 kudos

Happy New Year 2026 : Building, Learning, and Growing Together in the Year of Data + AI

Happy New Year to the Austin Databricks Community! A new year always feels like a fresh notebook. Clean pages, big ideas, and the excitement of building something better than before. As we step into this year, one thing is clear: Data + AI is no longe...

DBXDeveloper111
by New Contributor III
  • 57 Views
  • 2 replies
  • 0 kudos

ModuleNotFoundError: No module named 'MY-MODEL'

I'm currently trying to create a model serving end point around a model I've recently created. I'm trying to wrap my head around an error. The model is defined as below class MY-MODEL(mlflow.pyfunc.PythonModel): def load_context(self, context): ...

Latest Reply
JAHNAVI
Databricks Employee
  • 0 kudos

@DBXDeveloper111 could you please create the class as MYMODEL, without the hyphen, and then try importing it, as a hyphen is an invalid identifier in Python. Please confirm if you are still facing the issue after this change.

1 More Replies
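The reply above can be demonstrated outside Databricks entirely: a Python class name must be a valid identifier, so a hyphen fails at parse time, before MLflow or model serving ever sees the class. A minimal illustration (the MLflow wrapper itself is unchanged by the rename):

```python
# A hyphen is not allowed in a Python identifier, so `class MY-MODEL:`
# fails with a SyntaxError before any MLflow code runs.
bad_src = "class MY-MODEL:\n    pass\n"
try:
    compile(bad_src, "<model>", "exec")
except SyntaxError as e:
    print("invalid identifier:", e.msg)

# Renaming the class to a valid identifier compiles fine.
good_src = "class MYMODEL:\n    pass\n"
compile(good_src, "<model>", "exec")
print("MYMODEL parses OK")
```

The same rule applies to module and package names, which is why the serving endpoint reports ModuleNotFoundError for the hyphenated name.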
amekojc
by New Contributor II
  • 63 Views
  • 1 reply
  • 0 kudos

How to not make tab headers show when embedding dashboard

When embedding the AI BI dashboard, is there a way to not show the tabs and instead use our own UI tabs for navigation? Currently, there are two tab headers - one in the Databricks dashboard and then another tab section in our embedding webp...

Latest Reply
mukul1409
New Contributor
  • 0 kudos

Hi @amekojc At the moment, Databricks AI BI Dashboards do not support hiding or disabling the native dashboard tabs when embedding. The embedded dashboard always renders with its own tab headers, and there is no configuration or API to control tab vi...

libpekin
by New Contributor II
  • 105 Views
  • 2 replies
  • 2 kudos

Resolved! Databricks Free Edition - Accessing files in S3

Hello, attempting to read/write files from S3 but got the error below. I am on the free edition (serverless by default). I'm using access_key and secret_key. Has anyone done this successfully? Thanks! Directly accessing the underlying Spark driver JVM us...

Latest Reply
libpekin
New Contributor II
  • 2 kudos

Thanks @Sanjeeb2024, I was able to confirm as well.

1 More Replies
RyanHager
by Contributor
  • 70 Views
  • 0 replies
  • 1 kudos

Liquid Clustering and S3 Performance

Are there any performance concerns when using liquid clustering with AWS S3? I believe all the parquet files go in the same folder (prefix in AWS S3 terms) versus folders per partition when using "partition by". And there is this note on S3 performa...

emma_s
by Databricks Employee
  • 65 Views
  • 0 replies
  • 1 kudos

Databricks Excel Reader

We’ve recently created a new Excel reader function, and I decided to have a play around with it. I’ve used an open dataset for this tutorial, so you can follow along too. Using file available here - https://www.ons.gov.uk/employmentandlabourmarket/pe...

Sanjeeb2024
by Contributor
  • 189 Views
  • 13 replies
  • 1 kudos

Need Help - System tables that contain all Databricks users and service principal details!

Hi all - I am trying to create a dashboard where I need to list all users and service principals along with groups and understand their Databricks usage. Is there any table available in Databricks that contains user and service principal details? ...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, I can't find any reference to a user system table in our docs. Instead, the recommended approach is to use the API to return users, groups, and service principals. You can either run this using the Workspace Client if you only have workspace admin p...

12 More Replies
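Since there is no user/service-principal system table, the SDK route mentioned in the reply above can be sketched as follows. The `collect_principals` helper is my own name, and it only assumes the documented `users.list()`, `service_principals.list()`, and `groups.list()` iterators on the databricks-sdk `WorkspaceClient`:

```python
def collect_principals(client):
    """Gather display names of users, service principals, and groups from a
    databricks-sdk WorkspaceClient-like object (each .list() yields objects
    with a display_name attribute)."""
    return {
        "users": [u.display_name for u in client.users.list()],
        "service_principals": [sp.display_name for sp in client.service_principals.list()],
        "groups": [g.display_name for g in client.groups.list()],
    }

# Real usage (requires the databricks-sdk package and admin permissions):
#   from databricks.sdk import WorkspaceClient
#   principals = collect_principals(WorkspaceClient())
```

The result could then be written to a Delta table on a schedule to feed the dashboard, since the SCIM APIs are the system of record for identities rather than any system table.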
erigaud
by Honored Contributor
  • 3060 Views
  • 6 replies
  • 4 kudos

Resolved! DLT-Asset bundle : Pipelines do not support a setting a run_as user that is different from the owner

Hello! We're using Databricks asset bundles to deploy to several environments using a DevOps pipeline. The service principal running the CICD pipeline and creating the job (owner) is not the same as the SPN that will be running the jobs (run_as). This...

Latest Reply
Coffee77
Contributor III
  • 4 kudos

Maybe I'm not catching this or missing something else, but I've got the following job in one of my demo workspaces: Creator is my user and the job runs as a service principal account. Those are different identities. I got this by deploying the job with...

5 More Replies
Divaker_Soni
by New Contributor III
  • 81 Views
  • 1 reply
  • 0 kudos

Databricks Table Protection Features

This article provides an overview of key Databricks features and best practices that protect Gold tables from accidental deletion. It also covers the implications if both the Gold and Landing layers are deleted without active retention or backup. Cor...

Latest Reply
Sanjeeb2024
Contributor
  • 0 kudos

Thanks for sharing this. Time Travel applies to all Delta tables in Databricks; it is not restricted to Gold.

ndw
by New Contributor III
  • 318 Views
  • 7 replies
  • 1 kudos

Azure Content Understanding Equivalent

Hi all, I am exploring Databricks services or components that could be considered equivalent to Azure Document Intelligence and Azure Content Understanding. Our customer works with dozens of Excel and PDF files. These files follow multiple template typ...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

It would work, but you will need to specify and manage the ranges or number of header rows manually. You could potentially read the whole sheet in and then write some code that identifies the range of interest and cleans it during parsing. My reco...

6 More Replies
Suheb
by Contributor
  • 83 Views
  • 1 reply
  • 0 kudos

Why does my MLflow model training job fail on Databricks with an out‑of‑memory error for large datas

I am trying to train a machine learning model using MLflow on Databricks. When my dataset is very large, the training stops and gives an ‘out-of-memory’ error. Why does this happen and how can I fix it?

Latest Reply
mukul1409
New Contributor
  • 0 kudos

Hi @Suheb, this happens because during training the entire dataset or large intermediate objects are loaded into the driver or executor memory, which can exceed the available memory on the cluster, especially when using large DataFrames, collect...

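One common mitigation for the driver-side part of this, sketched below under the assumption that training is single-node (e.g. scikit-learn) over a Spark DataFrame: bound what reaches the driver by sampling before calling toPandas(). The helper name and default fraction are my own, not from the thread:

```python
def to_driver_sample(df, fraction=0.01, seed=42):
    """Return a small pandas sample of a Spark DataFrame for local training.

    Calling df.toPandas() on the full dataset pulls every row into driver
    memory; sampling first keeps the collected slice bounded.
    """
    return df.sample(fraction=fraction, seed=seed).toPandas()

# For data that genuinely cannot fit on one machine, prefer distributed
# training (e.g. pyspark.ml estimators), which keeps rows on the executors.
```

Logging the sampled run with MLflow works unchanged, since MLflow only sees the in-memory frame handed to the trainer.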
Gaurav_784295
by New Contributor III
  • 3470 Views
  • 3 replies
  • 0 kudos

pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets

pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets. Getting this error while writing; can anyone please tell how we can resolve it?

Latest Reply
preetmdata
New Contributor II
  • 0 kudos

Hi @Gaurav_784295, in Spark streaming, please use a time-based column in the window function. In streaming we can't say "last 10 rows", "limit 10", etc., because a stream never ends. So when you use window, please don't use columns lik...

2 More Replies
espenol
by New Contributor III
  • 27327 Views
  • 11 replies
  • 13 kudos

input_file_name() not supported in Unity Catalog

Hey, so our notebooks reading a bunch of JSON files from storage typically use input_file_name() when moving from raw to bronze, but after upgrading to Unity Catalog we get an error message: AnalysisException: [UC_COMMAND_NOT_SUPPORTED] input_file_n...

Latest Reply
ramanpreet
New Contributor
  • 13 kudos

The reason 'input_file_name' is not supported is that this function was only available in older Databricks Runtime versions; it was deprecated from Databricks Runtime 13.3 LTS onwards.

10 More Replies
Welcome to the Databricks Community!

Once you are logged in, you will be ready to post content, ask questions, participate in discussions, earn badges and more.

Spend a few minutes exploring Get Started Resources, Learning Paths, Certifications, and Platform Discussions.

Connect with peers through User Groups and stay updated by subscribing to Events. We are excited to see you engage!

Join Us as a Local Community Builder!

Passionate about hosting events and connecting people? Help us grow a vibrant local community—sign up today to get started!

Sign Up Now

Latest from our Blog

[PARTNER BLOG] Zerobus Ingest on Databricks

Introduction TL;DR ZeroBus Ingest is a serverless, Kafka-free ingestion service in Databricks that allows applications and IoT devices to stream data directly into Delta Lake with low latency and mini...

  • 410 Views
  • 1 kudos