Meet the Databricks MVPs

The Databricks MVP Program is our way of thanking and recognizing the community members, data scientists, data engineers, developers and open source enthusiasts who go above and beyond to uplift the data and AI community. Whether they’re speaking at ...

  • 68 Views
  • 0 replies
  • 4 kudos
13 hours ago
Databricks training invests in closing the data + AI skills gap across enterprises

The Data + AI Skills Gap The “skills gap” has been a concern for CEOs and leaders for many years, and the gap is only widening. According to McKinsey, as many as 375 million workers globally might have to change occupations soon to meet company needs...

  • 153 Views
  • 0 replies
  • 1 kudos
Monday
Now Hiring: Databricks Community Technical Moderator

Apply Now! Are you passionate about data and want to make a difference for thousands of data practitioners? Databricks is looking for a dedicated and knowledgeable Community Technical Moderator to guide our thriving online community and empower users...

  • 716 Views
  • 1 reply
  • 3 kudos
2 weeks ago
Insights from a global survey of 1,100 technologists and interviews with 28 CIOs

How to unlock enterprise AI: “You can have all the AI in the world, but if it’s on a shaky data foundation, then it’s not going to bring you any value.” — Carol Clements, Chief Digital and Technology Officer, JetBlue Companies everywhere have been qu...

  • 212 Views
  • 1 reply
  • 1 kudos
Thursday
Data + AI Summit: Call for Presentations

Are you solving real-world problems with data and AI? Your peers want to be inspired by your work! At Data + AI Summit 2025, we’re looking for data engineers, ML engineers, data scientists and analysts who are pushing the limits of AI, analytics and ...

  • 322 Views
  • 0 replies
  • 2 kudos
a week ago
Season's Speedings: Databricks SQL Delivers 4x Performance Boost Over Two Years

(Chart: the Databricks Performance Index is derived statistically from repeating workloads, accounting for changes irrelevant to the engine, and computed against billions of production queries. Lower is better.) As the season of giving approaches, we at Databri...

  • 377 Views
  • 0 replies
  • 2 kudos
a week ago

Community Activity

184754
by > New Contributor
  • 22 Views
  • 1 reply
  • 0 kudos

Table Trigger - Too many log files

Hi, we have implemented a job that runs on a table-update trigger. The job worked perfectly until the source table accumulated too many log files, and now the job isn't running anymore. The only output is the error message below: Storage location /abcd/_d...

Latest Reply
radothede
Contributor II
  • 0 kudos

Hi @184754, interesting topic. As the docs say: "Log files are deleted automatically and asynchronously after checkpoint operations and are not governed by VACUUM. While the default retention period of log files is 30 days, running VACUUM on a table r...

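A minimal sketch of the mitigation the reply points at: shortening `delta.logRetentionDuration` so old transaction-log JSON files become eligible for cleanup after the next checkpoint. The table name and helper below are illustrative, not from the thread:

```python
# Hypothetical helper: build the ALTER TABLE statement that shortens the
# Delta log retention. The table name "abcd.events" is illustrative.
def log_retention_sql(table: str, retention: str = "7 days") -> str:
    return (
        f"ALTER TABLE {table} SET TBLPROPERTIES "
        f"('delta.logRetentionDuration' = '{retention}')"
    )

stmt = log_retention_sql("abcd.events")
# On a cluster you would execute it with: spark.sql(stmt)
```

Log files are then removed asynchronously after checkpoints, as the quoted docs describe, rather than by VACUUM.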
AlbertWang
by > Contributor III
  • 6 Views
  • 0 replies
  • 0 kudos

Networking configuration of Azure Databricks managed storage account

Hi all, I created an Azure Databricks workspace, and the workspace created an Azure Databricks managed storage account. The networking configuration of the storage account is "Enabled from all networks". Shall I change it to "Enabled from selected virtu...

DataGeek_JT
by > New Contributor II
  • 1029 Views
  • 2 replies
  • 0 kudos

Is it possible to use Liquid Clustering on Delta Live Tables / Materialised Views?

Is it possible to use Liquid Clustering on Delta Live Tables? If so, what is the Python syntax for adding liquid clustering to a Delta Live Table / materialised view, please?

Latest Reply
kerem
New Contributor II
  • 0 kudos

Hi @amr, materialised views are not tables, they are views. Liquid clustering is not supported on views, so it will throw an [EXPECT_TABLE_NOT_VIEW.NO_ALTERNATIVE] error. Unfortunately, the same applies to the OPTIMIZE command as well.

1 More Replies
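For DLT streaming tables (as opposed to materialised views), recent runtimes document a `cluster_by` argument on the Python decorator. A hedged sketch, assuming that argument name and illustrative table names, with the import guarded so the file also parses outside a pipeline:

```python
# Sketch only: `cluster_by` on @dlt.table is assumed from the DLT Python
# reference; verify against your runtime. Table names are illustrative.
try:
    import dlt  # only meaningful inside a Delta Live Tables pipeline

    @dlt.table(cluster_by=["event_date"])  # liquid clustering columns
    def clustered_events():
        return spark.readStream.table("source.events")  # noqa: F821
except Exception:
    # Outside a pipeline (e.g. local editing) the import or decorator fails.
    dlt = None
```

As the reply notes, this applies to tables only; a materialised view has no clustering to declare.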
Kulasekaran0107
by > Visitor
  • 7 Views
  • 0 replies
  • 0 kudos

Voucher for Data Engineer Associate certification

Hi, I am looking for a voucher to complete the Data Engineer Associate certification. Could anyone help me please?

NehaR
by > New Contributor II
  • 6 Views
  • 0 replies
  • 0 kudos

Is there any option in Databricks to estimate the cost of a query before execution?

Hi Team, I want to check whether there is any option in Databricks that can help estimate the cost of a query before execution, i.e. calculate DBUs from the logical plan before the query actually runs? Regards

Brahmareddy
by > Valued Contributor III
  • 18 Views
  • 0 replies
  • 0 kudos

Mark Your Calendar: Data Day Texas + AI 2025 on January 25!

Hey Austin Databricks Group,If you're passionate about data and AI, here's an event you won't want to miss: Data Day Texas + AI 2025, happening on Saturday, January 25. This event brings together experts and enthusiasts to share insights, trends, and...

ameet9257
by > New Contributor II
  • 7 Views
  • 0 replies
  • 0 kudos

Cloning workflows from one environment to another using the Jobs API

Hi Team, one of my team members recently shared a requirement: he wants to migrate 10 workflows from the sandbox to the dev environment to run his model there. I wanted to move all these workflows in an automated way, and one of the solutions...

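The automated route the post alludes to can be sketched against the Jobs 2.1 REST API (`/api/2.1/jobs/get` and `/api/2.1/jobs/create`). Host names, token handling, and the absence of error handling below are illustrative assumptions, not a production implementation:

```python
import json
import urllib.request

API_GET = "/api/2.1/jobs/get"        # returns {"job_id": ..., "settings": {...}}
API_CREATE = "/api/2.1/jobs/create"  # accepts the settings object as the body

def clone_job(src_host: str, dst_host: str, job_id: int, token: str) -> int:
    """Copy one job definition from the source workspace to the target."""
    hdrs = {"Authorization": f"Bearer {token}"}
    with urllib.request.urlopen(urllib.request.Request(
            f"{src_host}{API_GET}?job_id={job_id}", headers=hdrs)) as resp:
        settings = json.load(resp)["settings"]   # definition without run state
    req = urllib.request.Request(
        f"{dst_host}{API_CREATE}",
        data=json.dumps(settings).encode(),
        headers={**hdrs, "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]         # id of the cloned job
```

Looping this over the ten job IDs would migrate them in one pass; for projects already structured as bundles, `databricks bundle deploy` is the more structural alternative.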
ksenija
by > Contributor
  • 1278 Views
  • 2 replies
  • 0 kudos

Foreign table to delta streaming table

I want to copy a table from a foreign catalog as my streaming table. This is the code I used, but I am getting the error: Table table_name does not support either micro-batch or continuous scan. spark.readStream.table(table_name) ...

Latest Reply
sbiales
Visitor
  • 0 kudos

I also want to bump this! This is my exact problem right now as well.

1 More Replies
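A foreign (federated) catalog table is not a streaming source, which is exactly what the "does not support either micro-batch or continuous scan" error says. One common workaround, sketched here with illustrative table names, is a scheduled batch snapshot instead of `readStream`:

```python
# Batch reads of foreign tables are supported even though streaming is not;
# this helper overwrites a managed snapshot on each scheduled run.
def copy_snapshot(spark, source: str, target: str) -> None:
    (spark.read.table(source)          # plain batch scan works
          .write.mode("overwrite")     # replace the previous snapshot
          .saveAsTable(target))

# In a scheduled job:
# copy_snapshot(spark, "foreign_cat.db.orders", "main.db.orders")
```

Once the data lands in a Delta table, that copy can be streamed from, since Delta tables do support micro-batch scans.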
FabianGutierrez
by > New Contributor III
  • 96 Views
  • 9 replies
  • 1 kudos

My DABs CLI deploy call is not generating a .tfstate file

Hi Community, I'm running into an issue: when executing Databricks CLI bundle deploy, I don't get the Terraform state file (.tfstate). I know that I should get one, but even when defining the state_path in my YAML (.yml) DABs file I still do not get it. D...

Latest Reply
FabianGutierrez
New Contributor III
  • 1 kudos

I forgot to also share this screenshot of the last section of the logs. Somehow the state file keeps getting ignored (not found), so I wonder how the deployment can still take place.

8 More Replies
akshathatm
by > New Contributor
  • 49 Views
  • 1 reply
  • 1 kudos

Unable to Access Databricks Customer Academy Platform

I am currently facing an issue accessing the Databricks Customer Academy platform. When I attempt to visit https://customer-academy.databricks.com, I receive the following error message: "You are not authorized to access https://customer-academ...

Latest Reply
Adam5
Visitor
  • 1 kudos

I am facing the same issue.

gyorgyjelinek
by > New Contributor
  • 45 Views
  • 2 replies
  • 0 kudos

Resolved! Default schema in SQL Editor is not 'default' when unity catalog is set as default catalog

In workspace settings (Workspace admin > Advanced > Other), "Default catalog for the workspace" is set to something other than hive_metastore: it is set to a Unity Catalog catalog. The expected behaviour is copied here from the related more-info panel: "Se...

Latest Reply
gyorgyjelinek
New Contributor
  • 0 kudos

Hi @Alberto_Umana, thank you for the explanation. I marked your comment as the accepted solution, as it contains the current implementation logic and the workaround. Good to know that the more-info panel is a bit misleading as of now, because the SQL Ed...

1 More Replies
BAZA
by > New Contributor II
  • 7439 Views
  • 12 replies
  • 0 kudos

Invisible empty spaces when reading .csv files

When importing a .csv file with leading and/or trailing empty spaces around the separators, the output results in strings that appear trimmed in the output table or when using .display(), but are not actually trimmed. It is possible to identify t...

Latest Reply
sallytomato
  • 0 kudos

I’ve found that investing in high-quality print services like GoPrint really makes a difference in ensuring your materials match perfectly. Also, it's good practice to always test with smaller prints first, like business cards or brochures, before go...

11 More Replies
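The effect is reproducible outside Spark: CSV parsers preserve whitespace around delimiters, and tabular renderers hide it. A minimal plain-Python illustration follows (in Spark itself, the CSV reader options `ignoreLeadingWhiteSpace` and `ignoreTrailingWhiteSpace`, both off by default on read, are the usual fix):

```python
import csv
import io

raw = "id,name\n1,  Alice \n"            # spaces around the second value
rows = list(csv.reader(io.StringIO(raw)))
assert rows[1][1] == "  Alice "          # the whitespace survives parsing

cleaned = [[cell.strip() for cell in row] for row in rows]
assert cleaned[1][1] == "Alice"          # an explicit trim removes it
```

The same `strip()`-style comparison is a quick way to detect which columns carry the invisible padding.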
swapnild-tomtom
by > Visitor
  • 20 Views
  • 0 replies
  • 0 kudos
joeyslaptop
by > New Contributor II
  • 49 Views
  • 1 reply
  • 0 kudos

How do I use a Databricks SQL query to convert a field value '%' back into a wildcard?

Hi. If I've posted to the wrong area, please let me know. I am using SQL to join two tables. One table has the wildcard '%' stored as text/string/varchar. I need to join the value of TableA.column1 to TableB.column1 based on the wildcard in the str...

Latest Reply
JAHNAVI
Databricks Employee
  • 0 kudos

Hi, could you please try the query below and let me know if it meets your requirements? SELECT * FROM TableA A LEFT JOIN TableB B ON A.Column1 LIKE REPLACE(B.Column1, '%', '%%'). REPLACE helps us treat the '%' stored in TableB.Column1 as a wildcar...

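To see why a `LIKE` join treats the stored '%' as a wildcard, here are the LIKE pattern semantics sketched in plain Python. The translation function is illustrative only; Spark SQL applies these rules natively:

```python
import re

def sql_like_to_regex(pattern: str) -> str:
    """SQL LIKE semantics: '%' matches any run, '_' any single character."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

# A stored pattern 'ab%' behaves as a wildcard once interpreted by LIKE:
assert re.match(sql_like_to_regex("ab%"), "abcdef")
assert not re.match(sql_like_to_regex("ab%"), "xbcdef")
```

Joining with `A.Column1 LIKE B.Column1` applies exactly this matching per row pair.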
swetha
by > New Contributor III
  • 2885 Views
  • 4 replies
  • 1 kudos

Error "no streaming listener attached to the spark app" observed after accessing the streaming statistics API. Please help us with this issue. Thanks.

Issue: Spark Structured Streaming application. After adding the listener jar file in the cluster init script, the listener is working (from what I see in the stdout/log4j logs). But when I try to hit the 'Content-Type: application/json' http://host:port/...

Latest Reply
INJUSTIC
Visitor
  • 1 kudos

Have you found the solution? Thanks

3 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group
Top Kudoed Authors

Latest from our Blog

Understanding Unity Catalog

Throughout the dozens of engagements I’ve had since joining Databricks, I’ve found that customers often struggle to understand the scope and concept of Unity Catalog. Questions like “Does it store my ...

  • 198 Views
  • 2 kudos