Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Jeremyy
by New Contributor
  • 1101 Views
  • 1 replies
  • 0 kudos

I can't create a compute resource beyond "SQL Warehouse", "Vector Search" and "Apps"?

None of the LLMs even understand why I can't create a compute resource. I was using Community (now Free Edition) until yesterday, when it became apparent that I needed the paid version, so I upgraded. I've even got my AWS account connected, which was ...

Latest Reply
ilir_nuredini
Honored Contributor
  • 0 kudos

Hello Jeremyy, the Free Edition has some limitations in terms of compute. As you noticed, there is no option to create custom compute; custom compute configurations and GPUs are not supported. Free Edition users only have access to ser...

upskill
by New Contributor
  • 672 Views
  • 1 replies
  • 0 kudos

Delete workspace in Free account

I created a Free Edition account and used my Google account to log in. I see that 2 workspaces got created, and I want to delete one of them. How can I delete one of the workspaces? If that is not possible, how can I delete my account as a whole?

Latest Reply
Advika
Databricks Employee
  • 0 kudos

Hello @upskill! Did you possibly sign in twice during setup? That can sometimes lead to separate accounts, each with its own workspace. Currently, there’s no self-serve option to remove a workspace or delete an account. You can reach out to help@data...

ChristianRRL
by Valued Contributor III
  • 2627 Views
  • 3 replies
  • 1 kudos

DQ Expectations Best Practice

Hi there, I hope this is a fairly simple and straightforward question. I'm wondering if there's a "general" consensus on where along the DLT data ingestion + transformation process data quality expectations should be applied. For example, two very si...

Latest Reply
dataoculus_app
New Contributor III
  • 1 kudos

In my opinion, you can keep the bronze/raw layer as it is and apply the quality checks at the silver layer.

2 More Replies
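To make the suggestion in the reply above concrete, here is a minimal sketch of that layout for a DLT (Lakeflow Declarative Pipelines) notebook. The table names, columns, and source path are hypothetical; the point is that the bronze table lands the raw data as-is and the expectations sit on the silver table.

import dlt

# Bronze: land the raw data as-is, no expectations (source path is hypothetical).
@dlt.table(name="orders_bronze", comment="Raw orders, loaded without checks")
def orders_bronze():
    return spark.read.format("json").load("/Volumes/main/raw/orders/")

# Silver: apply the data quality expectations here; violating rows are dropped.
@dlt.table(name="orders_silver", comment="Cleaned orders with expectations applied")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_silver():
    return dlt.read("orders_bronze").select("order_id", "amount", "order_date")

If dropping rows is too aggressive, @dlt.expect (record and warn) or @dlt.expect_or_fail (stop the update) can be used on the same silver table instead.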
Dimitry
by Contributor
  • 1131 Views
  • 2 replies
  • 1 kudos

Resolved! Struggle to parallelize UDF

Hi all, I have 2 clusters that look identical, but one runs my UDF in parallel and the other does not. The one that does is personal; the bad one is shared.
import pandas as pd
from datetime import datetime
from time import sleep
import threading
# test f...

Latest Reply
Dimitry
Contributor
  • 1 kudos

As a side note, a "no isolation shared" cluster has no access to Unity Catalog, so no table queries. I resorted to using personal compute assigned to a group.

1 More Replies
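For anyone hitting the same behaviour, a different angle is to let Spark distribute the per-group work instead of relying on driver-side threads, so the parallelism no longer depends on the cluster's access mode. A minimal sketch using groupBy().applyInPandas (the sample data and the summarize logic are hypothetical; spark is the notebook-provided session):

import pandas as pd

# Hypothetical sample data: a few rows of work spread across two group ids.
df = spark.createDataFrame(
    [(1, 1.0, "a"), (1, 2.0, "b"), (2, 3.0, "c"), (2, 5.0, "d"), (2, 10.0, "e")],
    ["id", "value", "label"],
)

# The per-group function runs on the executors, so groups are processed in
# parallel by Spark itself rather than by threads on the driver.
def summarize(pdf: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame({"id": [int(pdf["id"].iloc[0])], "total": [float(pdf["value"].sum())]})

result = df.groupBy("id").applyInPandas(summarize, schema="id long, total double")
result.show()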
Jerry01
by New Contributor III
  • 1497 Views
  • 1 replies
  • 0 kudos

How to override an in-built function in Databricks

I am trying to override the in-built is_member() function in such a way that it always returns true. How can I do this in Databricks using SQL or Python?

Latest Reply
xbgydx12
New Contributor II
  • 0 kudos

To reactivate this question: I have a similar requirement. I want to override shouldRetain(log: T, currentTime: Long) in the class org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog so that it also always returns true.

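For what it's worth, as far as I know the built-in is_member() itself cannot be replaced. A hedged workaround sketch is to create your own SQL UDF with the same name in a Unity Catalog schema and call it schema-qualified from the code under test (the catalog and schema names below are hypothetical):

# Stand-in function that always returns true (names are hypothetical).
spark.sql("""
    CREATE OR REPLACE FUNCTION dev.testing.is_member(group_name STRING)
    RETURNS BOOLEAN
    RETURN true
""")

# Must be called schema-qualified; an unqualified is_member() still resolves to the built-in.
spark.sql("SELECT dev.testing.is_member('admins') AS is_member_stub").show()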
drag7ter
by Contributor
  • 1614 Views
  • 4 replies
  • 0 kudos

Passing parameters to the dashboard data section via asset bundles

New functionality allows deploying dashboards with asset bundles. Here is an example:
# This is the contents of the resulting baby_gender_by_county.dashboard.yml file.
resources:
  dashboards:
    baby_gender_by_county:
      display_name: "Baby gen...

Latest Reply
drag7ter
Contributor
  • 0 kudos

variables:
  catalog:
    description: "Catalog name for the dataset"
    default: "dev"
parameters:
  catalog: ${var.catalog}
doesn't replace parameter values prod -> dev in json when it is being deployed:
"datasets": [
  {
    "displayName": "my_t...

3 More Replies
zent
by New Contributor
  • 535 Views
  • 1 replies
  • 0 kudos

Requirements for Managed Iceberg tables with Unity Catalog

Does Databricks support creating native Apache Iceberg (managed) tables in Unity Catalog, or is it only possible in private preview? What are the requirements?

Latest Reply
Advika
Databricks Employee
  • 0 kudos

Hello @zent! Databricks now fully supports creating Apache Iceberg managed tables in Unity Catalog, and this capability is available in Public Preview (not just private preview). These managed Iceberg tables can be read and written by Databricks and ...

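As a rough sketch of what creating one looks like, assuming the Public Preview is enabled for the workspace and using hypothetical catalog/schema/table names (check the current docs for the exact requirements, such as Unity Catalog and a supported runtime):

# Create a Unity Catalog managed table in Iceberg format (names are hypothetical;
# assumes the managed Iceberg tables preview is enabled for the metastore).
spark.sql("""
    CREATE TABLE main.analytics.events_iceberg (
        event_id BIGINT,
        event_ts TIMESTAMP,
        payload STRING
    )
    USING ICEBERG
""")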
Anton_Lagergren
by Contributor
  • 2385 Views
  • 2 replies
  • 1 kudos

Resolved! New Regional Group Request

Hello! How may I request and/or create a new Regional Group for the DMV Area (DC, Maryland, Virginia)? Thank you, —Anton @DB_Paul @Sujitha

Latest Reply
nayan_wylde
Honored Contributor
  • 1 kudos

Is there a group you already created?

1 More Replies
darkanita81
by New Contributor III
  • 1199 Views
  • 3 replies
  • 3 kudos

Resolved! How to be a part of Databricks Groups

Hello, I am part of a Community Databricks Crew LATAM, where we have reached 300 people connected and have run 3 events, one per month. We want to be part of Databricks Groups but we don't know how to do that. If somebody can help me I will a...

Latest Reply
Rishabh_Tiwari
Databricks Employee
  • 3 kudos

Hi Ana, Thanks for reaching out! I won’t be attending DAIS this time, but we do have a Databricks Community booth set up near the Expo Hall. My colleague @Sujitha  will be there. Do stop by to say hi and learn about all the exciting things we have go...

2 More Replies
enhancederroruk
by New Contributor III
  • 5191 Views
  • 7 replies
  • 7 kudos

Chrome/Edge high memory usage for Databricks tabs.

Is it normal for Databricks tabs to be using such high memory? The Chrome example I just got a screenshot of was this (rounded up/down): 3 Databricks tabs for one user, sized at 6 GB, 4.5 GB, and 2 GB, for a total of 12.5 GB. I know it gets higher than this too, I...

Latest Reply
MateusPCardoso
New Contributor II
  • 7 kudos

Lately, I've noticed that Databricks is consuming a lot of memory (from my local machine) in the Chrome tab. I see memory spikes especially when I'm using the SQL editor extensively — at some point, there's even a noticeable delay between typing and ...

6 More Replies
Dimitry
by Contributor
  • 2074 Views
  • 2 replies
  • 0 kudos

How to "Python versions in the Spark Connect client and server are different. " in UDF

I've read all the relevant articles, but none has a solution that I could understand. Sorry, I'm new to this. I have a simple UDF to demonstrate the problem:
df = spark.createDataFrame([(1, 1.0, 'a'), (1, 2.0, 'b'), (2, 3.0, 'c'), (2, 5.0, 'd'), (2, 10.0, 'e')]...

Latest Reply
SP_6721
Contributor III
  • 0 kudos

Hi @Dimitry, the error you're seeing indicates that the Python version in your notebook (3.11) doesn't match the version used by Databricks Serverless, which is typically Python 3.12. Since Serverless environments use a fixed Python version, this mis...

1 More Replies
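A small diagnostic sketch along those lines (spark is the notebook-provided session): it prints the client-side Python version and tries to report the server-side one from inside a UDF. While the versions differ, the UDF call itself fails with the same mismatch error, which confirms the diagnosis; it succeeds once the notebook environment and the serverless runtime use the same Python version.

import sys
from pyspark.sql.functions import udf

# Python version on the client (the notebook / Spark Connect side).
print("Client Python:", ".".join(map(str, sys.version_info[:3])))

# Python version on the server side, reported from inside a UDF.
@udf("string")
def server_python_version():
    import sys
    return ".".join(map(str, sys.version_info[:3]))

spark.range(1).select(server_python_version().alias("server_python")).show()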
anilsampson
by New Contributor III
  • 427 Views
  • 1 replies
  • 1 kudos

Databricks Dashboard run from Job issue

Hello, I am trying to trigger a Databricks dashboard via a workflow task. 1. When I deploy the job triggering the dashboard task via the local "Deploy bundle" command, deployment is successful. 2. When I try to deploy to a different environment via CI/CD while ...

Latest Reply
SP_6721
Contributor III
  • 1 kudos

Hi @anilsampson, the error means your dashboard_task is not properly nested under the tasks section.
tasks:
  - task_key: dashboard_task
    dashboard_task:
      dashboard_id: ${resources.dashboards.nyc_taxi_trip_analysis.id}
      warehouse_id: ${var.warehouse_...

amit_jbs
by New Contributor II
  • 3681 Views
  • 6 replies
  • 2 kudos

In Databricks deployment, .py files are getting converted to notebooks

A critical issue has arisen that is impacting our deployment planning for our client. We have encountered a challenge with our Azure CI/CD pipeline integration, specifically concerning the deployment of Python files (.py). Despite our best efforts, w...

Latest Reply
AGivenUser
New Contributor II
  • 2 kudos

Another option is Databricks Asset Bundles.

5 More Replies
Dimitry
by Contributor
  • 1023 Views
  • 1 replies
  • 2 kudos

Resolved! Cannot run merge statement in the notebook

Hi all, I'm trialing Databricks for running complex Python integration scripts. There will be different data sources (MS SQL, CSV files, etc.) that I need to push to a target system via GraphQL. So I selected Databricks over MS Fabric as it can handle comple...

Latest Reply
SP_6721
Contributor III
  • 2 kudos

Hi @Dimitry, the issue you're seeing is due to delta.enableRowTracking = true. This feature adds hidden _metadata columns, which serverless compute doesn't support; that's why the MERGE fails there. Try this out: you can disable row tracking with: ALTER...

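A minimal sketch of the suggested fix (the table name is hypothetical; run it from compute that can alter the table, then retry the MERGE on serverless):

# Disable row tracking on the target table so the MERGE no longer relies on the
# hidden _metadata columns mentioned above (table name is hypothetical).
spark.sql("""
    ALTER TABLE dev.bronze.my_target_table
    SET TBLPROPERTIES ('delta.enableRowTracking' = 'false')
""")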

Join Us as a Local Community Builder!

Passionate about hosting events and connecting people? Help us grow a vibrant local community—sign up today to get started!
