Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Jeremyy
by New Contributor
  • 1175 Views
  • 1 reply
  • 0 kudos

I can't create a compute resource beyond "SQL Warehouse", "Vector Search" and "Apps"?

None of the LLMs even understand why I can't create a compute resource. I was using Community (now Free Edition) until yesterday, when it became apparent that I needed the paid version, so I upgraded. I've even got my AWS account connected, which was ...

Latest Reply
ilir_nuredini
Honored Contributor
  • 0 kudos

Hello Jeremyy, the Free Edition has some limitations in terms of compute. As you noticed, there is no option to create custom compute: custom compute configurations and GPUs are not supported. Free Edition users only have access to ser...

upskill
by New Contributor
  • 749 Views
  • 1 reply
  • 0 kudos

Resolved! Delete workspace in Free account

I created a Free Edition account and used my Google account for logging in. I see 2 workspaces got created, and I want to delete one of them. How can I delete one of the workspaces? If that is not possible, how can I delete my account as a whole?

Latest Reply
Advika
Databricks Employee
  • 0 kudos

Hello @upskill! Did you possibly sign in twice during setup? That can sometimes lead to separate accounts, each with its own workspace. Currently, there’s no self-serve option to remove a workspace or delete an account. You can reach out to help@data...

ChristianRRL
by Valued Contributor III
  • 2890 Views
  • 3 replies
  • 1 kudos

DQ Expectations Best Practice

Hi there, I hope this is a fairly simple and straightforward question. I'm wondering if there's a "general" consensus on where along the DLT data ingestion + transformation process data quality expectations should be applied. For example, two very si...

Latest Reply
dataoculus_app
New Contributor III
  • 1 kudos

In my opinion, you can keep the bronze/raw layer as it is, and the quality checks should be applied to silver.
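To make the idea concrete, here is a minimal plain-Python sketch of "expect or drop" semantics when promoting bronze to silver (the rows and rule are hypothetical; in a real DLT pipeline this would be an `@dlt.expect_or_drop` decorator on the silver table):

```python
# Sketch of "expect or drop" semantics when promoting bronze -> silver.
# Rows failing a quality rule are dropped; the rest pass through untouched.

def expect_or_drop(rows, rule):
    """Keep only rows that satisfy the quality rule (a predicate)."""
    return [r for r in rows if rule(r)]

bronze = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # fails: amount missing
    {"id": 3, "amount": -5.0},   # fails: amount negative
]

# Rule: amount must be present and non-negative.
silver = expect_or_drop(
    bronze, lambda r: r["amount"] is not None and r["amount"] >= 0
)
print([r["id"] for r in silver])  # -> [1]
```

The point of the layout: bronze keeps every raw row for replay and debugging, while silver only ever receives rows that pass the expectation.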

2 More Replies
Dimitry
by Contributor III
  • 1326 Views
  • 2 replies
  • 1 kudos

Resolved! Struggle to parallelize UDF

Hi all, I have 2 clusters that look identical, but one runs my UDF in parallel and the other one does not. The one that does is personal; the bad one is shared. import pandas as pd from datetime import datetime from time import sleep import threading # test f...

(screenshots attached)
Latest Reply
Dimitry
Contributor III
  • 1 kudos

As a side note, a "No Isolation Shared" cluster has no access to Unity Catalog, so no table queries. I resorted to using personal compute assigned to a group.
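The kind of parallel-UDF smoke test described in the post can be reproduced locally with plain threads (no Spark involved; the sleep stands in for the UDF body, and the timing threshold is illustrative):

```python
import threading
import time

def work(results, i):
    # Simulated blocking task, standing in for the body of the UDF under test.
    time.sleep(0.2)
    results[i] = i * i

results = {}
threads = [threading.Thread(target=work, args=(results, i)) for i in range(4)]

start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Four 0.2 s tasks run concurrently, so the wall time stays well under
# the 0.8 s a serial run would take.
print(sorted(results.items()))  # -> [(0, 0), (1, 1), (2, 4), (3, 9)]
```

On a cluster, the same harness run inside a notebook is a quick way to tell whether the compute type (personal vs. shared) is serializing the work.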

1 More Replies
Jerry01
by New Contributor III
  • 1612 Views
  • 1 reply
  • 0 kudos

How to override a built-in function in Databricks

I am trying to override the is_member() built-in function in such a way that it always returns true. How can I do that in Databricks using SQL or Python?

Latest Reply
xbgydx12
New Contributor II
  • 0 kudos

To reactivate this question: I have a similar requirement. I want to override shouldRetain(log: T, currentTime: Long) in the class org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog so that it also always returns true.

zent
by New Contributor
  • 665 Views
  • 1 reply
  • 0 kudos

Requirements for Managed Iceberg tables with Unity Catalog

Does Databricks support creating native Apache Iceberg (managed) tables in Unity Catalog, or is it possible only in private preview? What are the requirements?

Latest Reply
Advika
Databricks Employee
  • 0 kudos

Hello @zent! Databricks now fully supports creating Apache Iceberg managed tables in Unity Catalog, and this capability is available in Public Preview (not just private preview). These managed Iceberg tables can be read and written by Databricks and ...

Anton_Lagergren
by Contributor
  • 2553 Views
  • 2 replies
  • 1 kudos

Resolved! New Regional Group Request

Hello! How may I request and/or create a new Regional Group for the DMV area (DC, Maryland, Virginia)? Thank you, —Anton @DB_Paul @Sujitha

Latest Reply
nayan_wylde
Esteemed Contributor
  • 1 kudos

Is there a group you have already created?

1 More Replies
darkanita81
by New Contributor III
  • 1362 Views
  • 3 replies
  • 3 kudos

Resolved! How be a part of Databricks Groups

Hello, I am part of a Community Databricks Crew LATAM, where we have 300 people connected and have run 3 events, one per month. We want to be part of Databricks Groups but we don't know how to do that. If somebody can help me I will a...

Latest Reply
Rishabh_Tiwari
Databricks Employee
  • 3 kudos

Hi Ana, Thanks for reaching out! I won’t be attending DAIS this time, but we do have a Databricks Community booth set up near the Expo Hall. My colleague @Sujitha  will be there. Do stop by to say hi and learn about all the exciting things we have go...

2 More Replies
Dimitry
by Contributor III
  • 2650 Views
  • 2 replies
  • 0 kudos

How to fix "Python versions in the Spark Connect client and server are different" in a UDF

I've read all the relevant articles but none have a solution that I could understand. Sorry, I'm new to it. I have a simple UDF to demonstrate the problem: df = spark.createDataFrame([(1, 1.0, 'a'), (1, 2.0, 'b'), (2, 3.0, 'c'), (2, 5.0, 'd'), (2, 10.0, 'e')]...

(screenshot attached)
Latest Reply
SP_6721
Honored Contributor
  • 0 kudos

Hi @Dimitry, the error you're seeing indicates that the Python version in your notebook (3.11) doesn't match the version used by Databricks Serverless, which is typically Python 3.12. Since Serverless environments use a fixed Python version, this mis...
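A quick way to see which side runs which version is to print `sys.version_info` on the client and again inside a UDF running on the server; the client half of that check is just:

```python
import sys

# Client-side Python version. Running these same lines inside a UDF
# would report the server-side version; Spark Connect expects the
# major.minor versions on both sides to match for UDF execution.
client_version = (sys.version_info.major, sys.version_info.minor)
print(f"Client Python: {client_version[0]}.{client_version[1]}")
```

Comparing the two printed versions pins down which side needs to change (e.g. a local environment rebuilt on the server's Python version).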

1 More Replies
anilsampson
by New Contributor III
  • 876 Views
  • 1 reply
  • 1 kudos

Databricks Dashboard run from Job issue

Hello, I am trying to trigger a Databricks dashboard via a workflow task. 1. When I deploy the job triggering the dashboard task via a local "Deploy bundle" command, deployment is successful. 2. When I try to deploy to a different environment via CICD while ...

Latest Reply
SP_6721
Honored Contributor
  • 1 kudos

Hi @anilsampson, the error means your dashboard_task is not properly nested under the tasks section:

tasks:
  - task_key: dashboard_task
    dashboard_task:
      dashboard_id: ${resources.dashboards.nyc_taxi_trip_analysis.id}
      warehouse_id: ${var.warehouse_...

amit_jbs
by New Contributor II
  • 4098 Views
  • 6 replies
  • 2 kudos

In Databricks deployment, .py files are getting converted to notebooks

A critical issue has arisen that is impacting our deployment planning for our client. We have encountered a challenge with our Azure CI/CD pipeline integration, specifically concerning the deployment of Python files (.py). Despite our best efforts, w...

Latest Reply
AGivenUser
New Contributor II
  • 2 kudos

Another option is Databricks Asset Bundles.

5 More Replies
Dimitry
by Contributor III
  • 1279 Views
  • 1 reply
  • 2 kudos

Resolved! Cannot run merge statement in the notebook

Hi all, I'm trialing Databricks for running complex Python integration scripts. There will be different data sources (MS SQL, CSV files, etc.) that I need to push to a target system via GraphQL, so I selected Databricks over MS Fabric as it can handle comple...

(screenshots attached)
Latest Reply
SP_6721
Honored Contributor
  • 2 kudos

Hi @Dimitry, the issue you're seeing is due to delta.enableRowTracking = true. This feature adds hidden _metadata columns, which serverless compute doesn't support; that's why the MERGE fails there. Try this out: you can disable row tracking with: ALTER...

pargit2
by New Contributor II
  • 1076 Views
  • 2 replies
  • 0 kudos

feature store

I need to build, for the data science team, a feature store that will return one big DataFrame after one-hot encoding for almost each dimension, plus joins and group-bys. Should I create one feature store for the final output that contains all the relevant data, or create featur...
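Whichever layout wins, the one-hot step itself is mechanical; here is a minimal sketch with no feature-store machinery (the `dept` dimension and its values are made up for illustration):

```python
def one_hot(rows, column):
    """Expand a categorical column into 0/1 indicator columns."""
    categories = sorted({r[column] for r in rows})
    out = []
    for r in rows:
        # Copy every field except the categorical column being expanded.
        encoded = {k: v for k, v in r.items() if k != column}
        for c in categories:
            encoded[f"{column}_{c}"] = 1 if r[column] == c else 0
        out.append(encoded)
    return out

rows = [{"id": 1, "dept": "cardio"}, {"id": 2, "dept": "neuro"}]
print(one_hot(rows, "dept"))
# -> [{'id': 1, 'dept_cardio': 1, 'dept_neuro': 0},
#     {'id': 2, 'dept_cardio': 0, 'dept_neuro': 1}]
```

Note the design trade-off the question is really about: encoding inside one wide feature table bakes the category set (e.g. doctor names) into the schema, so dynamic dimensions tend to argue for per-dimension feature tables joined at training time.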

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

Here are some things to consider. The best practice for designing a feature store in your scenario depends on balancing scalability, maintainability, and the dynamic nature of some dimensions like doctor names. Here's an outlined recommendation bas...

1 More Replies
VigneshJaisanka
by New Contributor II
  • 1453 Views
  • 2 replies
  • 0 kudos

Databricks DLT ADLS Access issue

We have a DLT pipeline configured with an SPN inside the notebook, which was working fine. After the credentials expired, we created a new one and updated it in the notebook. Now the pipeline is not able to read from ADLS. The SPN and my user ID have co...

Latest Reply
SP_6721
Honored Contributor
  • 0 kudos

Hi @VigneshJaisanka, the issue likely comes from a permissions or configuration mismatch. Here are a few things worth checking: Make sure the SPN is set as the pipeline owner and has the necessary permissions on the ADLS resource. If you're using Unity ...

1 More Replies

Join Us as a Local Community Builder!

Passionate about hosting events and connecting people? Help us grow a vibrant local community—sign up today to get started!
