Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

mohaimen_syed (New Contributor III) • 9864 Views • 3 replies • 1 kudos

Fuzzy Match on PySpark using UDF/Pandas UDF

I'm trying to do fuzzy matching on two dataframes by cross joining them and then using a UDF for the matching, but with both a Python UDF and a pandas UDF it is either very slow or I get an error. @pandas_udf("int") def core_match_processor(s1: pd.Ser...

Latest Reply: mohaimen_syed (New Contributor III)

I'm now getting the error: (SQL_GROUPED_AGG_PANDAS_UDF) is not supported on clusters in Shared access mode. Even though this article clearly states that pandas UDFs are supported on shared clusters in Databricks: https://www.databricks.com/blog/shared-clu...

2 More Replies
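
For readers landing on this thread, a minimal sketch of the pattern being discussed: a scalar pandas UDF scoring fuzzy similarity after a cross join. This is not the poster's exact code; it assumes the rapidfuzz package is installed on the cluster, and the column names are made up for illustration.

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from rapidfuzz import fuzz  # assumed installed on the cluster

spark = SparkSession.builder.getOrCreate()

@pandas_udf("int")
def fuzzy_score(s1: pd.Series, s2: pd.Series) -> pd.Series:
    # Scores one batch of row pairs; returns a 0-100 similarity per row.
    return pd.Series([int(fuzz.token_sort_ratio(a, b)) for a, b in zip(s1, s2)])

left = spark.createDataFrame([("Acme Corp",)], ["name_a"])
right = spark.createDataFrame([("ACME Corporation",)], ["name_b"])

# Cross join, score each pair, and keep only strong matches.
left.crossJoin(right).withColumn("score", fuzzy_score("name_a", "name_b")).filter("score >= 85").show()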
ntvdatabricks (New Contributor II) • 6460 Views • 2 replies • 1 kudos

Resolved! Okta and Unified login

Hey folks, has anyone put Databricks behind Okta and enabled Unified Login with workspaces that have a Unity Catalog metastore applied and some that don't? There are some workspaces we can't move over yet, and it isn't clear in the documentation if Unity Catalo...

Latest Reply: Walter_C (Databricks Employee)

Yes, users should be able to use a single Okta application for all workspaces, regardless of whether a Unity Catalog metastore has been applied. Unity Catalog is a feature that allows you to manage and secure access to your data across a...

1 More Replies
Shravanshibu (New Contributor III) • 1259 Views • 0 replies • 0 kudos

Public preview API not working - artifact-allowlists

I am trying to hit /api/2.1/unity-catalog/artifact-allowlists/ as part of an INIT migration script. It is in public preview; do we need to enable anything else to use an API that is in public preview? I am getting a 404 error, but using the same token for ...

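
A minimal sketch of calling this endpoint with a personal access token; the host and token are placeholders. Note the path takes an artifact type segment (for example INIT_SCRIPT), and a missing or misspelled segment is one common cause of a 404.

import requests

host = "https://<workspace-host>"    # placeholder
token = "<personal-access-token>"    # placeholder

resp = requests.get(
    f"{host}/api/2.1/unity-catalog/artifact-allowlists/INIT_SCRIPT",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.text)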
SaiNeelakantam (New Contributor) • 2757 Views • 1 reply • 0 kudos

How to enable "Create Vector Search Index" button in DB workspace?

How do we enable the "Create Vector Search Index" button in a Databricks workspace? The following is a screenshot from the Microsoft Ignite 2023 Databricks presentation:

Latest Reply: PL_db (Databricks Employee)

The feature is in public preview only in some regions; you can check the available regions in the documentation. In addition, there are certain requirements, such as a UC-enabled workspace and Serverless Compute enabled; you can check all requir...

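
As a possible workaround while the button is unavailable, the index can also be created programmatically. A minimal sketch using the databricks-vectorsearch Python SDK; the endpoint, table, and model names below are placeholders, and the region and workspace requirements in the reply above still apply.

from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient()

index = client.create_delta_sync_index(
    endpoint_name="my-vs-endpoint",                # placeholder
    index_name="main.default.docs_index",          # placeholder
    source_table_name="main.default.docs",         # placeholder UC Delta table
    pipeline_type="TRIGGERED",
    primary_key="id",
    embedding_source_column="text",
    embedding_model_endpoint_name="databricks-bge-large-en",  # placeholder
)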
SamGreene (Contributor II) • 4246 Views • 5 replies • 0 kudos

CONVERT_TIMEZONE issue in DLT

I can run a query that uses the CONVERT_TIMEZONE function in a SQL notebook. When I move the code to my DLT notebook, the pipeline produces this error: Cannot resolve function `CONVERT_TIMEZONE`. Here is the line: CONVERT_TIMEZONE('UTC', 'America/Phoen...

Latest Reply: annn (New Contributor II)

Yes, the notebook is set to SQL, and the convert_timezone function is within a SELECT statement.

4 More Replies
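
For anyone hitting the same resolution error, a minimal sketch of a runtime-agnostic fallback: from_utc_timestamp performs the same UTC-to-zone shift and is available in all recent Spark runtimes, so it sidesteps the question of whether the pipeline's runtime ships convert_timezone.

from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp, from_utc_timestamp

spark = SparkSession.builder.getOrCreate()

df = spark.range(1).select(current_timestamp().alias("ts_utc"))
# Shift a UTC timestamp into the target zone without convert_timezone.
df.withColumn("ts_phoenix", from_utc_timestamp("ts_utc", "America/Phoenix")).show(truncate=False)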
Ak_0926 (New Contributor) • 6684 Views • 2 replies • 1 kudos

Can we get the actual query execution plan programmatically after a query is executed? Apart from UI

Let's say I have run a query and it showed me results. We can find the respective query execution plan in the UI. Is there any way we can get that execution plan programmatically or through an API?

Latest Reply: Walter_C (Databricks Employee)

You can obtain the query execution plan programmatically using the EXPLAIN statement in SQL. The EXPLAIN statement displays the execution plan that the query planner generates for the supplied statement. The execution plan shows how the table(s) r...

1 More Replies
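
A minimal sketch of both programmatic routes the reply points to: SQL EXPLAIN, and the DataFrame explain() method in PySpark (the table and filter are illustrative).

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.range(100).createOrReplaceTempView("t")

# SQL route: the plan comes back as a row of text you can capture.
plan = spark.sql("EXPLAIN FORMATTED SELECT id FROM t WHERE id > 10").collect()[0][0]
print(plan)

# DataFrame route: prints the parsed/analyzed/optimized/physical plans.
spark.table("t").filter("id > 10").explain(mode="formatted")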
Danny_Lee (Valued Contributor) • 2948 Views • 2 replies • 4 kudos

Top Kudoed Author 🌟🤩🧑‍🎤

I recently saw a link to the Kudos Leaderboard for the Community Discussions. It has always been my hope and fantasy, ever since I was a little child, that I would someday be the #1 Kudoed Author on Community Discussions on community.Databricks.com....

Latest Reply: Danny_Lee (Valued Contributor)

Thanks @DB_Paul - I'm on my way!   

1 More Replies
Khalil (Contributor) • 8465 Views • 5 replies • 7 kudos

Incremental ingestion of Snowflake data with Delta Live Table (CDC)

Hello, I have some data lying in Snowflake, and I want to apply CDC on it using Delta Live Tables, but I am having some issues. Here is what I am trying to do: @dlt.view() def table1(): return spark.read.format("snowflake").options(**opt...

Latest Reply: -werners- (Esteemed Contributor III)

CDC for Delta Live Tables works fine for Delta tables, as you have noticed. However, it is not a full-blown CDC implementation. If you want to capture changes in Snowflake, you will have to implement some CDC method on Snowflake itself, and read...

4 More Replies
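
A minimal sketch of the reading side of what the reply describes: Databricks only sees whatever rows Snowflake exposes, so the change capture itself (for example a Snowflake stream materialized to a change table) has to exist on the Snowflake side first. All connector options and table names below are placeholders.

import dlt

sf_options = {
    "sfUrl": "<account>.snowflakecomputing.com",  # placeholder
    "sfUser": "<user>",                            # placeholder
    "sfPassword": "<password>",                    # placeholder
    "sfDatabase": "<db>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<wh>",
}

@dlt.table()
def orders_changes():
    # Reads the change rows Snowflake has already captured; each pipeline
    # update re-reads the table as a batch source.
    return (spark.read.format("snowflake")
            .options(**sf_options)
            .option("dbtable", "ORDERS_CHANGES")   # hypothetical change table
            .load())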
Anku_ (New Contributor II) • 2387 Views • 2 replies • 0 kudos

New to PySpark

Hi all, I am trying to get the domain from an email field using the expression below, but I'm getting an error. Kindly help. df.select(df.email, substring(df.email, instr(df.email, '@'), length(df.email).alias('domain')))

Latest Reply: Walter_C (Databricks Employee)

In your case, you want to extract the domain from the email, which starts from the position just after '@'. So, you should add 1 to the position of '@'. Also, the length of the substring should be the difference between the total length of the email ...

1 More Replies
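
Putting the reply's corrections together, a minimal sketch (with a made-up example row): start one character past '@', and attach the alias to the whole substring expression rather than to length().

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, instr, length

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("alice@example.com",)], ["email"])

df.select(
    "email",
    # Start just after '@'; length(email) is longer than needed,
    # and substr simply stops at the end of the string.
    col("email").substr(instr(col("email"), "@") + 1, length(col("email"))).alias("domain"),
).show()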
kickbuttowski (New Contributor II) • 1637 Views • 1 reply • 0 kudos

Issue in inferring schema for streaming dataframe using json files

Below is the pipeline design in Databricks, and it's not working out; kindly take a look and let me know whether it will work or not. I'm getting JSON files of different schemas from directories under the root directory, and it reads all the files using...

Latest Reply: AmanSehgal (Honored Contributor III)

Could you please share some sample of your dataset and code snippet of what you're trying to implement?

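
Since the thread never got a code sample, a minimal sketch of one common approach to drifting JSON schemas (not necessarily the poster's pipeline): Auto Loader with schema inference and evolution. Paths and the target table are placeholders.

stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/events")  # placeholder
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load("/mnt/raw/events/")                                     # placeholder root directory
)

(stream.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/events")      # placeholder
    .trigger(availableNow=True)
    .toTable("bronze.events"))                                    # placeholder table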
NoviKamayana (New Contributor) • 5474 Views • 0 replies • 0 kudos

Database: Delta Lake or PostgreSQL

Hey all, I am searching for a non-political answer to my database questions. Please know that I am a data newbie and literally do not know anything about this topic, but I want to learn, so please be gentle. Some context: I am working for an OEM that...

pernilak (New Contributor III) • 5212 Views • 2 replies • 3 kudos

Resolved! Pros and cons of physically separating data in different storage accounts and containers

When setting up Unity Catalog, Databricks recommends figuring out your data isolation model when it comes to physically separating your data into different storage accounts and/or containers. There are so many options, it can be hard to be ...

Latest Reply: raphaelblg (Databricks Employee)

Hello @pernilak, thanks for reaching out to the Databricks Community! My name is Raphael, and I'll be helping out. Should all catalogs and the metastore reside in the same storage account (but different containers)? Yes, Databricks recommends having o...

1 More Replies
swapnilmd (New Contributor II) • 1429 Views • 1 reply • 1 kudos

Databricks Web Editor's Cell-like UI in a Local IDE

I want to do Databricks-related development locally. There is an extension that allows running a local Python file on a remote Databricks cluster, but I want the cell-like structure present in the Databricks UI for Python files in my local IDE as well....

Latest Reply: daniel_sahal (Esteemed Contributor)

@swapnilmd You can use the VSCode extension for Databricks: https://docs.databricks.com/en/dev-tools/vscode-ext/index.html

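
For context, one way to get that cell structure in a plain .py file: Databricks' notebook source format marks cells with "# COMMAND ----------" separators, which the workspace and the VSCode extension recognize, and many IDEs also treat "# %%" as an interactive cell marker. A minimal sketch:

# Databricks notebook source

# COMMAND ----------
print("cell 1")

# COMMAND ----------
print("cell 2")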
Bhavishya (New Contributor II) • 5320 Views • 2 replies • 0 kudos

Databricks JDBC driver connection issue with Apache Solr

Hi, Databricks JDBC version: 2.6.34. I am facing the below issue when connecting to Databricks SQL from Apache Solr: Caused by: java.sql.SQLFeatureNotSupportedException: [Databricks][JDBC](10220) Driver does not support this optional feature. at com.databri...

Latest Reply: Bhavishya (New Contributor II)

The Databricks team recommended setting IgnoreTransactions=1 and autocommit=false in the connection string, but that didn't resolve the issue. Ultimately I had to use the Solr update API for uploading documents.

1 More Replies
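
For reference, a minimal sketch of where such properties go in a Databricks JDBC URL; the host, HTTP path, and token are placeholders, the exact URL format should be checked against the driver documentation for your version, and as the poster notes, this change did not resolve the Solr issue in the end.

# Databricks (Simba-based) JDBC URL carrying the suggested property.
jdbc_url = (
    "jdbc:databricks://<server-hostname>:443/default;"   # placeholder host
    "transportMode=http;ssl=1;"
    "httpPath=<http-path>;"                              # placeholder warehouse path
    "AuthMech=3;UID=token;PWD=<personal-access-token>;"  # placeholder token
    "IgnoreTransactions=1"
)
print(jdbc_url)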
