Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.
Forum Posts

kp12
by New Contributor II
  • 9426 Views
  • 4 replies
  • 1 kudos

column "id" is of type uuid but expression is of type character varying.

Hello, I'm trying to write to an Azure PostgreSQL flexible server database from Azure Databricks, using the PostgreSQL connector in Databricks Runtime 12.2 LTS. I'm using df.write.format("postgresql").save() to write to the PostgreSQL database, but getting the follow...

Latest Reply
Student-Learn
New Contributor II
  • 1 kudos

Yes, this Stack Overflow post was my reference too, and adding the option below made the load succeed with no errors on the UUID column in Postgres: .option("stringtype", "unspecified") https://stackoverflow.com/questions/409739...

3 More Replies
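For reference, a minimal sketch of the workaround discussed in this thread, assuming the DBR 12.2 PostgreSQL connector passes stringtype through the same way the Postgres JDBC driver does (all connection values below are placeholders):

```python
# Minimal sketch of the stringtype workaround (connection values are placeholders).
# stringtype=unspecified makes the Postgres driver send strings as untyped
# parameters, so the server casts them to the column's declared type (here, uuid).
(
    df.write.format("postgresql")
    .option("host", "myserver.postgres.database.azure.com")  # placeholder host
    .option("database", "mydb")                              # placeholder
    .option("dbtable", "public.my_table")                    # placeholder
    .option("user", "dbuser")                                # placeholder
    .option("password", "...")                               # elided
    .option("stringtype", "unspecified")                     # the fix from the thread
    .mode("append")
    .save()
)
```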
Raja_fawadAhmed
by New Contributor
  • 786 Views
  • 0 replies
  • 0 kudos

databricks job compute price w.r.t running time

I have two workflows (jobs) in Databricks (AWS) with the cluster specs below (job cluster, NOT all-purpose). Driver: i3.xlarge · Workers: i3.xlarge · 2-8 workers. Job 1 takes 10 min to complete; Job 2 takes 50 min to complete. Questions: DBU cost is sam...

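Since this post has no replies: a hedged back-of-the-envelope sketch of the usual relationship is that DBUs on jobs compute accrue with wall-clock runtime, so on the same cluster spec the 50-minute job consumes roughly five times the DBUs (and cost) of the 10-minute one. The rates below are placeholders, not published prices:

```python
# Back-of-the-envelope DBU cost comparison (rates are placeholders, not quotes).
DBU_PER_HOUR = 9.0   # hypothetical combined DBU rate for driver + workers
USD_PER_DBU = 0.15   # hypothetical jobs-compute price per DBU

def job_cost_usd(runtime_minutes: float) -> float:
    # DBUs accrue with wall-clock runtime, so cost scales linearly with duration
    # (autoscaling 2-8 workers changes the hourly rate, not this relationship).
    return runtime_minutes / 60 * DBU_PER_HOUR * USD_PER_DBU

print(job_cost_usd(10))  # Job 1
print(job_cost_usd(50))  # Job 2: ~5x Job 1 on the same cluster spec
```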
Surajv
by New Contributor III
  • 929 Views
  • 1 reply
  • 0 kudos

Difference between delete token API and revoke token API Databricks

Hi Community, I am trying to understand the difference between: Delete token API: DELETE /api/2.0/token-management/tokens/{token_id} and Revoke token API: POST /api/2.0/token/delete. When I create more than 600 tokens, I get a QUOTA_EXCEEDED error....

Latest Reply
Surajv
New Contributor III
  • 0 kudos

Delete token API doc link: https://docs.databricks.com/api/workspace/tokenmanagement/delete
Revoke token API doc link: https://docs.databricks.com/api/workspace/tokens/revoketoken

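A hedged sketch of the practical difference between the two endpoints: the Token Management API lets a workspace admin delete any user's token by ID, while the Token API revokes one of the caller's own tokens. Host and token values below are placeholders:

```python
# Hedged sketch of the two endpoints (host and IDs are placeholders).
import requests

HOST = "https://<workspace>.cloud.databricks.com"              # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Token Management API: an admin deletes ANY user's token by token_id.
requests.delete(f"{HOST}/api/2.0/token-management/tokens/<token_id>",
                headers=HEADERS)

# Token API: the calling user revokes one of their OWN tokens.
requests.post(f"{HOST}/api/2.0/token/delete",
              headers=HEADERS, json={"token_id": "<token_id>"})
```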
NC
by New Contributor III
  • 1301 Views
  • 1 reply
  • 0 kudos

Using libpostal in Databricks

Hi, I am trying to work on address parsing and would like to use libpostal in Databricks. I have used the official Python bindings: GitHub - openvenues/pypostal: Python bindings to libpostal for fast international address parsing/normalization. pip insta...

Latest Reply
NC
New Contributor III
  • 0 kudos

I managed to install pylibpostal via the cluster library, but I cannot seem to download the data needed to run it. Please help. Thank you.

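A hedged sketch of one way to get the data onto the cluster: the Python package is only a binding, so the libpostal C library has to be built on the node with a data directory, and the build downloads the data into it. The steps follow the libpostal README; the paths are assumptions, not a verified recipe:

```python
# Hedged sketch: build libpostal with a data dir on the driver node.
# Follows the steps in the libpostal README; paths are assumptions, and on a
# multi-node cluster this would belong in a cluster init script instead.
import subprocess

build_script = r"""
set -e
apt-get update && apt-get install -y autoconf automake libtool pkg-config curl git
git clone https://github.com/openvenues/libpostal /tmp/libpostal
cd /tmp/libpostal
./bootstrap.sh
./configure --datadir=/local_disk0/libpostal_data  # data lands under this dir
make -j"$(nproc)"
make install
ldconfig
"""
subprocess.run(["bash", "-c", build_script], check=True)
```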
DatabricksGuide
by Community Manager
  • 1463 Views
  • 0 replies
  • 0 kudos

Join Our Databricks Free Trial Experience feedback AMA on Friday March 29, 2024!

We're looking for feedback on the Databricks free trial experience, and we need your help! Whether you've used it for data engineering, data science, or analytics, Sujit Nair, our Product Manager on the free trial experience, and our journey archite...

Frustrated_DE
by New Contributor III
  • 1954 Views
  • 1 reply
  • 0 kudos

DLT SQL demo pipeline issue

Hi, first foray into DLT, following code excerpts from the sample DLT notebook. I'm creating a notebook with the SQL below: CREATE STREAMING LIVE TABLE sales_orders_raw COMMENT "The raw sales orders, ingested from /databricks-datasets." TBLPROPERTIES ...

Latest Reply
Frustrated_DE
New Contributor III
  • 0 kudos

Resolved by changing the notebook default language rather than using a magic command. I normally have the default set to Python and wrongly assumed DLT would transpose the SQL; since magic commands can't be used in DLT notebooks, the default language has to be changed to SQL for it to work.

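For notebooks left on a Python default, a hedged sketch of the equivalent table in the Python DLT API (the source path is an assumption based on the sample dataset named in the SQL excerpt above):

```python
# Hedged Python-DLT equivalent of the SQL excerpt above, for a notebook whose
# default language is Python. The dataset path is an assumption.
import dlt

@dlt.table(comment="The raw sales orders, ingested from /databricks-datasets.")
def sales_orders_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/databricks-datasets/retail-org/sales_orders/")  # assumed path
    )
```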
hpicatto
by New Contributor III
  • 2882 Views
  • 1 reply
  • 0 kudos

using the api for getting cost in usd

I'm trying to use the billable usage API, and I do get a report, but I have not been able to get the USD cost report, only the dbuHours. I guess I have to change the meter_name, but I cannot find the valid keys for that parameter anywhere.

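Since no answer appears here: a hedged sketch of the usual approach, which is to take the DBU quantities from the usage report and multiply by your account's per-SKU price, since the report itself carries DBUs rather than dollars. Column names follow the usage CSV schema, and the rates are placeholders:

```python
# Hedged sketch: derive USD from the usage report by joining SKU to a price map.
# Rates are placeholders; use the per-DBU prices from your own contract.
import csv

USD_PER_DBU = {  # placeholder rates, not published prices
    "ENTERPRISE_JOBS_COMPUTE": 0.20,
    "ENTERPRISE_ALL_PURPOSE_COMPUTE": 0.55,
}

total_usd = 0.0
with open("billable_usage.csv") as f:  # report downloaded from the API
    for row in csv.DictReader(f):
        total_usd += float(row["dbus"]) * USD_PER_DBU.get(row["sku"], 0.0)

print(f"Estimated cost: ${total_usd:,.2f}")
```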
mohaimen_syed
by New Contributor III
  • 8617 Views
  • 3 replies
  • 1 kudos

Fuzzy Match on PySpark using UDF/Pandas UDF

I'm trying to do fuzzy matching on two dataframes by cross joining them and then using a UDF for the matching. But with both a Python UDF and a pandas UDF, it's either very slow or I get an error. @pandas_udf("int") def core_match_processor(s1: pd.Ser...

Latest Reply
mohaimen_syed
New Contributor III
  • 1 kudos

I'm now getting the error: (SQL_GROUPED_AGG_PANDAS_UDF) is not supported on clusters in Shared access mode. Even though this article clearly states that pandas UDFs are supported on shared clusters in Databricks: https://www.databricks.com/blog/shared-clu...

2 More Replies
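A hedged sketch of one way to stay on a scalar pandas UDF (the error above concerns grouped-aggregate pandas UDFs specifically); the rapidfuzz dependency, column names, and threshold are assumptions:

```python
# Hedged sketch: row-wise fuzzy scoring with a SCALAR pandas UDF, avoiding the
# grouped-aggregate pandas UDFs that shared access mode rejects.
# Assumes rapidfuzz is installed on the cluster; names are placeholders.
import pandas as pd
from pyspark.sql.functions import pandas_udf
from rapidfuzz import fuzz

@pandas_udf("int")
def fuzzy_score(s1: pd.Series, s2: pd.Series) -> pd.Series:
    # Vectorized over each Arrow batch: one similarity score per row pair.
    return pd.Series(
        [int(fuzz.token_sort_ratio(a, b)) for a, b in zip(s1, s2)]
    )

# Placeholder dataframes standing in for the two frames in the post.
df_a = spark.createDataFrame([("Acme Corp",)], ["name_a"])
df_b = spark.createDataFrame([("ACME Corporation",)], ["name_b"])

crossed = df_a.crossJoin(df_b)
scored = crossed.withColumn("score", fuzzy_score("name_a", "name_b"))
matches = scored.filter("score >= 90")  # placeholder threshold
```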
ntvdatabricks
by New Contributor II
  • 5672 Views
  • 2 replies
  • 1 kudos

Resolved! Okta and Unified login

Hey folks, has anyone put Databricks behind Okta and enabled Unified Login with some workspaces that have a Unity Catalog metastore applied and some that don't? There are some workspaces we can't move over yet, and it isn't clear in the documentation if Unity Catalo...

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Yes, users should be able to use a single Okta application for all workspaces, regardless of whether the Unity Catalog metastore has been applied or not. The Unity Catalog is a feature that allows you to manage and secure access to your data across a...

1 More Replies
Khalil
by Contributor
  • 7129 Views
  • 5 replies
  • 7 kudos

Incremental ingestion of Snowflake data with Delta Live Table (CDC)

Hello, I have some data in Snowflake, and I want to apply CDC to it using Delta Live Tables, but I am having some issues. Here is what I am trying to do: @dlt.view() def table1(): return spark.read.format("snowflake").options(**opt...

Latest Reply
-werners-
Esteemed Contributor III
  • 7 kudos

The CDC for Delta Live Tables works fine for delta tables, as you have noticed. However, it is not a full-blown CDC implementation. If you want to capture changes in Snowflake, you will have to implement some CDC method on Snowflake itself, and read...

4 More Replies
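A hedged structural sketch of that pattern: changes are captured on the Snowflake side (for example, a Snowflake STREAM materialized to a change table) and then applied with DLT's APPLY CHANGES API. Connection options, table and column names are all assumptions, and APPLY CHANGES expects a streaming source, which is part of the difficulty this thread describes:

```python
# Hedged structural sketch, not verified end-to-end. All option values, table
# names, and columns are placeholders. Note dlt.apply_changes expects a
# streaming source, which is exactly the limitation discussed in this thread.
import dlt
from pyspark.sql.functions import col

snowflake_opts = {"sfUrl": "...", "sfUser": "...", "sfPassword": "...",
                  "sfDatabase": "...", "sfSchema": "..."}  # placeholders

@dlt.view()
def table1_changes():
    # Change rows exported on the Snowflake side (e.g. from a STREAM).
    return (spark.read.format("snowflake")
            .options(**snowflake_opts)
            .option("dbtable", "TABLE1_CHANGES")  # placeholder change table
            .load())

dlt.create_streaming_table("table1")

dlt.apply_changes(
    target="table1",
    source="table1_changes",
    keys=["id"],                    # placeholder primary key
    sequence_by=col("updated_at"),  # placeholder ordering column
)
```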
NoviKamayana
by New Contributor
  • 4307 Views
  • 0 replies
  • 0 kudos

Database: Delta Lake or PostgreSQL

Hey all, I am searching for a non-political answer to my database questions. Please know that I am a data newbie and literally do not know anything about this topic, but I want to learn, so please be gentle. Some context: I am working for an OEM that...

