Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Henrik_
by New Contributor III
  • 8449 Views
  • 2 replies
  • 0 kudos

Callback bound method error

When executing a withColumn (running on DBR 14.3 LTS) I get this error: Error in callback <bound method UserNamespaceCommandHook.post_run_cell of <dbruntime.DatasetInfo.UserNamespaceCommandHook object at 0x7feda2b2efb0>> (for post_run_cell): How shoul...

Latest Reply
TjommeV-Vlaio
New Contributor III
  • 0 kudos

We have the same issue using a shared cluster running DBR 14.3. Code executed: dfNew = dfTmp.withColumn(HashKeyColumnName, F.sha2(F.concat_ws("||", *ColumnList), 256)) Error received: Error in callback <bound method UserNamespaceCommandHook.post_run_ce...
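For reference, the reply's code boils down to building a hash key with concat_ws and sha2. A minimal runnable sketch of that pattern, with hypothetical column and key names (the callback error itself appears tied to the shared-cluster runtime rather than to this code):

```python
# Minimal sketch of the hash-key pattern from the reply above.
# Column names and the key name are hypothetical placeholders.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df_tmp = spark.createDataFrame([("a", "b"), ("c", "d")], ["col1", "col2"])

column_list = ["col1", "col2"]   # hypothetical: columns to include in the hash
hash_key_column = "HashKey"      # hypothetical: name of the generated key column

# Concatenate the columns with "||" as a separator, then hash with SHA-256.
df_new = df_tmp.withColumn(
    hash_key_column,
    F.sha2(F.concat_ws("||", *column_list), 256),
)
df_new.show(truncate=False)
```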

1 More Replies
sara-aliza
by New Contributor
  • 1528 Views
  • 0 replies
  • 0 kudos

PyStan 3 with Databricks Runtime 13.3

I am getting a read-only error when running pystan3 build() in a UDF. I think the issue is related to the location that the code is being run from, which is read-only. I am looking to set a custom cache location inside the UDF. Based on this link I a...
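One commonly suggested way to redirect the compiled-model cache is to point the XDG cache directory at a writable path before pystan is imported. This is a hedged sketch only: it assumes httpstan (pystan 3's backend) resolves its cache from XDG_CACHE_HOME on Linux, which is worth verifying on your runtime.

```python
# Hedged sketch: redirect the model cache to a writable location before pystan
# is imported. Assumes httpstan derives its cache dir from XDG_CACHE_HOME on
# Linux; inside a UDF, set the variable at the top of the UDF body instead.
import os
import tempfile

os.environ["XDG_CACHE_HOME"] = tempfile.mkdtemp(prefix="stan_cache_")

import stan  # import after setting the env var so the new cache path is used

model_code = """
data { int<lower=0> N; vector[N] y; }
parameters { real mu; }
model { y ~ normal(mu, 1); }
"""
posterior = stan.build(model_code, data={"N": 3, "y": [1.0, 2.0, 3.0]})
```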

151640
by New Contributor III
  • 2604 Views
  • 1 replies
  • 0 kudos

Databricks JDBC driver attempting to pass access token (JWT) from an external IDP (OKTA)

Configured a Databricks workspace for SSO to an IDP (OKTA). Databricks JDBC driver 02.06.38.1068. Attempting to connect to Databricks using a URL similar to the following, where the access token is obtained from the IDP (OKTA). Using a tool such as SQLSquirrel wit...

Latest Reply
151640
New Contributor III
  • 0 kudos

Yes, latest version 02.06.38.1068. Would like to know if others have successfully passed access tokens from an external IDP via the driver.
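Whether a JWT minted directly by an external IDP is accepted is exactly the open question here; for comparison, the driver's documented OAuth token pass-through parameters are AuthMech=11 with Auth_Flow=0 and Auth_AccessToken. A hedged sketch of the URL shape, driven from Python via jaydebeapi; host, HTTP path, jar path, and token are placeholders:

```python
# Hedged sketch: pass a bearer access token to the Databricks JDBC driver.
# AuthMech=11 / Auth_Flow=0 is the driver's documented token pass-through
# mode; all angle-bracketed values are placeholders.
import jaydebeapi

url = (
    "jdbc:databricks://<workspace-host>:443/default;"
    "transportMode=http;ssl=1;httpPath=<http-path>;"
    "AuthMech=11;Auth_Flow=0;Auth_AccessToken=<access-token-from-idp>"
)
conn = jaydebeapi.connect(
    "com.databricks.client.jdbc.Driver",  # driver class name in 2.6.29+ jars
    url,
    jars="/path/to/DatabricksJDBC42.jar",
)
curs = conn.cursor()
curs.execute("SELECT 1")
print(curs.fetchall())
curs.close()
conn.close()
```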

jannemanson
by New Contributor III
  • 5401 Views
  • 4 replies
  • 1 kudos

Run Databricks workflow as service principal (managed identity) that reads from Azure DevOps repo fails

Hello, we are running a workflow as a service principal that is an AAD managed identity. This results in the following issue when a Databricks workflow run as the service principal reads from an Azure DevOps repo: Failed to checkout Git repository: PERMISSION_DENIED...

Labels: azure, managed identity, Service Principal
Latest Reply
IvanK
New Contributor III
  • 1 kudos

We managed to solve this problem, however it is not an elegant solution; Databricks should simplify this. The steps that have to be done are listed below. We are using a user-assigned managed identity (MI), but I assume this should work for Azure Servic...
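Since the steps in the reply are cut off, here is a hedged sketch of one approach known to work in similar setups (not necessarily IvanK's exact steps): obtain an Azure DevOps-scoped AAD token for the managed identity, then register it for the service principal through the Databricks Git Credentials API. Hosts and tokens are placeholders to verify against the docs.

```python
# Hedged sketch, not necessarily the exact steps from the reply above.
import requests

# Well-known application ID of the Azure DevOps resource in AAD.
ADO_RESOURCE = "499b84ac-1321-427f-aa17-267ca6975798"

# 1) From a compute host with the managed identity attached, get an AAD
#    access token for Azure DevOps via the instance metadata service (IMDS).
resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={"api-version": "2018-02-01", "resource": ADO_RESOURCE},
    headers={"Metadata": "true"},
)
ado_token = resp.json()["access_token"]

# 2) Register the token as the service principal's Git credential.
#    <workspace-host> and <sp-token> are placeholders.
requests.post(
    "https://<workspace-host>/api/2.0/git-credentials",
    headers={"Authorization": "Bearer <sp-token>"},
    json={"git_provider": "azureDevOpsServices", "personal_access_token": ado_token},
)
```

Note that the AAD token is short-lived (roughly an hour), so in practice the registration has to be refreshed, for example at the start of each job run.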

3 More Replies
Zavi
by New Contributor
  • 1932 Views
  • 1 replies
  • 0 kudos

When is DLT going to support multiple targets?

Due to the limitation that all output data must be stored in one target, we have stopped using DLT until more flexibility is added. If anyone has a workaround, we are open to suggestions.

Latest Reply
Rafael-Ribeiro
New Contributor II
  • 0 kudos

Hi Zavi, one potential workaround is to establish multiple DLT pipelines, with each pipeline specifically configured to point to a unique target. This approach effectively allows a diverse range of output data to be stored across various targets. T...
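A hedged sketch of that workaround using the Databricks Python SDK: create one pipeline per target schema. Pipeline names, notebook paths, and the schema list are hypothetical placeholders.

```python
# Hedged sketch: one DLT pipeline per target schema.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import NotebookLibrary, PipelineLibrary

w = WorkspaceClient()

for target_schema in ["sales", "marketing"]:  # hypothetical target schemas
    w.pipelines.create(
        name=f"dlt_{target_schema}",
        target=target_schema,  # each pipeline writes to its own target schema
        libraries=[
            PipelineLibrary(
                notebook=NotebookLibrary(path=f"/Repos/dlt/{target_schema}_pipeline")
            )
        ],
    )
```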

pjv
by New Contributor III
  • 1360 Views
  • 1 replies
  • 0 kudos

VSCode Databricks Extension

Hi all, I've been trying to sync my VSCode IDE with our Databricks GCP workspace using the Databricks extension. I am able to authenticate my account and workspace and find our clusters. However, when I try to sync a destination it throws a st...

Latest Reply
pjv
New Contributor III
  • 0 kudos

@Retired_mod thanks for your response. I am not running through a proxy, at least not on purpose. How do I know if I am running through a proxy? And where can I find the values of <proxy_url> and <port> so that I can try restarting my VSCode? I have tr...
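As a quick check, the usual proxy environment variables can be inspected from a Python shell or terminal; if these all come back empty, there is no environment-level proxy configured (a system- or network-level proxy could still exist).

```python
# Print the common proxy-related environment variables; empty values suggest
# no environment-level proxy is configured.
import os

for var in ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY",
            "http_proxy", "https_proxy", "no_proxy"):
    print(f"{var}={os.environ.get(var, '')}")
```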

rachit-prodigal
by New Contributor
  • 24601 Views
  • 0 replies
  • 0 kudos

Identity column has null values

I created a table in Databricks using a dbt model pre-hook: CREATE TABLE IF NOT EXISTS accounts (account_id BIGINT GENERATED ALWAYS AS IDENTITY, description STRING, other columns). I use the same dbt model to merge values into this table in the post...
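For reference, Delta generates the identity value only when the write omits the column entirely (and GENERATED ALWAYS forbids supplying it explicitly), so the usual pattern is a merge that lists the remaining columns by name. A hedged sketch with hypothetical table and column names, assuming `spark` is the active SparkSession in a Databricks notebook:

```python
# Hedged sketch: the identity column is omitted from the INSERT column list
# so Delta generates account_id on each insert.
spark.sql("""
    CREATE TABLE IF NOT EXISTS accounts (
        account_id  BIGINT GENERATED ALWAYS AS IDENTITY,
        description STRING
    )
""")

spark.sql("""
    MERGE INTO accounts AS t
    USING staged_accounts AS s          -- hypothetical staging table
    ON t.description = s.description
    WHEN NOT MATCHED THEN
        INSERT (description) VALUES (s.description)
""")
```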

unity_Catalog
by New Contributor III
  • 1718 Views
  • 0 replies
  • 0 kudos

UCX Installation Error

While downloading and installing UCX from a shell script, I am facing the below error. Can anyone provide a solution? [i] Creating isolated Virtualenv with Python: /c/Program Files/Python312/python Actual environment location may have moved due to redirect...

leelee3000
by Databricks Employee
  • 6195 Views
  • 3 replies
  • 1 kudos

Setting up Unity Catalog in Azure

Trying to create a metastore that will be connected to external storage (ADLS), but we don't have the option to create a new metastore in the 'Catalog' tab in the UI. Based on some research, we see that we'll have to go into "Manage Account" and then c...

Latest Reply
bsadler
New Contributor II
  • 1 kudos

I have been wrestling with this question for days now. I seem to be the only one with this question, so I am sure I am doing something wrong. I am trying to create a UC metastore but there is no option in "Catalog" to create a metastore. This s...
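For what it's worth, metastore creation lives at the account level rather than in the workspace 'Catalog' tab, so it generally requires account-admin rights in the account console ("Manage Account"). A hedged sketch via the Databricks Python SDK, with the name, storage root, and region as placeholders:

```python
# Hedged sketch: create a UC metastore programmatically. Requires
# account-admin privileges; all values shown are placeholders.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
metastore = w.metastores.create(
    name="primary",
    storage_root="abfss://uc@<storage-account>.dfs.core.windows.net/metastore",
    region="eastus",
)
print(metastore.metastore_id)
```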

2 More Replies
nikhilprajapati
by New Contributor
  • 2292 Views
  • 2 replies
  • 1 kudos

Data in a dataframe also gets deleted when we delete records from the underlying table

Hi, we are trying to load data from a Delta table into a dataframe (a copy of the original table). Initially the Delta table has a count of 911. The dataframe into which the data is loaded also has the same count. Now, we are deleting some records from the Delta...

Latest Reply
Hkesharwani
Contributor II
  • 1 kudos

Hi, there is a way to retain a copy of the dataframe even if the data in the underlying table is manipulated, but that's a memory-expensive operation, so be careful while using it: df1 = spark.createDataFrame(df.rdd.map(lambda x: x), schema=df.schema) Here we a...
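Spelled out, the reply's snippet looks like the sketch below: the RDD round-trip cuts the DataFrame's lineage back to the Delta table, and caching plus an eager action materializes the copy before the table changes. The table name is a hypothetical placeholder.

```python
# Hedged sketch of the reply's approach: break lineage with an RDD round-trip,
# then cache and materialize so later table changes don't leak into the copy.
df = spark.table("my_catalog.my_schema.my_table")  # hypothetical table
df1 = spark.createDataFrame(df.rdd.map(lambda x: x), schema=df.schema)
df1.cache()
df1.count()  # eager action: materialize the cache now
```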

1 More Replies
Ramakrishnan83
by New Contributor III
  • 20792 Views
  • 6 replies
  • 0 kudos

Renaming a database in Databricks

Team, initially our team created the databases with the environment name appended, e.g. cust_dev, cust_qa, cust_prod. I am looking to standardize the database name so it is consistent across environments. I want to rename them to "cust". All of my tables are ...

Latest Reply
Avvar2022
Contributor
  • 0 kudos

You can also use CASCADE to drop the schema and its tables as well. It is recursive.
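For example (hypothetical schema name), dropping a schema and everything in it in one statement:

```python
# CASCADE drops the schema together with all tables it contains.
spark.sql("DROP SCHEMA IF EXISTS cust_dev CASCADE")
```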

5 More Replies
