Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

WTW-DBrat
by New Contributor II
  • 1040 Views
  • 2 replies
  • 0 kudos

Service principal’s Microsoft Entra ID access token returns 400 when calling Databricks REST API

I'm using the following to call a Databricks REST API. When I use a PAT for access_token, everything works fine. When I use a Microsoft Entra ID access token, the response returns 400. The service principal has access to the workspace and is part of ...

Latest Reply
Jag
New Contributor III
  • 0 kudos

Hello, try printing the response to see whether the access_token is present in the payload; otherwise it looks like an access issue. Try going to the workspace settings and granting token-usage permission to the service principal: Workspace > Settings

1 More Replies
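As a minimal sketch of the call being discussed (the workspace URL and endpoint below are placeholders, and note that the Entra ID token must be acquired for the Azure Databricks resource, not for another API such as Microsoft Graph, or the workspace will reject it):

```python
import json
import urllib.request

def auth_header(access_token: str) -> dict:
    """Authorization header expected by the Databricks REST API."""
    return {"Authorization": f"Bearer {access_token}"}

def call_api(host: str, endpoint: str, access_token: str) -> dict:
    """GET a Databricks REST endpoint. A 400/403 here usually means the token
    was minted for the wrong resource or the principal lacks permissions."""
    req = urllib.request.Request(host.rstrip("/") + endpoint,
                                 headers=auth_header(access_token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage:
# clusters = call_api("https://adb-123.4.azuredatabricks.net",
#                     "/api/2.0/clusters/list", entra_token)
```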
isaac_gritz
by Databricks Employee
  • 4249 Views
  • 5 replies
  • 5 kudos

SQL IDE Support

How to use a SQL IDE with Databricks SQL. Databricks provides SQL IDE support using DataGrip and DBeaver with Databricks SQL. Let us know in the comments if you've used DataGrip or DBeaver with Databricks! Let us know if there are any other SQL IDEs you...

Latest Reply
Jag
New Contributor III
  • 5 kudos

DBeaver is working perfectly fine, but I found one issue: it won't show the correct error for a query.

4 More Replies
dataengutility
by New Contributor III
  • 3344 Views
  • 4 replies
  • 1 kudos

Resolved! Yml file replacing job cluster with all-purpose cluster when running a workflow

Hi all, I have been having some trouble running a workflow that consists of 3 tasks that run sequentially. Task1 runs on an all-purpose cluster and kicks off Task2, which needs to run on a job cluster. Task2 kicks off Task3, which also uses a job cluster...

Latest Reply
jacovangelder
Honored Contributor
  • 1 kudos

I don't know if you've cut off your YAML snippet, but it doesn't show your job cluster with the key job-cluster. Just to validate: is your job cluster also defined in your workflow YAML? Edit: Looking at it again and knowing the defaults, it loo...

3 More Replies
tramtran
by Contributor
  • 9029 Views
  • 7 replies
  • 0 kudos

Resolved! How to import a function to another notebook?

Could you please provide guidance on the correct way to dynamically import a Python module from a user-specific path in Databricks Repos? Any advice on resolving the ModuleNotFoundError would be greatly appreciated. udf_check_table_exists notebook: fro...

Latest Reply
tramtran
Contributor
  • 0 kudos

Thank you all again

6 More Replies
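For the import question above, one common workaround is to load the module explicitly from its file path instead of relying on sys.path. This is a sketch; the Repos path and notebook name in the usage comment are hypothetical:

```python
import importlib.util
import sys

def import_module_from_path(name: str, path: str):
    """Load a .py file as a module without requiring its folder on sys.path."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module          # register so nested imports resolve
    spec.loader.exec_module(module)
    return module

# Hypothetical usage inside a Repos checkout:
# utils = import_module_from_path(
#     "udf_check_table_exists",
#     "/Workspace/Repos/<user>/<repo>/udf_check_table_exists.py")
```

Alternatively, appending the repo folder to sys.path before a plain `import` often resolves the ModuleNotFoundError as well.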
Volker
by Contributor
  • 2025 Views
  • 1 reply
  • 0 kudos

Terraform Error: Cannot create sql table context deadline

I am currently trying to deploy external parquet tables to the Databricks UC using terraform. However, for some tables I get the following error:Error: cannot create sql table: Post "https://[MASKED]/api/2.0/sql/statements/": context deadline exceede...

Latest Reply
Volker
Contributor
  • 0 kudos

Hey @Retired_mod, thanks for your reply, and sorry for the late reply from my side. Unfortunately, I couldn't fix the problem with the Databricks Terraform provider. I have now switched to using Liquibase to deploy tables to Databricks.

Arby
by New Contributor II
  • 13923 Views
  • 4 replies
  • 0 kudos

Help With OSError: [Errno 95] Operation not supported: '/Workspace/Repos/Connectors....

Hello, I am experiencing issues with importing the schema file I created from the utils repo. This is the logic we use for all ingestion, and all other schemas live in this repo under utills/schemas. I am unable to access the file I created for a new ingestion pipe...

Latest Reply
Arby
New Contributor II
  • 0 kudos

@Debayan Mukherjee Hello, thank you for your response. Please let me know if these are the correct commands to access the file from a notebook. I can see the files in the repo folder, but I just noticed this: the file I am trying to access has a size of 0 b...

3 More Replies
JeremyH
by New Contributor II
  • 2405 Views
  • 4 replies
  • 0 kudos

CREATE WIDGETS in SQL Notebook attached to SQL Warehouse Doesn't Work.

I'm able to create and use widgets using the UI in my SQL notebooks, but they get lost quite frequently when the notebook is reset. There is documentation suggesting we can create widgets in code in SQL: https://learn.microsoft.com/en-us/azure/databri...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

Hi @JeremyH - can you please try adding the below in your query and see if the widgets are getting populated? {{parameter_name}}

3 More Replies
Akash_Wadhankar
by New Contributor III
  • 782 Views
  • 0 replies
  • 0 kudos

Databricks UniForm

Hi Community members, I tried creating a Delta UniForm table using a Databricks notebook. I created a database without providing a location, so it took the DBFS default storage location. On top of that I was able to create a Delta UniForm table. Then I trie...

Nathant93
by New Contributor III
  • 1280 Views
  • 2 replies
  • 0 kudos

Resolved! Unzipping with Serverless Compute

Hi, I have started using serverless compute but have come across the limitation that I cannot use the local filesystem for temporarily storing files and directories before moving them to where they need to be in ADLS. Does anyone have a way of unzip...

Data Engineering
serverless
unzip
Latest Reply
delonb2
New Contributor III
  • 0 kudos

Do you have the ability to create a Unity Catalog volume? You could use it as temporary storage before migrating the files to ADLS.

1 More Replies
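The volume suggestion above can be sketched like this; the Unity Catalog volume paths in the usage comment are placeholders, and zipfile extracts straight to the destination, so no local scratch directory is needed:

```python
import zipfile
from pathlib import Path

def unzip_to(zip_path: str, dest_dir: str) -> list:
    """Extract an archive directly into dest_dir and return the member names."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Hypothetical usage with a Unity Catalog volume:
# unzip_to("/Volumes/<catalog>/<schema>/raw/archive.zip",
#          "/Volumes/<catalog>/<schema>/extracted")
```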
wschoi
by New Contributor III
  • 9314 Views
  • 5 replies
  • 3 kudos

How to fix plots and image color rendering on Notebooks?

I am currently running dark mode for my Databricks Notebooks, and am using the "new UI" released a few days ago (May 2023) and the "New notebook editor." Currently all plots (like matplotlib) are showing wrong colors. For example, denoting: ```... p...

Latest Reply
aleph_null
New Contributor II
  • 3 kudos

Any update on this issue? This is a huge drawback to using the dark theme.

4 More Replies
QuikPl4y
by New Contributor III
  • 2602 Views
  • 2 replies
  • 0 kudos

Resolved! Databricks to Oracle to Delete Rows

Hi Community! I'm working in a Databricks notebook and using the Oracle JDBC Thin Client connector to query an Oracle table, merge together and select specific rows from my dataframe, and write those rows to a table back in Oracle. All of this works...

Latest Reply
delonb2
New Contributor III
  • 0 kudos

This Stack Overflow post ran into the same issue; it would be worth trying: How to delete a record from Oracle Table in Python SQLAlchemy - Stack Overflow

1 More Replies
_TJ
by New Contributor III
  • 10259 Views
  • 3 replies
  • 2 kudos

Incremental load from source; how to handle deletes

Small introduction: I'm a BI/data developer, mostly working in MSSQL and Data Factory, coming from SSIS. Now I'm trying Databricks to see if it works for me and my customers. I got enthusiastic about the video: https://www.youtube.com/watch?v=PIFL7W3DmaY&...

Latest Reply
abajpai
New Contributor II
  • 2 kudos

@_TJ, did you find a solution for the sliding window?

2 More Replies
alej
by New Contributor
  • 1578 Views
  • 1 reply
  • 0 kudos

Spark Scala Vs Pyspark

With the release of Spark Connect and user-defined table functions for PySpark, I wonder: what are the remaining advantages (if any) of using Scala Spark?

Latest Reply
delonb2
New Contributor III
  • 0 kudos

The main remaining advantage of Scala is performance, as there will always be some interoperation overhead when using PySpark. While I don't have any stats on hand, I would assume the differences in performance are negligible at this point until very ...

manish1987c
by New Contributor III
  • 1293 Views
  • 2 replies
  • 0 kudos

Delta Live Table - Flow detected an update or delete to one or more rows in the source table

I have created a pipeline where I am ingesting the data from bronze to silver using SCD 1; however, when I am trying to create the gold table as a DLT it is giving me the error "Flow 'user_silver' has FAILED fatally. An error occurred because we detected ...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@manish1987c - Streaming does not handle input that is not an append; you can set skipChangeCommits to true.

1 More Replies
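The skipChangeCommits suggestion boils down to a single reader option. A sketch follows (the table name in the usage comment is hypothetical; note that skipped change commits mean the stream silently ignores those updates/deletes rather than propagating them):

```python
# Reader option that lets a streaming read tolerate non-append commits
# (e.g. updates/deletes produced by SCD1 merges) by skipping them entirely.
STREAM_OPTIONS = {"skipChangeCommits": "true"}

# In a Databricks notebook or DLT pipeline this would be applied as, e.g.:
# df = (spark.readStream
#         .options(**STREAM_OPTIONS)
#         .table("catalog.schema.user_silver"))
```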
