Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Data + AI Summit 2024 - Data Engineering & Streaming

Forum Posts

wilco
by New Contributor II
  • 2074 Views
  • 2 replies
  • 0 kudos

SQL Warehouse: Retrieving SQL ARRAY Type via JDBC driver

Hi all, we are currently running into the following issue. We are using a serverless SQL warehouse; in a Java application we are using the latest Databricks JDBC driver (v2.6.36); we are querying the warehouse with a collect_list function, which should return...

Latest Reply
KTheJoker
Databricks Employee
  • 0 kudos

Hey Wilco, the answer is no: ODBC/JDBC don't support complex types, so these need to be compressed into strings over the wire (usually in a JSON representation) and rehydrated on the client side into a complex object.

1 More Replies
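The workaround described in the reply above can be sketched in Python. This is an illustration only: the JSON payload shown is hypothetical, standing in for a `to_json(collect_list(...))` result that the JDBC driver returns as a plain string.

```python
import json

# Hypothetical wire format: the warehouse query wraps the array in to_json, e.g.
#   SELECT to_json(collect_list(item)) AS items FROM some_table GROUP BY some_key
# so the JDBC/ODBC client receives an ordinary string per row.
def rehydrate(row_value: str) -> list:
    """Parse the JSON string returned over the wire back into a list."""
    return json.loads(row_value)

wire_value = '["a", "b", "c"]'  # what the driver hands back as a string
items = rehydrate(wire_value)
print(items)  # ['a', 'b', 'c']
```

In a Java client the same rehydration step would use any JSON library on the string column.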
source2sea
by Contributor
  • 3039 Views
  • 2 replies
  • 0 kudos

Resolved! ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named global_temp)

ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named global_temp). Should one create it in the workspace manually via the UI, and how? Would it get overwritten if the workspace is created via Terraform? I use the 10.4 LTS runtime.

Latest Reply
ashish2007g
New Contributor II
  • 0 kudos

I am experiencing significant delay on my streaming. I am using the changefeed connector. It processes streaming batches very frequently but experiences sudden halts, showing no active stage for a long time. I observed the below exception continuously promp...

1 More Replies
kskistad
by New Contributor III
  • 5314 Views
  • 2 replies
  • 4 kudos

Resolved! Streaming Delta Live Tables

I'm a little confused about how streaming works with DLT. My first question is: what is the difference in behavior if you set the pipeline mode to "Continuous" but in your notebook you don't use the "streaming" prefix on table statements, and simila...

Latest Reply
Harsh141220
New Contributor II
  • 4 kudos

Is it possible to have custom upserts in streaming tables in a Delta Live Tables pipeline? Use case: I am trying to maintain a valid session based on a timestamp column and want to upsert to the target table. Tried going through the documentation, but dl...

1 More Replies
sreeyv
by New Contributor II
  • 808 Views
  • 2 replies
  • 0 kudos

Unable to execute update statement through Databricks Notebook

I am unable to execute update statements through Databricks Notebook, getting this error message "com.databricks.sql.transaction.tahoe.actions.InvalidProtocolVersionException: Delta protocol version is too new for this version of the Databricks Runti...

Latest Reply
sreeyv
New Contributor II
  • 0 kudos

This is resolved; it happens when a column in the table has GENERATED BY DEFAULT AS IDENTITY defined. When you remove this column, it works fine.

1 More Replies
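Following the resolution above, one way to avoid touching a GENERATED BY DEFAULT AS IDENTITY column is simply to leave it out of the UPDATE's SET clause. The sketch below is an illustration only: the table, column names, and naive quoting are all hypothetical.

```python
def build_update(table: str, changes: dict, key_col: str, key_val: str,
                 skip_cols=frozenset({"id"})) -> str:
    """Build an UPDATE that never assigns to identity/generated columns."""
    assignments = ", ".join(
        f"{col} = '{val}'" for col, val in changes.items() if col not in skip_cols
    )
    return f"UPDATE {table} SET {assignments} WHERE {key_col} = '{key_val}'"

# "id" is the hypothetical identity column and is filtered out of SET:
sql = build_update("customers", {"id": 7, "status": "active"}, "email", "a@b.com")
print(sql)  # UPDATE customers SET status = 'active' WHERE email = 'a@b.com'
```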
deepu
by New Contributor II
  • 1153 Views
  • 1 reply
  • 1 kudos

performance issue with SIMBA ODBC using SSIS

I was trying to upload data into a table in hive_metastore using SSIS with the SIMBA ODBC driver. The data set is huge (1.2 million records and 20 columns), and it is taking more than 40 minutes to complete. Is there a config change to improve the load time?

Latest Reply
NandiniN
Databricks Employee
  • 1 kudos

Looks like a slow data upload into a table in hive_metastore using SSIS and the SIMBA ODBC driver. This could be due to a variety of factors, including the size of your dataset and the configuration of your system. One potential solution could be to ...

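One common mitigation for slow row-by-row ODBC loads is batching: send the 1.2 million rows in large chunks rather than one insert per record. A minimal sketch of the chunking side (pure Python; the actual insert call is driver-specific and omitted here):

```python
from itertools import islice

def chunks(rows, size=10_000):
    """Yield successive batches so the driver can issue multi-row inserts."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

# 1.2 million records in 10k-row batches -> 120 round trips instead of 1.2M
batches = list(chunks(range(1_200_000)))
print(len(batches))  # 120
```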
Ramseths
by New Contributor
  • 818 Views
  • 1 reply
  • 0 kudos

Wrong Path Databricks Repos

In a Databricks environment, I have cloned a repository that I have in Azure DevOps Repos; the repository is inside the path Workspace/Repos/<user_mail>/my_repo. Then, when I create a Python script that I want to call in a notebook using an import: imp...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hi @Ramseths, if your notebook and script are in the same path, it would have picked up the same relative path. Is your notebook located in /databricks/driver? Thanks!

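When a script cloned into Repos is not importable from a notebook, a common fix is to make sure the repo root is on `sys.path`. A hedged sketch: the path and module name below are hypothetical, and Repos are typically mounted under /Workspace/Repos/<user>/<repo>.

```python
import sys

# Hypothetical repo root; substitute your own user and repo name.
repo_root = "/Workspace/Repos/user@example.com/my_repo"

if repo_root not in sys.path:
    sys.path.append(repo_root)  # lets `import my_module` resolve inside the repo
```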
JonLaRose
by New Contributor III
  • 2337 Views
  • 2 replies
  • 0 kudos

Adding custom Jars to SQL Warehouses

Hi there, I want to add custom JARs to a SQL warehouse (Pro, if that matters) like I can in an interactive cluster, yet I don't see a way. Is that degraded functionality when transitioning to a SQL warehouse, or have I missed something? Thank you.

Latest Reply
SparkJun
Databricks Employee
  • 0 kudos

ADD JAR is SQL syntax for the Databricks runtime; it does not work on a DBSQL warehouse. DBSQL would throw this error: [NOT_SUPPORTED_WITH_DB_SQL] LIST JAR(S) is not supported on a SQL warehouse. SQLSTATE: 0A000. This feature is not supported as of now....

1 More Replies
leungi
by Contributor
  • 2650 Views
  • 6 replies
  • 1 kudos

Resolved! Unable to add column comment in Materialized View (MV)

The following doc suggests the ability to add column comments during MV creation via the `column list` parameter. Thus, the SQL code below is expected to generate a table where the columns `col_1` and `col_2` are commented; however, this is not the ca...

Latest Reply
raphaelblg
Databricks Employee
  • 1 kudos

@leungi you've shared the Python language reference. This is the SQL reference on which I based my example.

5 More Replies
Marcin_U
by New Contributor II
  • 509 Views
  • 1 reply
  • 0 kudos

Making transform on pyspark.sql.Column object outside DataFrame.withColumn method

Hello, I made some transforms on a pyspark.sql.Column object: file_path_splitted = f.split(df[filepath_col_name], '/')  # returns a Column object; file_name = file_path_splitted[f.size(file_path_splitted) - 1]  # returns a Column object. Next I used the variable "file_na...

Latest Reply
raphaelblg
Databricks Employee
  • 0 kudos

Hello @Marcin_U , Thank you for reaching out. The transformation you apply within or outside the `withColumn` method will ultimately result in the same Spark plan. The answer is no, it's not possible to have rows mismatch if you're referring to the s...

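For readers following along, the expression being built here (split on '/' and take the last element) is equivalent to the plain-Python logic below; defining the Column expression outside `withColumn` only constructs the same expression earlier, it does not change the plan.

```python
def file_name_from_path(file_path: str) -> str:
    """Mirror of f.split(col, '/')[f.size(...) - 1]: take the last path segment."""
    parts = file_path.split("/")
    return parts[len(parts) - 1]  # same as parts[-1]

print(file_name_from_path("/mnt/raw/2024/06/events.json"))  # events.json
```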
thiagoawstest
by Contributor
  • 1817 Views
  • 1 reply
  • 0 kudos

migration Azure to AWS

Hello, I need to migrate from Databricks on Azure to AWS. Using the databricks-migration tool generates many errors; if I do it manually using the databricks-cli, what would be the best practice? Any tips? For example: first migrate notebooks, second jobs, third u...

Data Engineering
AWS
migration
Pedro1
by New Contributor II
  • 1137 Views
  • 1 reply
  • 0 kudos

databricks_grants fails because it keeps track of a removed principal

Hi all, my Terraform script fails on a databricks_grants with the error "Error: cannot update grants: Could not find principal with name DataUsers". The principal DataUsers does not exist anymore because it was previously deleted by Terraform. Bo...

Latest Reply
Pedro1
New Contributor II
  • 0 kudos

Terraform Databricks provider = 1.45.0

Devsql
by New Contributor III
  • 1830 Views
  • 3 replies
  • 1 kudos

How to speed-up Azure Databricks processing

Hi Team, my team has designed an Azure Databricks solution and we are looking for ways to speed up processing. Below are the details of the project: 1. Data is copied from SAP to an ADLS Gen2-based external location. 2. The project follows the medallion architecture, i.e. we...

Data Engineering
Azure Databricks
Bronze Job
Delta Live Table
Delta Live Table Pipeline
Latest Reply
Devsql
New Contributor III
  • 1 kudos

Hi @Retired_mod, @raphaelblg, could you shed some light on this issue?

2 More Replies
pavansharma36
by New Contributor III
  • 2296 Views
  • 4 replies
  • 0 kudos

Resolved! Job fails on cluster with runtime version 14.3 with library installation failure error

Library installation failed for library due to user error for jar: "dbfs:////<<PATH>>/jackson-annotations-2.16.1.jar". Error messages: Library installation attempted on the driver node of cluster <<clusterId>> and failed. Please refer to the foll...

Latest Reply
swarnadeepC
New Contributor II
  • 0 kudos

Hi @Edouard_JH, adding more details on this issue. We faced this issue with several other JARs on Databricks 14.3; adding the error stacktrace for the same. It seems the error comes from changes made under https://issues.apache.org/jira/browse/SPARK-...

3 More Replies
deng77
by New Contributor III
  • 44053 Views
  • 11 replies
  • 2 kudos

Resolved! Using current_timestamp as a default value in a delta table

I want to add a column to an existing delta table with a timestamp for when the data was inserted. I know I can do this by including current_timestamp with my SQL statement that inserts into the table. Is it possible to add a column to an existing de...

Latest Reply
Vaibhav1000
New Contributor II
  • 2 kudos

Can you please provide information on the additional expenses related to using this feature compared to not utilizing it at all?

10 More Replies
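On recent runtimes this can be done with Delta column defaults rather than adding current_timestamp to every INSERT. A hedged sketch of the DDL sequence: the table and column names are hypothetical, and the allowColumnDefaults table feature must be enabled before a default can be set.

```python
# Illustrative DDL only; in a notebook each statement would run via spark.sql(stmt).
statements = [
    "ALTER TABLE events SET TBLPROPERTIES ('delta.feature.allowColumnDefaults' = 'supported')",
    "ALTER TABLE events ADD COLUMN inserted_at TIMESTAMP",
    "ALTER TABLE events ALTER COLUMN inserted_at SET DEFAULT current_timestamp()",
]
for stmt in statements:
    print(stmt)
```

With the default in place, INSERTs that omit `inserted_at` pick up the insertion timestamp automatically.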
ClaudeR
by New Contributor III
  • 6097 Views
  • 5 replies
  • 1 kudos

Resolved! Can someone help me understand how compute pricing works.

I'm looking at using Databricks internally for some data science projects. I am, however, very confused about how the pricing works and would like to avoid high spending right now. Internal documentation and within Databricks All-Purpose Compute...

Latest Reply
GuillermoM
New Contributor II
  • 1 kudos

Hello, I was able to get a very precise cost of Azure Databricks clusters and compute jobs using the Microsoft API and the Databricks API. Then I wrote a simple tool to extract and manipulate the API results and generate detailed cost reports that can be...

4 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group