Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Prachi_Sankhala
by New Contributor
  • 12021 Views
  • 7 replies
  • 1 kudos

Resolved! What are the advantages of using Delta Live tables (DLT) over Data Build Tool (dbt) in Databricks?

Please explain with some use cases which show the difference between DLT and dbt.

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Prachi Sankhala Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answe...

6 More Replies
rsamant07
by New Contributor III
  • 5871 Views
  • 11 replies
  • 2 kudos

Resolved! dbt Job Type Authenticating to Azure DevOps for git_source

We are trying to execute Databricks jobs with the dbt task type, but they fail to authenticate to Git. The problem is that the job is created using a service principal, and the service principal doesn't seem to have access to the repo. A few questions we have: 1) can we giv...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @Rahul Samant I'm sorry you could not find a solution to your problem in the answers provided. Our community strives to provide helpful and accurate information, but sometimes an immediate solution may only be available for some issues. I suggest p...

10 More Replies
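For context, a dbt-task job declares its repository through git_source in the Jobs API payload, and the service principal that owns the job must itself be granted access to the Azure DevOps repo. A minimal sketch of the relevant JSON, with hypothetical org/repo names (cluster and warehouse settings omitted):

```json
{
  "name": "dbt-job",
  "git_source": {
    "git_url": "https://dev.azure.com/myorg/myproject/_git/my-dbt-repo",
    "git_provider": "azureDevOpsServices",
    "git_branch": "main"
  },
  "tasks": [
    {
      "task_key": "dbt_run",
      "dbt_task": {
        "commands": ["dbt deps", "dbt run"]
      }
    }
  ]
}
```

Whether checkout succeeds depends on the Git credential associated with the principal running the job, not on the payload alone.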
kj1
by New Contributor III
  • 5086 Views
  • 8 replies
  • 0 kudos

When running a dbt pipeline with column docs persisted, we get the error "at least one column must be specified"

Problem: When running dbt with persist column docs enabled, we get the following error: org.apache.hadoop.hive.ql.metadata.HiveException: at least one column must be specified for the table. Background: There is an issue on the dbt-spark GitHub that was c...

Latest Reply
Dooley
Valued Contributor II
  • 0 kudos

Also confirming that you do not have any of these limitations. From dbt's website: Some databases limit where and how descriptions can be added to database objects. Those database adapters might not support persist_docs, or might offer only partial su...

7 More Replies
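For reference, persist_docs is set in dbt_project.yml. A hedged sketch of the config, including a possible workaround of persisting relation-level docs only while the column-level error applies (the project name is hypothetical):

```yaml
# dbt_project.yml (sketch): persist model descriptions into the catalog.
# Setting columns: false is a hypothetical workaround if the adapter raises
# "at least one column must be specified" for column-level docs.
models:
  my_project:          # hypothetical project name
    +persist_docs:
      relation: true
      columns: false   # re-enable once column docs persist cleanly on your adapter
```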
Phani1
by Valued Contributor II
  • 1744 Views
  • 2 replies
  • 3 kudos

Efficiently orchestrate Databricks jobs

Hi Team, How can we efficiently orchestrate Databricks jobs that involve a lot of transformations, dependencies, and complexity? At the source we have a lot of SSIS packages with complex dependencies and many transformations. We have the following opti...

Latest Reply
Phani1
Valued Contributor II
  • 3 kudos

My question is, how do we reliably orchestrate multiple Databricks Jobs/Workflows that run at mixed latencies and can write to the same silver and gold Delta tables? Could you please suggest the best approach and practices for this?

1 More Replies
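One of the options such threads typically weigh is Databricks Workflows itself, where dependencies between steps are declared per task with depends_on. A minimal sketch of the Jobs API shape, with hypothetical task and notebook names:

```json
{
  "name": "silver-gold-pipeline",
  "tasks": [
    {
      "task_key": "load_silver",
      "notebook_task": { "notebook_path": "/pipelines/load_silver" }
    },
    {
      "task_key": "build_gold",
      "depends_on": [ { "task_key": "load_silver" } ],
      "notebook_task": { "notebook_path": "/pipelines/build_gold" }
    }
  ]
}
```

Task-level depends_on keeps writers to the same silver/gold tables serialized within one job; coordination across separate jobs still needs its own mechanism.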
Jfoxyyc
by Valued Contributor
  • 2076 Views
  • 2 replies
  • 2 kudos

How to use partial_parse.msgpack with workflow dbt task?

I'm looking for direction on how to get the dbt task in Workflows to use the partial_parse.msgpack file to skip parsing files that haven't changed. I'm downloading my artifacts after each run, and the partial_parse file is being saved back to ADLS. Wha...

Latest Reply
Debayan
Databricks Employee
  • 2 kudos

Hi, could you please confirm your expectation and the use case? Do you want the file to be saved somewhere else?

1 More Replies
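For context, dbt's partial parsing reuses target/partial_parse.msgpack from the previous run. One approach is to restore the file from your saved artifacts as a step before `dbt run`; a minimal sketch, assuming the artifacts have already been downloaded locally (directory names below are hypothetical):

```shell
# Sketch: put a previously saved partial_parse.msgpack back into dbt's
# target/ directory before invoking `dbt run`, so partial parsing can reuse it.
# ARTIFACTS_DIR is hypothetical -- point it at wherever your ADLS download lands.
ARTIFACTS_DIR="downloaded_artifacts"
DBT_TARGET_DIR="target"

mkdir -p "$DBT_TARGET_DIR"
if [ -f "$ARTIFACTS_DIR/partial_parse.msgpack" ]; then
  cp "$ARTIFACTS_DIR/partial_parse.msgpack" "$DBT_TARGET_DIR/"
fi
# dbt run   # dbt checks target/partial_parse.msgpack when partial parsing is enabled
```

Note that dbt invalidates the cache anyway when dbt versions, env vars, or project config change between runs, so a restored file is a best-effort optimization.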