- 13937 Views
- 7 replies
- 1 kudos
Please explain, with some use cases, the difference between DLT and dbt.
Latest Reply
Hi @Prachi Sankhala Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answe...
6 More Replies
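As a hedged, illustrative sketch of the difference the question asks about: a DLT (Delta Live Tables) pipeline is declared in Python with the `dlt` decorator API and only runs on Databricks, so it is shown here as a string, while a dbt model is a SQL SELECT that dbt materializes on the warehouse. The table and source names are hypothetical.

```python
# Hedged comparison sketch; "raw_orders" and "shop" are hypothetical names.

DLT_MODEL = '''
import dlt

@dlt.table(comment="cleaned orders")  # DLT manages orchestration and quality
def clean_orders():
    return spark.read.table("raw_orders").where("amount > 0")
'''

DBT_MODEL = '''
-- models/clean_orders.sql: dbt compiles this SELECT into a table or view
select * from {{ source('shop', 'raw_orders') }} where amount > 0
'''

print(DLT_MODEL.strip())
print(DBT_MODEL.strip())
```

The contrast in a sentence: DLT owns the pipeline runtime (scheduling, dependencies, expectations) inside Databricks, whereas dbt is a SQL transformation framework that runs against a warehouse and leaves orchestration to something else (such as a Databricks Workflows dbt task).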
- 6323 Views
- 11 replies
- 2 kudos
We are trying to execute a Databricks job with the dbt task type, but it is failing to authenticate to Git. The problem is that the job is created using a service principal, but the service principal doesn't seem to have access to the repo. A few questions we have: 1) can we giv...
Latest Reply
Hi @Rahul Samant I'm sorry you could not find a solution to your problem in the answers provided. Our community strives to provide helpful and accurate information, but sometimes an immediate solution may only be available for some issues. I suggest p...
10 More Replies
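One common fix for this situation, sketched here with placeholders: register Git credentials for the service principal itself via the Databricks Git Credentials API (`POST /api/2.0/git-credentials`), calling the API with a token minted for that service principal rather than for a human user. The host, Git username, and PAT below are all placeholders you must supply.

```python
import json

# Hedged sketch: payload for the Databricks Git Credentials API, created
# while authenticated AS the service principal. All values are placeholders.

HOST = "https://<your-workspace>.cloud.databricks.com"
ENDPOINT = HOST + "/api/2.0/git-credentials"

payload = {
    "git_provider": "gitHub",           # or gitLab, azureDevOpsServices, ...
    "git_username": "ci-bot",           # hypothetical Git account name
    "personal_access_token": "<PAT>",   # token with read access to the repo
}

# The actual call would be an authenticated POST, e.g. with requests:
#   requests.post(ENDPOINT,
#                 headers={"Authorization": f"Bearer {sp_token}"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```

The key point is that Git credentials are stored per identity, so a PAT registered for a human user does not carry over to jobs that run as a service principal.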
by kj1 • New Contributor III
- 5433 Views
- 8 replies
- 0 kudos
Problem: When running dbt with persist column docs enabled, we get the following error: org.apache.hadoop.hive.ql.metadata.HiveException: at least one column must be specified for the table. Background: There is an issue on the dbt-spark GitHub that was c...
Latest Reply
Also confirming that you do not have any of these limitations. From dbt's website: Some databases limit where and how descriptions can be added to database objects. Those database adapters might not support persist_docs, or might offer only partial su...
7 More Replies
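For context, a hedged sketch of the setting under discussion: `persist_docs` lives in `dbt_project.yml` (shown here as a string, with a hypothetical project name). With `columns: true`, dbt-spark emits column-level COMMENT statements, and if a model has no column descriptions in its `schema.yml`, Hive can raise exactly the "at least one column must be specified for the table" error, so the usual workarounds are to document every column or to scope the flag away from undocumented models.

```python
# Hedged sketch of the dbt_project.yml fragment; "my_project" is hypothetical.

DBT_PROJECT_YML = """
models:
  my_project:
    +persist_docs:
      relation: true   # persist model-level descriptions
      columns: true    # requires column descriptions in schema.yml
"""
print(DBT_PROJECT_YML.strip())
```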
by Phani1 • Valued Contributor II
- 1869 Views
- 2 replies
- 3 kudos
Hi Team, How can we efficiently orchestrate Databricks jobs that involve a lot of transformations, dependencies, and complexity? At the source we have a lot of SSIS packages with complex dependencies and many transformations. We have the following opti...
Latest Reply
My question is, how do we reliably orchestrate multiple Databricks Jobs/Workflows that run at mixed latencies and can write to the same silver and gold Delta tables? Could you please suggest the best approach and practices for this?
1 More Replies
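One option mentioned in threads like this is to express SSIS-style dependencies directly as a multi-task Databricks job (Jobs API 2.1), where each task declares `depends_on` so the scheduler runs bronze, then silver, then gold. A hedged sketch of such a job spec follows; the job name, task keys, and notebook paths are all hypothetical.

```python
import json

# Hedged sketch of a Jobs API 2.1 multi-task job spec with dependencies.
# All names and paths below are hypothetical placeholders.

job_spec = {
    "name": "nightly-etl",
    "tasks": [
        {"task_key": "bronze_ingest",
         "notebook_task": {"notebook_path": "/etl/bronze"}},
        {"task_key": "silver_transform",
         "depends_on": [{"task_key": "bronze_ingest"}],
         "notebook_task": {"notebook_path": "/etl/silver"}},
        {"task_key": "gold_aggregate",
         "depends_on": [{"task_key": "silver_transform"}],
         "notebook_task": {"notebook_path": "/etl/gold"}},
    ],
}
print(json.dumps(job_spec, indent=2))
```

Because the dependency graph lives in the job spec, the scheduler serializes writers to the shared silver and gold tables instead of relying on external coordination.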
- 2234 Views
- 2 replies
- 2 kudos
I'm looking for direction on how to get the dbt task in Workflows to use the partial_parse.msgpack file to skip parsing files that haven't changed. I'm downloading my artifacts after each run, and the partial_parse file is being saved back to ADLS. Wha...
Latest Reply
Hi, could you please confirm your expectation and the use case? Do you want the file to be saved somewhere else?
1 More Replies
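A hedged sketch of the mechanism being asked about: dbt skips re-parsing unchanged files when it finds `target/partial_parse.msgpack` left by a previous invocation, so if the file is persisted to ADLS after each run, it needs to be restored into `target/` before the next dbt command. Both paths below are placeholders.

```python
import shutil
from pathlib import Path

# Hedged sketch: restore a previously saved partial_parse.msgpack into
# target/ so dbt can reuse it. Paths are placeholders for this example.

restored = Path("artifacts/partial_parse.msgpack")  # copy pulled from ADLS
target = Path("target")
target.mkdir(exist_ok=True)

if restored.exists():
    shutil.copy(restored, target / "partial_parse.msgpack")
# then invoke dbt as usual; note that dbt falls back to a full parse when
# the dbt version or certain env/config values differ from the cached parse
```

In a Workflows dbt task, the restore step would have to run before the dbt commands in the same task environment, since the cache is only consulted at parse time.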