Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Dave_Nithio
by Contributor
  • 8463 Views
  • 6 replies
  • 7 kudos

Resolved! Delta Live Table Pipeline with Multiple Notebooks

I have two notebooks created for my Delta Live Table pipeline. The first is a utils notebook with functions I will be reusing for other pipelines. The second contains my actual creation of the delta live tables. I added both notebooks to the pipeline...

Latest Reply
JackyL
New Contributor II
  • 7 kudos

Hi Dave, you can solve this by putting your utils into a Python file and referencing the .py file in the DLT notebook (a sketch of this pattern follows this thread). I provided a template for the Python file below: STEP 1: #import functions from pyspark.sql import SparkSession import IPython dbut...

5 More Replies
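A minimal sketch of the pattern described above, assuming a hypothetical shared module utils.py kept in the workspace; all names and paths are illustrative, not the poster's actual code:

    # utils.py -- shared helpers kept outside the DLT notebook (hypothetical)
    from pyspark.sql import DataFrame
    from pyspark.sql import functions as F

    def add_ingest_metadata(df: DataFrame) -> DataFrame:
        # Append a common audit column; reusable across pipelines
        return df.withColumn("ingested_at", F.current_timestamp())

    # DLT notebook -- put the module's folder on sys.path, then import it
    import sys
    sys.path.append("/Workspace/Repos/my-repo/shared")  # hypothetical location of utils.py

    import dlt
    import utils

    @dlt.table
    def bronze_events():
        df = spark.read.format("json").load("/mnt/raw/events")  # hypothetical source
        return utils.add_ingest_metadata(df)

Keeping the helpers in a plain .py file sidesteps the %run restriction inside DLT and lets several pipelines share one module.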
jfvizoso
by New Contributor II
  • 7896 Views
  • 4 replies
  • 0 kudos

Can I pass parameters to a Delta Live Table pipeline at running time?

I need to execute a DLT pipeline from a Job, and I would like to know if there is any way of passing a parameter. I know you can have settings in the pipeline that you use in the DLT notebook, but it seems you can only assign values to them when crea...

Latest Reply
lprevost
Contributor
  • 0 kudos

This seems to be the key to this question: parameterize for dlt. My understanding is that you can add the parameter either in the DLT settings UI via the Advanced Config / Add Configuration key-value dialog, or via the corresponding pipeline set... (a sketch follows this thread)

3 More Replies
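A sketch of that configuration route: set a key-value pair in the pipeline's Advanced Config (or in the spec's configuration block) and read it in the DLT notebook with spark.conf.get. The key and table names are hypothetical; note the value is fixed for the duration of an update, so a Job would have to edit the pipeline settings first (e.g. via the Pipelines API) rather than pass it at start time:

    import dlt

    # Pipeline settings would contain, e.g.  mypipeline.start_date = 2023-01-01  (hypothetical key)
    start_date = spark.conf.get("mypipeline.start_date", "2023-01-01")

    @dlt.table
    def filtered_orders():
        # Apply the configured parameter inside the table definition
        return spark.read.table("source_db.orders").where(f"order_date >= '{start_date}'")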
shagun
by New Contributor III
  • 4982 Views
  • 3 replies
  • 0 kudos

Resolved! Delta live tables target schema

The first time I run my Delta Live Table pipeline after setup, I get this error on starting it: org.apache.spark.sql.catalyst.parser.ParseException: Possibly unquoted identifier my-schema-name detected. Please con...

Latest Reply
BenTendo
New Contributor II
  • 0 kudos

This still errors on internal Databricks Spark/Python code like deltaTable.history() (a quoting sketch follows this thread).

2 More Replies
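The ParseException above typically points at a hyphenated name such as my-schema-name: in Spark SQL a hyphenated identifier must be backtick-quoted everywhere it appears. A sketch of both the quick fix and the safer rename (table names hypothetical):

    # Quick fix: backtick-quote every hyphenated identifier
    spark.sql("SELECT * FROM `my-schema-name`.`my-table`")

    # Safer long-term fix: use underscores so no quoting is ever needed,
    # including in internal calls such as deltaTable.history() noted above
    spark.sql("SELECT * FROM my_schema_name.my_table")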
J_M_W
by Contributor
  • 4205 Views
  • 2 replies
  • 3 kudos

Resolved! Can you use %run or dbutils.notebook.run in a Delta Live Table pipeline?

Hi there, can you use %run or dbutils.notebook.run() in a Delta Live Table (DLT) pipeline? When I try, I get the following error: "IllegalArgumentException: requirement failed: To enable notebook workflows, please upgrade your Databricks subscriptio...

Latest Reply
J_M_W
Contributor
  • 3 kudos

Hi all. @Kaniz Fatma thanks for your answer. I am on the premium pricing tier in Azure. After digging around the logs, it would seem that you cannot run magic commands in a Delta Live Table pipeline. Therefore, you cannot use %run in a DLT pipeline - w... (a workaround sketch follows this thread)

1 More Reply
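Since %run and dbutils.notebook.run() are unsupported inside a DLT pipeline, the usual workaround is to attach each notebook to the pipeline as its own library, as in this sketch of the pipeline spec (paths hypothetical):

    {
      "name": "my-dlt-pipeline",
      "libraries": [
        { "notebook": { "path": "/Repos/my-repo/dlt/utils_notebook" } },
        { "notebook": { "path": "/Repos/my-repo/dlt/tables_notebook" } }
      ]
    }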
d_meaker
by New Contributor II
  • 1769 Views
  • 3 replies
  • 0 kudos

map_keys() returns an empty array in Delta Live Table pipeline.

We are exploding a map type column into multiple columns based on the keys of the map column. Part of this process is to extract the keys of a map type column called json_map as illustrated in the snippet below. The code executes as expected when run...

Latest Reply
d_meaker
New Contributor II
  • 0 kudos

Hi @Suteja Kanuri, thank you for your response and explanation. The code I have shown above is not the exact snippet we are using. Please find the exact snippet below. We are dynamically extracting the keys of the map and then using getItem() to mak... (a sketch of this pattern follows this thread)

2 More Replies
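A sketch of that dynamic-key pattern, assuming the map keys can be enumerated up front from a batch read (column and table names hypothetical); collecting keys from a streaming source inside a DLT update is a known pitfall and is not attempted here:

    from pyspark.sql import functions as F

    df = spark.read.table("source_db.raw_json")  # hypothetical source

    # Gather the distinct map keys: explode(map_keys(...)) yields one row per key
    keys = [
        r["key"]
        for r in df.select(F.explode(F.map_keys(F.col("json_map"))).alias("key"))
                   .distinct()
                   .collect()
    ]

    # Promote each key to its own column with getItem()
    df_wide = df.select("*", *[F.col("json_map").getItem(k).alias(k) for k in keys])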
NM447101
by New Contributor II
  • 2539 Views
  • 3 replies
  • 1 kudos

Error when creating a delta live table pipeline

INVALID_PARAMETER_VALUE: Validation failed for node_type_id, the value must be Standard_DS3_v2 (is "Standard_F8s") 

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Nitya Mehta, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

2 More Replies
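The validation message suggests a workspace or cluster policy that only permits Standard_DS3_v2, so setting the pipeline's cluster node type to the allowed value should clear it. A hedged sketch of the relevant clusters block in the pipeline spec (autoscale values illustrative):

    {
      "clusters": [
        {
          "label": "default",
          "node_type_id": "Standard_DS3_v2",
          "autoscale": { "min_workers": 1, "max_workers": 3 }
        }
      ]
    }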
BigMF
by New Contributor III
  • 3624 Views
  • 2 replies
  • 1 kudos

Resolved! Can I use Widgets in a Delta Live Table pipeline

Hello, I'm pretty new to Databricks in general and Delta Live Tables specifically. My problem statement is that I'd like to loop through a set of files and run a notebook that loads the data into some Delta Live Tables. Additionally, I'd like to include...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 1 kudos

It may be possible: in the pipeline's settings there is a configurations option where the user can add these values (a loop-based sketch follows this thread).

1 More Reply
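Widgets are not available during a DLT update, but the looping use case can be covered by generating table definitions from a list, which could itself come from pipeline configuration via spark.conf.get. A sketch with hypothetical file names; the factory function avoids Python's late-binding pitfall in loops:

    import dlt

    # Hypothetical list of sources; could also come from spark.conf.get(...)
    SOURCE_FILES = ["customers", "orders", "payments"]

    def make_table(name):
        @dlt.table(name=f"bronze_{name}")
        def _table():
            return (spark.read.format("csv")
                         .option("header", "true")
                         .load(f"/mnt/raw/{name}.csv"))  # hypothetical path
        return _table

    for name in SOURCE_FILES:
        make_table(name)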
Aaron1234567
by New Contributor III
  • 4106 Views
  • 9 replies
  • 3 kudos

Delta Live Table Pipeline - Azure cluster fail

Failed to launch pipeline cluster 0802-171503-4m02lexd: The operation could not be performed on your account with the following error message: azure_error_code: OperationNotAllowed, azure_error_message: Operation could not be completed as it results ...

Latest Reply
Vidula
Honored Contributor
  • 3 kudos

Hi there @Aaron LeBato, hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you...

8 More Replies
aladda
by Databricks Employee
  • 2493 Views
  • 1 reply
  • 0 kudos
Latest Reply
aladda
Databricks Employee
  • 0 kudos

Here's the difference between a View and a Table in the context of a Delta Live Table Pipeline: Views are similar to a temporary view in SQL and are an alias for some computation. A view allows you to break a complicated query into smaller or easier-to-understan... (a sketch contrasting the two follows this thread)

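A minimal sketch contrasting the two, per the reply above: a view is an alias that is recomputed by its consumers within the pipeline, while a table is materialized to the pipeline's storage (dataset and column names hypothetical):

    import dlt
    from pyspark.sql import functions as F

    @dlt.view
    def cleaned_events():
        # Not persisted; re-evaluated wherever it is read within the pipeline
        return spark.read.table("source_db.events").where(F.col("event_id").isNotNull())

    @dlt.table
    def daily_event_counts():
        # Materialized and published like a regular Delta table
        return dlt.read("cleaned_events").groupBy("event_date").count()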
aladda
by Databricks Employee
  • 1117 Views
  • 1 reply
  • 0 kudos
Latest Reply
aladda
Databricks Employee
  • 0 kudos

Yes. You can specify a "target" database as part of your DLT pipeline configuration to publish results to a target database in the metastore. See - https://docs.databricks.com/data-engineering/delta-live-tables/delta-live-tables-quickstart.html#publi...

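A sketch of the setting described above; the database name is hypothetical:

    {
      "name": "my-dlt-pipeline",
      "target": "my_analytics_db",
      "libraries": [ { "notebook": { "path": "/Repos/my-repo/dlt/tables_notebook" } } ]
    }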
aladda
by Databricks Employee
  • 933 Views
  • 1 reply
  • 0 kudos
Latest Reply
aladda
Databricks Employee
  • 0 kudos

DLT pipeline results are published to the "Storage Location" defined as part of configuring the pipeline. Ex: https://docs.databricks.com/_images/dlt-create-notebook-pipeline.png If an explicit Storage Location is not specified, the pipeline results ... (a spec sketch follows this thread)

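And the corresponding storage setting; if it is omitted, DLT generates a default location under dbfs:/pipelines/ (the explicit path below is hypothetical):

    {
      "name": "my-dlt-pipeline",
      "storage": "dbfs:/mnt/dlt-storage/my-pipeline"
    }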
User16783854357
by New Contributor III
  • 954 Views
  • 1 reply
  • 1 kudos

How to run a Delta Live Table pipeline with a different runtime?

I would like to run a DLT pipeline with the 8.2 runtime.

Latest Reply
User16783854357
New Contributor III
  • 1 kudos

You can add the below JSON property to the Delta Live Table pipeline specification at the parent level: "dbr_version": "8.2" (a sketch in context follows this thread).

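In context, the property sits at the top level of the spec alongside fields like name and libraries (the other values here are hypothetical):

    {
      "name": "my-dlt-pipeline",
      "dbr_version": "8.2",
      "libraries": [ { "notebook": { "path": "/Repos/my-repo/dlt/notebook" } } ]
    }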