Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Ayur
by New Contributor II
  • 4889 Views
  • 3 replies
  • 4 kudos

Resolved! Unsupported_operation: Magic commands (e.g. %py, %sql and %run) are not supported with the exception of %pip within a Python notebook. Cells containing magic commands are ignored - DLT pipeline

Hi, I'm trying to use a magic command (to switch to Python in a notebook with SQL as the default language) in a DLT pipeline. When starting the pipeline, cells containing magic commands are ignored, with the warning message below: "Magic commands (e.g. %py, ...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Yassine Dehbi, hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Than...

2 More Replies
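
DLT runs each source notebook in a single language, so one workaround is to keep the SQL and the Python logic in separate notebooks and attach both to the pipeline. A minimal sketch of the Python side, with a hypothetical table name and source path:

import dlt

# %pip is the only magic command DLT honors; everything else is ignored,
# so this notebook contains plain Python only.
@dlt.table(comment="Python logic moved out of the SQL notebook")
def my_python_table():
    return spark.read.format("json").load("/mnt/raw/events/")  # hypothetical path
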
anujsen18
by New Contributor
  • 2951 Views
  • 2 replies
  • 0 kudos

How to overwrite a partition in a DLT pipeline?

I am trying to replicate my existing Spark pipeline in DLT but am not able to achieve the desired result. Current pipeline: source setup: CSV file ingested into bronze using SCP; frequency: monthly; bronze dir: /cntdlt/bronze/emp/year=2022 /...

Latest Reply
kfoster
Contributor
  • 0 kudos

From what I have observed: @dlt.table with a spark.read or dlt.read will create the table in mode=overwrite; @dlt.table with a spark.readStream or dlt.readStream will append new data. To get the update, use CDC: Change data capture with Delta Live Tables ...

1 More Replies
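
A minimal sketch of the behavior kfoster describes, reusing the bronze directory from the post (everything else is hypothetical):

import dlt

# Batch read: the live table is fully recomputed (effectively overwritten)
# on every pipeline update.
@dlt.table
def emp_bronze():
    return (spark.read.format("csv")
            .option("header", "true")
            .load("/cntdlt/bronze/emp/"))

# Streaming read: each update only appends newly arrived files.
@dlt.table
def emp_bronze_stream():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "csv")
            .load("/cntdlt/bronze/emp/"))
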
Mado
by Valued Contributor II
  • 3638 Views
  • 0 replies
  • 1 kudos

How to get a snapshot of a streaming delta table as a static table?

Hi, assume that I have a streaming Delta table. Is there any way to get a snapshot of the streaming table as a static table? The reason is that I need to join this streaming table with a static table by: output = output.join(country_information, ["Country"], ...

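
Since a streaming Delta table is still a Delta table, a plain batch read returns a static snapshot of its latest committed version. A sketch using the join from the post (the streaming table name is hypothetical):

# Batch-read the table the stream writes to; this pins a static snapshot.
snapshot = spark.read.table("my_streaming_table")  # hypothetical name
country_information = spark.read.table("country_information")

output = snapshot.join(country_information, ["Country"], "left")
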
Justin_Stuparit
by New Contributor II
  • 1776 Views
  • 2 replies
  • 1 kudos

Configure DLT Pipeline to use existing running cluster

How can I configure a DLT pipeline to use an existing running cluster? I don't see where in the settings to point the pipeline at an existing cluster; instead, it always wants to stand up a new cluster.

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Justin Stuparitz, hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. T...

1 More Replies
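
For context: DLT pipelines always run on pipeline-managed clusters, so an existing interactive cluster cannot be attached; the managed cluster can only be shaped through the pipeline settings. A hedged sketch of the relevant JSON fragment (node type and size are placeholders):

{
  "clusters": [
    {
      "label": "default",
      "node_type_id": "Standard_DS3_v2",
      "num_workers": 2
    }
  ]
}

Running the pipeline in development mode keeps this cluster alive between updates, which is the closest available equivalent to reusing a running cluster.
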
Jennifer_Lu
by New Contributor III
  • 1361 Views
  • 1 reply
  • 3 kudos

How do I programmatically get the database name in a DLT notebook?

I have configured a database in the settings of my DLT pipeline. Is there a way to retrieve that value programmatically from within a notebook? I want to do something like spark.read.table(f"{database}.table")

Latest Reply
Jfoxyyc
Valued Contributor
  • 3 kudos

You could also set it as a config value as database:value, and then retrieve it in the notebook using spark.conf.get(). I'm hoping they update DLT to support UC, and then allow us to set database/schema at the notebook level in @dlt.table(schema_name,...

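
A sketch of Jfoxyyc's suggestion, with a hypothetical configuration key: add e.g. mypipeline.database to the pipeline's Configuration settings, then read it back in the notebook:

# Reads the value set under Configuration in the DLT pipeline settings.
database = spark.conf.get("mypipeline.database")  # hypothetical config key
df = spark.read.table(f"{database}.table")
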
hello_world
by New Contributor III
  • 3705 Views
  • 7 replies
  • 3 kudos

What happens if I have both DLTs and normal tables in a single notebook?

I've just learned Delta Live Tables on Databricks Academy and have no environment to try it out. I'm wondering what happens to the pipeline if the notebook consists of both normal tables and DLTs. For example: Table A; DLT A that reads and cleans Table A; T...

Latest Reply
Rishabh-Pandey
Esteemed Contributor
  • 3 kudos

Hey @S L, according to your description you have a normal table Table A and a DLT table Table B, so it will throw an error that your upstream table is not a streaming live table, and you need to create Table A as a streaming live table if you want to use the ou...

6 More Replies
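
A sketch of the constraint Rishabh describes: to stream from an upstream table inside the pipeline, that upstream must itself be a live table (all names and the source path are hypothetical):

import dlt

# Upstream defined as a streaming live table...
@dlt.table
def table_a():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/raw/table_a/"))

# ...so the downstream cleaning step can stream from it.
@dlt.table
def dlt_a():
    return dlt.read_stream("table_a").dropna()
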
Aviral-Bhardwaj
by Esteemed Contributor III
  • 2878 Views
  • 6 replies
  • 30 kudos

DLT PipeLine Understanding

Hey guys, I hope you are doing very well. Today I was going through some Databricks documentation and found the DLT documentation, but when I try to implement it, it is not working very well. Can anyone share with me the whole code step by step and...

Latest Reply
Meghala
Valued Contributor II
  • 30 kudos

I'm also going through some Databricks documentation.

5 More Replies
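
For readers in the same position, a minimal end-to-end sketch of a DLT notebook (paths and names are hypothetical); create a pipeline that points at this notebook and start it:

import dlt
from pyspark.sql import functions as F

# Bronze: raw ingest.
@dlt.table(comment="Raw orders ingested from CSV")
def orders_bronze():
    return (spark.read.format("csv")
            .option("header", "true")
            .load("/data/orders/"))  # hypothetical path

# Silver: cleaned rows, guarded by a basic data-quality expectation.
@dlt.table(comment="Orders with a non-null id")
@dlt.expect_or_drop("valid_id", "order_id IS NOT NULL")
def orders_silver():
    return dlt.read("orders_bronze").withColumn("loaded_at", F.current_timestamp())
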
Reda
by New Contributor II
  • 1914 Views
  • 1 reply
  • 6 kudos

Creating a DLT pipeline that reads from a JDBC source

Hey, I'm trying to create a DLT pipeline that reads from a JDBC source, and the code I'm using looks something like this in Python:
import dlt

@dlt.table
def table_name():
    driver = 'oracle.jdbc.driver.OracleDriver'
    url = '...'
    query = 'SELECT ......

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Reda Bitar, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first; otherwise Bricksters will get back to you soon. Thanks

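
One hedged way to complete the pattern from the post: keep the JDBC options inside the table function and pull credentials from a secret scope (the URL, query, and scope names below are placeholders):

import dlt

@dlt.table
def table_name():
    return (spark.read.format("jdbc")
            .option("driver", "oracle.jdbc.driver.OracleDriver")
            .option("url", "jdbc:oracle:thin:@//host:1521/service")     # placeholder
            .option("query", "SELECT ...")                              # placeholder, as in the post
            .option("user", dbutils.secrets.get("jdbc-scope", "user"))  # hypothetical scope/keys
            .option("password", dbutils.secrets.get("jdbc-scope", "password"))
            .load())
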
J_M_W
by Contributor
  • 3200 Views
  • 2 replies
  • 5 kudos

Resolved! Databricks is automatically creating an _apply_changes_storage table in the database when using apply_changes for Delta Live Tables

Hi there, I am using apply_changes (aka Delta Live Tables change data capture) and it works fine. However, it seems to automatically create a secondary table in the database metastore called _apply_changes_storage_{tableName}. So for every table I use ...

Latest Reply
J_M_W
Contributor
  • 5 kudos

Hi - thanks @Hubert Dudek, I will look into disabling access for the users!

1 More Replies
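
For context, a sketch of the apply_changes pattern discussed in the thread (source, keys, and sequencing column are hypothetical); the _apply_changes_storage table the poster saw is the internal backing table DLT maintains for the target, and the thread's resolution was to restrict user access to it rather than remove it:

import dlt

dlt.create_streaming_live_table("customers")  # CDC target

dlt.apply_changes(
    target="customers",
    source="customers_cdc",   # hypothetical CDC feed defined elsewhere in the pipeline
    keys=["customer_id"],
    sequence_by="event_ts",
)
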
Arumugam
by New Contributor II
  • 4023 Views
  • 5 replies
  • 1 kudos

DLT Pipeline failed to Start due to "The Execution Contained atleast one disallowed language

Hi, I'm trying to set up a DLT pipeline; it's a basic pipeline for testing purposes. I'm facing the issue while starting the pipeline; any help is appreciated. Code:
@dlt.table(name="dlt_bronze_cisco_hardware")
def dlt_cisco_networking_bronze_hardware():
    ret...

Latest Reply
Vivian_Wilfred
Databricks Employee
  • 1 kudos

Hi @Arumugam Ramachandran, it seems like you have a Spark config set on your DLT job cluster that allows only Python and SQL code. Check the Spark config (cluster policy). In any case, the Python code should work. Verify the notebook's default language, ...

4 More Replies
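
The cluster setting Vivian refers to is presumably the REPL language allow-list; a Spark config (or cluster policy) entry like the following restricts a cluster to Python and SQL:

spark.databricks.repl.allowedLanguages python,sql
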
ef-zee
by New Contributor III
  • 14512 Views
  • 3 replies
  • 7 kudos

How to resolve the INVALID_PARAMETER_VALUE error in a Delta Live Tables pipeline?

I am trying to execute a DLT pipeline, but I am getting an error which says: "INVALID_PARAMETER_VALUE: The field 'node_type_id' cannot be supplied when an instance pool ID is provided." I am using my company's Azure Databricks platform with premium b...

Latest Reply
Debayan
Databricks Employee
  • 7 kudos

Do you have cluster ACL enabled?

2 More Replies
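
The error message itself points at the fix: when the pipeline cluster references an instance pool, omit node_type_id and supply pool IDs instead. A hedged sketch of the relevant pipeline-settings fragment (pool IDs are placeholders):

{
  "clusters": [
    {
      "label": "default",
      "instance_pool_id": "pool-xxxxxxxx",
      "driver_instance_pool_id": "pool-xxxxxxxx"
    }
  ]
}
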
159312
by New Contributor III
  • 2482 Views
  • 4 replies
  • 2 kudos

Access workflow settings from within a notebook.

I have a notebook used for a DLT pipeline. The pipeline should perform an extra task if the pipeline is run as a full refresh. Right now, I have to set an extra configuration parameter when I run a full refresh. Is there a way to programmatically...

Latest Reply
Vidula
Honored Contributor
  • 2 kudos

Hi @Ben Bogart, hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Thanks!

3 More Replies
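
The thread does not surface a supported API for detecting a full refresh, so the poster's own workaround remains the pattern: set a configuration flag when triggering the full refresh and branch on it in the notebook (the key and task below are hypothetical):

# Set e.g. "mypipeline.full_refresh" = "true" in the pipeline configuration
# before running a full refresh, and reset it afterwards.
is_full_refresh = spark.conf.get("mypipeline.full_refresh", "false") == "true"

if is_full_refresh:
    run_extra_task()  # hypothetical extra task from the post
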