<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to work with DLT pipelines? Best practices? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15816#M10089</link>
    <description>&lt;P&gt;I guess it may just not be what we expected, but probably still a powerful tool for the right use cases. &lt;/P&gt;</description>
    <pubDate>Thu, 22 Dec 2022 13:39:36 GMT</pubDate>
    <dc:creator>espenol</dc:creator>
    <dc:date>2022-12-22T13:39:36Z</dc:date>
    <item>
      <title>How to work with DLT pipelines? Best practices?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15812#M10085</link>
      <description>&lt;P&gt;So I'm used to developing notebooks interactively. Write some code, run it to see if I made an error, and if there is no error, filter and display the dataframe to check that I did what I intended. With DLT pipelines, however, I can't run interactively. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is my understanding correct that, to develop a DLT pipeline, I should first develop a notebook interactively, and then AFTER everything works, add the DLT decorators to the code before creating a DLT pipeline?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To me it seems like a big hassle to develop this way, especially if an error occurs and I have to debug the pipeline. I would then have to remove the DLT decorators again before running interactively. Perhaps using two side-by-side notebooks can alleviate these issues, where one holds the interactive code and the other imports it and applies the DLT decorators, dlt.read, etc.? I think that may work. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If someone can give me some pointers on how to develop and maintain DLT pipelines in practice, I'd be super grateful. I feel like I'm missing some selling points. &lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2022 06:59:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15812#M10085</guid>
      <dc:creator>espenol</dc:creator>
      <dc:date>2022-12-20T06:59:55Z</dc:date>
    </item>
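    <!-- A minimal sketch (not from the thread) of one variant of the workflow discussed above: guard the dlt import with a no-op stub so the same notebook runs both interactively and inside a pipeline, instead of maintaining two side-by-side notebooks. The stub class and the table name "clean_events" are illustrative assumptions; in a real pipeline, the genuine dlt module is provided by the DLT runtime.

    ```python
    # Sketch: fall back to a no-op decorator when the real DLT runtime
    # is absent, so the notebook can be run and debugged interactively.
    try:
        import dlt  # provided only when executed by a DLT pipeline
    except ImportError:
        class _DltStub:
            def table(self, *args, **kwargs):
                # no-op replacement for @dlt.table: return the function unchanged
                def decorator(fn):
                    return fn
                return decorator
        dlt = _DltStub()

    @dlt.table(name="clean_events")
    def clean_events():
        # transformation logic stays plain; interactively this is just an
        # ordinary function you can call and inspect (lists stand in for
        # DataFrames to keep the sketch self-contained)
        raw = [{"id": 1, "ok": True}, {"id": 2, "ok": False}]
        return [r for r in raw if r["ok"]]
    ```

    Interactively, clean_events is a plain function you can call and display; under the real DLT runtime, the genuine decorator registers it as a live table. -->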
    <item>
      <title>Re: How to work with DLT pipelines? Best practices?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15813#M10086</link>
      <description>&lt;P&gt;Yes, exactly. I am also working on DLT, and what I have found is that to check an error we have to run the pipeline again and again while debugging, which is not a good practice. The other method is to create the same notebook without the DLT decorators so that we can debug the pipeline and find the particular error. This is the only option we have for now; let's hope that over time we get more efficient ways to debug our pipelines. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks, Rishabh &lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2022 07:06:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15813#M10086</guid>
      <dc:creator>Rishabh-Pandey</dc:creator>
      <dc:date>2022-12-20T07:06:23Z</dc:date>
    </item>
    <item>
      <title>Re: How to work with DLT pipelines? Best practices?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15814#M10087</link>
      <description>&lt;P&gt;Well, I'm glad I'm not the only one. I think two side-by-side notebooks will be our solution going forward, unless someone here can give us a better suggestion. Things break all the time (we're not a mature organization), so fixing problems needs to be a smooth process. Maybe we shouldn't even be using DLT at our level of maturity. &lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2022 07:18:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15814#M10087</guid>
      <dc:creator>espenol</dc:creator>
      <dc:date>2022-12-20T07:18:59Z</dc:date>
    </item>
    <item>
      <title>Re: How to work with DLT pipelines? Best practices?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15815#M10088</link>
      <description>&lt;P&gt;Delta Live Tables still has some drawbacks and does not yet meet expectations for production use, so it is not recommended for production workloads. Well, let's hope for the best.&lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2022 09:39:49 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15815#M10088</guid>
      <dc:creator>Rishabh-Pandey</dc:creator>
      <dc:date>2022-12-20T09:39:49Z</dc:date>
    </item>
    <item>
      <title>Re: How to work with DLT pipelines? Best practices?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15816#M10089</link>
      <description>&lt;P&gt;I guess it may just not be what we expected, but probably still a powerful tool for the right use cases. &lt;/P&gt;</description>
      <pubDate>Thu, 22 Dec 2022 13:39:36 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-work-with-dlt-pipelines-best-practices/m-p/15816#M10089</guid>
      <dc:creator>espenol</dc:creator>
      <dc:date>2022-12-22T13:39:36Z</dc:date>
    </item>
  </channel>
</rss>

