<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Databricks and DDD in Administration &amp; Architecture</title>
    <link>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30676#M181</link>
    <description>&lt;P&gt;Our architecture follows Domain-Driven Design (DDD); our data is therefore distributed across different domains.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We would like to run workloads on top of our data, but we want to avoid having a dedicated (duplicated) data lake just for Databricks. Instead, we would prefer to rely directly on our own data sources (accessible via REST APIs) so that we always run on the same, latest data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could anybody point me to some resources to get started? An abstraction layer between what we use in a notebook and what our backend APIs look like would definitely be fine...&lt;/P&gt;</description>
    <pubDate>Tue, 18 Mar 2025 16:50:06 GMT</pubDate>
    <dc:creator>Dunken</dc:creator>
    <dc:date>2025-03-18T16:50:06Z</dc:date>
    <item>
      <title>Databricks and DDD</title>
      <link>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30676#M181</link>
      <description>&lt;P&gt;Our architecture follows Domain-Driven Design (DDD); our data is therefore distributed across different domains.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We would like to run workloads on top of our data, but we want to avoid having a dedicated (duplicated) data lake just for Databricks. Instead, we would prefer to rely directly on our own data sources (accessible via REST APIs) so that we always run on the same, latest data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could anybody point me to some resources to get started? An abstraction layer between what we use in a notebook and what our backend APIs look like would definitely be fine...&lt;/P&gt;</description>
      <pubDate>Tue, 18 Mar 2025 16:50:06 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30676#M181</guid>
      <dc:creator>Dunken</dc:creator>
      <dc:date>2025-03-18T16:50:06Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks and DDD</title>
      <link>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30677#M182</link>
      <description>&lt;P&gt;You can just use urlopen or requests and then read the JSON as a DataFrame using spark.read.json(). The problem is that you will then need to handle the whole loading logic yourself (when to load data, how to handle incremental loads, etc.).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;An easier solution is to use streaming: put the data from your API on Kafka (confluent.io can also be provisioned through Azure) or any other stream such as Event Hubs. Your newest data can then be read as a Kafka stream in Databricks, and the processed data saved to a destination of your choice. On your side of the infrastructure you could simply deploy a microservice that reads from the REST APIs and writes to the stream.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jan 2022 17:17:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30677#M182</guid>
      <dc:creator>Hubert-Dudek</dc:creator>
      <dc:date>2022-01-27T17:17:44Z</dc:date>
    </item>
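A minimal sketch of the reply's first suggestion (calling a REST API with requests and reading the JSON into a Spark DataFrame). The endpoint URL, the "items" wrapper field, and the parse_records helper are illustrative assumptions, not details from the thread:

```python
import json

def parse_records(payload):
    """Extract the list of records from a JSON API response body."""
    body = json.loads(payload)
    # Assumption: the API wraps its records in an "items" field.
    return body.get("items", [])

# In a Databricks notebook you would then do something like:
#   import requests
#   resp = requests.get("https://orders.example.com/api/v1/orders", timeout=30)
#   records = parse_records(resp.text)
#   df = spark.createDataFrame(records)   # 'spark' is the notebook session
#   df.createOrReplaceTempView("orders")  # query it with Spark SQL from here

sample = '{"items": [{"id": 1, "total": 9.5}, {"id": 2, "total": 12.0}]}'
print(parse_records(sample))
```

As the reply notes, this pulls a full snapshot on every run; incremental loading logic stays your responsibility.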
    <item>
      <title>Re: Databricks and DDD</title>
      <link>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30678#M183</link>
      <description>&lt;P&gt;So basically you do not want to persist data outside of your source systems.&lt;/P&gt;&lt;P&gt;I think the so-called 'Kappa architecture' could be a fit, where everything is treated as a stream.&lt;/P&gt;&lt;P&gt;Hubert already mentioned Kafka, which is an excellent source to build this on (there are also others). On top of that you could use Spark, Flink, or whatever fits.&lt;/P&gt;&lt;P&gt;There are also Apache NiFi, StreamSets, and others.&lt;/P&gt;&lt;P&gt;Kappa architecture is pretty cool, but not without its flaws.&lt;/P&gt;&lt;P&gt;There is also the fairly recent &lt;A href="https://datakitchen.io/what-is-a-data-mesh/" alt="https://datakitchen.io/what-is-a-data-mesh/" target="_blank"&gt;'data mesh'&lt;/A&gt;, where providing data is treated as domain-based. This could be a match for your use case.&lt;/P&gt;&lt;P&gt;But this approach of course also has its flaws (e.g. governance, significant overhead).&lt;/P&gt;</description>
      <pubDate>Fri, 28 Jan 2022 07:13:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30678#M183</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2022-01-28T07:13:00Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks and DDD</title>
      <link>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30679#M184</link>
      <description>&lt;P&gt;Thanks. If I used streaming, I would be replicating all my data sources, wouldn't I? This is actually something I would like to avoid. Also, because I don't know up front which data I'm interested in, I would have to store everything in Databricks.&lt;/P&gt;</description>
      <pubDate>Fri, 28 Jan 2022 16:51:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30679#M184</guid>
      <dc:creator>Dunken</dc:creator>
      <dc:date>2022-01-28T16:51:12Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks and DDD</title>
      <link>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30680#M185</link>
      <description>&lt;P&gt;If you really want to avoid replicating data (which means reporting directly on your source systems), you can look into query federation engines such as Presto, Trino, or Dremio.&lt;/P&gt;</description>
      <pubDate>Mon, 31 Jan 2022 13:15:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/databricks-and-ddd/m-p/30680#M185</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2022-01-31T13:15:40Z</dc:date>
    </item>
  </channel>
</rss>

