<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Cluster Config in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/cluster-config/m-p/143484#M52184</link>
    <description>&lt;P&gt;Hello &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/205136"&gt;@Naren1&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Yes, you can pass parameters from ADF into a Databricks Job run, but you generally can’t use those parameters to change the job cluster configuration (node type, Spark version, autoscale settings, init scripts, etc.) for that run.&lt;BR /&gt;In the ADF Databricks Job activity, the supported runtime customization is jobParameters: key-value pairs that are passed into the job run. The activity triggers an existing Databricks jobId, optionally with those jobParameters. Doc: &lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-job#databricks-job-activity-definition" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-job#databricks-job-activity-definition&lt;/A&gt;.&lt;/P&gt;
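&lt;P&gt;To make the mechanics concrete, here is a minimal sketch (plain REST rather than ADF itself; the job_id and the two parameter names are hypothetical placeholders) of what the Job activity effectively does: call the Jobs 2.1 run-now endpoint on an existing job, varying only the key-value job parameters while the job’s cluster configuration stays as defined on the job:&lt;/P&gt;
&lt;PRE&gt;# Minimal sketch of what ADF's Databricks Job activity effectively does.
# DATABRICKS_HOST (e.g. https://adb-xxxx.azuredatabricks.net), the token,
# and job_id 123 are hypothetical placeholders.
import os
import requests

resp = requests.post(
    f"{os.environ['DATABRICKS_HOST']}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    json={
        "job_id": 123,              # existing job; its cluster config is fixed
        "job_parameters": {         # per-run key-value pairs, as in ADF
            "env": "dev",
            "load_date": "2026-01-09",
        },
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["run_id"])  # id of the triggered run&lt;/PRE&gt;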
&lt;P&gt;Could you help me understand what you mean by 'environment': different libraries? A different Spark version? A different node size?&lt;/P&gt;</description>
    <pubDate>Fri, 09 Jan 2026 13:44:08 GMT</pubDate>
    <dc:creator>K_Anudeep</dc:creator>
    <dc:date>2026-01-09T13:44:08Z</dc:date>
    <item>
      <title>Cluster Config</title>
      <link>https://community.databricks.com/t5/data-engineering/cluster-config/m-p/143480#M52183</link>
      <description>&lt;P&gt;Hi, can we pass a parameter into the Job activity from the ADF side to change the environment inside the job cluster configuration?&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jan 2026 13:25:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cluster-config/m-p/143480#M52183</guid>
      <dc:creator>Naren1</dc:creator>
      <dc:date>2026-01-09T13:25:15Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Config</title>
      <link>https://community.databricks.com/t5/data-engineering/cluster-config/m-p/143484#M52184</link>
      <description>&lt;P&gt;Hello &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/205136"&gt;@Naren1&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Yes, you can pass parameters from ADF into a Databricks Job run, but you generally can’t use those parameters to change the job cluster configuration (node type, Spark version, autoscale settings, init scripts, etc.) for that run.&lt;BR /&gt;In the ADF Databricks Job activity, the supported runtime customization is jobParameters: key-value pairs that are passed into the job run. The activity triggers an existing Databricks jobId, optionally with those jobParameters. Doc: &lt;A href="https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-job#databricks-job-activity-definition" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-job#databricks-job-activity-definition&lt;/A&gt;.&lt;/P&gt;
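&lt;P&gt;To make the mechanics concrete, here is a minimal sketch (plain REST rather than ADF itself; the job_id and the two parameter names are hypothetical placeholders) of what the Job activity effectively does: call the Jobs 2.1 run-now endpoint on an existing job, varying only the key-value job parameters while the job’s cluster configuration stays as defined on the job:&lt;/P&gt;
&lt;PRE&gt;# Minimal sketch of what ADF's Databricks Job activity effectively does.
# DATABRICKS_HOST (e.g. https://adb-xxxx.azuredatabricks.net), the token,
# and job_id 123 are hypothetical placeholders.
import os
import requests

resp = requests.post(
    f"{os.environ['DATABRICKS_HOST']}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    json={
        "job_id": 123,              # existing job; its cluster config is fixed
        "job_parameters": {         # per-run key-value pairs, as in ADF
            "env": "dev",
            "load_date": "2026-01-09",
        },
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["run_id"])  # id of the triggered run&lt;/PRE&gt;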
&lt;P&gt;Could you help me understand what you mean by 'environment': different libraries? A different Spark version? A different node size?&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jan 2026 13:44:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cluster-config/m-p/143484#M52184</guid>
      <dc:creator>K_Anudeep</dc:creator>
      <dc:date>2026-01-09T13:44:08Z</dc:date>
    </item>
  </channel>
</rss>

