<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Passing variables from python to sql in a notebook using serverless compute in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/passing-variables-from-python-to-sql-in-a-notebook-using/m-p/81516#M36337</link>
    <description>&lt;P&gt;I've got a notebook that executes some python code to parse the workspace id, figure out which of my environments I'm in, and set a value for it. I then want to take that value and pass it through to a block of SQL, using the set value as part of the table structure names that I'm executing DML on.&lt;/P&gt;&lt;P&gt;I was able to do this on a standard shared compute cluster by using spark.conf.set() to create a parameter, and then referencing that parameter within the SQL code using ${myparam} syntax (i.e. SELECT * FROM ${myparam}_schema.MyTable). But in testing with Serverless, the spark.conf.set() function isn't available.&lt;/P&gt;&lt;P&gt;Does anyone have any suggestions on how I might accomplish the same thing on Serverless Compute?&lt;/P&gt;</description>
    <pubDate>Thu, 01 Aug 2024 14:54:36 GMT</pubDate>
    <dc:creator>emorgoch</dc:creator>
    <dc:date>2024-08-01T14:54:36Z</dc:date>
    <item>
      <title>Passing variables from python to sql in a notebook using serverless compute</title>
      <link>https://community.databricks.com/t5/data-engineering/passing-variables-from-python-to-sql-in-a-notebook-using/m-p/81516#M36337</link>
      <description>&lt;P&gt;I've got a notebook that executes some python code to parse the workspace id, figure out which of my environments I'm in, and set a value for it. I then want to take that value and pass it through to a block of SQL, using the set value as part of the table structure names that I'm executing DML on.&lt;/P&gt;&lt;P&gt;I was able to do this on a standard shared compute cluster by using spark.conf.set() to create a parameter, and then referencing that parameter within the SQL code using ${myparam} syntax (i.e. SELECT * FROM ${myparam}_schema.MyTable). But in testing with Serverless, the spark.conf.set() function isn't available.&lt;/P&gt;&lt;P&gt;Does anyone have any suggestions on how I might accomplish the same thing on Serverless Compute?&lt;/P&gt;</description>
      <pubDate>Thu, 01 Aug 2024 14:54:36 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/passing-variables-from-python-to-sql-in-a-notebook-using/m-p/81516#M36337</guid>
      <dc:creator>emorgoch</dc:creator>
      <dc:date>2024-08-01T14:54:36Z</dc:date>
    </item>
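    <!-- The workflow described in the question (derive an environment value in Python, then use it inside SQL object names) can be sketched as follows. This is a minimal sketch, not the thread's confirmed answer: the workspace-id-to-environment mapping and all ids are hypothetical, and the Spark calls are shown as comments since they assume a Databricks notebook where `spark` is defined. It relies on two documented features that work without spark.conf.set(): named parameter markers in spark.sql(..., args=...) and the SQL IDENTIFIER() clause, which lets a parameter form part of a table or schema name. -->

```python
# Sketch of a serverless-friendly alternative to spark.conf.set():
# compute the environment prefix in Python, then pass it into SQL via
# named parameter markers. The workspace-id mapping is hypothetical.

ENV_BY_WORKSPACE = {
    "1111111111111111": "dev",
    "2222222222222222": "prod",
}

def env_schema(workspace_id: str) -> str:
    """Map a workspace id to its environment-specific schema name."""
    env = ENV_BY_WORKSPACE.get(workspace_id, "dev")
    return f"{env}_schema"

schema = env_schema("2222222222222222")

# In a Databricks notebook (including serverless compute), the value can
# then be substituted safely; IDENTIFIER() allows the parameter to be
# used as part of an object name:
#
# df = spark.sql(
#     "SELECT * FROM IDENTIFIER(:schema || '.MyTable')",
#     args={"schema": schema},
# )
```

    <!-- A usage note: unlike ${myparam} string substitution, parameter markers are quoted by the engine, so a schema name derived from runtime context cannot inject arbitrary SQL. -->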
    <item>
      <title>Re: Passing variables from python to sql in a notebook using serverless compute</title>
      <link>https://community.databricks.com/t5/data-engineering/passing-variables-from-python-to-sql-in-a-notebook-using/m-p/83968#M37085</link>
      <description>&lt;P&gt;Thanks Kaniz, this is a great suggestion. I'll look into it and how it can work for my projects.&lt;/P&gt;</description>
      <pubDate>Thu, 22 Aug 2024 19:25:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/passing-variables-from-python-to-sql-in-a-notebook-using/m-p/83968#M37085</guid>
      <dc:creator>emorgoch</dc:creator>
      <dc:date>2024-08-22T19:25:40Z</dc:date>
    </item>
  </channel>
</rss>

