Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Passing variables from python to sql in a notebook using serverless compute

emorgoch
New Contributor II

I've written a notebook that executes some Python code to parse the workspace ID, figure out which of my environments I'm in, and set a value accordingly. I then want to pass that value through to a block of SQL, using it as part of the schema/table names in the DML I'm executing.
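
For illustration, a minimal sketch of that first step, with hypothetical workspace IDs and environment names (how the workspace ID is obtained will depend on your setup):

# Python cell: map the workspace ID to an environment name.
# The IDs and names below are placeholders.
workspace_id = "1234567890123456"  # obtained from the workspace context

env_by_workspace = {
    "1234567890123456": "dev",
    "6543210987654321": "prod",
}
env = env_by_workspace[workspace_id]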

I was able to do this on a standard shared compute cluster by using spark.conf.set() to create a parameter, then referencing that parameter in the SQL code with the ${myparam} syntax (e.g. SELECT * FROM ${myparam}_schema.MyTable). But in testing with Serverless, access to spark.conf.set() isn't available.
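
For reference, the classic-compute pattern described above looks roughly like this (the parameter name is just an example, and spark is the SparkSession predefined in a Databricks notebook):

# Python cell: publish the environment name as a Spark conf value.
spark.conf.set("myparam", env)

%sql
-- SQL cell: ${myparam} is substituted into the statement before it runs.
SELECT * FROM ${myparam}_schema.MyTable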

Does anyone have any suggestions on how I might be able to accomplish the same thing in Serverless Compute?
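
For anyone hitting the same limitation, a couple of serverless-friendly sketches (names are illustrative; the widget/parameter-marker approach assumes a recent Databricks runtime that supports :param and IDENTIFIER()):

# Option 1 - Python cell: build or parameterize the statement with spark.sql().
df = spark.sql(f"SELECT * FROM {env}_schema.MyTable")
# or, using a named parameter marker instead of string formatting:
df = spark.sql("SELECT * FROM IDENTIFIER(:env || '_schema.MyTable')", args={"env": env})

# Option 2 - Python cell: expose the value as a widget so a SQL cell can read it.
dbutils.widgets.text("env", env)

%sql
-- SQL cell: :env binds to the widget value; IDENTIFIER() turns the
-- concatenated string back into a table name.
SELECT * FROM IDENTIFIER(:env || '_schema.MyTable')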

1 REPLY

emorgoch
New Contributor II

Thanks Kaniz, this is a great suggestion. I'll look into it and how it can work for my projects.
