I've written a notebook that runs some Python code to parse the workspace ID, figure out which of my environments I'm in, and set a value accordingly. I then want to pass that value into a SQL cell, using it as part of the schema and table names that the DML runs against.
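For reference, the Python side is just a small lookup along these lines (the workspace IDs, environment names, and the `workspace_id` placeholder here are examples, not my real values):

```python
# Rough sketch: map the current workspace ID to an environment prefix.
# workspace_id is a placeholder here; in the real notebook it's parsed
# from the workspace the notebook is running in.
workspace_id = "1111111111111111"

ENV_BY_WORKSPACE = {
    "1111111111111111": "dev",
    "2222222222222222": "test",
    "3333333333333333": "prod",
}

env = ENV_BY_WORKSPACE.get(workspace_id, "dev")
```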
I was able to do this on a standard shared compute cluster by using spark.conf.set() to create a parameter and then referencing it in the SQL code with the ${myparam} syntax (e.g. SELECT * FROM ${myparam}_schema.MyTable). But in testing with Serverless, access to spark.conf.set() isn't available.
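On the classic cluster, the working pattern looked roughly like this (parameter and table names are just examples; `spark` is the session the notebook provides, and `env` comes from the lookup above):

```python
# Python cell: publish the environment prefix as a Spark conf value.
spark.conf.set("myparam", env)  # e.g. "dev", "test", or "prod"

# The following SQL cell then substitutes it with ${...}:
#   %sql
#   SELECT * FROM ${myparam}_schema.MyTable;
```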
Does anyone have any suggestions on how I might be able to accomplish the same thing in Serverless Compute?