Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Passing variables from Python to SQL in a notebook using serverless compute

emorgoch
New Contributor II

I've written a notebook that runs some Python code to parse the workspace ID, figure out which of my environments I'm in, and set a value accordingly. I then want to pass that value into a block of SQL, using it as part of the schema and table names in the DML I'm executing.

I was able to do this on a standard shared compute cluster by using spark.conf.set() to create a parameter, then referencing that parameter within the SQL code using the ${myparam} syntax (e.g. SELECT * FROM ${myparam}_schema.MyTable). But in testing with serverless, the spark.conf.set() function isn't available.
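For reference, what I have today looks roughly like this (the parameter name and table are illustrative):

```python
# Python cell on a standard/shared cluster: stash the environment name in the Spark conf.
spark.conf.set("myparam", "dev")

# A later SQL cell then relies on ${} variable substitution, e.g.:
#   SELECT * FROM ${myparam}_schema.MyTable
# which resolves to dev_schema.MyTable before the query runs.
```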

Does anyone have any suggestions on how I might be able to accomplish the same thing in Serverless Compute?

1 ACCEPTED SOLUTION


Kaniz_Fatma
Community Manager

Hi @emorgoch, in Databricks serverless compute the `spark.conf.set()` function isn't available because of the secure shared access mode, but you can achieve similar functionality with alternative methods. Databricks widgets can pass parameters between notebook cells: set the value from Python (for example with `dbutils.widgets.text()`) and read it back with `dbutils.widgets.get()` or reference the widget from your SQL. You can also create temporary views in Python and query them from SQL code. For detailed guidance, refer to the [Best practices for serverless compute](https://docs.databricks.com/en/compute/serverless/best-practices.html).
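As a rough sketch of both approaches (the environment value and table names below are placeholders, not something specific to your workspace):

```python
# Python cell: assume earlier code has already mapped the workspace ID to an environment name.
env = "dev"  # placeholder -- replace with your workspace-id lookup

# Option 1: expose the value as a notebook widget so later cells can read it.
dbutils.widgets.text("env", env)
schema = f"{dbutils.widgets.get('env')}_schema"    # read the widget back in Python
df = spark.sql(f"SELECT * FROM {schema}.MyTable")  # build the qualified name in Python

# Option 2: resolve the table in Python and register a temp view for plain SQL cells.
spark.table(f"{env}_schema.MyTable").createOrReplaceTempView("my_table")
# A following %sql cell can then simply run: SELECT * FROM my_table
```

On recent runtimes a SQL cell may also be able to reference the widget directly with parameter-marker syntax (for example `IDENTIFIER(:env || '_schema.MyTable')`), but the f-string and temp-view routes above work on serverless without any substitution syntax.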


2 REPLIES

emorgoch
New Contributor II

Thanks Kaniz, this is a great suggestion. I'll look into how it can work for my projects.
