Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

databricks api to create a serverless job

anazen13
New Contributor

I am trying to follow your documentation on how to create a serverless job via the API: https://docs.databricks.com/api/workspace/jobs/create#environments-spec-environment_version. Sending the JSON request works, and I can see a serverless cluster for my test job in the Databricks UI. However, even though I specify the environment and dependencies in my JSON, they seem to be completely ignored. By default I get serverless environment 3. Can you give me a hint about what is going wrong in this case?

Here is my JSON:

{
  "environments": [
    {
      "environment_key": "default_python",
      "spec": {
        "environment_version": "1",
        "dependencies": [
          "/Volumes/<mycatalog>/..."
        ]
      }
    }
  ],
  "name": "${{ parameters.TEST_RUN_TITLE }}",
  "tasks": [
    {
      "task_key": "pytest_task",
      "notebook_task": {
        "notebook_path": "${{ parameters.NOTEBOOK_PATH }}",
        "base_parameters": {
          "build_id": "$(Build.BuildId)"
        }
      }
    }
  ]
}

 


szymon_dybczak
Esteemed Contributor III

Hi @anazen13 ,

Are you using the Databricks CLI to create the serverless job, or are you calling the REST API directly?

Yeah, I define the JSON based on the doc link provided in the original message and then use the CLI to create a new job.

 

JOB_RESPONSE=$(databricks jobs create --json "$JOB_JSON")
JOB_ID=$(echo "$JOB_RESPONSE" | jq -r '.job_id')

 

 

szymon_dybczak
Esteemed Contributor III

OK, thanks for the info @anazen13. If I were you, I would double-check that you're using the latest version of the Databricks CLI. Your payload looks correct, but if you're using an old version of the CLI, the new environment_version attribute may not be supported yet.
If that doesn't help, then it's possible you've found a bug. In that case, you can report it to Databricks support.
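
For example, a quick sanity check of which CLI version is actually on your PATH, plus one possible way to upgrade it (the exact upgrade command depends on how the CLI was installed; the Homebrew line below is only one possibility):

# Show the Databricks CLI version currently on PATH.
databricks --version

# Upgrade it if it's outdated; for a Homebrew-based install, for example:
brew upgrade databricks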

Hey, thanks for responding quickly. The version I use is Databricks CLI v0.265.0. I guess that should be good enough, because I see the most recent v0.266.0 was released just yesterday.

szymon_dybczak
Esteemed Contributor III

So, I've tested it in my environment and it worked as expected. But I used the newest version of the CLI.

So, here's my payload:

(screenshot: the JSON payload used for the test)

Here's the CLI command I used:

databricks jobs create --json @payload.json

 

And here's the outcome. A new job was created with the specification I defined in my JSON payload:

(screenshots: the created job and its configuration in the workspace UI)
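
The same thing can also be checked from the CLI instead of the UI, for example (this assumes the jobs get response nests the stored job settings under .settings and that jq is available):

# Fetch the job back and inspect the environments block that was actually stored.
databricks jobs get <job_id> | jq '.settings.environments'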

 

 

I just tried updating my JSON to match yours as closely as possible, but still no success 😞

I think I will report this issue to Databricks support anyway, so they can debug further what's not okay in my case. But thanks so much for your time.

szymon_dybczak
Esteemed Contributor III

Sure, no problem. Let us know about the outcome 🙂

siennafaleiro
New Contributor

It looks like you're hitting one of the current limitations of Databricks serverless jobs. Even though the API supports passing an environments object, only certain fields are honored right now. In particular:

  • The environment_version parameter will default to the latest supported runtime (currently Serverless Environment 3), and you can't override it with "1" or other values yet.
  • Custom dependencies aren't picked up from the job JSON directly. For serverless jobs, dependency management has to be handled through Databricks Asset Bundles, package installation in the notebook itself (e.g., %pip install), or by referencing volumes/libraries within the task.

So in your example, the JSON is valid, but the fields you're setting are ignored because serverless doesn't currently allow custom environment pinning or inline dependency injection.

If you need strict control over environment/dependencies, you'll want to either:

  1. Use a standard job cluster with your desired runtime + libraries.
  2. Or package dependencies in a wheel/requirements file and install them at task start in the serverless job (see the sketch below).
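
For option 2, a minimal sketch of what the first notebook cell might look like (the volume path and wheel filename are only placeholders for wherever your package actually lives):

# Install the packaged dependencies at task start, then restart Python
# so the freshly installed package is importable in later cells.
%pip install /Volumes/<catalog>/<schema>/<volume>/my_package-0.1.0-py3-none-any.whl
dbutils.library.restartPython()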

I'd also keep an eye on the Jobs API release notes, since Databricks has been expanding serverless features fairly quickly.


Hi siennafaleiro, thanks for your reply. You might be right, but then I find it strange that they list the entire JSON in the official docs as an example, and yet some of it works while some fields are ignored.