To create a serverless job using the API, you no longer need to specify new_cluster, existing_cluster_id, or job_cluster_key in each task. Instead, each task only needs a task_key and the task definition you want to run; serverless compute is configured through the environments field. Here is an example of how you can create a serverless job:
{
  "name": "Serverless Job",
  "tasks": [
    {
      "task_key": "My_task",
      "python_wheel_task": {
        "package_name": "databricks_jaws",
        "entry_point": "run_analysis",
        "named_parameters": {
          "dry_run": "true"
        }
      },
      "environment_key": "my-serverless-compute"
    }
  ],
  "tags": {
    "department": "sales"
  },
  "environments": [
    {
      "environment_key": "my-serverless-compute",
      "spec": {
        "client": "1",
        "dependencies": [
          "/Volumes/<catalog>/<schema>/<volume>/<path>.whl",
          "/Workspace/my_project/dist.whl",
          "simplejson",
          "-r /Workspace/my_project/requirements.txt"
        ]
      }
    }
  ]
}
In this example, the job is named "Serverless Job" and has a single task with the key "My_task". The task is a Python wheel task that runs the "run_analysis" entry point from the "databricks_jaws" package, and its environment_key points at the "my-serverless-compute" serverless environment, whose dependencies (wheel files, a PyPI package, and a requirements.txt file) are declared in the environments array. The job also carries a tag indicating that it belongs to the "sales" department.
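
To actually create the job, you can POST this payload to the Jobs API create endpoint. Below is a minimal sketch using Python's requests library; the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are assumptions you would replace with your own workspace URL and personal access token, and the payload is trimmed to the essentials:

import os
import requests

# Assumed placeholders: set these for your own workspace.
host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

job_spec = {
    "name": "Serverless Job",
    "tasks": [
        {
            "task_key": "My_task",
            "python_wheel_task": {
                "package_name": "databricks_jaws",
                "entry_point": "run_analysis",
                "named_parameters": {"dry_run": "true"},
            },
            # Must match an environment_key declared under "environments".
            "environment_key": "my-serverless-compute",
        }
    ],
    "environments": [
        {
            "environment_key": "my-serverless-compute",
            "spec": {"client": "1", "dependencies": ["simplejson"]},
        }
    ],
}

# POST to the Jobs 2.1 create endpoint; the response contains the new job_id.
resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])

Once the job exists, you can trigger it with the jobs/run-now endpoint or from the Workflows UI; because no cluster fields are present, Databricks runs each task on serverless compute using the declared environment.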