04-29-2024 11:55 AM
Hey guys, I'm trying to find out what options we can pass to
spark_conf.spark.databricks.cluster.profile. Looking around, I know that some of the available values are singleNode and serverless, but are there others?
Where is this documented?
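For reference, this is the kind of policy fragment I mean. A minimal sketch; the fixed/hidden fields are just how I've seen single-node policies written, so don't take them as canonical:

```json
{
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "fixed",
    "value": "singleNode",
    "hidden": true
  }
}
```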
Thank you!
09-18-2024 10:02 AM
So, the "Other Profiles" that you mention are not documented in the documentation that you linked to, and I haven't been able to find any list of the other possible values despite what I thought were pretty decent googling skills (GPT4 plus web search is also not seemingly able to find it). I'm basically just trying to find the value for multi-node. Single-node is singleNode, and multi-node is... something? 😕
09-21-2024 02:16 PM
Hi @Retired_mod, thank you for the detailed response, but it still doesn't point to any documentation of the values this config accepts.
As @msamson shared, there are no docs on options beyond singleNode and serverless. I've even searched the Databricks GitHub repos, with no success.
Can you share what options we can use? Reviewing old cluster-creation JSON, this config is never used. Does that mean it was replaced by another setting, like data_security_mode?
Thank you!
09-21-2024 02:48 PM
So, small update (sorry, I meant to post a few days ago but totally forgot): at least for multi-node, there is no value. Meaning, if you set a value it will be that value, but if you don't, the cluster is multi-node. So if, like me, you're setting up cluster policies and you want to, e.g., make single node the default but still allow multi-node, you'd set spark.databricks.cluster.profile to singleNode but mark it optional. I can't remember the exact syntax off the top of my head, but it's along the lines of isOptional: true; see the sketch below. Maybe not exactly what you're looking for, but that's what I needed. Good luck!
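A rough sketch of that policy fragment, assuming the type: unlimited / defaultValue / isOptional fields from the cluster-policy definition reference; please double-check against the current docs. The custom_tags.ResourceClass entry mirrors the documented single-node pairing and is my addition:

```json
{
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "unlimited",
    "defaultValue": "singleNode",
    "isOptional": true
  },
  "custom_tags.ResourceClass": {
    "type": "unlimited",
    "defaultValue": "SingleNode",
    "isOptional": true
  }
}
```

With this, the create-cluster form pre-fills singleNode, and isOptional: true lets users remove the attribute to fall back to a standard multi-node cluster.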
09-03-2025 12:44 AM
Recently I got stuck with the same issue. However, in the new version of the form/template for creating a policy, you have an option to delete the "spark_conf.spark.databricks.cluster.profile" setting by clicking the bin icon. Once you do that, you are actually going multi-node. Just make sure "ResourceClass" is set to "MultiNode" and autoscaling is enabled with a min and max worker range; roughly like the sketch below.
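To illustrate, here's roughly what such a policy could look like once the profile attribute is deleted. The MultiNode tag value comes from the comment above (the official docs only show SingleNode), and the worker numbers are made up for the example:

```json
{
  "custom_tags.ResourceClass": {
    "type": "fixed",
    "value": "MultiNode"
  },
  "autoscale.min_workers": {
    "type": "range",
    "minValue": 1,
    "defaultValue": 1
  },
  "autoscale.max_workers": {
    "type": "range",
    "maxValue": 8,
    "defaultValue": 4
  }
}
```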
yesterday
Looking internally, I was able to confirm that the only recognized values are 'serverless' and 'singleNode'.
So, in summary, the options are 'serverless', 'singleNode', or not set (which gives you a standard multi-node cluster).
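For reference, here's roughly how the singleNode case looks in a cluster-creation spec; this pairing of the profile, spark.master, the ResourceClass tag, and num_workers: 0 follows the documented single-node example:

```json
{
  "num_workers": 0,
  "spark_conf": {
    "spark.databricks.cluster.profile": "singleNode",
    "spark.master": "local[*]"
  },
  "custom_tags": {
    "ResourceClass": "SingleNode"
  }
}
```

For a regular multi-node cluster, leave spark.databricks.cluster.profile out entirely and set num_workers or autoscale instead; as far as I can tell, 'serverless' is the legacy High Concurrency profile.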
Cheers.