spark-submit error "Unrecognized option: --executor-memory 3G" although --executor-memory is listed in spark-submit's options.
09-01-2022 10:48 PM
I executed a spark-submit job through the Databricks CLI with the following job configuration:
{
  "job_id": 123,
  "creator_user_name": "******",
  "run_as_user_name": "******",
  "run_as_owner": true,
  "settings": {
    "name": "44aa-8447-c123aad310",
    "email_notifications": {},
    "max_concurrent_runs": 1,
    "tasks": [
      {
        "task_key": "4aa-8447-c90aad310",
        "spark_submit_task": {
          "parameters": [
            "--driver-memory 3G",
            "--executor-memory 3G",
            "--conf",
            "spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2",
            "--conf",
            "spark.speculation=false",
            "--conf",
            "spark.sql.parquet.fs.optimized.committer.optimization-enabled=true",
            "--conf",
            "spark.executorEnv.JAVA_HOME=/usr/lib/jvm/jdk-11.0.1",
            "--conf",
            "spark.executor.instances=3",
            "--conf",
            "spark.network.timeout=600s",
            "--conf",
            "spark.yarn.appMasterEnv.JAVA_HOME=/usr/lib/jvm/jdk-11.0.1",
            "--conf",
            "spark.driver.maxResultSize=1g",
            "--conf",
            "spark.yarn.maxAppAttempts=1",
            "--jars",
            "/home/hadoop/somejar.jar,/home/hadoop/somejar2.jar",
            "--class",
            "we.databricks.some.path.ER",
            "/home/hadoop/some-jar-SNAPSHOT.jar",
            "'******'"
          ]
        },
        "new_cluster": {
          "spark_version": "10.4.x-scala2.12",
          "spark_conf": {
            "spark.databricks.delta.preview.enabled": "true",
            "spark.hadoop.fs.azure.account.key": "******"
          },
          "node_type_id": "Standard_DS3_v2",
          "custom_tags": {
            "application": "******",
            "name": "******",
            "environment": "******",
            "owner": "******",
            "CURRENT_VERSION": "1.20.0-ab6303d9d"
          },
          "cluster_log_conf": {
            "dbfs": {
              "destination": "******"
            }
          },
          "spark_env_vars": {
            "ENVIRONMENT": "******",
            "AZURE_ACCOUNT_KEY": "******",
            "AZURE_ACCOUNT_NAME": "******",
            "PYSPARK_PYTHON": "/databricks/python3/bin/python3",
            "JNAME": "zulu11-ca-amd64",
            "AZURE_CONTAINER_NAME": "******"
          },
          "enable_elastic_disk": true,
          "init_scripts": [
            {
              "abfss": {
                "destination": "******"
              }
            }
          ],
          "num_workers": 3
        },
        "timeout_seconds": 0
      }
    ],
    "format": "MULTI_TASK"
  },
  "created_time": 1662096418457
}
But spark-submit fails with:
Error: Unrecognized option: --executor-memory 3G
Labels: Databricks cli, Options
09-02-2022 02:34 PM
Hi, thanks for reaching out to community.databricks.com.
Are you running Spark in local mode? (In local mode the executors run inside the driver JVM, so executor memory is effectively governed by the driver's memory setting.)
Please check https://stackoverflow.com/questions/26562033/how-to-set-apache-spark-executor-memory and let us know if it helps, or if you have further questions.
09-05-2022 03:23 AM
Not really sure whether Spark is running in local mode, but I switched to the alternative property spark.executor.memory, passed it via --conf, and now it works.
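For anyone hitting the same error: the most likely cause is that "--executor-memory 3G" (and "--driver-memory 3G") is a single entry in the parameters array, so spark-submit receives the flag and its value as one token and rejects it. Splitting them into separate entries ("--executor-memory", "3G") should also work. A minimal sketch of the --conf form that worked here, with the memory values carried over from the original spec as assumptions:

    "spark_submit_task": {
      "parameters": [
        "--conf",
        "spark.driver.memory=3g",
        "--conf",
        "spark.executor.memory=3g",
        "--class",
        "we.databricks.some.path.ER",
        "/home/hadoop/some-jar-SNAPSHOT.jar"
      ]
    }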
09-09-2022 04:29 PM
Hi @Muhammad Talha Jamil,
We don't recommend changing the default settings. I would like to understand better why you want to change the default values. Are you trying to set the executor memory because you hit an error in the past, or is there another reason?
09-12-2022 02:58 AM
We are moving from AWS EMR to Azure Databricks. On EMR we used to adjust executor memory to match each job's requirements. Won't we need to do that on Databricks?
10-31-2022 11:33 AM
I would highly recommend running your job with the default values first. That gives you a good reference point in case you want to optimize further. Check your cluster utilization and the Spark UI; this will help you better understand what is happening while your job runs.
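If the defaults do turn out to be insufficient, one alternative to spark-submit flags is setting the same properties in the cluster spec's spark_conf. A minimal sketch, reusing the cluster settings from the job spec above (the memory values are assumptions, and whether cluster-level spark_conf is honored for spark-submit tasks is worth verifying against the Databricks docs):

    "new_cluster": {
      "spark_version": "10.4.x-scala2.12",
      "node_type_id": "Standard_DS3_v2",
      "num_workers": 3,
      "spark_conf": {
        "spark.driver.memory": "3g",
        "spark.executor.memory": "3g"
      }
    }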

