Data Engineering

spark-submit error "Unrecognized option: --executor-memory 3G" although --executor-memory is listed among the available options.

talha
New Contributor III

Executed a spark-submit job through the Databricks CLI with the following job configuration:

{
  "job_id": 123,
  "creator_user_name": "******",
  "run_as_user_name": "******",
  "run_as_owner": true,
  "settings": {
    "name": "44aa-8447-c123aad310",
    "email_notifications": {},
    "max_concurrent_runs": 1,
    "tasks": [
      {
        "task_key": "4aa-8447-c90aad310",
        "spark_submit_task": {
          "parameters": [
            "--driver-memory 3G",
            "--executor-memory 3G",
            "--conf",
            "spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2",
            "--conf",
            "spark.speculation=false",
            "--conf",
            "spark.sql.parquet.fs.optimized.committer.optimization-enabled=true",
            "--conf",
            "spark.executorEnv.JAVA_HOME=/usr/lib/jvm/jdk-11.0.1",
            "--conf",
            "spark.executor.instances=3",
            "--conf",
            "spark.network.timeout=600s",
            "--conf",
            "spark.yarn.appMasterEnv.JAVA_HOME=/usr/lib/jvm/jdk-11.0.1",
            "--conf",
            "spark.driver.maxResultSize=1g",
            "--conf",
            "spark.yarn.maxAppAttempts=1",
            "--jars",
            "/home/hadoop/somejar.jar,/home/hadoop/somejar2.jar",
            "--class",
            "we.databricks.some.path.ER",
            "/home/hadoop/some-jar-SNAPSHOT.jar",
            "'******'"
          ]
        },
        "new_cluster": {
          "spark_version": "10.4.x-scala2.12",
          "spark_conf": {
            "spark.databricks.delta.preview.enabled": "true",
            "spark.hadoop.fs.azure.account.key": "******"
          },
          "node_type_id": "Standard_DS3_v2",
          "custom_tags": {
            "application": "******",
            "name": "******",
            "environment": "******",
            "owner": "******",
            "CURRENT_VERSION": "1.20.0-ab6303d9d"
          },
          "cluster_log_conf": {
            "dbfs": {
              "destination": "******"
            }
          },
          "spark_env_vars": {
            "ENVIRONMENT": "******",
            "AZURE_ACCOUNT_KEY": "******",
            "AZURE_ACCOUNT_NAME": "******",
            "PYSPARK_PYTHON": "/databricks/python3/bin/python3",
            "JNAME": "zulu11-ca-amd64",
            "AZURE_CONTAINER_NAME": "******"
          },
          "enable_elastic_disk": true,
          "init_scripts": [
            {
              "abfss": {
                "destination": "******"
              }
            }
          ],
          "num_workers": 3
        },
        "timeout_seconds": 0
      }
    ],
    "format": "MULTI_TASK"
  },
  "created_time": 1662096418457
}

But this gives an error in spark-submit: "Error: Unrecognized option: --executor-memory 3G"
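
A note for anyone hitting the same message: spark-submit treats each element of the parameters array as one command-line argument. In the configuration above the --conf options are split into separate "--conf" and "key=value" entries, but --driver-memory and --executor-memory are bundled together with their values in single strings, and one of those bundled strings is exactly the token reported as unrecognized. A minimal sketch of those entries split the same way as the --conf pairs (an assumption, not verified against this job) would be:

            "--driver-memory",
            "3G",
            "--executor-memory",
            "3G",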

5 REPLIES

Debayan
Esteemed Contributor III

Hi, thanks for reaching out to community.databricks.com.

Are you running Spark in local mode?

Please check https://stackoverflow.com/questions/26562033/how-to-set-apache-spark-executor-memory and let us know if this helps. Also let us know if you have any further queries on the same.

talha
New Contributor III

Not really sure if it is running Spark in local mode. But I have used the alternate property

spark.executor.memory

and passed it as --conf, and now it works.
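
For reference, a minimal sketch of what that could look like in the spark_submit_task parameters array (assuming the same 3G target as the original attempt; this is not the exact array from the job above):

            "--conf",
            "spark.executor.memory=3g",

spark.driver.memory can be passed the same way if the driver size also needs to change.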

Hi @Muhammad Talha Jamil,

We don't recommend changing the default settings. I would like to better understand why you want to change the default values. Are you trying to define the executor memory because you hit an error in the past, or is there another reason?

talha
New Contributor III

We are moving from AWS EMR to Azure Databricks. In EMR we used to change executor memory according to job requirements. Won't we need that on Databricks?

I would highly recommend running your job with the default values first. That gives you a good reference point in case you would like to optimize further. Check your cluster utilization and the Spark UI; this will help you better understand what is happening as your job runs.
