Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Run a PySpark job from a Python egg package using spark-submit on Databricks

ItsMe
New Contributor II

Error: missing application resource

I am getting this error while running the job with spark-submit. I have given the following parameters while creating the job:

--conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=databricks/path/python3

--py-files dbfs/path/to/.egg job_main.py

The above were given as expected by the Databricks spark-submit syntax.

Can anyone please let me know if anything is missing in the spark-submit parameters?

1 ACCEPTED SOLUTION


User16752246494
Contributor

Hi,

We tried to simulate the question on our end: we packaged a module inside a whl file.

Now, to access the wheel file, we created another Python file, test_whl_locally.py. Inside test_whl_locally.py, to access the content of the wheel file you first have to import the module or the class you want to use, e.g.

# Syntax:
# from <wheelpackagedirname>.<module> import <className>
# refVar = <className>()
#
# Example:

# import the class that was packaged inside the wheel
from somewhlpackage.module_two import ModuleTwo

# instantiate it and call one of its methods
moduleTwo = ModuleTwo()
moduleTwo.print()
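
For reference, a minimal sketch of what the module packaged inside the wheel might look like. Only the names (somewhlpackage, module_two, ModuleTwo, print) come from the example above; the method body is just an illustration.

# somewhlpackage/module_two.py -- hypothetical contents of the packaged module

class ModuleTwo:
    def print(self):
        # a simple marker so you can confirm the code inside the wheel was imported
        print("ModuleTwo was loaded from the wheel package")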

Now upload both the wheel package (in your case it will be the egg file) and the calling Python file (in our case test_whl_locally.py, in your case job_main.py) to DBFS. Once done, configure your spark-submit.

["--py-files","dbfs:/FileStore/tables/whls/somewhlpackage-1.0.0-py3-none-any.whl","dbfs:/FileStore/tables/whls/test_whl_locally.py"]

If you look closely, we have provided the fully qualified DBFS path in --py-files, so when spark-submit runs it will install the wheel/egg file into the environment that it creates.
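
Applied to the original question, here is a rough sketch of what the calling file job_main.py might contain. The package and function names (my_egg_package, entry_point, run_job) are placeholders, not anything from the thread; use whatever your .egg actually exposes.

# job_main.py -- the application resource passed after --py-files
from pyspark.sql import SparkSession

# hypothetical import; resolvable because the .egg is supplied via --py-files
from my_egg_package.entry_point import run_job

if __name__ == "__main__":
    spark = SparkSession.builder.appName("egg-job").getOrCreate()
    run_job(spark)   # run the job logic packaged inside the .egg
    spark.stop()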



3 REPLIES

User16752246494
Contributor

Hi @D M,

Good day.

Can you please try with

--py-files /dbfs/path/to/.egg job_main.py

The above path will invoke the FUSE driver (the /dbfs mount).

If the spark-submit still fails, can you please provide the full stack trace?
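
Putting that together with the parameters from the question, the job's spark-submit parameters would look roughly like the array below. This is only a sketch: the paths are still the placeholders from the thread (with <package>.egg standing in for your egg file), each token is its own array entry as in the accepted answer, and the environment variable is assumed to be PYSPARK_PYTHON.

["--conf","spark.yarn.appMasterEnv.PYSPARK_PYTHON=databricks/path/python3","--py-files","/dbfs/path/to/<package>.egg","/dbfs/path/to/job_main.py"]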


ItsMe
New Contributor II

Thanks a lot for looking into this issue and providing the above solution, but my scenario is one where I want to run the main-function .py file from a .zip package (containing a number of .py files). Can you please tell me how to pass the main-function Python file, or how it will take a reference to it?
