Data Engineering
What's the best way to develop Apache Spark Jobs from an IDE (such as IntelliJ/Pycharm)?

Anonymous
Not applicable

A number of people like developing locally using an IDE and then deploying. What are the recommended ways to do that with Databricks jobs?

1 REPLY

Anonymous
Not applicable

The Databricks Runtime and Apache Spark share the same base API, so you can develop Spark jobs that run locally and then run the same code on Databricks with all Databricks features available.

You must use SparkSession.builder.getOrCreate() to obtain the SparkSession. In the Databricks environment the SparkSession is created for you and treated as a singleton, so getOrCreate() returns the existing session instead of constructing a new one; locally, the same call creates a fresh session.

In addition, you can test using Databricks Connect. Databricks Connect replaces Apache Spark/PySpark on your local machine and lets your local machine execute jobs on a remote Databricks cluster. https://docs.databricks.com/dev-tools/databricks-connect.html
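With the newer Databricks Connect client (Databricks Runtime 13+), the session is built from your workspace credentials; a hedged sketch, assuming `databricks-connect` is installed and a cluster is configured (it cannot run without workspace access, so treat it as a configuration fragment):

```python
# Requires: pip install databricks-connect
# and workspace credentials (e.g. via ~/.databrickscfg or env vars).
from databricks.connect import DatabricksSession

# Builds a SparkSession whose queries execute on the remote cluster,
# while your IDE's debugger, tests, and tooling run locally.
spark = DatabricksSession.builder.getOrCreate()

df = spark.read.table("samples.nyctaxi.trips")  # hypothetical table name
df.show(5)
```

Only the session construction differs from the local pattern above; the rest of your job code stays the same.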
