
Containerized Databricks/Spark database

knight007
New Contributor II

Hello. I'm fairly new to Databricks and Spark.

I have a requirement to connect to Databricks using JDBC, and that works perfectly using the driver I downloaded from the Databricks website ("com.simba.spark.jdbc.Driver").
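
For reference, the working connection looks roughly like this (a minimal sketch; the URL parameters follow the legacy Simba driver's documented format, and the host, HTTP path and token are placeholders):

```scala
import java.sql.DriverManager

object DatabricksJdbcExample {
  def main(args: Array[String]): Unit = {
    // Register the legacy Simba Spark driver (newer Databricks drivers use
    // com.databricks.client.jdbc.Driver instead).
    Class.forName("com.simba.spark.jdbc.Driver")
    // <workspace-host>, <http-path> and <personal-access-token> are
    // placeholders for workspace-specific values.
    val url = "jdbc:spark://<workspace-host>:443/default;" +
      "transportMode=http;ssl=1;httpPath=<http-path>;" +
      "AuthMech=3;UID=token;PWD=<personal-access-token>"
    val conn = DriverManager.getConnection(url)
    try {
      val rs = conn.createStatement().executeQuery("SELECT 1")
      while (rs.next()) println(rs.getInt(1))
    } finally conn.close()
  }
}
```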

What I would like to do now is have a locally running instance of a database in Docker that I can connect to using the same driver. I'd like the database to initialise itself automatically by creating tables when it starts up, very much like using docker-entrypoint-initdb.d to create tables on startup for PostgreSQL.

I'd then like to insert some data and run some tests locally.
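
For concreteness, the startup initialisation I have in mind would be something like this, run once when the container comes up (a sketch only; the table name and columns are invented for illustration):

```scala
import java.sql.Connection

object InitDb {
  // Run idempotent DDL/DML once at startup, in the spirit of PostgreSQL's
  // docker-entrypoint-initdb.d. The customers table is illustrative only;
  // e.g. call initialise(DriverManager.getConnection(url)) before the tests.
  def initialise(conn: Connection): Unit = {
    val stmt = conn.createStatement()
    try {
      stmt.execute("CREATE TABLE IF NOT EXISTS customers (id INT, name STRING)")
      stmt.execute("INSERT INTO customers VALUES (1, 'alice'), (2, 'bob')")
    } finally stmt.close()
  }
}
```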

Is any of this possible?

8 REPLIES

-werners-
Esteemed Contributor III

The Simba driver is for Spark connections; I doubt it will work with an ordinary database.

Why would you use this driver to connect to a database in a container? Or do you mean running Databricks itself in a local container?

If the latter: that is not available.

knight007
New Contributor II

OK, so maybe I've not asked the right question.

At the moment we use the Simba driver to connect to Databricks, and we can perform SQL queries.

Can I achieve the same thing locally using a dockerized Databricks or Spark runtime?

-werners-
Esteemed Contributor III

Databricks: no.

Spark: I guess so, but it will take some effort to gather all the necessary dependencies and create a container (or look for one on Docker Hub).
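
As a rough starting point, a local Spark session can be exposed over JDBC through the Hive Thrift server; a minimal sketch, assuming the spark-sql, spark-hive and spark-hive-thriftserver artifacts are on the classpath (clients would then connect with a Hive JDBC driver at jdbc:hive2://localhost:10000, not the Databricks Simba driver):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

object LocalSparkJdbc {
  def main(args: Array[String]): Unit = {
    // Local Spark session with Hive support so tables land in a metastore.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("local-jdbc-endpoint")
      .config("hive.server2.thrift.port", "10000")
      .enableHiveSupport()
      .getOrCreate()

    // Create a table that JDBC clients will be able to query.
    spark.sql("CREATE TABLE IF NOT EXISTS demo (id INT, name STRING)")

    // Expose the session over the HiveServer2 Thrift protocol.
    HiveThriftServer2.startWithContext(spark.sqlContext)

    // Keep the JVM alive so clients can connect.
    Thread.currentThread().join()
  }
}
```

Packaged into a Docker image, that gives a queryable local endpoint, though the SQL dialect and driver are not identical to Databricks proper.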

-werners-
Esteemed Contributor III

If you are just running queries on tables, you could also look into something like Dremio, which can be run on Docker as a single node.
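
The usual route there would be Dremio's own JDBC driver rather than the Simba one; a hedged sketch, assuming Dremio's default client port and placeholder credentials:

```scala
import java.sql.DriverManager
import java.util.Properties

object DremioJdbcExample {
  def main(args: Array[String]): Unit = {
    // Dremio's own JDBC driver, not the Simba Spark driver.
    Class.forName("com.dremio.jdbc.Driver")
    // 31010 is the default client port of a single-node Dremio, e.g. started
    // with: docker run -p 9047:9047 -p 31010:31010 dremio/dremio-oss
    val url = "jdbc:dremio:direct=localhost:31010"
    val props = new Properties()
    props.setProperty("user", "<dremio-user>")         // placeholder
    props.setProperty("password", "<dremio-password>") // placeholder
    val conn = DriverManager.getConnection(url, props)
    try {
      val rs = conn.createStatement().executeQuery("SELECT 1")
      while (rs.next()) println(rs.getInt(1))
    } finally conn.close()
  }
}
```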

knight007
New Contributor II

Can I connect to that using the same Simba driver?

-werners-
Esteemed Contributor III

I don't know if the Databricks driver is the same as the classic Simba driver.

Hubert-Dudek
Esteemed Contributor III

@Gurps Bassi, regarding a "running instance of a database in docker": that would be the Hive metastore, which just maps to data that physically lives in the data lake. Databricks is so cloud-centric that setting up the metastore locally doesn't make sense. Instead, set up two Databricks workspaces: one with a staging repo (where you do development) and another workspace with the master repo.

For those who want to develop locally in an IDE, the Databricks tunnel for Visual Studio Code will be available soon.

Kaniz
Community Manager

Hi @Gurps Bassi, just a friendly follow-up. Do you still need help, or did @Hubert Dudek's and @Werner Stinckens's responses help you find the solution? Please let us know.
