โ06-03-2024 01:07 AM
We need to read a table from two different Hive metastores (spark.hadoop.hive.metastore.uris) and run some validations.
We are not able to connect to both metastores at the same time using a single SparkSession.
We are using Spark version 3.1.1 and the language is Scala.
Please comment if you have any suggestions.
โ06-11-2024 11:24 PM
Hi there @maskepravin02,
We have once implemented this approach of two reading two different hive metasores, but it was not on AWS and GCP, maybe the docs can help.
Though it is not recommended
The best approach is to create separate spark applications to connect each metastore, maybe orchestrate and write them and then join them.
- One other method can be dynamic switching but it is quite error-prone, I don't know whether it will support for AWS and GCP or not :
Here are the docs :
1. https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html
2. https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
3. https://stackoverflow.com/questions/32714396/querying-on-multiple-hive-stores-using-apache-spark
4. Some code I extracted from GPT and Gemini:
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder()
.appName("Dynamic Hive Metastore")
.enableHiveSupport()
.getOrCreate()
def switchMetastore(spark: SparkSession, metastoreUri: String): Unit = {
// Set the Hive metastore URI dynamically
spark.conf.set("spark.hadoop.hive.metastore.uris", metastoreUri)
// Refresh the catalog to ensure it uses the new metastore
spark.catalog.refreshTable("your_table")
}
// Example usage
switchMetastore(spark, "thrift://aws-metastore-uri:9083")
val awsDf = spark.sql("SELECT * FROM your_table")
awsDf.show()
switchMetastore(spark, "thrift://gcp-metastore-uri:9083")
val gcpDf = spark.sql("SELECT * FROM your_table")
gcpDf.show()
spark.stop()
Hope this helps you move forward.
โ06-06-2024 04:46 AM - edited โ06-06-2024 04:46 AM
Hi @maskepravin02,
.config("hive.metastore.uris", ...)
, try using .config("spark.hadoop.hive.metastore.uris", ...)
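For example, a minimal sketch (the app name and thrift URI are placeholders):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("single-metastore-app")
  // The spark.hadoop. prefix copies the setting into the Hadoop configuration
  // that the Hive client reads when the session is created.
  .config("spark.hadoop.hive.metastore.uris", "thrift://your-metastore-host:9083")
  .enableHiveSupport()
  .getOrCreate()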
.โ06-11-2024 09:58 PM
@Kaniz_Fatma We have used spark.hadoop.hive.metastore.uris.
We created two SparkSessions in the same application with different Hive metastore URIs: the first for AWS with all the AWS connection properties, and the second for GCP with all the GCP connection properties.
However, even after creating the second SparkSession, both sessions still point at the first metastore.
It seems Spark internally creates only one SparkContext per application. Let me know if you have any sample code or other documentation regarding this.
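A minimal sketch of what we tried (the app name and thrift URIs are placeholders):
import org.apache.spark.sql.SparkSession

val awsSpark = SparkSession.builder()
  .appName("two-metastore-test")
  .config("spark.hadoop.hive.metastore.uris", "thrift://aws-metastore-uri:9083")
  .enableHiveSupport()
  .getOrCreate()

// getOrCreate() returns the session that already exists, so the GCP URI
// below is ignored (Spark logs a warning that some configs may not take effect).
val gcpSpark = SparkSession.builder()
  .config("spark.hadoop.hive.metastore.uris", "thrift://gcp-metastore-uri:9083")
  .enableHiveSupport()
  .getOrCreate()

println(awsSpark eq gcpSpark)  // true: both point to the same session and context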
Thanks in advance!
โ06-11-2024 11:24 PM
Hi there @maskepravin02,
We once implemented this approach of reading from two different Hive metastores, but it was not on AWS and GCP; maybe the docs below can help. That said, mixing two metastores in one application is not recommended.
The best approach is to create a separate Spark application per metastore, have each write its table out to a shared location, and then join the outputs in a downstream job, as sketched below.
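Here is a minimal sketch of that pattern (the URIs, paths, table name, and join key are placeholders):
import org.apache.spark.sql.SparkSession

// --- Application 1: runs against the AWS metastore and exports the table ---
val awsSpark = SparkSession.builder()
  .appName("export-aws-table")
  .config("spark.hadoop.hive.metastore.uris", "thrift://aws-metastore-uri:9083")
  .enableHiveSupport()
  .getOrCreate()
awsSpark.table("your_table").write.mode("overwrite").parquet("s3://shared-bucket/aws_table")

// --- Application 2: identical, but with the GCP metastore URI, writing to a
// --- location both jobs can reach, e.g. "s3://shared-bucket/gcp_table" ---

// --- Application 3: needs no metastore; reads both exports and validates ---
val spark = SparkSession.builder().appName("validate-tables").getOrCreate()
val awsDf = spark.read.parquet("s3://shared-bucket/aws_table")
val gcpDf = spark.read.parquet("s3://shared-bucket/gcp_table")
awsDf.join(gcpDf, Seq("id")).show()  // "id" stands in for your real join key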
- One other method is dynamic switching, but it is quite error-prone, and I don't know whether it will work for AWS and GCP:
Here are the docs:
1. https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html
2. https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
3. https://stackoverflow.com/questions/32714396/querying-on-multiple-hive-stores-using-apache-spark
4. Some code I extracted from GPT and Gemini:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("Dynamic Hive Metastore")
  .enableHiveSupport()
  .getOrCreate()

def switchMetastore(spark: SparkSession, metastoreUri: String, tableName: String): Unit = {
  // Point the underlying Hadoop configuration at the new metastore URI.
  // (Calling spark.conf.set("spark.hadoop...") at runtime does not reach the
  // Hadoop configuration, so set it there directly.)
  spark.sparkContext.hadoopConfiguration.set("hive.metastore.uris", metastoreUri)
  // Drop any cached metadata so the table is re-resolved.
  spark.catalog.refreshTable(tableName)
}

// Example usage (the URIs and table name are placeholders)
switchMetastore(spark, "thrift://aws-metastore-uri:9083", "your_table")
val awsDf = spark.sql("SELECT * FROM your_table")
awsDf.show()

switchMetastore(spark, "thrift://gcp-metastore-uri:9083", "your_table")
val gcpDf = spark.sql("SELECT * FROM your_table")
gcpDf.show()

spark.stop()
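One caveat on the snippet above: Spark caches its Hive metastore client per session, so even after hive.metastore.uris is updated, queries may keep hitting the first metastore. That is a large part of why this route is error-prone and why separate applications are the safer choice.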
Hope this helps you move forward.