11-19-2023 10:28 PM
ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named global_temp)
at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:508)
at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:519)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
at com.sun.proxy.$Proxy53.getDatabase(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:796)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at com.sun.proxy.$Proxy55.get_database(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:949)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
at com.sun.proxy.$Proxy56.getDatabase(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1165)
at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1154)
at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$databaseExists$1(HiveClientImpl.scala:441)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:348)
at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$retryLocked$1(HiveClientImpl.scala:251)
at org.apache.spark.sql.hive.client.HiveClientImpl.synchronizeOnObject(HiveClientImpl.scala:287)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:243)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:330)
at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:441)
at org.apache.spark.sql.hive.client.PoolingHiveClient.$anonfun$databaseExists$1(PoolingHiveClient.scala:321)
at org.apache.spark.sql.hive.client.PoolingHiveClient.$anonfun$databaseExists$1$adapted(PoolingHiveClient.scala:320)
at org.apache.spark.sql.hive.client.PoolingHiveClient.withHiveClient(PoolingHiveClient.scala:149)
at org.apache.spark.sql.hive.client.PoolingHiveClient.databaseExists(PoolingHiveClient.scala:320)
at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:302)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$2(HiveExternalCatalog.scala:151)
at org.apache.spark.sql.hive.HiveExternalCatalog.maybeSynchronized(HiveExternalCatalog.scala:112)
at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$1(HiveExternalCatalog.scala:150)
at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:377)
at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:363)
at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:149)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:302)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.databaseExists(ExternalCatalogWithListener.scala:77)
at org.apache.spark.sql.internal.SharedState.$anonfun$globalTempViewManager$1(SharedState.scala:213)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:213)
at org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:210)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$hiveCatalog$2(HiveSessionStateBuilder.scala:67)
at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.globalTempViewManager$lzycompute(SessionCatalog.scala:447)
at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.globalTempViewManager(SessionCatalog.scala:447)
at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.setCurrentDatabaseWithoutCheck(SessionCatalog.scala:671)
at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.copyStateTo(SessionCatalog.scala:2435)
at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.copyStateTo(ManagedCatalogSessionCatalog.scala:962)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$3(HiveSessionStateBuilder.scala:80)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$3$adapted(HiveSessionStateBuilder.scala:80)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:80)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:75)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.v2SessionCatalog$lzycompute(BaseSessionStateBuilder.scala:170)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.v2SessionCatalog(BaseSessionStateBuilder.scala:170)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalogManager$lzycompute(BaseSessionStateBuilder.scala:173)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalogManager(BaseSessionStateBuilder.scala:172)
at com.databricks.sql.DatabricksEdge$$anon$1.<init>(DatabricksEdge.scala:113)
at com.databricks.sql.DatabricksEdge.optimizer(DatabricksEdge.scala:113)
at com.databricks.sql.DatabricksEdge.optimizer$(DatabricksEdge.scala:107)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.optimizer(HiveSessionStateBuilder.scala:44)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.$anonfun$build$3(BaseSessionStateBuilder.scala:377)
at org.apache.spark.sql.internal.SessionState.optimizer$lzycompute(SessionState.scala:104)
at org.apache.spark.sql.internal.SessionState.optimizer(SessionState.scala:104)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:112)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:300)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:180)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:854)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:180)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:109)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:109)
at org.apache.spark.sql.execution.columnar.InMemoryRelation$.apply(InMemoryRelation.scala:302)
at org.apache.spark.sql.execution.CacheManager.$anonfun$cacheQuery$3(CacheManager.scala:165)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:854)
at org.apache.spark.sql.execution.CacheManager.cacheQuery(CacheManager.scala:160)
at org.apache.spark.sql.Dataset.persist(Dataset.scala:3257)
at org.apache.spark.sql.Dataset.cache(Dataset.scala:3267)
11-19-2023 11:25 PM
Hi @RahulPatidar, are your Spark operations getting hampered by this error?
11-20-2023 12:37 AM
Yes, since we use Spark cache in many places, it is blocking us.
11-20-2023 12:40 AM
Yes, Spark operations are getting hampered by this error.
11-22-2023 10:50 PM
@Kaniz_Fatma, can you please help me resolve this issue?
11-22-2023 11:09 PM
Hi @RahulPatidar, the error you're encountering, "NoSuchObjectException (message: There is no database named global_temp)", relates to Spark's reserved "global_temp" database. Here's what you need to know:
Global Temporary Views: The "global_temp" database holds global temporary views, which are shared across all Spark sessions in the same application. When you create one with createOrReplaceGlobalTempView, it is registered under "global_temp", and other sessions can query it using the global_temp.<view_name> prefix.
Harmless Error: The log entry itself is harmless. Spark checks the metastore for a database named "global_temp" the first time an operation such as dataset.cache() initializes the global temp view manager (your stack trace shows exactly this path, from Dataset.cache down to SharedState.globalTempViewManager). If no such metastore database exists, the Hive client logs this exception, but it does not impact the functionality of your code. The sketch below shows where the check fires.
Ignore the Error: You can safely ignore this error. It won’t affect your data processing or caching. The “global_temp” database is automatically created by Spark and doesn’t require any manual setup.
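To make this concrete, here is a minimal Scala sketch of both behaviors, runnable in a notebook or spark-shell. The view and column names are illustrative, not taken from your job:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("global-temp-demo").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

// Per the stack trace above, cache() registers and optimizes the plan, and
// that optimization lazily initializes SharedState.globalTempViewManager,
// which asks the metastore whether a database named "global_temp" exists.
// That lookup is what emits the NoSuchObjectException in the driver log.
df.cache()
df.count()  // the action that actually materializes the cached data

// Global temporary views live in the reserved "global_temp" database and
// remain visible to every session of the application until it stops.
df.createOrReplaceGlobalTempView("demo_view")

// A different session reads the same view via the global_temp prefix.
val otherSession = spark.newSession()
otherSession.sql("SELECT * FROM global_temp.demo_view").show()

Note that the existence check is there to ensure no real metastore database conflicts with the reserved name, so "not found" is the expected, healthy outcome; the ERROR line is just the Hive client logging the internal exception before databaseExists returns false.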
Keep coding confidently!