"Create external Hive function is not supported in Unity Catalog" when registering Hive UDFs

FRG96
New Contributor III

Hive Permanent Function Error

I have enabled Unity Catalog in my AWS Databricks workspace, switching from the standard Hive metastore. I have a few custom Hive UDFs in a JAR file that I copy to the /databricks/jars/ folder through an init script. I want to register those UDFs as permanent functions in a Unity Catalog database using a CREATE FUNCTION SQL query in a notebook.
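
For reference, the statement I run is shaped like this (the database, function, and class names here are placeholders, not my real ones):

    -- Hive UDF registration; the class is already on the cluster
    -- classpath because the JAR sits in /databricks/jars/.
    CREATE FUNCTION my_db.my_hive_udf AS 'com.example.udf.MyHiveUdf';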

But when I do so, I get this error:

com.databricks.backend.common.rpc.SparkDriverExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException: [UC_COMMAND_NOT_SUPPORTED] Create external Hive function is not supported in Unity Catalog.
    at com.databricks.sql.managedcatalog.ManagedCatalogErrors$.operationNotSupportedException(ManagedCatalogErrors.scala:50)
    at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.rejectOperationsForUnityCatalog(ManagedCatalogSessionCatalog.scala:288)
    at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.rejectOperationsForUnityCatalog(ManagedCatalogSessionCatalog.scala:294)
    at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.createFunction(ManagedCatalogSessionCatalog.scala:1633)
    at org.apache.spark.sql.execution.command.CreateFunctionCommand.run(functions.scala:95)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:241)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:243)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:392)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:188)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:985)
    at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:142)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:342)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:241)
    at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:226)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:239)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:99)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:232)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
    at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:232)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:186)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:177)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:238)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:107)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:985)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:104)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:820)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:985)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:815)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:695)
    at com.databricks.backend.daemon.driver.SQLDriverLocal.$anonfun$executeSql$1(SQLDriverLocal.scala:91)
    at scala.collection.immutable.List.map(List.scala:293)
    at com.databricks.backend.daemon.driver.SQLDriverLocal.executeSql(SQLDriverLocal.scala:37)
    at com.databricks.backend.daemon.driver.SQLDriverLocal.repl(SQLDriverLocal.scala:145)
    at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$23(DriverLocal.scala:728)
    at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41)
    at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:99)
    at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$20(DriverLocal.scala:711)
    at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:398)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:147)
    at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:396)
    at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:393)
    at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:62)
    at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:441)
    at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:426)
    at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:62)
    at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:688)
    at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:622)
    at scala.util.Try$.apply(Try.scala:213)
    at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:614)
    at com.databricks.backend.daemon.driver.DriverWrapper.executeCommandAndGetError(DriverWrapper.scala:533)
    at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:568)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:438)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:381)
    at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:232)
    at java.lang.Thread.run(Thread.java:750)

4 REPLIES

Debayan
Databricks Employee

Hi, custom UDFs are not yet supported in Unity Catalog.

FRG96
New Contributor III

Oh. Is this officially documented anywhere in the Databricks documentation for Unity Catalog?

Also, will there be any issue if I register the UDFs as temporary functions instead?
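
To be concrete, by a temporary function I mean a session-scoped registration like this (class name again a placeholder):

    -- Session-scoped: lives only for the current session and is
    -- never persisted to the metastore or to Unity Catalog.
    CREATE TEMPORARY FUNCTION my_hive_udf AS 'com.example.udf.MyHiveUdf';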

jose_gonzalez
Databricks Employee

According to the docs here: https://docs.databricks.com/data-governance/unity-catalog/index.html

  • Python UDF support on shared clusters is supported in Private Preview. Contact your account team for access.
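
For reference, a Python UDF managed by Unity Catalog itself (the feature the preview note above refers to) is declared along these lines; the catalog, schema, and function names are placeholders:

    -- A Unity Catalog Python UDF: the body is Python, and the function
    -- is governed by Unity Catalog rather than loaded from a JAR.
    CREATE FUNCTION main.default.squared(x INT)
    RETURNS INT
    LANGUAGE PYTHON
    AS $$
    return x * x
    $$;

Note that this is a different mechanism from a Hive UDF packaged in a JAR; it does not cover the CREATE FUNCTION ... AS 'class' form from the original question.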

Anonymous
Not applicable

Hi @Franklin George,

Hope everything is going great.

Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you. 

Cheers!
