09-18-2024 01:39 AM
I am trying to run the setup notebook "_COMMON" for my Academy Data Engineering course,
and I am getting the error below: "Configuration dbacademy.deprecation.logging is not available."
09-18-2024 02:20 PM
Same issue here and the AI Assistant cannot help...
JVM stacktrace:
org.apache.spark.sql.AnalysisException
at com.databricks.sql.connect.SparkConnectConfig$.assertConfigAllowedForRead(SparkConnectConfig.scala:203)
at org.apache.spark.sql.connect.service.SparkConnectConfigHandler$RuntimeConfigWrapper.get(SparkConnectConfigHandler.scala:106)
at org.apache.spark.sql.connect.service.SparkConnectConfigHandler.transform(SparkConnectConfigHandler.scala:241)
at org.apache.spark.sql.connect.service.SparkConnectConfigHandler.$anonfun$handleGetWithDefault$1(SparkConnectConfigHandler.scala:282)
at org.apache.spark.sql.connect.service.SparkConnectConfigHandler.$anonfun$handleGetWithDefault$1$adapted(SparkConnectConfigHandler.scala:280)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at org.apache.spark.sql.connect.service.SparkConnectConfigHandler.handleGetWithDefault(SparkConnectConfigHandler.scala:280)
at org.apache.spark.sql.connect.service.SparkConnectConfigHandler.handle(SparkConnectConfigHandler.scala:199)
at org.apache.spark.sql.connect.service.SparkConnectService.config(SparkConnectService.scala:122)
at org.apache.spark.connect.proto.SparkConnectServiceGrpc$MethodHandlers.invoke(SparkConnectServiceGrpc.java:805)
at grpc_shaded.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
at com.databricks.spark.connect.service.AuthenticationInterceptor$AuthenticatedServerCallListener.$anonfun$onHalfClose$1(AuthenticationInterceptor.scala:312)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103)
at com.databricks.spark.connect.service.RequestContext.$anonfun$runWith$3(RequestContext.scala:265)
at com.databricks.spark.connect.service.RequestContext$.com$databricks$spark$connect$service$RequestContext$$withLocalProperties(RequestContext.scala:525)
at com.databricks.spark.connect.service.RequestContext.$anonfun$runWith$2(RequestContext.scala:265)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:435)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:216)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:433)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:427)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:27)
at com.databricks.spark.util.UniverseAttributionContextWrapper.withValue(AttributionContextUtils.scala:225)
at com.databricks.spark.connect.service.RequestContext.$anonfun$runWith$1(RequestContext.scala:264)
at com.databricks.spark.connect.service.RequestContext.withContext(RequestContext.scala:277)
at com.databricks.spark.connect.service.RequestContext.runWith(RequestContext.scala:257)
at com.databricks.spark.connect.service.AuthenticationInterceptor$AuthenticatedServerCallListener.onHalfClose(AuthenticationInterceptor.scala:312)
at grpc_shaded.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:340)
at grpc_shaded.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:866)
at grpc_shaded.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at grpc_shaded.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:840)
File <command-2080937168644028>, line 2
1 import pyspark.sql.functions as F
----> 2 from dbacademy import dbgems
3 from dbacademy.dbhelper import DBAcademyHelper, Paths, CourseConfig, LessonConfig
5 # The following attributes are externalized to make them easy
6 # for content developers to update with every new course.
File /databricks/python/lib/python3.10/site-packages/pyspark/sql/connect/client/core.py:1988, in SparkConnectClient._handle_rpc_error(self, rpc_error)
1985 info = error_details_pb2.ErrorInfo()
1986 d.Unpack(info)
-> 1988 raise convert_exception(
1989 info,
1990 status.message,
1991 self._fetch_enriched_error(info),
1992 self._display_server_stack_trace(),
1993 ) from None
1995 raise SparkConnectGrpcException(status.message) from None
1996 else:
11-10-2024 05:42 PM
Did anyone figure out what this is? I have the exact same issue:
Configuration dbacademy.deprecation.logging is not available. SQLSTATE: 42K0I
12-04-2024 10:08 PM
Create a separate cluster with DBR 13.3 to resolve this error.
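For anyone who prefers to script that instead of clicking through the UI, here is a minimal sketch using the databricks-sdk Python package (an assumption on my part: the SDK is installed and authenticated against your workspace, and classic cluster creation is allowed there; the cluster name and node type are placeholders):
--------------------------------------------------------------------
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
# Create a classic cluster pinned to DBR 13.3 LTS instead of Serverless.
cluster = w.clusters.create_and_wait(
    cluster_name="dbacademy-dbr-13-3",  # placeholder name
    spark_version="13.3.x-scala2.12",   # DBR 13.3 LTS
    node_type_id="i3.xlarge",           # placeholder node type
    num_workers=1,
)
print(cluster.cluster_id)
--------------------------------------------------------------------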
11-10-2024 08:37 PM
The SQLSTATE code 42K0I corresponds to a configuration not being available. This means that the configuration key you are trying to use does not exist or is not supported in your current setup.
Please note that these files were created specifically for the Databricks Academy Labs environment (through our lab partner, Vocareum) and may not work as expected in other environments.
If you're interested in working on labs in a provided Databricks environment, you can purchase the Databricks Academy Labs subscription directly from the Databricks Academy website.
01-29-2025 06:17 AM
I'm using a provided Databricks environment and I've installed the dbacademy package, which you can find on GitHub.
I get the same error; how can I use this installed package in the setup scripts?
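One way to do that (a sketch, assuming the package you installed is the public databricks-academy repo on GitHub) is to pip-install it into the notebook session before running the setup cells:
--------------------------------------------------------------------
# Install dbacademy from GitHub into the current notebook session.
%pip install git+https://github.com/databricks-academy/dbacademy.git
--------------------------------------------------------------------
and then run dbutils.library.restartPython() in the next cell so the setup scripts can import dbacademy from the freshly installed package.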
01-30-2025 09:26 AM
I reported the solution here:
Setup learning environment failed: Configuration d... - Databricks Community - 82441
yesterday - last edited yesterday
Databricks is transitioning from the Community Edition to the Free Edition, which I am currently using.
Inspecting the code, the problem seems to be related to the spark.conf.get() method, which is declared as follows in the documentation:
--------------------------------------------------------------------
get(self, key: str, default: Union[str, NoneType, pyspark._globals._NoValueType] = <no value>) -> Optional[str]

Parameters
    key : str | key of the configuration to get.
    default : str, optional | value of the configuration to return if the key does not exist.

Returns
    The string value of the configuration set, or None.

Examples
    spark.conf.get("non-existent-key", "my_default")  # 'my_default'
--------------------------------------------------------------------
However, testing the method clearly shows that it raises an exception instead of returning the default value for a missing key.
This seems to be solvable with spark.conf.getAll.get(key, default), where spark.conf.getAll returns a Python dict containing all the configurations. However, the same problem is present in the inner code of the "v3.0.23" dbacademy library version I am using, which raises the [CONFIG_NOT_AVAILABLE] error when importing modules of the library (in my case, the dbgems module).
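To make that workaround concrete, here is a minimal sketch of my own (not code from the dbacademy library), assuming spark is the active session in a Databricks notebook and using the failing key from this thread:
--------------------------------------------------------------------
try:
    # On Spark Connect backends (Free Edition / Serverless), spark.conf.get()
    # can raise [CONFIG_NOT_AVAILABLE] even when a default is supplied.
    value = spark.conf.get("dbacademy.deprecation.logging", "my_default")
except Exception:
    # spark.conf.getAll returns a Python dict of all readable configurations
    # (as described above), so dict.get() applies the default instead of
    # asking the server to read a restricted key.
    value = spark.conf.getAll.get("dbacademy.deprecation.logging", "my_default")

print(value)  # 'my_default' when the key is absent
--------------------------------------------------------------------
Presumably this works because the dict lookup happens on the client after the configurations have been fetched, so the server-side read restriction on the missing key is never hit.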