DLT Pipeline event_log error - invalid pipeline name / The Spark SQL phase analysis failed
02-03-2025 02:55 AM
I am trying the queries below using both a SQL warehouse and a shared cluster on Databricks Runtime 15.4/16.1, with Unity Catalog enabled.
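For example (the pipeline ID and table name here are placeholders, not the originals; the statements follow the documented event_log table-valued function):

-- Query a pipeline's event log by its pipeline ID
SELECT * FROM event_log('<pipeline-id>');

-- Query the event log of the pipeline that owns a given streaming table
SELECT * FROM event_log(TABLE(my_catalog.my_schema.my_streaming_table));

Each attempt fails with the error below: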
{
"ts": "2025-02-03 07:39:56,967",
"level": "ERROR",
"logger": "pyspark.sql.connect.client.logging",
"msg": "GRPC Error received",
"context": {},
"exception": {
"class": "_MultiThreadedRendezvous",
"msg": "<_MultiThreadedRendezvous of RPC that terminated with:\n\tstatus = StatusCode.INTERNAL\n\tdetails = \"[INTERNAL_ERROR] The Spark SQL phase analysis failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000\"\n\tdebug_error_string = \"UNKNOWN:Error received from peer {grpc_message:\"[INTERNAL_ERROR] The Spark SQL phase analysis failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000\", grpc_status:13, created_time:\"2025-02-03T07:39:56.966893191+00:00\"}\"\n>",
"stacktrace": [
"Traceback (most recent call last):",
" File \"/databricks/spark/python/pyspark/sql/connect/client/core.py\", line 1910, in _execute_and_fetch_as_iterator",
" for b in generator:",
" File \"<frozen _collections_abc>\", line 356, in __next__",
" File \"/databricks/spark/python/pyspark/sql/connect/client/reattach.py\", line 140, in send",
" if not self._has_next():",
" ^^^^^^^^^^^^^^^^",
" File \"/databricks/spark/python/pyspark/sql/connect/client/reattach.py\", line 201, in _has_next",
" raise e",
" File \"/databricks/spark/python/pyspark/sql/connect/client/reattach.py\", line 173, in _has_next",
" self._current = self._call_iter(",
" ^^^^^^^^^^^^^^^^",
" File \"/databricks/spark/python/pyspark/sql/connect/client/reattach.py\", line 298, in _call_iter",
" raise e",
" File \"/databricks/spark/python/pyspark/sql/connect/client/reattach.py\", line 278, in _call_iter",
" return iter_fun()",
" ^^^^^^^^^^",
" File \"/databricks/spark/python/pyspark/sql/connect/client/reattach.py\", line 174, in <lambda>",
" lambda: next(self._iterator) # type: ignore[arg-type]",
" ^^^^^^^^^^^^^^^^^^^^",
" File \"/databricks/spark/python/pyspark/sql/connect/client/core.py\", line 657, in __iter__",
" for response in self._call:",
" File \"/databricks/python/lib/python3.12/site-packages/grpc/_channel.py\", line 540, in __next__",
" return self._next()",
" ^^^^^^^^^^^^",
" File \"/databricks/python/lib/python3.12/site-packages/grpc/_channel.py\", line 966, in _next",
" raise self",
"grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:",
"\tstatus = StatusCode.INTERNAL",
"\tdetails = \"[INTERNAL_ERROR] The Spark SQL phase analysis failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000\"",
"\tdebug_error_string = \"UNKNOWN:Error received from peer {grpc_message:\"[INTERNAL_ERROR] The Spark SQL phase analysis failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000\", grpc_status:13, created_time:\"2025-02-03T07:39:56.966893191+00:00\"}\"",
">"
]
}
}
Server-side stack trace from the Spark driver:
at org.apache.spark.SparkException$.internalError(SparkException.scala:116)
at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:1220)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1233)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:599)
at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:595)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1422)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:595)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:271)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1676)
at org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1737)
at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:303)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:251)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:131)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1422)
at org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1429)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1429)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:123)
at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:1102)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1422)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:1054)
at org.apache.spark.sql.connect.planner.SparkConnectPlanner.executeSQL(SparkConnectPlanner.scala:3455)
at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleSqlCommand(SparkConnectPlanner.scala:3287)
at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:3222)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:413)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:299)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:220)
at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:404)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1422)
at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:404)
at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97)
at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:90)
at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:240)
at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:89)
at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:403)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:220)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:139)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$2(ExecuteThreadRunner.scala:639)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:109)
at scala.util.Using$.resource(Using.scala:269)
at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:108)
at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:639)
Suppressed: org.apache.spark.util.Utils$OriginalTryStackTraceException: Full stacktrace of original doTryWithCallerStacktrace caller
at org.apache.spark.SparkException$.internalError(SparkException.scala:116)
at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:1220)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1233)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:599)
at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:595)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1422)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:595)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:271)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1676)
at org.apache.spark.util.LazyTry.tryT$lzycompute(LazyTry.scala:46)
at org.apache.spark.util.LazyTry.tryT(LazyTry.scala:46)
... 36 more
Caused by: java.lang.AssertionError: assertion failed: invalid pipeline schema: [Ljava.lang.String;@1ccd7699
at scala.Predef$.assert(Predef.scala:223)
at com.databricks.sql.dlt.EventLog.getPipelineEventLogTable(EventLog.scala:225)
at com.databricks.sql.dlt.EventLog.getPipelineEventLogTable(EventLog.scala:187)
at com.databricks.sql.dlt.EventLog.getPipelineIdAndEventLogTable(EventLog.scala:173)
at com.databricks.sql.dlt.EventLog.x$1$lzycompute(EventLog.scala:98)
at com.databricks.sql.dlt.EventLog.x$1(EventLog.scala:98)
at com.databricks.sql.dlt.EventLog.eventLogTable$lzycompute(EventLog.scala:98)
at com.databricks.sql.dlt.EventLog.eventLogTable(EventLog.scala:98)
at com.databricks.sql.dlt.EventLog.loadEventLogTable(EventLog.scala:108)
at com.databricks.sql.dlt.EventLogAnalysis.com$databricks$sql$dlt$EventLogAnalysis$$loadEventLogTable(EventLogAnalysis.scala:46)
at com.databricks.sql.dlt.EventLogAnalysis$$anonfun$rewrite$2.applyOrElse(EventLogAnalysis.scala:42)
at com.databricks.sql.dlt.EventLogAnalysis$$anonfun$rewrite$2.applyOrElse(EventLogAnalysis.scala:37)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$3(AnalysisHelper.scala:141)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:85)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:141)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:436)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:137)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:133)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scala:41)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$2(AnalysisHelper.scala:138)
at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1314)
at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1313)
at org.apache.spark.sql.catalyst.plans.logical.Project.mapChildren(basicLogicalOperators.scala:87)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:138)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:436)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:137)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:133)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scala:41)
at com.databricks.sql.dlt.EventLogAnalysis.rewrite(EventLogAnalysis.scala:37)
at com.databricks.sql.dlt.EventLogAnalysis.rewrite(EventLogAnalysis.scala:32)
at com.databricks.sql.optimizer.DatabricksEdgeRule.apply(DatabricksEdgeRule.scala:36)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$14(RuleExecutor.scala:470)
at org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule(RuleExecutor.scala:620)
at org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule$(RuleExecutor.scala:603)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.processRule(RuleExecutor.scala:130)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$13(RuleExecutor.scala:470)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$12(RuleExecutor.scala:469)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$11(RuleExecutor.scala:465)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:442)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$20(RuleExecutor.scala:575)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$20$adapted(RuleExecutor.scala:575)
at scala.collection.immutable.List.foreach(List.scala:431)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:575)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:348)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeSameContext(Analyzer.scala:493)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:486)
at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:383)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:486)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:402)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:340)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:211)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:340)
at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.resolveInFixedPoint(HybridAnalyzer.scala:190)
at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.$anonfun$apply$1(HybridAnalyzer.scala:76)
at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.withTrackedAnalyzerBridgeState(HybridAnalyzer.scala:111)
at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.apply(HybridAnalyzer.scala:71)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:473)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:443)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:473)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$2(QueryExecution.scala:277)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:525)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:600)
at org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:145)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:600)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1231)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:599)
at com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:595)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1422)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:595)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:271)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1676)
at org.apache.spark.util.LazyTry.tryT$lzycompute(LazyTry.scala:46)
at org.apache.spark.util.LazyTry.tryT(LazyTry.scala:46)
File "/databricks/python_shell/lib/dbruntime/sql_magic/sql_magic.py", line 165, in execute_via_sql_comm_handler
df = self.get_query_request_result(request["query"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python_shell/lib/dbruntime/sql_magic/sql_magic.py", line 122, in get_query_request_result
df = self.asserting_spark.sql(query, widget_bindings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/spark/python/pyspark/sql/connect/session.py", line 796, in sql
data, properties, ei = self.client.execute_command(cmd.command(self._client))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1472, in execute_command
data, _, metrics, observed_metrics, properties = self._execute_and_fetch(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1948, in _execute_and_fetch
for response in self._execute_and_fetch_as_iterator(
File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1924, in _execute_and_fetch_as_iterator
self._handle_error(error)
File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 2244, in _handle_error
self._handle_rpc_error(error)
File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 2348, in _handle_rpc_error
raise convert_exception(
pyspark.errors.exceptions.connect.SparkException: [INTERNAL_ERROR] The Spark SQL phase analysis failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000
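In the meantime, is there a supported workaround? One option I am considering (untested, and assuming the event_log field in the pipeline settings for publishing the event log to a Unity Catalog table is available in our workspace) is to publish the event log as a table and query it directly, bypassing the event_log() function:

-- Hypothetical pipeline settings fragment (all names are placeholders):
--   "event_log": { "catalog": "my_catalog", "schema": "my_schema", "name": "pipeline_event_log" }

-- Then query the published event log table directly:
SELECT * FROM my_catalog.my_schema.pipeline_event_log;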
02-03-2025 04:43 AM
Hello @N38,
Thanks for your report! I have validated this internally; our engineering team is aware of the issue and is tracking it as ES-1282279.
02-03-2025 05:06 AM
Thank you @Alberto_Umana - please keep me posted as you progress. We are currently unable to use this function, so it would be great to have this resolved as soon as possible.
02-03-2025 08:23 AM
Sure @N38 - engineering is still investigating and trying to reproduce the issue.
02-03-2025 08:31 AM
Thank you for the update. If you need any further information, please let me know.
02-05-2025 12:34 PM
Is this a universal issue? I've been having the same problem.
02-05-2025 12:45 PM
Yes @Mbunko - our engineering team is working on it.
02-06-2025 12:17 AM
Thank you for the update, @Alberto_Umana. I look forward to hearing more about the solution!
02-11-2025 01:53 AM
Good morning. Is there any update on this issue?
02-11-2025 04:30 AM
Hello team,
I do not see any update on this at the moment. I will follow up internally and get back to you.
02-21-2025 07:14 AM
Hi, is there an ETA on this fix?
03-03-2025 06:03 PM
Hi,
I am also facing the same issue. Is there any ETA for a fix?

