<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Error 'Missing Credential Scope' when using R Sparklyr on runtime &gt; 10.4 in Data Governance</title>
    <link>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6542#M139</link>
    <description>&lt;P&gt;Hi @Robin LOCHE&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for posting your question in our community! We are happy to assist you.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance!&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 04 Apr 2023 05:07:39 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2023-04-04T05:07:39Z</dc:date>
    <item>
      <title>Error 'Missing Credential Scope' when using R Sparklyr on runtime &gt; 10.4</title>
      <link>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6540#M137</link>
      <description>&lt;P&gt;We have multiple processing chains that use R notebooks with sparklyr, and we are trying to migrate them from runtime 10.4 to 12.2. Unfortunately, there seems to be an incompatibility with sparklyr on runtimes &amp;gt; 10.4:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Steps to reproduce:&lt;/P&gt;&lt;P&gt;1) Create a notebook "test" with the following code:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;%r
library(sparklyr)
sc &amp;lt;- spark_connect(method = "databricks")
sdf_sql(sc, "SELECT * FROM samples.nyctaxi.trips limit 100")&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;2) Clone the notebook as "test2"&lt;/P&gt;&lt;P&gt;3) Execute the notebook "test" on a 12.2 cluster: it works as expected&lt;/P&gt;&lt;P&gt;4) Exectute the "test2" on the SAME cluster: you get the following error:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;Error : org.apache.spark.SparkException: Missing Credential Scope. 
	at com.databricks.unity.UCSDriver$Manager.$anonfun$scope$1(UCSDriver.scala:104)
	at scala.Option.getOrElse(Option.scala:189)
	at com.databricks.unity.UCSDriver$Manager.scope(UCSDriver.scala:104)
	at com.databricks.unity.UCSDriver$Manager.currentScope(UCSDriver.scala:98)
	at com.databricks.unity.UnityCredentialScope$.currentScope(UnityCredentialScope.scala:100)
	at com.databricks.unity.UnityCredentialScope$.getSAMRegistry(UnityCredentialScope.scala:120)
	at com.databricks.unity.SAMRegistry$.getSAMOpt(SAMRegistry.scala:358)
	at com.databricks.unity.CredentialScopeSQLHelper$.registerPathForDeltaLog(CredentialScopeSQLHelper.scala:254)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.apply(DeltaLog.scala:931)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.apply(DeltaLog.scala:864)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.apply(DeltaLog.scala:844)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.forTable(DeltaLog.scala:791)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.$anonfun$forTableWithSnapshot$1(DeltaLog.scala:870)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.withFreshSnapshot(DeltaLog.scala:903)
	at com.databricks.sql.transaction.tahoe.DeltaLog$.forTableWithSnapshot(DeltaLog.scala:870)
	at com.databricks.sql.managedcatalog.SampleTable.readSchema(SampleTables.scala:109)
	at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.$anonfun$getSampleTableMetadata$1(ManagedCatalogSessionCatalog.scala:954)
	at scala.Option.map(Option.scala:230)
	at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.getSampleTableMetadata(ManagedCatalogSessionCatalog.scala:949)
	at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.$anonfun$fastGetTablesByName$6(ManagedCatalogSessionCatalog.scala:1057)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
	at scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
	at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47)
	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
	at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.fastGetTablesByName(ManagedCatalogSessionCatalog.scala:1057)
	at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.fetchFromCatalog(DeltaCatalog.scala:498)
	at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$loadTables$1(DeltaCatalog.scala:439)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:265)
	at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:263)
	at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:86)
	at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.loadTables(DeltaCatalog.scala:436)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anon$3.$anonfun$submit$1(Analyzer.scala:1870)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$record(Analyzer.scala:1929)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anon$3.submit(Analyzer.scala:1852)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:1472)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:1412)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$4(RuleExecutor.scala:229)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$3(RuleExecutor.scala:229)
	at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
	at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
	at scala.collection.immutable.List.foldLeft(List.scala:91)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:226)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:218)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$8(RuleExecutor.scala:296)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$8$adapted(RuleExecutor.scala:296)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:296)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:197)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeSameContext(Analyzer.scala:361)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:354)
	at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:261)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:354)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:282)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:189)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:153)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:189)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:334)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:379)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:333)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:153)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:319)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$3(QueryExecution.scala:372)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:808)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:372)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1020)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:369)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:147)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:147)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:137)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:111)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1020)
	at org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1027)
	at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
	at org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1027)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:109)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:830)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1020)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:822)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:856)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.refl
Error: org.apache.spark.SparkException: Missing Credential Scope. &lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;This problem does not occur on a 10.4 runtime.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Of note:&lt;/P&gt;&lt;P&gt;- The cluster uses the Unrestricted / single user access type&lt;/P&gt;&lt;P&gt;- Unity Catalog is enabled on our workspace, but I don't think it is related, as other people apparently have similar problems: &lt;A href="https://stackoverflow.com/questions/74575249/sparklyr-multiple-databricks-notebooks-second-connections-fails-with-org-apache" alt="https://stackoverflow.com/questions/74575249/sparklyr-multiple-databricks-notebooks-second-connections-fails-with-org-apache" target="_blank"&gt;stackoverflow link&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, I'm not sure, but the same problem may occur when using %run to run other notebooks.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Right now this blocks our migration to newer runtime versions, and it could quickly become an issue if we can't access new functionality.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Apr 2023 09:59:54 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6540#M137</guid>
      <dc:creator>Robin_LOCHE</dc:creator>
      <dc:date>2023-04-03T09:59:54Z</dc:date>
    </item>
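    <!--
      A possible workaround (a minimal sketch, not confirmed in this thread): the
      error appears when a second notebook calls spark_connect() on the same
      cluster, so reusing an already open sparklyr connection instead of opening
      a new one may avoid it. spark_connection_find() is the standard sparklyr
      helper for locating open connections; the rest mirrors the reproduction
      code above, and whether this actually sidesteps the credential scope error
      on runtimes > 10.4 is an assumption.

        %r
        library(sparklyr)
        # Reuse an open Databricks connection if one exists; connect only if none is found.
        conns <- spark_connection_find(method = "databricks")
        sc <- if (length(conns) > 0) conns[[1]] else spark_connect(method = "databricks")
        sdf_sql(sc, "SELECT * FROM samples.nyctaxi.trips LIMIT 100")
    -->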
    <item>
      <title>Re: Error 'Missing Credential Scope' when using R Sparklyr on runtime &gt; 10.4</title>
      <link>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6541#M138</link>
      <description>&lt;P&gt;Hi @Robin LOCHE,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;For this, I would recommend contacting Databricks support; they will guide you through it.&lt;/P&gt;&lt;P&gt;Or you can &lt;A href="https://help.databricks.com/s/login/?ec=302&amp;amp;startURL=%2Fs%2Fsubmitrequest" alt="https://help.databricks.com/s/login/?ec=302&amp;amp;startURL=%2Fs%2Fsubmitrequest" target="_blank"&gt;create a support request&lt;/A&gt; for the same.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Apr 2023 11:58:13 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6541#M138</guid>
      <dc:creator>Ajay-Pandey</dc:creator>
      <dc:date>2023-04-03T11:58:13Z</dc:date>
    </item>
    <item>
      <title>Re: Error 'Missing Credential Scope' when using R Sparklyr on runtime &gt; 10.4</title>
      <link>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6542#M139</link>
      <description>&lt;P&gt;Hi @Robin LOCHE&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for posting your question in our community! We are happy to assist you.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance!&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Apr 2023 05:07:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-governance/error-missing-credential-scope-when-using-r-sparklyr-on-runtime/m-p/6542#M139</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2023-04-04T05:07:39Z</dc:date>
    </item>
  </channel>
</rss>