Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Can't create database - UnsupportedFileSystemException No FileSystem for scheme "dbfs"

allan-silva
New Contributor III

I'm following the class "DE 3.1 - Databases and Tables on Databricks", but it is not possible to create databases due to: "AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "dbfs")"

I am able to list my files on 'dbfs://...' using the `dbutils.fs.ls` utility.

Expected behavior: the database is created successfully.
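For context, the failing step boils down to a plain `CREATE DATABASE` statement run as a SQL cell; the database name below is a placeholder, not the exact name from the course notebook:

```sql
-- Minimal reproduction sketch (database name is a placeholder).
-- In the course this runs as a %sql cell against the cluster's
-- Hive metastore, which is where the UnsupportedFileSystemException
-- for scheme "dbfs" is raised.
CREATE DATABASE IF NOT EXISTS example_db;

-- If creation succeeds, this shows the location the metastore chose:
DESCRIBE DATABASE example_db;
```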

Is there any misconfiguration?

Detailed stacktrace:

Error in SQL statement: AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "dbfs")
com.databricks.backend.common.rpc.DatabricksExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "dbfs")
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$2(HiveExternalCatalog.scala:163)
	at org.apache.spark.sql.hive.HiveExternalCatalog.maybeSynchronized(HiveExternalCatalog.scala:115)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$1(HiveExternalCatalog.scala:153)
	at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:377)
	at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:363)
	at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:152)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createDatabase(HiveExternalCatalog.scala:261)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createDatabase(ExternalCatalogWithListener.scala:55)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.createDatabase(SessionCatalog.scala:686)
	at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.createDatabase(ManagedCatalogSessionCatalog.scala:441)
	at com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.$anonfun$createNamespace$1(UnityCatalogV2Proxy.scala:150)
	at com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.$anonfun$createNamespace$1$adapted(UnityCatalogV2Proxy.scala:142)
	at com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.assertSingleNamespace(UnityCatalogV2Proxy.scala:114)
	at com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.createNamespace(UnityCatalogV2Proxy.scala:142)
	at org.apache.spark.sql.execution.datasources.v2.CreateNamespaceExec.run(CreateNamespaceExec.scala:47)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)

1 ACCEPTED SOLUTION

Accepted Solutions

allan-silva
New Contributor III

A colleague at work figured out the problem: the cluster being used wasn't configured to use DBFS when running notebooks.


3 REPLIES

UmaMahesh1
Honored Contributor III

Can you please post a screenshot of the command you are running together with the error?

Cheers...

Uma Mahesh D

Lab variables:

[Screenshot: lab variables]

DB creation:

[Screenshot: database creation command]

Thanks!
