06-20-2022 10:18 AM
I wanted to save my Delta tables in my Databricks database.
When I call saveAsTable, I get this error: Azure Databricks: AnalysisException: Database 'bf' not found
And yes, there is no database named "bf" in my workspace.
Here is my full code:
import os
import numpy as np
import pandas as pd
from pyspark import SparkFiles
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql import functions
from pyspark.sql.functions import * #import avg, col, udf
from pyspark.sql import SQLContext
from pyspark.sql import DataFrame
from pyspark.sql.types import *
import json
#LIST, RENAME, AND SAVE ALL FILES AS DELTA LAKE AUTOMATICALLY
#Data written to mount point paths ( /mnt ) is stored outside of the DBFS root
path_30min = '/dbfs/mnt/finance/FirstRate30min'
filename_lists_30min = os.listdir(path_30min)
df_30min_ = {}
delta_30min = {}
for filename_30min in filename_lists_30min:
    # split the file name into a ticker symbol
    rawname_30min = filename_30min.split('_')[0]
    name_30min = rawname_30min.split('-')[0]
    # create column header names
    temp_30min = StructType([
        StructField(name_30min + "_dateTime", StringType(), True),
        StructField(name_30min + "_adjOpen", FloatType(), True),
        StructField(name_30min + "_adjHigh", FloatType(), True),
        StructField(name_30min + "_adjLow", FloatType(), True),
        StructField(name_30min + "_adjClose", FloatType(), True),
        StructField(name_30min + "_adjVolume", IntegerType(), True),
    ])
    # read each CSV into a DataFrame with the generated schema
    temp_df_30min = (spark.read.format("csv")
                     .option("header", "false")
                     .schema(temp_30min)
                     .load("/mnt/finance/FirstRate30min/" + filename_30min)
                     .withColumn("Ticker", lit(name_30min)))
    # name each DataFrame
    df_30min_[name_30min] = temp_df_30min
    # name each table
    table_name_30min = name_30min + '_30min_delta'
    # create a Delta table for each DataFrame
    df_30min_[name_30min].write.format("delta").mode("overwrite") \
        .option("overwriteSchema", "True").saveAsTable(table_name_30min)
I tried to debug, and only this step fails:
df_30min_[name_30min].write.format("delta").mode("overwrite").option("overwriteSchema", "True").saveAsTable(table_name_30min)
- Labels:
  - Azure
  - Azure databricks
  - Database
  - DBFS
  - Delta Tables
Accepted Solutions
06-20-2022 10:40 AM
Please print(table_name_30min). It seems the table name contains a dot, which is why the part before the dot is being recognized as the database "bf".
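For context, Spark parses an unquoted multipart identifier on dots, so a table name containing "." is read as <database>.<table>. This plain-Python check mirrors that split so you can flag problem names while printing (the example names here are illustrative, not taken from the actual mount):

```python
# A table name with a dot is parsed by Spark as <database>.<table>;
# this loop mirrors that split to flag problematic names.
candidates = ["AAPL_30min_delta", "bf.b_30min_delta"]

for name in candidates:
    if "." in name:
        db, table = name.split(".", 1)
        print(f"{name!r} -> database {db!r}, table {table!r}")
    else:
        print(f"{name!r} -> plain table name")
```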
06-20-2022 11:50 AM
I printed table_name_30min, but the table names look correct.
06-20-2022 11:53 AM
Could changing the database cause the issue? Before running the full code, I also switched my default database to another one:
%sql
USE DATABASE deltabase
06-20-2022 11:59 AM
Also, some of the data can be saved as Delta tables, while some cannot.
06-20-2022 12:01 PM
I understand the issue now😂 One of my tickers is BF.B, so the generated table name is:
BF.B_30min_delta
The dot makes Spark treat "BF" as a database name.
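One possible fix (a sketch, not a solution verified in this thread) is to sanitize the derived name before calling saveAsTable, replacing any character that is not a letter, digit, or underscore; the helper name below is illustrative:

```python
import re

def sanitize_table_name(raw: str) -> str:
    """Replace characters that are not letters, digits, or underscores
    with an underscore, so Spark cannot parse the name as db.table."""
    return re.sub(r"[^0-9A-Za-z_]", "_", raw)

print(sanitize_table_name("BF.B_30min_delta"))  # BF_B_30min_delta
```

Applied just before the saveAsTable call in the loop, this would turn BF.B_30min_delta into BF_B_30min_delta and keep the table in the current default database.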

