I'm creating a dashboard with multiple visualizations from a notebook. Whenever I add a new visualization, its default position in the dashboard is the top left, which messes up all the formatting I did for the previous graphs. Is there a way to default add to the bottom o...
@shan_chandra I'm using a Lakeview dashboard. In the dbx notebook, there is an Add to dashboard > button to the right of each visualization. It's super handy. Actually, I have this issue solved.
Manual Approach: We can update the SQL Warehouse manually in Databricks. Click SQL Warehouses in the sidebar; under Advanced options we can find the Unity Catalog toggle button there! While updating an existing SQL Warehouse in Azure to enable Unity Catalog using terraf...
Hello Raphael, Thank you for the update and for looking into the feature request. I appreciate your efforts in following up on this matter. If possible, could you please provide me with any updates or insights you receive from the Terraform team regard...
We have been evaluating Databricks SQL and its capability to be used as a DW. We are using Unity Catalog in our implementation. There seems to be a functionality mismatch between the Azure and AWS versions: where table rename is supported on the Azure side, i...
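For reference, the rename in question is a one-line statement when run from PySpark; here is a minimal sketch, where the catalog, schema, and table names are hypothetical placeholders:

# Minimal sketch: renaming a Unity Catalog table from PySpark.
# Names (my_catalog, sales, orders, orders_v2) are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("ALTER TABLE my_catalog.sales.orders RENAME TO my_catalog.sales.orders_v2")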
Hi, I am a newbie. Can someone please help me with the below? Details for the latest failure: Error: Error code: QuotaExceeded, error message: Operation could not be completed as it results in exceeding approved standardEDSv4Family Cores quota. Addit...
I am very new to DB. Can someone show me how to resolve the error below, please?

Assistant: The error message you're encountering indicates that when creating a catalog, you need to specify a managed storage location for it. This is a requirement in o...
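If it helps, here is a minimal sketch of creating a catalog with an explicit managed storage location from a notebook; the catalog name and the abfss path are hypothetical placeholders, and the path must point to an external location already registered in Unity Catalog:

# Minimal sketch: creating a catalog with a managed storage location.
# Catalog name and storage path below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE CATALOG IF NOT EXISTS my_catalog
    MANAGED LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/managed/my_catalog'
""")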
Serverless SQL now uses the cache even after termination, thanks to the remote cache. You can benefit from 1-minute auto-termination while still retaining cache benefits.
Hi, I’m new to data modelling so could use some help. I’m building a personal project using a fairly standard 3NF sales database as the source data. So far I have a pipeline that incrementally extracts data from the source system each day into a Raw sto...
This is what my medallion architecture looks like:
1) Bronze Layer - append raw data.
2) Silver Layer - reflect current (active) data; this is where I do business logic transformations. The Silver layer should serve as your cleaned and transformed staging area. H...
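To make that Bronze-to-Silver step concrete, here is a minimal PySpark sketch that merges append-only Bronze rows into a current-state Silver Delta table; the table and key names (bronze.sales_orders, silver.sales_orders, order_id) are hypothetical placeholders:

# Minimal sketch: upserting appended Bronze rows into a current-state
# Silver table with a Delta Lake MERGE. Names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

bronze_df = spark.table("bronze.sales_orders")  # raw, append-only

silver = DeltaTable.forName(spark, "silver.sales_orders")
(
    silver.alias("s")
    .merge(bronze_df.alias("b"), "s.order_id = b.order_id")
    .whenMatchedUpdateAll()      # refresh changed rows to current state
    .whenNotMatchedInsertAll()   # add rows not yet in Silver
    .execute()
)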
I have an on-premises Power BI Report Server that uses the Simba Spark ODBC Driver (2.8) to connect to Databricks. It can connect to a serverless warehouse successfully and run its queries, but it never seems to disconnect the session, and so the war...
It works sometimes. The only correlation I have found so far is that a successful query disconnects as expected, but any error keeps the connection to the warehouse open indefinitely.
Hi, As a formal requirement in my project I need to keep the original, raw files (mainly CSVs and XMLs) on the lake. Later on they are ingested into Delta-format-backed medallion stages: bronze, silver, gold, etc. From the audit, operations and discov...
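For the ingestion side of this pattern, a minimal sketch of loading the preserved raw CSVs into a Bronze Delta table with Auto Loader, leaving the originals untouched on the lake; all paths and table names here are hypothetical placeholders:

# Minimal sketch: incrementally loading preserved raw CSV files into a
# Bronze Delta table with Auto Loader; the source files stay on the lake
# as-is. All paths and names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "abfss://lake@acct.dfs.core.windows.net/_schemas/orders")
    .option("header", "true")
    .load("abfss://lake@acct.dfs.core.windows.net/raw/orders/")
    .writeStream
    .option("checkpointLocation", "abfss://lake@acct.dfs.core.windows.net/_checkpoints/orders")
    .trigger(availableNow=True)   # run as an incremental batch
    .toTable("bronze.orders")
)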
Hi @KrzysztofPrzyso, It sounds like you’re dealing with an interesting challenge related to performance and data organization in your Azure Databricks environment.
Let’s break down the issues you’ve mentioned and explore potential solutions:
Scan...
Hello, I'm wondering if there's a method or workaround to execute JDBC table queries in a similar manner to other cluster types. Currently, attempting to do so results in an error stating that only text-based files (such as JSON, Parquet, Delta, etc.)...
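For context, this is the kind of JDBC read that works on other cluster types; a minimal sketch, where the host, database, table, and credentials are hypothetical placeholders:

# Minimal sketch: a standard Spark JDBC read that works on classic
# clusters; on the cluster type in question it fails with the
# text-based-files error. Connection details are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.example.com:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "reader")
    .option("password", "<secret>")
    .load()
)
df.show()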
We have created a Unity Catalog instance on top of our Lakehouse (built entirely with Azure Databricks). We are using Power BI to develop and serve our analytics and reporting needs. I've granted the "Account Users" group the appropriate privileges f...
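For reference, granting a group read access at the catalog and schema level looks roughly like this; the catalog and schema names below are hypothetical placeholders, while `account users` is the built-in group the post mentions:

# Minimal sketch: Unity Catalog grants that let a group read data,
# e.g. from Power BI. Catalog/schema names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `account users`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.reporting TO `account users`")
spark.sql("GRANT SELECT ON SCHEMA analytics.reporting TO `account users`")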
Thanks for explaining this! This doesn't do exactly what I was hoping: it doesn't block all access to the workspace. Users can still log in and access their own workspace and run SQL queries, explore the catalog, etc. But they ARE blocked from accessin...
In relational data warehouse systems it was best practice to represent date values as YYYYMMDD integer-type values in tables. Date comparisons could be done easily without using date functions and with low performance impact. Is this still the recomme...
Hi @DataFarmer, in Databricks I would advise you to use the date type instead of int; this will make your life much simpler when working with date data.
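To illustrate, comparisons on a proper DATE column are just as direct as on YYYYMMDD integers, with no conversion functions needed; a minimal sketch, where the table and column names are hypothetical placeholders:

# Minimal sketch: filtering on a DATE column directly; no to_date/int
# conversions needed. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

sales = spark.table("gold.sales")

# Compare against a literal date; Delta can still use the column for
# file pruning, so performance does not suffer versus int keys.
recent = sales.filter(F.col("order_date") >= F.lit("2024-01-01").cast("date"))
recent.show()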
When trying to connect to a SQL warehouse using the JDBC connector with Spark, the below error is thrown. Note that connecting directly to a cluster with similar connection parameters works without issue; the error only occurs with SQL Warehouses. py4j...
Same error here. I am trying to save a Spark dataframe to Delta Lake using the JDBC driver and PySpark with this code:

# Spark session (import added for completeness)
from pyspark.sql import SparkSession

spark_session = SparkSession.builder \
    .appName("RCT-API") \
    .config("spark.metrics.namespace", "rct-a...
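One workaround, if the Spark JDBC path keeps failing against a SQL warehouse, is the databricks-sql-connector Python package, which talks to warehouses directly; a minimal sketch, where the hostname, HTTP path, token, and table name are hypothetical placeholders:

# Minimal sketch: writing to a SQL warehouse with the Databricks SQL
# Connector for Python (pip install databricks-sql-connector), as an
# alternative to Spark's JDBC writer. All connection values below are
# hypothetical placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abcdef1234567890",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("INSERT INTO rct.api.events VALUES (1, 'started')")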
In order to create a CI/CD pipeline to deliver dashboards (here, monitoring dashboards), how can one export/import a dashboard created in Databricks SQL from one workspace to another? Thanks
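One approach, if these are Lakeview dashboards and assuming the Lakeview REST API (/api/2.0/lakeview/dashboards) is available in both workspaces, is to pull the serialized dashboard from the source and recreate it in the target. A rough sketch follows; the hosts, tokens, and dashboard ID are hypothetical placeholders, and the payload shape should be verified against the current API docs:

# Rough sketch: copying a Lakeview dashboard between workspaces via the
# REST API. Endpoint and payload shape are assumptions to verify against
# the docs; all values below are hypothetical placeholders.
import requests

SRC = ("https://adb-src.azuredatabricks.net", "<src-token>")
DST = ("https://adb-dst.azuredatabricks.net", "<dst-token>")
DASHBOARD_ID = "<dashboard-id>"

# Export: fetch the dashboard definition from the source workspace.
resp = requests.get(
    f"{SRC[0]}/api/2.0/lakeview/dashboards/{DASHBOARD_ID}",
    headers={"Authorization": f"Bearer {SRC[1]}"},
)
resp.raise_for_status()
dashboard = resp.json()

# Import: create the dashboard in the target workspace from the
# serialized definition returned above.
resp = requests.post(
    f"{DST[0]}/api/2.0/lakeview/dashboards",
    headers={"Authorization": f"Bearer {DST[1]}"},
    json={
        "display_name": dashboard["display_name"],
        "serialized_dashboard": dashboard["serialized_dashboard"],
    },
)
resp.raise_for_status()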