Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.
Here's your Data + AI Summit 2024 Warehousing & Analytics recap: use intelligent data warehousing to improve performance and increase your organization's productivity with analytics, dashboards, and insights.
Keynote: Data Warehouse presente...
The result set returned by DatabaseMetaData.getColumns does not include the VARIANT column of a table; only the non-variant columns are included. Databricks JDBC driver 02.06.40.1071.

create table tvariant(rnum int, c1 variant);
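A hedged workaround sketch until the driver's metadata call surfaces VARIANT columns: list the table's columns through information_schema instead. The unqualified information_schema reference assumes the current catalog; adjust for your Unity Catalog layout.

  # List every column of tvariant, including the variant one,
  # via information_schema rather than DatabaseMetaData.getColumns.
  spark.sql("""
      SELECT column_name, data_type
      FROM information_schema.columns
      WHERE table_name = 'tvariant'
  """).show()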
Hi, is it possible to convert an existing partitioned Delta table that already contains data to liquid clustering? If so, can you please suggest the required steps? I tried and searched but couldn't find any. Is it that liquid clustering can only be applied to new Delta table...
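A minimal sketch of one common conversion path, assuming the documented constraint that liquid clustering cannot be combined with partitioning, so the table is rewritten via CTAS; the table and column names below are hypothetical:

  # Recreate the partitioned table as a liquid-clustered one (CTAS),
  # then run OPTIMIZE so the existing data actually gets clustered.
  spark.sql("""
      CREATE TABLE main.default.sales_clustered
      CLUSTER BY (sale_date)
      AS SELECT * FROM main.default.sales
  """)
  spark.sql("OPTIMIZE main.default.sales_clustered")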
Hi, I have used "SET spark.sql.warehouse.dir", which creates the directory by default. Then I created the database with "CREATE DATABASE IF NOT EXISTS database_name;", but when I ran "DESCRIBE DATABASE database_name" I could not find the loca...
Hi @Pavan3, if the location shown by DESCRIBE DATABASE is empty, then the database was created in the default catalog directory. What you can do is create any table in that database and run DESCRIBE DETAIL on that table. Hope it helps.
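A minimal sketch of that suggestion; the probe table name is hypothetical:

  # Create a throwaway table in the database, then read its storage
  # location from the DESCRIBE DETAIL output.
  spark.sql("CREATE TABLE IF NOT EXISTS database_name.tmp_probe (id INT)")
  spark.sql("DESCRIBE DETAIL database_name.tmp_probe") \
      .select("location").show(truncate=False)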
Hi! I'm developing a .NET app and I want to use a Databricks warehouse as the database. I have gold Delta tables that I want to query. In the documentation I can see ODBC/JDBC drivers; are those connectors fast? Is there another way to connect? What ...
We have been using .NET apps connected to Databricks Delta tables through clusters, using ODBC to achieve this. However, we recently hit a roadblock after UC migration, where the UC all-purpose cluster started giving issues with queries ...
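Besides ODBC/JDBC, one language-agnostic alternative is the Databricks SQL Statement Execution REST API, which a .NET app can call over plain HTTP. A hedged sketch (shown in Python for brevity); the host, warehouse ID, token, and table name are placeholders:

  import requests

  # Submit a statement to a SQL warehouse and wait up to 30s for the result.
  resp = requests.post(
      "https://<workspace-host>/api/2.0/sql/statements",
      headers={"Authorization": "Bearer <personal-access-token>"},
      json={
          "warehouse_id": "<warehouse-id>",
          "statement": "SELECT * FROM my_catalog.gold.my_table LIMIT 10",
          "wait_timeout": "30s",
      },
  )
  print(resp.json())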
Hi everyone, I'm curious about Databricks' approach to encrypting and decrypting Parquet files. Does Databricks adhere to standard encryption/decryption methods for Parquet? If not, what specific methods or techniques are used? I'd appreciate any insig...
I would like to create a simple governance dashboard with multiple queries (a query to see user login events, a query to see SQL statements run, a query for jobs executed, etc.). What I would like to do is have a single username parameter which would ...
Yes, you can set dashboard parameters: provide the username in a parameter or widget and it gets distributed to the different queries. https://docs.databricks.com/en/dashboards/parameters.html
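A hedged sketch of what each dashboard query can look like, all referencing the same :username parameter; the audit-table filter is illustrative, and the args call shows the equivalent named-parameter syntax from PySpark (Spark 3.4+/recent DBR):

  # One of several dashboard queries, each referencing the shared :username
  # parameter; system.access.audit is the Databricks audit-log system table.
  spark.sql("""
      SELECT event_time, action_name
      FROM system.access.audit
      WHERE user_identity.email = :username
      ORDER BY event_time DESC
  """, args={"username": "someone@example.com"}).show()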
I have huge datasets; transformation, display, print, and show all work well when the data is read into a pandas DataFrame. But when the same DataFrame is converted to a Spark DataFrame, it takes minutes to display even a single row and hours to write th...
I understand you want it sooner. Did it at least write the data in 10 minutes compared to not writing before?
There are more knobs you can tweak, like:

spark.sql.shuffle.partitions=auto

Do you have any index columns in your spatial data that can be us...
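A hedged sketch of that knob; note that the "auto" value is Databricks-specific (open-source Spark expects an integer here):

  # Let AQE pick shuffle partition counts instead of the static default.
  spark.conf.set("spark.sql.adaptive.enabled", "true")
  spark.conf.set("spark.sql.shuffle.partitions", "auto")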
I am unable to obtain a count of a DataFrame; it always gets stuck at one stage. I have tried reducing the size. What can be the issue? How can I read the cluster logs to identify it?
Driver memory is good enough; it is able to handle 9 million (90 lakh) rows, and what I am giving it is definitely less than that. What can I do about skewed data and shuffling?
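A hedged sketch of two standard levers for skew; df and skewed_key are placeholders:

  # Enable AQE skew handling for joins, and pre-spread a hot key across
  # more tasks before wide operations such as joins or groupBy.
  spark.conf.set("spark.sql.adaptive.enabled", "true")
  spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
  df = df.repartition(200, "skewed_key")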
Hi all, I need to add a shortcut key for R's pipe operator (%>%) to my Databricks notebook. I want the operator to be inserted into my code snippet when I hold down the shortcut keys (Shift + Ctrl + M). Is there a straightforward way to add such shortcut...
Hi Community, I am trying to pass the result of a CTE as a function parameter, as in the code below:

WITH t1 AS (
  SELECT array_join(collect_list(output), ',') AS x
  FROM my_catalog.my_db.get_x(:startTime, :endTime)
)
SELECT 'AM_offline' as Type, CASE WHEN off...
Hi @szymon_dybczak, thanks for replying. I don't think the issue is related to the data type, since the query works if I pass the subquery to the _x parameter without the CTE. Please see the code below:

SELECT 'AM_offline' as Type, CASE WHEN offline_ratio > 1.5 THEN 'no-Go...
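A hedged sketch of the working pattern the poster describes, inlining the scalar subquery as the function argument instead of naming it in a CTE; get_ratio and the outer query shape are hypothetical:

  # Pass the scalar subquery directly as the function argument,
  # rather than referencing a CTE alias.
  spark.sql("""
      SELECT 'AM_offline' AS Type, *
      FROM my_catalog.my_db.get_ratio(
          (SELECT array_join(collect_list(output), ',')
           FROM my_catalog.my_db.get_x(:startTime, :endTime))
      )
  """, args={"startTime": "2024-01-01", "endTime": "2024-01-02"})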
I have created Python modules containing some Python functions, and I would like to import them from a notebook contained in the Workspace. For example, I have an "etl" directory containing a "snapshot.py" file with some Python functions, and an empty...
Hi @sachamourier, it will work, but you need to carefully craft the path passed to sys.path.append(); you don't even need an __init__.py to make it work. Try hard-coding the path to snapshot.py in the workspace. Add this to your notebook:

import sys
import os
absolu...
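A completed, hedged version of that suggestion; the workspace path and imported function name are hypothetical:

  import sys
  import os

  # Hard-code the absolute workspace path that contains the "etl" package,
  # then import from snapshot.py directly.
  sys.path.append(os.path.abspath("/Workspace/Users/someone@example.com/project"))
  from etl.snapshot import build_snapshot  # build_snapshot is hypothetical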
According to the official Databricks documentation for GCP, I should be able to deploy a serverless SQL warehouse inside Databricks. Following the documentation, you are asked to turn Serverless SQL warehouses on, but there is nothing ...
Hello, following abnormally high costs when using serverless SQL on September 9 and 10, I noticed that the cluster sometimes stays on for an hour even though it's not receiving any new requests, and that auto-stop is set to 5 minutes of inactivit...
Hi @EmmaP! I have encountered this. Even though the UI says they are complete, they actually are not: while the query itself completed, the client is still fetching the data from the SQL warehouse. To check if this is your issue, from the monitori...
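A hedged way to check this from SQL rather than the UI: the query-history system table shows recent statements on the warehouse. The table exists in Databricks system tables, but the exact column names below are assumptions; verify them against your workspace schema:

  # Inspect recent statements; long total durations on statements the UI
  # shows as complete can indicate the client is still fetching results.
  spark.sql("""
      SELECT statement_id, execution_status, total_duration_ms
      FROM system.query.history
      ORDER BY start_time DESC
      LIMIT 20
  """).show(truncate=False)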