Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.
Here's your Data + AI Summit 2024 - Warehousing & Analytics recap: use intelligent data warehousing to improve performance and increase your organization's productivity with analytics, dashboards, and insights.
Keynote: Data Warehouse presente...
Is there a SQL equivalent of overwriteSchema? https://docs.databricks.com/en/delta/update-schema.html#explicitly-update-schema-to-change-column-type-or-name
In-place schema adjustment => then ALTER TABLE XXX ADD/DROP COLUMN XXX INT. Example:
create table test (id int, first_name string, last_name string);
insert into test values (1, 'john', 'smith');
alter table test add column age int;
select * from test
Cr...
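As a rough follow-up sketch reusing the hypothetical test table from the example above: ALTER TABLE also covers renames and drops once column mapping is enabled, while a column type change generally still needs a table rewrite, which is the closest pure-SQL analogue to overwriteSchema.
-- Enable column mapping so the Delta table supports RENAME/DROP COLUMN
-- (protocol versions may already be high enough on newer tables):
ALTER TABLE test SET TBLPROPERTIES (
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5',
  'delta.columnMapping.mode' = 'name'
);
ALTER TABLE test RENAME COLUMN first_name TO given_name;
ALTER TABLE test DROP COLUMN age;
-- For a type change there is no general in-place ALTER; rewriting the table with
-- CREATE OR REPLACE TABLE ... AS SELECT with an explicit CAST is the closest SQL
-- analogue to an overwriteSchema write.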
I'm using a SQL warehouse with autostop after 5 minutes of inactivity. However, the cluster is constantly activating and deactivating without any explanation. There are no queries being executed, and I can't identify any reasons why it is happening,...
Hi @msolcuadrado, in your case I would try to contact the Databricks support team directly. This is a serious issue and I feel your pain. They should help you pinpoint an exact cause, and maybe you'll get a refund
Hi, I'm receiving the error Incorrect syntax near '=' when I run simple queries like the example below. This only happens when I use a column created using a CASE statement in the WHERE clause. I can use any other column in the WHERE clause, includi...
What jumps out to me at first is the backticks on `Peak Vertical Force / BW`, but I'm assuming that's just a column name and not an attempt at division. The next thing that jumps out is TestType and TestTypeName being aliased as testType and testTypeName - spark...
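For what it's worth, a rough sketch of the usual workaround when a WHERE clause needs a CASE-derived column (the results table name and the 'CMJ' value below are made up): compute the CASE expression in a CTE or subquery, then filter on the alias from the outer query.
-- Hypothetical example: testCategory exists only as a SELECT alias,
-- so filter on it from an outer query rather than in the same SELECT's WHERE.
WITH scored AS (
  SELECT
    `Peak Vertical Force / BW` AS peak_force_bw,
    CASE WHEN TestType = 'CMJ' THEN 'Jump' ELSE 'Other' END AS testCategory
  FROM results
)
SELECT *
FROM scored
WHERE testCategory = 'Jump';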
I was trying to install a custom .whl file located in the "shared" folder, but I'm getting this error: org.apache.spark.SparkException: Process List(/bin/su, libraries, -c, bash /local_disk0/.ephemeral_nfs/cluster_libraries/python/python_start_...
Hi, I'm using the REST API for SQL Warehouse in order to execute queries. I have experienced multiple times that query validation fails over the REST API, while executing the same query in the Databricks UI on the same cluster succeeds. An example: [P...
I had to try for myself, and it seems the SQL execution context in the REST API is different from that of a *.sql script, notebook, or query made against a SQL warehouse through the UI. The error stems from the fact that the SET command can also be use...
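If the intent of the failing statement was to set an ad-hoc value rather than a Spark/SQL configuration key, one possible alternative on recent Databricks SQL versions (the variable name below is made up) is the session variable syntax, which a SQL warehouse does not confuse with a configuration SET.
-- Rough sketch: declare and use a session variable instead of a bare SET,
-- which the warehouse parses as setting a configuration parameter.
DECLARE OR REPLACE VARIABLE report_date DATE DEFAULT current_date();
SET VARIABLE report_date = DATE'2024-06-01';
SELECT report_date;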
Hi Joeshph, how are you doing today? Give the inputs below a try and let me know if they work well. Filter and aggregate data in Databricks to reduce load before it reaches Power BI. Use DirectQuery carefully, simplify measures, and reduce the number of vi...
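To make the first tip concrete, here is a rough sketch (the schema, table, and column names are invented) of pre-aggregating in Databricks so Power BI queries a slim view instead of the raw fact table.
-- Hypothetical pre-aggregated view for Power BI to hit via DirectQuery or import,
-- so filtering and grouping happen in the warehouse rather than in the report.
CREATE OR REPLACE VIEW analytics.sales_daily_summary AS
SELECT
  order_date,
  region,
  SUM(net_amount)          AS total_sales,
  COUNT(DISTINCT order_id) AS order_count
FROM analytics.fact_sales
GROUP BY order_date, region;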
I have a query using an LCA (lateral column alias). When referencing another table that has a column with the same name as the column used as the LCA, the behavior of the query changes and it starts referencing the table column instead of the column that is already in the select ...
Hi @Kaniz_Fatma, we had the same problem as @paulocorrea. That's why, to me, the correct behavior would be to throw an error on ambiguous columns; the LCA could/must then be addressed with a default identifier. Thanks
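To illustrate the ambiguity being described (the orders/rates tables and column names below are made up): the alias and a joined table's column share a name, and the safest workaround today is to pick an alias name that no referenced table uses.
-- Hypothetical: the SELECT alias and the rates table both expose a column named amount.
SELECT
  o.qty * o.unit_price AS amount,      -- lateral column alias
  amount * r.amount    AS converted    -- per the behavior above, binds to rates.amount, not the alias
FROM orders o
JOIN rates r ON r.currency = o.currency;

-- Workaround: use an alias name that cannot collide with a table column.
SELECT
  o.qty * o.unit_price    AS gross_amount,
  gross_amount * r.amount AS converted
FROM orders o
JOIN rates r ON r.currency = o.currency;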
When trying to execute a query via a SQL warehouse, I get the following error: INVALID_PARAMETER_MARKER_VALUE.DUPLICATE_NAME. The SQL statement uses ? placeholders and the correct number of arguments is being passed. I am not able to use named placeholder...
Hi @RickB, which API are you using to invoke this? Parameter markers can be provided by:
- Python, using its pyspark.sql.SparkSession.sql() API
- Scala, using its org.apache.spark.sql.SparkSession.sql() API
- Java, using its org.apache.spark.sql.SparkSession.s...
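On the SQL side, a rough sketch of named parameter markers (the table and parameter names are made up); the values are then supplied by the caller, for example in the parameters field of a SQL Statement Execution API request, and each name should appear only once in that bound list.
-- Hypothetical statement using named markers instead of positional ? placeholders.
SELECT order_id, status, total
FROM sales.orders
WHERE region = :region
  AND order_date >= :start_date;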
I'm planning to connect SQL Server Management Studio (SSMS) with Databricks using Lakehouse Federation. I understand that there are some differences in the SQL dialects between SSMS and Databricks SQL. For instance, in SSMS, we use TOP 10 to limit th...
To add to this: if you really have to use T-SQL (the MS dialect of SQL), you can define the Databricks SQL warehouse as a linked server on your SQL Server. As said: SSMS is merely a SQL client; the SQL dialect to be used is defined by the database...
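As a small side-by-side (the orders table is made up), the TOP example from the question translates directly.
-- T-SQL (SQL Server / SSMS):
-- SELECT TOP 10 * FROM dbo.orders ORDER BY order_date DESC;

-- Databricks SQL equivalent:
SELECT * FROM orders ORDER BY order_date DESC LIMIT 10;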
Hello Community. I am a newbie here with experience in Tableau and Power BI. I wanted to explore dashboard creation in Lakeview. I have created a free trial Databricks account. Although there are plenty of articles and videos on how to crea...
Did you know the default timeout setting for a SQL #databricks warehouse is two days? The default timeout can be too long for most use cases. You can easily change it for your session or in the general SQL warehouse configuration.
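For example, a quick sketch of lowering the statement timeout for the current session (the one-hour value is arbitrary); the same STATEMENT_TIMEOUT parameter can also be set in the warehouse's SQL configuration parameters to apply it globally.
-- Session-level override: abort statements that run longer than 1 hour (value in seconds).
SET STATEMENT_TIMEOUT = 3600;
-- Check the current value:
SET STATEMENT_TIMEOUT;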
Hi @Hubert-Dudek, we have a Java service which uses JDBC Template to connect to a Databricks warehouse. We recently set a high connection idle timeout and maxLifeTime for our Hikari pool connection. We are now seeing errors related to invalid session...
Hi @Martin_Pham, at the moment SQL warehouses don't have this option, but according to one of the Databricks employees, this feature is coming soon: https://community.databricks.com/t5/warehousing-analytics/sqlwarehouse-case-insensitive/td...
The application connects to a Databricks serverless SQL warehouse via the Databricks JDBC driver. It executes SQL SELECT statements only. We see a small number of statements fail each day with the following error detail: java.sql.SQLException: [Databricks][JDB...
Hi there, I'm building a Sink that uses the Databricks JDBC Driver version 2.6.27. It appears that regardless of using a try-with-resources block and explicitly closing the Connection, the driver's internals are not releasing IdleConnectionEvictor thr...
Still not fixed; I got 14k instances of com.databricks.client.jdbc42.internal.apache.http.impl.client.IdleConnectionEvictor (14,037 (0%), 673,776 B (0%), n/a), and because of that the application crashed. Can you please help out, as this is blocking the release...
Hi Team, there is a requirement to load a backup file into a database inside a SQL warehouse. However, I don't see any option to directly load a backup file. I have tried reading the backup file using a notebook, but I'm unable to interpret the contents...