Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.
Has anyone done this comparison and can you share details? I have a sample SQL query which ran on a Large SQL endpoint in 8 min and on Synapse at the 1000 DWU setting in 1 hr. On a Small SQL endpoint it took 34 min. What's the equivalent SQL endpoint compute for Synapse at 1000 DWU? I know there ...
Hello, we're receiving an error when running glue jobs to try and connect to and read from a Databricks SQL endpoint.
An error occ...
Hello @Vidula Khanna @Debayan Mukherjee, I wanted to give you an update that might be helpful for your future customers. We worked with @Pavan Kumar Chalamcharla, and through lots of trial and error we figured out a combination that works for SQL e...
Availability of SQL Warehouse to Data Science and Engineering persona
Hi All,
We can now use a SQL Warehouse as the compute for notebook execution.
It's in preview now and will be GA soon.
I've tried this code in Databricks SQL:
CREATE TABLE people_db.GLAccount
USING PARQUET
LOCATION "abfss://datamesh@dlseu2dtaedwetldtlak9.dfs.core.windows.net/PricingAnalysis/rdv_60_134.vGLAccount.parquet"
But I'm getting an "Invalid configuration...
You can define a 'data access configuration' in the admin panel: go to SQL warehouse settings -> Data Access Configuration. See https://learn.microsoft.com/en-us/azure/databricks/sql/admin/data-access-configuration
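For an ABFSS location like the one above, that configuration usually holds the Spark properties for a service principal with access to the storage account. A minimal sketch, assuming the service principal's secret is stored in a Databricks secret scope (the storage account, scope, and tenant values below are placeholders, not values from this thread):
spark.hadoop.fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net OAuth
spark.hadoop.fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net <application-id>
spark.hadoop.fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net {{secrets/<scope>/<secret-key>}}
spark.hadoop.fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net https://login.microsoftonline.com/<tenant-id>/oauth2/token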
I can see on the Databricks SQL warehouse Data tab that clusters, catalogs and schemas have a unique ID. User-created tables, views and functions must have a unique ID too, but it is not exposed to the user as far as I can tell.
I need to retrieve the ...
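If the tables in question are Delta tables, one partial workaround might be DESCRIBE DETAIL, which exposes a table GUID in its id column; a minimal sketch (the table name here is a placeholder):
-- The 'id' column of the result holds the Delta table's unique GUID
DESCRIBE DETAIL my_schema.my_table;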
Hello Everyone,
We are trying to connect to a Databricks SQL warehouse using an ODBC URL but we are not able to do it. We can see only a JDBC URL in the connection details, which works fine.
Was anyone able to connect using an ODBC URL? Can someone please help?
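In case it helps, ODBC access to a SQL warehouse typically goes through the Databricks (Simba Spark) ODBC driver with a connection string rather than a URL. A rough sketch, where the host, HTTP path and personal access token are placeholders taken from the warehouse's Connection details tab:
Driver=Simba Spark ODBC Driver;Host=<server-hostname>;Port=443;HTTPPath=/sql/1.0/warehouses/<warehouse-id>;SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>
Here ThriftTransport=2 selects HTTP transport and AuthMech=3 selects username/password authentication, with 'token' as the user and a personal access token as the password.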
As of today, can we use any Databricks SQL REST API to query Delta tables stored in ADLS from an external UI? There is some information at this link https://docs.databricks.com/sql/api/index.html? but I'm not sure how to use them!! Checked ...
Hi @vinay kumar, hope all is well! Just wanted to check in to see if you were able to resolve your issue; if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks...
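If it is still useful: one option, assuming the workspace has the SQL Statement Execution API available, is to submit a statement to a running SQL warehouse over REST. A rough sketch of the request (host, token, warehouse ID and table name are placeholders):
POST https://<workspace-host>/api/2.0/sql/statements/
Authorization: Bearer <personal-access-token>
{
  "warehouse_id": "<warehouse-id>",
  "statement": "SELECT * FROM my_schema.my_delta_table LIMIT 10"
}
The JSON response contains the result rows (or a status to poll), so an external UI can call this endpoint directly without a JDBC/ODBC driver.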
When I first login and start using Databricks SQL, the endpoints always take a while to start. Is there anything I can do to improve the cold start experience on the platform?
How do I change the vCPU type for a SQL endpoint on Azure Databricks? Currently it picks DSv4 vCPUs by default, which hit their quota limit even though I have unused quota for other vCPU types. I faced a similar issue while using Delta Live Tables, but I was ...
Hi @Tarique Anwar, I don't think changing the vCPU type is possible for now. However, I can check with the product team on this. https://docs.microsoft.com/en-us/azure/databricks//sql/admin/sql-endpoints#required-azure-vcpu-quota
Great question! There are similarities and differences.
Similarities: Photon is enabled on both; you have the Databricks Runtime on both.
Differences: the Databricks Runtime (DBR) version is managed and auto-upgraded in Databricks SQL. Because SQL is a narrower workl...
Hi, is there any way/workaround to query JDBC tables the same way one can with other types of clusters? Doing so right now causes an error saying that only text-based files are supported (JSON, Parquet, Delta, etc.) even though the tables are recogniz...
I've been waiting patiently for this option since the public preview in early 2021. The vast majority of our data is in SQL Server databases, and the fact that we are unable to query these data sources is the primary reason the data team hasn't adopted SQL Worksp...
I see that cluster sizes are mentioned here: https://docs.databricks.com/sql/admin/sql-endpoints.html#cluster-size, but I would like to know when to pick which cluster size (data size / users / concurrency) without having to do too much trial and error...
I'd like to be able to limit the rows users see when querying tables in Databricks SQL based on what access level each user is supposed to be granted. Is this possible in the SQL environment?
Using dynamic views you can specify permissions down to the row or field level, e.g.:
CREATE VIEW sales_redacted AS
SELECT
  user_id,
  country,
  product,
  total
FROM sales_raw
WHERE
  CASE
    WHEN is_member('managers') THEN TRUE
    ELSE total <= 1...
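For reference, a complete version of this pattern would close the CASE expression; the threshold below is illustrative, not recovered from the truncated snippet above:
-- Members of 'managers' see every row; everyone else only sees rows under the illustrative threshold
CREATE VIEW sales_redacted AS
SELECT
  user_id,
  country,
  product,
  total
FROM sales_raw
WHERE
  CASE
    WHEN is_member('managers') THEN TRUE
    ELSE total <= 1000000
  END;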