Hi, I'm running a couple of notebooks in my pipeline and I would like to set a fixed value of 'spark.sql.shuffle.partitions' - the same value for every notebook. Should I do that by adding spark.conf.set... code in each notebook (Runtime SQL configurations are per-session), or is there a way to set it once for all of them?
Hi, thank you all for the tips. I had tried to set this option in the Spark config before, but it didn't work for some reason. Today I tried again and it's working :).
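For reference, a minimal sketch of the two approaches discussed above, run from a Databricks notebook (where spark is predefined); the value 64 is only an example:

# Per-notebook approach: set the runtime SQL configuration at the top of each notebook.
# spark.sql.shuffle.partitions is session-scoped, so it only affects this notebook's SparkSession.
spark.conf.set("spark.sql.shuffle.partitions", "64")

# Verify what the current session is using.
print(spark.conf.get("spark.sql.shuffle.partitions"))

The cluster-level alternative is adding the line "spark.sql.shuffle.partitions 64" to the cluster's Spark config (under Advanced options), which every notebook attached to that cluster then inherits without any per-notebook code.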
I have a cluster running on 7.3 LTS and it has about 35+ databases. When I tried to set up an endpoint on Databricks SQL, I do not see any database listed.
hi @Arif Ali​ You may have to check the data access config to add the params for the external metastore:
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <mysql-username>
spark.had...
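For completeness, a sketch of what the full data access configuration for an external Hive metastore typically looks like, assuming a MySQL/MariaDB-backed metastore; the host, port, database name, and Hive version are placeholders to replace with your own values:

spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://<metastore-host>:3306/<metastore-db>
spark.hadoop.javax.jdo.option.ConnectionUserName <mysql-username>
spark.hadoop.javax.jdo.option.ConnectionPassword <mysql-password>
spark.sql.hive.metastore.version <hive-version>
spark.sql.hive.metastore.jars <path-to-hive-jars or builtin>

Once the endpoint can reach the external metastore with these settings, the existing databases should appear in Databricks SQL.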
@Manoj Kumar Rayalla​ DBSQL currently limits execution to 10 concurrent queries per cluster, so there could be some queuing with 30 concurrent queries. You may want to turn on multi-cluster load balancing to horizontally scale with one more cluster for the additional concurrency.
Feature request: It is possible to add comments to both Databricks SQL databases and tables. It would be really useful if these comments could show up (if they are provided) in Power BI when one connects to the Databricks SQL endpoint, e.g. in this w...
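As background, these comments can already be attached from SQL today; a minimal sketch run from a notebook - the database, table, and comment text are illustrative:

# Attach a comment when creating a database, and to an existing table.
spark.sql("CREATE DATABASE IF NOT EXISTS sales COMMENT 'Curated sales data for reporting'")
spark.sql("COMMENT ON TABLE sales.orders IS 'One row per customer order, updated nightly'")
# Column-level comments are also supported (Delta tables on recent runtimes).
spark.sql("ALTER TABLE sales.orders ALTER COLUMN order_ts COMMENT 'Order timestamp in UTC'")
# The comments are stored in the metastore and appear in DESCRIBE output.
spark.sql("DESCRIBE TABLE EXTENDED sales.orders").show(truncate=False)

The request is for BI tools such as Power BI to surface this existing metadata when browsing the endpoint, rather than for a new way of storing it.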
We have an ADLS container location which contains several (100+) different data subject folders, each containing Parquet files with a partition column, and we want to expose each data subject folder as a table in Databricks SQL. Is there any way to automate this?
Updating dazfuller's suggestion, but including code for one level of partitioning (see the sketch below); of course, if you have deeper partitions then you will have to write a function and make a recursive call to get to the final directory containing the Parquet files. Parquet wil...
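A minimal sketch of the one-level approach, run from a Databricks notebook (spark and dbutils are predefined there); the container path, database name, and naming convention are illustrative assumptions:

# Illustrative ADLS path containing one folder per data subject.
base_path = "abfss://data@<storage-account>.dfs.core.windows.net/subjects"

spark.sql("CREATE DATABASE IF NOT EXISTS analytics")

for folder in dbutils.fs.ls(base_path):
    # Each top-level folder becomes one table; skip stray files.
    if not folder.isDir():
        continue
    table_name = folder.name.strip("/").lower()
    # Create an external table over the Parquet data; the schema and the
    # partition column are inferred from the files and the folder layout.
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS analytics.{table_name}
        USING PARQUET
        LOCATION '{folder.path}'
    """)
    # Depending on the runtime, hive-style partitions may need to be
    # registered explicitly after the table is created.
    spark.sql(f"MSCK REPAIR TABLE analytics.{table_name}")

For deeper partition hierarchies, the same idea applies, but the folder walk would be a recursive function that descends until it reaches the directories holding the Parquet files.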
@Rathna Sundaralingam​ Yes, in the visualization editor select the following:
Type: Map
Under General:
Map: USA
Key Column: you need a state column here (for ex: CA, NY)
Target Field: USPS Abbreviation
Value Column: your desired value for the heatmap.
I have turned Photon on in my endpoint, but I don't know if it's actually being used in my queries. Is there some way I can see this other than manually testing queries with Photon turned on and off?
@Trevor Bishop​ If you go to the History tab in DBSQL, click on the specific query and look at the execution details. At the bottom, you will see "Task time in Photon".
In the UI, Databricks will list the running endpoints on top. Programmatically, you can get information about the endpoints using the REST APIs. You will likely need a combination of calls: the list endpoint to get all the endpoints, then, for each endpoint, the get endpoint to retrieve its details and state.
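A minimal sketch of that combination in Python; the workspace URL and token are placeholders, and depending on your workspace version the resource may be exposed as /api/2.0/sql/warehouses instead of /api/2.0/sql/endpoints:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
TOKEN = "<personal-access-token>"                        # placeholder token
headers = {"Authorization": f"Bearer {TOKEN}"}

# 1) List all SQL endpoints in the workspace.
resp = requests.get(f"{HOST}/api/2.0/sql/endpoints", headers=headers)
resp.raise_for_status()
endpoints = resp.json().get("endpoints", [])

# 2) For each endpoint, fetch its details and report its state (e.g. RUNNING, STOPPED).
for ep in endpoints:
    detail = requests.get(f"{HOST}/api/2.0/sql/endpoints/{ep['id']}", headers=headers).json()
    print(detail["name"], detail.get("state"))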
Generally, interactive clusters and jobs are better suited for data engineering and transformations, as they support more than just SQL. However, if you are using pure SQL, then endpoints can be used for data transformations. All of the Spark SQL functions are available, as in the example below.
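For example, a pure-SQL transformation like the following can run either on a cluster or through a SQL endpoint; the table names are illustrative, and here it is submitted from a notebook via spark.sql (the same statement can be pasted into the Databricks SQL query editor):

# Build an aggregated reporting table from a raw table using only SQL.
spark.sql("""
    CREATE OR REPLACE TABLE reporting.daily_orders AS
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM raw.orders
    GROUP BY order_date
""")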
At this time the only available visuals are the ones that are included in the Databricks SQL environment. There is no way to import or create custom visuals.
Like Databricks clusters, SQL endpoints are created and managed in your cloud account (AWS, Azure, or GCP). SQL endpoints manage SQL-optimized clusters automatically in your account and scale to match end-user demand.