Data Engineering

Forum Posts

Chris_Shehu
by Valued Contributor III
  • 1483 Views
  • 5 replies
  • 3 kudos
Latest Reply
Anonymous
Not applicable
  • 3 kudos

You may have noticed that the local SQL endpoint is not listed in the options for getting started with APEX. The local SQL endpoint is an extremely useful feature for getting ADO.NET web services started. I say check this uk-dissertation.com review f...

4 More Replies
Leszek
by Contributor
  • 1981 Views
  • 5 replies
  • 11 kudos

Resolved! Runtime SQL Configuration - how to make it simple

Hi, I'm running a couple of Notebooks in my pipeline and I would like to set a fixed value of 'spark.sql.shuffle.partitions' - the same value for every notebook. Should I do that by adding spark.conf.set... code in each Notebook (Runtime SQL configurations ar...

Latest Reply
Leszek
Contributor
  • 11 kudos

Hi, thank you all for the tips. I tried setting this option in the Spark config before, but it didn't work for some reason. Today I tried again and it's working :).
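For anyone landing here later, a minimal sketch of the two approaches discussed in this thread, assuming the ambient spark session of a Databricks notebook (the value 200 is just a placeholder):

```python
# Option 1: set the value at runtime in each notebook (must be repeated everywhere).
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Option 2 (what worked above): set it once in the cluster's Spark config
# (cluster settings -> Advanced options -> Spark), so every attached notebook
# inherits it:
#   spark.sql.shuffle.partitions 200
```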

4 More Replies
DB_007
by New Contributor III
  • 5186 Views
  • 8 replies
  • 4 kudos

Resolved! Databricks SQL not displaying all the databases that I have on my cluster.

I have a cluster running on 7.3 LTS and it has about 35+ databases. When I try to set up an endpoint on Databricks SQL, I do not see any databases listed.

Latest Reply
User16871418122
Contributor III
  • 4 kudos

hi @Arif Ali​ You may have to check the data access config to add the params for external metastore:
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <mysql-username>
spark.had...
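The truncated config above presumably continues in the standard external Hive metastore pattern. For reference, a typical full set looks roughly like this; all values are placeholders, and the exact JDBC URL and Hive version depend on your metastore:

```
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <mysql-username>
spark.hadoop.javax.jdo.option.ConnectionPassword <mysql-password>
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://<metastore-host>:3306/<metastore-db>
spark.sql.hive.metastore.version <hive-version>
spark.sql.hive.metastore.jars <builtin-or-path>
```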

7 More Replies
Manoj
by Contributor II
  • 6770 Views
  • 4 replies
  • 8 kudos

Resolved! Is there a way to submit multiple queries to a Databricks SQL endpoint using the REST API?

Is there a way to submit multiple queries to a Databricks SQL endpoint using the REST API?

Latest Reply
BilalAslamDbrx
Honored Contributor II
  • 8 kudos

@Manoj Kumar Rayalla​  DBSQL currently limits execution to 10 concurrent queries per cluster so there could be some queuing with 30 concurrent queries. You may want to turn on multi-cluster load balancing to horizontally scale with 1 more cluster for...
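For readers arriving today: there is still no batch-submit call, but you can loop over statements with the SQL Statement Execution API. A minimal sketch, assuming that API is available in your workspace; the host, token, and warehouse ID are placeholders:

```python
import requests

HOST = "https://<workspace-host>"
TOKEN = "<personal-access-token>"
WAREHOUSE_ID = "<warehouse-id>"

queries = ["SELECT 1", "SELECT 2", "SELECT 3"]

# Submit each statement; the service queues them subject to the concurrency limit.
for q in queries:
    resp = requests.post(
        f"{HOST}/api/2.0/sql/statements",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"statement": q, "warehouse_id": WAREHOUSE_ID},
    )
    resp.raise_for_status()
    print(resp.json()["statement_id"])
```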

3 More Replies
Erik
by Valued Contributor II
  • 921 Views
  • 4 replies
  • 3 kudos

Feature request: It is possible to add comments to both Databricks SQL databases and tables. It would be really useful if these comments could show u...

Feature request: It is possible to add comments to both Databricks SQL databases and tables. It would be really useful if these comments could show up (if they are provided) in Power BI when one connects to the Databricks SQL endpoint, e.g. in this w...
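For context, this is how such comments are attached in the first place; a sketch with placeholder names, run from a notebook's ambient spark session (the column-level variant assumes a Delta table):

```python
# Attach comments that a BI client could, in principle, surface.
spark.sql("COMMENT ON DATABASE sales IS 'Curated sales data, refreshed nightly'")
spark.sql("COMMENT ON TABLE sales.orders IS 'One row per customer order'")
spark.sql("ALTER TABLE sales.orders ALTER COLUMN order_id COMMENT 'Primary key'")
```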

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 3 kudos

Nice idea!

3 More Replies
vasanthvk
by New Contributor III
  • 5172 Views
  • 7 replies
  • 3 kudos

Resolved! Is there a way to automate table creation in Databricks SQL based on an ADLS storage location which contains multiple Parquet files?

We have an ADLS container location which contains several (100+) different data subject folders, each containing Parquet files with a partition column, and we want to expose each data subject folder as a table in Databricks SQL. Is there any way to au...

Latest Reply
User16857282152
Contributor
  • 3 kudos

Updating dazfuller's suggestion, but including code for one level of partitioning. Of course, if you have deeper partitions then you will have to make a function and do a recursive call to get to the final directory containing Parquet files. Parquet wil...
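The gist of that approach, as a minimal sketch (not the poster's exact code); the container path and target database are placeholders, and it assumes one table per top-level folder:

```python
# Register each data-subject folder in the ADLS container as a Parquet table.
base_path = "abfss://<container>@<account>.dfs.core.windows.net/<root>"

for entry in dbutils.fs.ls(base_path):
    if entry.isDir():
        table_name = entry.name.strip("/")
        spark.sql(f"""
            CREATE TABLE IF NOT EXISTS analytics.{table_name}
            USING PARQUET
            LOCATION '{entry.path}'
        """)
```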

6 More Replies
User16826987838
by Contributor
  • 721 Views
  • 1 reply
  • 0 kudos

What type of AWS instance, and how many, are used for an L-sized Databricks SQL (SQLA) cluster?

What type of AWS instance, and how many, are used for an L-sized Databricks SQL (SQLA) cluster with Photon enabled?

Latest Reply
Taha
New Contributor III
  • 0 kudos

An L-sized endpoint uses 16 i3.8xlarge workers.

User16826987838
by Contributor
  • 645 Views
  • 1 reply
  • 1 kudos
Latest Reply
Digan_Parikh
Valued Contributor
  • 1 kudos

@Rathna Sundaralingam​ Yes, in the visualization editor select the following:
  • Type: Map
  • Under General: Map: USA
  • Key Column: you need a state column here (for ex: CA, NY)
  • Target Field: USPS Abbreviation
  • Value Column: your desired value for the heatmap.

User16826992666
by Valued Contributor
  • 716 Views
  • 1 reply
  • 0 kudos

Resolved! In Databricks SQL how can I tell if my query is using Photon?

I have turned Photon on in my endpoint, but I don't know if it's actually being used in my queries. Is there some way I can see this other than manually testing queries with Photon turned on and off?

Latest Reply
Digan_Parikh
Valued Contributor
  • 0 kudos

@Trevor Bishop​ If you go to the History tab in DBSQL, click on the specific query and look at the execution details. At the bottom, you will see "Task time in Photon".
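If you'd rather check programmatically, the Query History API exposes the same execution details. A sketch with placeholder host and token; treat the photon_total_time_ms field name as my assumption about the metrics payload:

```python
import requests

HOST = "https://<workspace-host>"
TOKEN = "<personal-access-token>"

# Pull recent query history, including per-query metrics.
resp = requests.get(
    f"{HOST}/api/2.0/sql/history/queries",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"include_metrics": "true"},
)
resp.raise_for_status()
for q in resp.json().get("res", []):
    metrics = q.get("metrics", {})
    print(q["query_id"], metrics.get("photon_total_time_ms"))  # assumed field name
```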

User16826992783
by New Contributor II
  • 1356 Views
  • 1 reply
  • 0 kudos

Find Databricks SQL endpoints runtime

Is there a way to find out which runtime my SQL endpoints are running?

Latest Reply
Ryan_Chynoweth
Honored Contributor III
  • 0 kudos

In the UI, Databricks will list the running endpoints on top. Programmatically, you can get information about the endpoints using the REST APIs. You will likely need to use a combo of the list endpoint to get all the endpoints, then for each endpoint u...
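A minimal sketch of that combo, using the legacy /api/2.0/sql/endpoints path (newer workspaces expose the same data under /api/2.0/sql/warehouses); the host and token are placeholders:

```python
import requests

HOST = "https://<workspace-host>"
TOKEN = "<personal-access-token>"
headers = {"Authorization": f"Bearer {TOKEN}"}

# List all endpoints, then fetch each one's details.
listing = requests.get(f"{HOST}/api/2.0/sql/endpoints", headers=headers).json()
for ep in listing.get("endpoints", []):
    detail = requests.get(f"{HOST}/api/2.0/sql/endpoints/{ep['id']}", headers=headers).json()
    print(detail["name"], detail["state"])
```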

aladda
by Honored Contributor II
  • 653 Views
  • 1 reply
  • 0 kudos
Latest Reply
Ryan_Chynoweth
Honored Contributor III
  • 0 kudos

Generally, interactive clusters and jobs are better suited for data engineering and transformations as they support more than just SQL. However, if you are using pure SQL, then endpoints can be used for data transformations. All of the Spark SQL fun...
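As an illustration of the pure-SQL case, a sketch using the databricks-sql-connector package to run a transformation on an endpoint; the hostname, HTTP path, token, and table names are all placeholders:

```python
from databricks import sql

# Connect to a SQL endpoint and run a CTAS-style transformation.
with sql.connect(
    server_hostname="<workspace-host>",
    http_path="/sql/1.0/endpoints/<endpoint-id>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("""
            CREATE OR REPLACE TABLE analytics.daily_orders AS
            SELECT order_date, COUNT(*) AS orders
            FROM raw.orders
            GROUP BY order_date
        """)
```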

User16826992666
by Valued Contributor
  • 453 Views
  • 1 reply
  • 0 kudos

Does Databricks SQL support any kind of custom visuals?

Wondering if I can make any kind of custom visuals, or are the built-in ones the only options?

Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

At this time the only available visuals are the ones that are included in the Databricks SQL environment. There is no way to import or create custom visuals.

User16826994223
by Honored Contributor III
  • 609 Views
  • 1 reply
  • 2 kudos

Where do SQL endpoints run?

Where do Databricks SQL endpoints run?

Latest Reply
User16826994223
Honored Contributor III
  • 2 kudos

Like Databricks clusters, SQL endpoints are created and managed in your cloud account (AWS, Azure, or GCP). SQL endpoints manage SQL-optimized clusters automatically in your account and scale to match end-user demand.
