- 1903 Views
- 1 replies
- 3 kudos
How to change compression codec of sql warehouse written files?
Hi, I'm currently starting to use SQL Warehouse, and most of our lake uses a compression other than snappy. How can I set the SQL warehouse to use a compression such as gzip or zstd on CREATE, INSERT, etc.? Tried this: set spark.sql.parquet.compre...
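The truncated setting in the snippet looks like Spark's standard Parquet codec configuration. A minimal sketch of the approach, assuming the SQL warehouse permits this session-level config (SQL warehouses restrict many Spark configurations, so this may be rejected; the table names are made up for illustration):

```sql
-- Session-level Spark config for Parquet write compression
-- (may not be allowed on a SQL warehouse):
SET spark.sql.parquet.compression.codec = zstd;

-- Writes in this session would then use the chosen codec
-- (`sales` and `sales_copy` are hypothetical table names):
CREATE TABLE sales_copy AS SELECT * FROM sales;
```

If session configs are blocked on the warehouse, an alternative is to perform the write from a regular cluster, where `spark.conf.set` is available.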
- 3 kudos
Hi @Alejandro Martinez, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.
- 1412 Views
- 0 replies
- 2 kudos
Availability of SQL Warehouse to Data Science and Engineering persona
Hi All, now we can use SQL Warehouse in our notebook execution. It's in preview now and will soon be GA.
- 3143 Views
- 1 replies
- 9 kudos
Refreshing SQL Dashboard
You can schedule the dashboard to automatically refresh at an interval. At the top of the page, click Schedule. If the dashboard already has a schedule, you see Scheduled instead of Schedule. Select an interval, such as Every 1 h...
- 16554 Views
- 5 replies
- 2 kudos
SQL Warehouse high number of concurrent queries
We are going to be a Databricks customer and did some PoC tests. One test contains a dataset in a single partitioned table (15 columns) of roughly 250M rows; each partition holds ~50K-150K rows. Occasionally we have hundreds (up to one thousand) concurrent u...
- 2 kudos
Hi @Marian Kovac, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thank...
- 5225 Views
- 4 replies
- 6 kudos
SQL query execution plan: how to explain and optimize query performance
When executing a SQL query in the Databricks SQL warehouse editor, what are the best practices to optimize the execution plan and get results faster?
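One concrete starting point is Spark SQL's `EXPLAIN` statement, which shows the plan the warehouse will run, and `ANALYZE TABLE`, which collects statistics the optimizer uses to choose better plans. A sketch (the `orders` table and its columns are hypothetical, for illustration only):

```sql
-- Show the formatted physical plan for a query before running it:
EXPLAIN FORMATTED
SELECT customer_id, SUM(amount)
FROM orders
WHERE order_date >= '2023-01-01'
GROUP BY customer_id;

-- Collect column-level statistics so the optimizer can pick
-- better join strategies and prune more effectively:
ANALYZE TABLE orders COMPUTE STATISTICS FOR ALL COLUMNS;
```

Reading the plan for full scans versus partition/file pruning, and checking that filters land early, is usually the quickest way to spot what to fix.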
- 6 kudos
Hi @vinay kumar, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks...