Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.
Data + AI Summit 2024 - Data Warehousing, Analytics, and BI

Forum Posts

MadelynM
by Databricks Employee
  • 826 Views
  • 0 replies
  • 0 kudos

[Recap] Data + AI Summit 2024 - Warehousing & Analytics | Improve performance and increase insights

Here's your Data + AI Summit 2024 - Warehousing & Analytics recap as you use intelligent data warehousing to improve performance and increase your organization’s productivity with analytics, dashboards and insights.  Keynote: Data Warehouse presente...

Warehousing & Analytics
AI BI Dashboards
AI BI Genie
Databricks SQL
Anonymous
by Not applicable
  • 651 Views
  • 1 replies
  • 0 kudos
Latest Reply
sajith_appukutt
Honored Contributor II

The frequency of logs in stdout/stderr is a function of the code you run on the Databricks clusters. The default log level for log4j is INFO; you can change it by following the instructions here.
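As a loose stand-in for the log4j behaviour described above, here is a runnable sketch using Python's stdlib logging (not log4j itself): the default level lets INFO through, and raising the level suppresses it. The logger name and buffer are illustrative only.

```python
# Stand-in sketch using Python's stdlib logging (not log4j itself):
# raising the log level suppresses lower-severity messages, which is the
# same knob you turn to quiet INFO-level log4j output on a cluster.
import io
import logging

buf = io.StringIO()
logger = logging.getLogger("demo")            # illustrative logger name
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.INFO)                 # analogous to log4j's default INFO

logger.info("noisy INFO message")             # emitted
logger.setLevel(logging.WARNING)              # analogous to raising log4j to WARN
logger.info("suppressed INFO message")        # filtered out
logger.warning("important WARN message")      # still emitted

print(buf.getvalue())
```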

Digan_Parikh
by Valued Contributor
  • 1372 Views
  • 1 replies
  • 0 kudos

Resolved! DBSQL connection to other BI tools

How do I connect DBSQL to other BI tools?

Latest Reply
Digan_Parikh
Valued Contributor

Generally, you can connect to a SQL endpoint using an ODBC or JDBC driver. More information can be found here: https://docs.databricks.com/integrations/bi/index-sqla.html
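As a sketch of what such a connection string looks like, here is the general shape of a Databricks JDBC URL assembled in Python. The host and HTTP path below are made-up placeholders, and the exact parameter set depends on your driver version, so copy the real values from the endpoint's Connection Details tab.

```python
# Sketch only: assembling the general shape of a Databricks JDBC URL.
# `host` and `http_path` are hypothetical placeholders; take the real
# values from your SQL endpoint's "Connection Details" tab.
host = "adb-1234567890123456.7.azuredatabricks.net"
http_path = "/sql/1.0/endpoints/abc123def456"

jdbc_url = (
    f"jdbc:databricks://{host}:443/default;"
    "transportMode=http;ssl=1;AuthMech=3;"   # AuthMech=3: user "token" + PAT as password
    f"httpPath={http_path}"
)
print(jdbc_url)
```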

User16826992666
by Valued Contributor
  • 1651 Views
  • 2 replies
  • 0 kudos
Latest Reply
Digan_Parikh
Valued Contributor

When sizing, this is the recommendation (data set size → cluster size):

1TB / … rows → X-Large+
500GB / 1B rows → X-Large
50GB / 100M+ rows → Large
100GB / … rows → Medium
10GB / …M rows → Small

This table maps SQL endpoint cluster sizes to Databricks cluster driver sizes and wo...

User16826987838
by Contributor
  • 766 Views
  • 1 replies
  • 0 kudos
Latest Reply
User16826994223
Honored Contributor III

Second question: yes. Just don't grant CAN_RUN to a user/group. https://docs.databricks.com/sql/user/security/access-control/dashboard-acl.html#dashboard-permissions

User16826992666
by Valued Contributor
  • 4036 Views
  • 1 replies
  • 0 kudos

Resolved! Can I implement Row Level Security for users when using SQL Endpoints?

I'd like to be able to limit the rows users see when querying tables in Databricks SQL based on what access level each user is supposed to be granted. Is this possible in the SQL environment?

Latest Reply
sajith_appukutt
Honored Contributor II

Using dynamic views you can specify permissions down to the row or field level, e.g.:

CREATE VIEW sales_redacted AS
SELECT user_id, country, product, total
FROM sales_raw
WHERE CASE WHEN is_member('managers') THEN TRUE ELSE total <= 1...
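The same row-level gate can be illustrated in plain Python. This is only a simulation of the view's CASE logic, not Spark, and the 1,000,000 threshold is an assumed value since the original snippet is truncated.

```python
# Plain-Python simulation of the dynamic view's row filter; not Spark.
# The 1_000_000 threshold is assumed (the original CASE expression is truncated).
sales_raw = [
    {"user_id": 1, "country": "US", "product": "A", "total": 2_500_000},
    {"user_id": 2, "country": "DE", "product": "B", "total": 400_000},
]

def is_member(group, user_groups):
    # Stand-in for Databricks SQL's is_member() function
    return group in user_groups

def visible_rows(rows, user_groups, threshold=1_000_000):
    # Mirrors: CASE WHEN is_member('managers') THEN TRUE ELSE total <= threshold END
    if is_member("managers", user_groups):
        return rows
    return [r for r in rows if r["total"] <= threshold]

print(len(visible_rows(sales_raw, {"managers"})))  # managers see every row
print(len(visible_rows(sales_raw, {"analysts"})))  # others see only small totals
```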

User16826992666
by Valued Contributor
  • 1405 Views
  • 1 replies
  • 0 kudos

Resolved! Should I enable Photon on my SQL Endpoint?

I see the option to enable Photon when creating a new SQL Endpoint. The description says that enabling it helps speed up queries, which sounds good, but are there any downsides I need to be aware of?

Latest Reply
Ryan_Chynoweth
Esteemed Contributor

Generally, yes, you should enable Photon. The majority of functionality is available and will perform extremely well. There are some limitations, which can be found here. Limitations: works on Delta and Parquet tables only, for both read and writ...

User16826992666
by Valued Contributor
  • 1773 Views
  • 1 replies
  • 0 kudos

Resolved! How can I see the performance of individual queries in Databricks SQL?

If I want to get more information about how an individual query is performing in the Databricks SQL environment, is there anywhere I can see that?

Latest Reply
sajith_appukutt
Honored Contributor II

You can see details on the different queries that ran against an endpoint under the Query History section.

User16826992666
by Valued Contributor
  • 5161 Views
  • 1 replies
  • 0 kudos

Resolved! Can you run Structured Streaming on a job cluster?

I need to know if I can use job clusters to start and run streaming jobs, or if it has to be interactive.

Latest Reply
sajith_appukutt
Honored Contributor II

Yes. Here is a doc containing some info on running Structured Streaming in production using Databricks jobs

User16788316451
by New Contributor II
  • 991 Views
  • 1 replies
  • 0 kudos

How to troubleshoot SSL certificate errors while connecting Business Intelligence (BI) tools to Databricks in a Private Cloud (PVC) environment?

How to troubleshoot SSL certificate errors while connecting Business Intelligence (BI) tools to Databricks in a Private Cloud (PVC) environment?

Latest Reply
User16788316451
New Contributor II

See attached for steps to inspect the certificate chain using openssl

Srikanth_Gupta_
by Valued Contributor
  • 1970 Views
  • 1 replies
  • 1 kudos

Reading bulk CSV files from Spark

While trying to read a 100GB csv.gz file from Spark, which is taking forever, what are the best options to read this file faster?

Latest Reply
sean_owen
Databricks Employee

Part of the problem here is that .gz files are not splittable. If you have one huge 100GB .gz file, it can only be processed by one task. Can you change your input to use a splittable compression like .bz2? It'll work much better.
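If you cannot change the compression at the source, one workaround consistent with the advice above is to split the single .gz into many smaller .gz files, so Spark gets one task per file. A stdlib sketch under that assumption (function and file names are made up):

```python
# Sketch: re-chunk one large .csv.gz into several smaller .csv.gz files
# so Spark can schedule one task per file. Paths and names are illustrative.
import gzip
import itertools
import os
import tempfile

def split_gzip_csv(src, out_dir, lines_per_chunk=100_000):
    os.makedirs(out_dir, exist_ok=True)
    parts = []
    with gzip.open(src, "rt") as f:
        header = f.readline()                 # repeat the header in every part
        for i in itertools.count():
            chunk = list(itertools.islice(f, lines_per_chunk))
            if not chunk:
                break
            path = os.path.join(out_dir, f"part-{i:05d}.csv.gz")
            with gzip.open(path, "wt") as out:
                out.write(header)
                out.writelines(chunk)
            parts.append(path)
    return parts

# Tiny demo: 5 data rows split into chunks of 2 -> 3 part files
demo_dir = tempfile.mkdtemp()
src = os.path.join(demo_dir, "big.csv.gz")
with gzip.open(src, "wt") as f:
    f.write("a,b\n")
    f.writelines(f"{i},{i}\n" for i in range(5))

parts = split_gzip_csv(src, os.path.join(demo_dir, "out"), lines_per_chunk=2)
print(len(parts))  # 3
```

Each output file is independently decompressible, so a subsequent read over the directory can parallelize across files.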

User16826992666
by Valued Contributor
  • 1138 Views
  • 1 replies
  • 0 kudos
Latest Reply
sean_owen
Databricks Employee

No. If you use %pip or %conda to attach a library, then it will only affect the execution of the notebook. A separate virtualenv is created for each notebook and its dependencies, even on a shared cluster. If you create a Library in the workspace and ...

User16753724663
by Valued Contributor
  • 4902 Views
  • 1 replies
  • 0 kudos

Unable to use JDBC/ODBC url with sql workbench

SQL Workbench is not able to connect to Cluster using JDBC/ODBC connection. Getting the following error. I used the configuration provided by the cluster (jdbc:spark://<host>.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=sql/prot...

Latest Reply
User16753724663
Valued Contributor

Since we are getting a 401 error, this is an authentication issue. We should use a personal access token for the password: the username should be "token" and the password should be the PAT token.

User16753724663
by Valued Contributor
  • 1619 Views
  • 1 replies
  • 0 kudos

Unable to install kneed library in cluster with DBR version 5.5 LTS

I have an issue installing and using the kneed Python library (https://pypi.org/project/kneed/). I can install it and check it from the log.

[Install command]
%sh pip install kneed

[log]
Installing collected packages: kneed
Successfully installed kneed-0.7.0

but when I c...

Latest Reply
User16753724663
Valued Contributor

The kneed library has dependencies and we need to install them as well in order for it to work: numpy==1.18, scipy==1.1.0, scikit-learn==0.21.3. Once we install the above libraries using the GUI, we can run the below command to check the installed library with the cor...
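If you prefer keeping these pins in one place, the same versions can go into a requirements-style file. The versions are copied from the reply above; the kneed pin itself is assumed from the install log.

```
numpy==1.18
scipy==1.1.0
scikit-learn==0.21.3
kneed==0.7.0
```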

User16753724663
by Valued Contributor
  • 2875 Views
  • 1 replies
  • 0 kudos

Unable to construct the sql url as the password is having special characters.

While using sqlalchemy, unable to connect with SQL server from Databricks:

user='user@host.mysql.database.azure.com'
password='P@test'
host="host.mysql.database.azure.com"
database = "db"
connect_args={'ssl':{'fake_flag_to_enable_tls': True}}
conn...

Latest Reply
User16753724663
Valued Contributor

We can use urllib.parse to handle special characters. Here is an example:

import urllib.parse
user='user@host.mysql.database.azure.com'
password=urllib.parse.quote_plus("P@test")
host="host.mysql.database.azure.com"
database = "db"
connect_args={'...
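For completeness, a runnable sketch of the quoting step. The host and database names are the placeholders from the question, and the exact SQLAlchemy+PyMySQL URL shape is an assumption, since the original snippet is truncated.

```python
# Runnable sketch: percent-encode credentials that contain '@' before
# splicing them into a SQLAlchemy URL. Host/database are the question's
# placeholders; the mysql+pymysql URL shape is assumed.
import urllib.parse

user = urllib.parse.quote_plus("user@host.mysql.database.azure.com")
password = urllib.parse.quote_plus("P@test")
host = "host.mysql.database.azure.com"
database = "db"

url = f"mysql+pymysql://{user}:{password}@{host}/{database}"
print(password)  # P%40test
print(url)
```

Note that the username here also contains an '@' and must be encoded, otherwise the URL parser treats everything after the first '@' as the host.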

