- 1441 Views
- 0 replies
- 0 kudos
Is there a preferred method for hosting an ODBC connection to a warehouse on a server for use by a report server (SSRS/PBIRS)? I know the ODBC driver doesn't support pass-through authentication, so is there a way to configure it with an unattended ac...
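On the unattended-account question above: one common pattern is a DSN that authenticates with a service account's personal access token, which the report server then uses for every connection. A sketch in odbc.ini form, using key names from the Simba Spark ODBC driver that Databricks ships; all values are placeholders:

```ini
[Databricks-Warehouse]
; Linux driver path shown; on Windows the driver is selected in the DSN UI
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=<workspace-host>.cloud.databricks.com
Port=443
HTTPPath=/sql/1.0/warehouses/<warehouse-id>
SSL=1
ThriftTransport=2
; AuthMech=3 = username/password: username is the literal word "token",
; password is the service account's personal access token
AuthMech=3
UID=token
PWD=<personal-access-token>
```

On Windows (where SSRS/PBIRS run) the same keys are set per-DSN in the ODBC Data Source Administrator rather than in odbc.ini.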
- 997 Views
- 0 replies
- 0 kudos
I am having issues with Databricks SQL and dbt at the moment. When running a query, a string column is somehow converted to a number. Does anybody have any idea why this would be happening?
by
cyong
• New Contributor II
- 1628 Views
- 1 replies
- 0 kudos
Hi, currently we are using Power BI as the semantic layer because it allows us to build custom measures for aggregates and business logic calculations, and provides a native connection to Excel. I am thinking of moving this logic to Databricks using S...
Latest Reply
Thanks @Retired_mod, I think Power Query can only perform pre-load transformations, not on-the-fly calculations in response to user filters.
- 3072 Views
- 1 replies
- 0 kudos
Using the Delta Sharing connector with Power BI, does it only work in import mode, with no support for DirectQuery currently?
Latest Reply
@scrimpton Currently it only supports import mode: https://learn.microsoft.com/en-us/power-query/connectors/delta-sharing
- 1112 Views
- 0 replies
- 0 kudos
Hi, I'm implementing a Databricks connector using the ODBC driver and currently working on the functionality to cancel an ongoing SQL statement. However, I can't seem to find any ODBC function or SQL function to do so. The only other alternative I see i...
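On the cancellation question: ODBC does define the SQLCancel function (and SQLCancelHandle in ODBC 3.8) for aborting an in-flight statement, and in Python pyodbc exposes it as Cursor.cancel(), which must be called from a different thread than the one blocked in execute(). A minimal sketch; the helper name and timeout are my own, and it assumes a pyodbc-style cursor object:

```python
import threading

def cancel_after(cursor, seconds):
    """Schedule cursor.cancel() from a background thread after `seconds`.

    pyodbc's Cursor.cancel() calls the ODBC SQLCancel function, asking the
    driver to abort the statement currently executing on this cursor; the
    thread blocked in execute() then raises an error. Returns the timer so
    the caller can cancel it if the query finishes in time.
    """
    timer = threading.Timer(seconds, cursor.cancel)
    timer.daemon = True  # don't keep the process alive just for the timer
    timer.start()
    return timer
```

Typical use: `t = cancel_after(cursor, 30)`, then `cursor.execute(long_query)`, then `t.cancel()` once the query returns normally.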
- 9187 Views
- 5 replies
- 10 kudos
I am trying to connect to Databricks from DBeaver and getting this error message:
[Databricks][DatabricksJDBCDriver](500593) Communication link failure. Failed to connect to server. Reason: javax.net.ssl.SSLHandshakeException: PKIX path building fa...
Latest Reply
Hardy
New Contributor III
I have the same issue after upgrading cluster to DBR 12.2. Working fine with DBR 10.4
4 More Replies
- 4235 Views
- 2 replies
- 1 kudos
Hello, I created a SQL warehouse (cluster size = 2X-Small) and wanted to use it to execute a query via the SQL statement API:
- url: https://databricks-host/api/2.0/preview/sql/statements
- params = {'warehouse_id': 'warehouse_id', 'statement': 'SELECT ...
Latest Reply
@Yahya24 Can you please remove "preview" from the path? These endpoints are not in preview any more: "/api/2.0/sql/statements/". You should see a JSON response; please check the drop-down menu and change it to JSON. Sometimes it may be set to text, but the usual respo...
1 More Replies
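To illustrate the reply above, here is a sketch of building a call to the non-preview Statement Execution API path using only the standard library. The host, token, and warehouse ID are placeholders; `wait_timeout` is an optional parameter of the API:

```python
import json
import urllib.request

def build_statement_request(host, token, warehouse_id, statement):
    """Build a POST request for the GA Statement Execution API.

    Note the path has no 'preview' segment: /api/2.0/sql/statements/.
    The caller would pass the result to urllib.request.urlopen().
    """
    url = f"https://{host}/api/2.0/sql/statements/"
    payload = {
        "warehouse_id": warehouse_id,
        "statement": statement,
        "wait_timeout": "30s",  # optional: wait up to 30s for the result inline
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the statement in a JSON body (rather than as query params) is what makes the endpoint return the JSON response mentioned in the reply.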
- 6043 Views
- 2 replies
- 2 kudos
I've been doing some testing with Partitions vs Z-Ordering to optimize the merge process. As the documentation says, tables smaller than 1TB should not be partitioned and can benefit from the Z-Ordering process to optimize the reading process. Analyzin...
- 2440 Views
- 1 replies
- 2 kudos
How do you handle reporting monthly trends within a data lakehouse? Can this be done with time travel to get the table state at the end of each month, or is it better practice to build a data warehouse with SCD types? We are new to Databricks and lak...
Latest Reply
@Mswedorske IMO it would be better to use SCD. When you run VACUUM on a table, it removes the data files that are needed for time travel, so relying on time travel is not the best choice.
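A toy illustration of the SCD Type 2 approach suggested above, using plain Python dicts in place of a Delta table (the function names are my own): each change closes the current version of a row and appends a new one, so a month-end snapshot is just a filter on the validity interval rather than a time-travel query.

```python
from datetime import date

def scd2_upsert(history, key, new_row, as_of):
    """Toy SCD Type 2 upsert: close the open version for `key`,
    then append the new version valid from `as_of`."""
    for rec in history:
        if rec["key"] == key and rec["valid_to"] is None:
            rec["valid_to"] = as_of  # close the superseded version
    history.append({"key": key, **new_row,
                    "valid_from": as_of, "valid_to": None})

def state_as_of(history, as_of):
    """Rows valid on `as_of` -- e.g. the month-end snapshot."""
    return [r for r in history
            if r["valid_from"] <= as_of
            and (r["valid_to"] is None or as_of < r["valid_to"])]
```

In Databricks the same semantics would typically be implemented with a MERGE into a Delta table carrying `valid_from`/`valid_to` columns; the snapshot then survives VACUUM because it lives in the data, not in old file versions.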
by
BamBam
• Databricks Partner
- 3428 Views
- 1 replies
- 0 kudos
In an All-Purpose Cluster, it is pretty easy to get at the Driver logs. Where do I find the Driver Logs for a SQL Pro Warehouse? The reason I ask is because sometimes in a SQL Editor we get generic error messages like "Task failed while writing row...
by
Kaz
• New Contributor II
- 5073 Views
- 4 replies
- 1 kudos
Within our team, there are certain (custom) python packages we always use and import in the same way. When starting a new notebook or analysis, we have to import these packages every time. Is it possible to automatically make these imports available ...
Latest Reply
@Kaz
You can install these libraries via the Libraries tab on the cluster's Compute page.
All of the libraries mentioned here would be installed whenever the cluster is spun up.
3 More Replies
- 4281 Views
- 1 replies
- 1 kudos
Is there a way to create a calculated field in a dashboard from the data that has been put into it? I have an aggregated dataset that goes into a dashboard, but using an average in the calculation will only work if I display the average by the grouped...
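On the averaging problem above: an average computed over pre-aggregated rows weights every group equally, which is usually wrong. The common fix is to carry sum and count (or average and count) columns into the dashboard dataset and re-derive the ratio there. A toy sketch of the difference:

```python
def naive_avg_of_avgs(groups):
    """Average the per-group averages -- wrong for grouped data,
    because a group of 1 row counts as much as a group of 9."""
    return sum(g["avg"] for g in groups) / len(groups)

def weighted_avg(groups):
    """Correct: reconstruct total / count from the aggregated columns."""
    total = sum(g["avg"] * g["n"] for g in groups)
    count = sum(g["n"] for g in groups)
    return total / count
```

So rather than a calculated field that averages the aggregated column, the dataset query should expose the underlying SUM and COUNT so the dashboard can divide them.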
by
Erik
• Valued Contributor III
- 9746 Views
- 0 replies
- 1 kudos
We have a setup where we process sensor data in Databricks using PySpark Structured Streaming from Kafka streams, and continuously write it to Delta tables. These Delta tables are served to users through a SQL warehouse endpoint. We also store ...