- 1535 Views
- 2 replies
- 0 kudos
When running the following code, we can observe a lost thread that never terminates:
@Test
public void pureConnectionErrorTest() throws Exception {
    try {
        DriverManager.getConnection(DATABRICKS_JDBC_URL, DATABRICKS_USERNAME, DATABRICKS_PASS...
Latest Reply
This issue is reported as fixed since v2.6.34. I validated version 2.6.36 and it works normally. Many thanks to the developers for the work done!
1 More Replies
- 311 Views
- 1 reply
- 0 kudos
How do I do a simple left join of a static table and a streaming table under a catalog in a Delta Live Tables streaming pipeline?
Latest Reply
Hi @rt-slowth, I would like to share with you the Databricks documentation, which contains details about stream-static table joins:
https://docs.databricks.com/en/delta-live-tables/transform.html#stream-static-joins
Stream-static joins are a good choic...
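As a rough sketch of what the documentation describes, a stream-static left join in a DLT Python pipeline might look like the following. This only runs inside a Delta Live Tables pipeline, and the table names are hypothetical placeholders:

```python
# Sketch only: runs inside a Delta Live Tables pipeline, not standalone.
# Table names (catalog.schema.orders, catalog.schema.customers) are hypothetical.
import dlt

@dlt.table
def enriched_orders():
    # Streaming side: each micro-batch of newly arrived orders
    orders = spark.readStream.table("catalog.schema.orders")
    # Static side: read in full against the latest version for each batch
    customers = spark.read.table("catalog.schema.customers")
    # Left join keeps every streaming row, with nulls where no match exists
    return orders.join(customers, on="customer_id", how="left")
```

Note that the static side is re-read per micro-batch, so late updates to the static table are picked up by subsequent batches.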
- 2437 Views
- 6 replies
- 3 kudos
Hello, we have Databricks Python workbooks accessing Delta tables. These workbooks are scheduled/invoked by Azure Data Factory. How can I enable Photon on the linked services that are used to call Databricks? If I specify new job cluster, there does n...
Latest Reply
When you create a cluster on Databricks, you can enable Photon by selecting the "Photon" option in the cluster configuration settings. This is typically done when creating a new cluster, and you would find the option in the advanced cluster configura...
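For a job cluster defined through the API (for example, the cluster spec supplied by an ADF linked service), Photon is requested via the runtime engine field. A minimal sketch follows; the runtime version and node type are placeholders you would replace with values valid in your workspace:

```python
# Minimal sketch of a Jobs API new_cluster spec with Photon enabled.
# spark_version and node_type_id values are placeholders; check your workspace
# for the versions and node types actually available to you.
new_cluster = {
    "spark_version": "14.3.x-photon-scala2.12",  # a Photon runtime image
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "runtime_engine": "PHOTON",  # explicitly request the Photon engine
}

print(new_cluster["runtime_engine"])
```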
5 More Replies
- 526 Views
- 2 replies
- 0 kudos
The documentation is a little ambiguous: "Row-level concurrency is only supported on tables without partitioning, which includes tables with liquid clustering." https://docs.databricks.com/en/release-notes/runtime/14.2.html Tables with liquid clusterin...
Latest Reply
Cluster-on-write is something being worked on. The limitations at the moment have to do with accommodating streaming workloads. I found the following informative: https://www.youtube.com/watch?v=5t6wX28JC_M
1 More Replies
- 1691 Views
- 1 reply
- 1 kudos
Hi all. Just trying to implement ADB SQL scripts using the IDENTIFIER clause, but I get errors like the following when running this example:
DECLARE mytab = 'tab1';
CREATE TABLE IDENTIFIER(mytab) (c1 INT);
[UNSUPPORTED_FEATURE.TEMP_VARIABLE_ON_DBSQL] The feature is not supp...
Latest Reply
@RobsonNLPT - Engineering is still working on the feature that allows DECLARE statements in DBSQL, with a tentative ETA of Feb 20, available on the preview channel.
- 1362 Views
- 4 replies
- 0 kudos
Hello, could anyone please help with the scenario below?
Scenario:
- I'm using the DLT SQL language
- Parquet files are landed each day from a source system.
- Each day, the data contains the 7 previous days of data. The source system can have very la...
Latest Reply
Yes, it is available in DLT. Check this document: https://docs.databricks.com/en/delta-live-tables/cdc.html
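The linked CDC page describes APPLY CHANGES. A rough Python sketch of the idea follows; it only runs inside a DLT pipeline, and the table, view, and column names are hypothetical:

```python
# Sketch only: runs inside a Delta Live Tables pipeline, not standalone.
# Source/target names and columns are hypothetical placeholders.
import dlt

# Declare the target table that APPLY CHANGES will maintain
dlt.create_streaming_table("target_table")

dlt.apply_changes(
    target="target_table",
    source="daily_landing_view",    # view over the daily Parquet drops
    keys=["business_key"],          # identifies a logical row
    sequence_by="event_timestamp",  # orders overlapping/late records
)
```

Because the source re-delivers the previous 7 days each run, `sequence_by` is what lets DLT keep only the latest version of each key rather than duplicating rows.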
3 More Replies
by Phani1 • Valued Contributor
- 736 Views
- 2 replies
- 1 kudos
Hi Team, Databricks recommends storing data in a cloud storage location, but if we directly connect to Snowflake using the Snowflake connector, will we face any performance issues? Could you please suggest the best way to read a large volume of data f...
- 359 Views
- 1 reply
- 1 kudos
I have created a file NBF_TextTranslation:
spark = SparkSession.builder.getOrCreate()
df_TextTranslation = spark.read.format('delta').load(textTranslation_path)
def getMediumText(TextID, PlantName):
    df1 = spark.sql("SELECT TextID, PlantName, Langu...
Latest Reply
You should create a UDF on top of the getMediumText function and then use the UDF in the SQL statement.
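One way to follow this advice, as a sketch: it assumes an active SparkSession, that `getMediumText` is already defined as in the question, and that it returns a string:

```python
# Sketch: register getMediumText as a SQL UDF (requires pyspark and an
# active SparkSession; the function name and signature follow the question).
from pyspark.sql.types import StringType

spark.udf.register("getMediumText", getMediumText, StringType())

# It can then be called from SQL, e.g.:
# spark.sql("SELECT getMediumText(TextID, PlantName) AS medium_text FROM ...")
```

Note that a UDF body cannot itself call `spark.sql`, so the lookup logic inside `getMediumText` would need to work on plain Python values (for example, against a broadcast dictionary) rather than issuing Spark queries per row.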
by Volker • New Contributor III
- 3077 Views
- 4 replies
- 2 kudos
Hello Databricks Community, we are currently looking for a way to persist and manage our Unity Catalog tables in an IaC manner. That is, we want to trace any changes to a table's schema and properties and ideally be able to roll back those changes sea...
Latest Reply
As you mentioned, using notebooks with Data Definition Language (DDL) scripts is a viable option. You can create notebooks that contain the table creation scripts and version control these notebooks along with your application code.
3 More Replies
by Heman2 • Valued Contributor II
- 6420 Views
- 4 replies
- 21 kudos
Is there any way to export the output data in Excel format to DBFS? I'm only able to do it in CSV format.
Latest Reply
The easiest way I found is to create a dashboard and export from there. It will enable a context menu with options to export to some file types, including CSV and Excel.
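As an alternative to the dashboard export, one common approach is to convert the Spark DataFrame to pandas and write an .xlsx file to a DBFS path. This is a sketch that assumes pandas and the openpyxl package are available on the cluster, and the output path is a placeholder:

```python
# Sketch: write a Spark DataFrame `df` to an Excel file on DBFS via pandas.
# Assumes openpyxl is installed on the cluster; the path is a placeholder.
pdf = df.toPandas()                    # collects to the driver: small data only
pdf.to_excel("/dbfs/tmp/output.xlsx",  # /dbfs/ exposes DBFS as a local path
             index=False,
             engine="openpyxl")
```

The `/dbfs/` FUSE mount is what lets pandas write directly to DBFS as if it were a local filesystem.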
3 More Replies
- 3209 Views
- 7 replies
- 2 kudos
Hi all, I have a task of type Notebook, source is Git (Azure DevOps). This task runs fine with my user, but if I change the Owner to a service principal, I get the following error: Run result unavailable: run failed with error message Failed to checkout...
Latest Reply
@pgruetter: To enable a service principal to access a specific Azure DevOps repository, you need to grant it the necessary permissions at both the organization and repository levels. Here are the steps to grant the service principal the necessary per...
6 More Replies
by sher • Valued Contributor II
- 487 Views
- 2 replies
- 1 kudos
I want to read the column mapping metadata described at https://github.com/delta-io/delta/blob/master/PROTOCOL.md#column-mapping. In the above link we can find a code block with JSON data; I want to read the same data in PySpark. Is there any option to read that ...
Latest Reply
Hi, information about a Delta table, such as its history, can be found by running `describe history table_name`. A `rename column` operation can be found in the `operation` column with a value of `RENAME COLUMN`. If you then look at the ...
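Reading that history from PySpark might look like the following sketch; it assumes an active SparkSession, and the table name is a hypothetical placeholder:

```python
# Sketch: find RENAME COLUMN operations in a Delta table's history.
# Requires an active SparkSession; the table name is hypothetical.
history = spark.sql("DESCRIBE HISTORY my_catalog.my_schema.my_table")
renames = history.filter(history.operation == "RENAME COLUMN")
renames.select("version", "timestamp", "operationParameters").show(truncate=False)
```

The `operationParameters` column holds the JSON-like details of each operation, which is where the old and new column names appear.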
1 More Replies
by QQ • New Contributor III
- 2572 Views
- 2 replies
- 0 kudos
Latest Reply
I found the solution: I forgot to create SaaS users with the same subject as the AD users. Preprovisioned users means users must already exist in the downstream SaaS application. For instance, you may need to create SaaS users with the s...
1 More Replies
- 920 Views
- 1 reply
- 1 kudos
Hi all, wondering if anyone else is getting this problem: we are trying to host krb5.conf and jaas.conf for our compute to be able to connect to Kerberized JDBC sources. We are attempting to store these files in Catalog volumes, but at run time, when initiating th...
Latest Reply
I haven't been able to access a volume path when using the JDBC format.
by Sas • New Contributor II
- 398 Views
- 1 reply
- 0 kudos
Hi, I am new to Databricks and I am trying to understand the use case of the Delta Lakehouse. Is it a good idea to build a data warehouse using the Delta Lake architecture? Will it give the same performance as an RDBMS cloud data warehouse like Snowflake? Whic...
Latest Reply
Hi @Sas ,
One of the benefits of the Data Lakehouse architecture is that it combines the best of both Data Warehouses and Data Lakes on one unified platform to help you reduce costs and deliver on your data and AI initiatives faster. It brings t...