Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
What's the best way to organize our data lake and Delta setup? We're trying to use the bronze, silver, and gold classification strategy. The main question is: how do we know what classification the data has inside Databricks if there's no actual physica...
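A common convention (a sketch only; the catalog, schema, and table names below are hypothetical) is to encode the layer in Unity Catalog schema names and tag each table with a property, so the classification is discoverable even without physical separation:

```python
# Hypothetical names; the layer lives in the schema name and a table property.
for layer in ("bronze", "silver", "gold"):
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS lakehouse.{layer}")

# Tagging the table makes the classification queryable via table properties.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.bronze.raw_orders (id BIGINT, payload STRING)
    TBLPROPERTIES ('medallion.layer' = 'bronze')
""")
```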
Hi Everyone, I am using the below SQL query to generate the days in order in Hive, and it is working fine. The table got migrated to Delta and my query is failing. It would be appreciated if someone could help me figure out the issue. SQL Query: with ex...
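Without the full query it's hard to pinpoint the failure, but one pattern that generates consecutive days and works in Spark SQL on Delta is sequence() plus explode(); a minimal sketch (the date range is made up):

```python
# A minimal sketch: generate one row per day with sequence() + explode().
# The date range below is made up.
days = spark.sql("""
    SELECT explode(sequence(DATE'2024-01-01', DATE'2024-01-31', INTERVAL 1 DAY)) AS day
""")
days.show()
```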
Step 1: Download and Reference the JDBC Driver. Download the Databricks JDBC Driver: visit the Databricks JDBC Driver download page; download the appropriate version for your operating system; extract the DatabricksJDBC42.jar file from the downloaded ...
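Once the jar is referenced, a PySpark connection might look like the sketch below; the host, HTTP path, and token are placeholders, and com.databricks.client.jdbc.Driver is the class name shipped with the driver:

```python
# A sketch of reading over JDBC with the Databricks driver.
# <workspace-host>, <http-path>, and <personal-access-token> are placeholders.
jdbc_url = (
    "jdbc:databricks://<workspace-host>:443/default;"
    "transportMode=http;ssl=1;httpPath=<http-path>;"
    "AuthMech=3;UID=token;PWD=<personal-access-token>"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("driver", "com.databricks.client.jdbc.Driver")
    .option("query", "SELECT 1")
    .load()
)
df.show()
```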
Hi @subhankar, good day!
Looking at the error you are getting, it seems to be looking for some kind of JVM file, probably via the JAVA_HOME variable. It looks as if that variable is not set correctly in your environment variables.
...
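A quick way to sanity-check the variable from Python (a trivial sketch; on Windows the binary is java.exe):

```python
import os

# Print JAVA_HOME and verify it points at a real JDK installation.
java_home = os.environ.get("JAVA_HOME")
print("JAVA_HOME =", java_home)
if java_home:
    print("java binary exists:", os.path.exists(os.path.join(java_home, "bin", "java")))
```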
I'm trying to debug a task that is a DLT workflow. I've tried putting in log statements and print statements, but I can't seem to see the output in the event log after the run, nor can I see the print statements anywhere. Can someone point me to whe...
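For what it's worth, one pattern that has worked for me is the standard Python logging module inside the pipeline source; those messages show up in the pipeline's driver logs rather than in the event log. A sketch (the logger, table name, and upstream source are made up):

```python
import logging
import dlt

logger = logging.getLogger("my_pipeline")  # hypothetical logger name

@dlt.table(name="debug_example")  # hypothetical table name
def debug_example():
    df = spark.read.table("samples.nyctaxi.trips")  # placeholder source
    # This runs when the pipeline update builds the graph; look for it
    # in the driver logs, not the event log.
    logger.warning("schema at build time: %s", df.schema.simpleString())
    return df
```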
Hi, hope you are doing well. I was trying to extract a specific email attachment from Outlook and inject it into the DBFS location, but something went wrong. Could you please help? I am hereby giving the code which I used: import imaplib import em...
If you face issues with IMAP, consider using the Microsoft Graph API for email access. It provides robust support for Outlook without handling IMAP details and enhances security with OAuth2 tokens. Below is a sample script, but I haven't tested it: pip ...
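To make the Graph suggestion concrete, a hedged sketch using msal for the OAuth2 token (tenant, client, and mailbox values are placeholders; error handling omitted):

```python
import msal
import requests

# Placeholders: fill in your Azure AD tenant, app registration, and mailbox.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
USER = "someone@example.com"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# List recent messages that carry attachments.
resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER}/messages"
    "?$filter=hasAttachments eq true&$top=5",
    headers=headers,
)
resp.raise_for_status()
for msg in resp.json()["value"]:
    print(msg["subject"], msg["id"])
```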
Hi, I am working on a Databricks workspace setup on AWS and trying to use a Service Principal to execute API calls for CI/CD deployment through Bitbucket. So I created a secret for the service principal and tried to test the token. The test failed with the below...
I have been able to resolve this issue. Apparently you need to generate the access token using the service principal's client ID and client secret. saurabh18cs's solution is more relevant to Azure Databricks. I got the below link from Databricks, which provides generic...
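For anyone landing here later, the cloud-agnostic flow is an OAuth client-credentials call against the workspace token endpoint; a sketch (workspace URL and credentials are placeholders):

```python
import requests

WORKSPACE = "https://<workspace-host>"  # placeholder
CLIENT_ID = "<service-principal-application-id>"
CLIENT_SECRET = "<oauth-secret>"

# Exchange the service principal's client ID/secret for a workspace token.
resp = requests.post(
    f"{WORKSPACE}/oidc/v1/token",
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={"grant_type": "client_credentials", "scope": "all-apis"},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Use it as a Bearer token against the REST API.
me = requests.get(
    f"{WORKSPACE}/api/2.0/preview/scim/v2/Me",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(me.status_code)
```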
Dear Team, I have successfully completed the Databricks Fundamentals training and aced the certificate quiz with a perfect score of 200 out of 200. However, I have not yet received the certificate. Can you please let me know the expected timeline for ...
Hi, I'm executing a simple merge; however, it always gets stuck at "MERGE operation - scanning files for matches". Both Delta tables are not big (the source has about 100 MiB in 1 file and the target has 1.5 GiB in 7 files), so it should be quite a fast operation, however ...
Well, in the end, it was caused by skewed data. Document_ID was -1 for returns in sales, so a big part of the table was filled with -1 values. Adding an extra column to the merge solved the problem. This article helped me a lot: https://www.databrick...
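To illustrate the fix (the table and column names here are illustrative, not the poster's actual schema), the extra column in the ON clause keeps the skewed -1 keys from matching everything:

```python
from delta.tables import DeltaTable

# Illustrative names; the point is the second column in the join condition,
# which prunes the skewed -1 Document_ID rows down to matching dates only.
target = DeltaTable.forName(spark, "sales.fact_sales")
source = spark.read.table("staging.sales_updates")

(
    target.alias("t")
    .merge(
        source.alias("s"),
        "t.Document_ID = s.Document_ID AND t.Transaction_Date = s.Transaction_Date",
    )
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```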
I'm working with Delta Live Tables (DLT) in Databricks and have noticed that AI-suggested comments for columns are not showing up for tables populated using DLT. Interestingly, this feature works fine for tables that are not populated using DLT. Is t...
It's because materialized views (MVs) and streaming tables (STs) in DLT don't support ALTER, which is needed to persist those AI-generated comments.
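A workaround (a sketch; the names are hypothetical) is to declare comments directly in the DLT definition, since those are applied when the table is created rather than via ALTER:

```python
import dlt

@dlt.table(
    name="orders_silver",      # hypothetical table name
    comment="Cleaned orders",  # table comment, set at creation time
    schema="""
        order_id BIGINT COMMENT 'Unique order identifier',
        amount   DOUBLE COMMENT 'Order amount in USD'
    """,
)
def orders_silver():
    return dlt.read("orders_bronze")  # hypothetical upstream table
```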
Howdy! I wanted to know how I can change some Spark configs in Serverless compute. I have a base.yml file and tried placing: spark_conf: - spark.driver.maxResultSize: "16g" but I still get this error: [CONFIG_NOT_AVAILABLE] Configuration spark.driv...
To address the memory issue in your Serverless compute environment, you can consider the following strategies:
Optimize the Query:
Filter Early: Ensure that you are filtering the data as early as possible in your query to reduce the amount of data b...
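As a concrete illustration of filtering early and keeping results off the driver (which is what spark.driver.maxResultSize governs; the table names below are made up):

```python
# Made-up table names; the idea is to push filters and projections down
# and to write results out rather than collect() them to the driver.
df = (
    spark.read.table("main.sales.events")
    .filter("event_date >= '2024-01-01'")        # filter as early as possible
    .select("event_id", "event_date", "amount")  # project only needed columns
)

df.write.mode("overwrite").saveAsTable("main.sales.events_filtered")
```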
Hi All, recently we implemented a change to make the Databricks workspace accessible only via a private network. After this change, we found a lot of connectivity errors, e.g. from Power BI to Databricks, Azure Data Factory to Databricks, etc. I was ...
Hi @Uj337, how are you doing today? This issue seems to be tied to the private network setup affecting access to the .whl file on DBFS. I recommend you start by ensuring the driver node has proper access to the dbfs:/Volumes/any.whl path and that al...
Databricks acquired the Iceberg Kafka Connect repo this past summer. There are open issues and PRs that devs would like to address and collaborate on to improve the connector, but Databricks has not yet engaged with this community in the ~6 months si...
As the AI revolution takes off in 2025, there is a renewed emphasis on adopting a Data-First approach. Organizations are increasingly recognizing the need to establish a robust data foundation while preparing a skilled fleet of Data Engineers to tack...
Hello, I have a question: why are materialized views created in the "__databricks_internal" catalog? We specified the catalog and schemas in the DLT pipeline.
Hello @AxelBrsn
Materialized views created by Delta Live Tables (DLT) pipelines are stored in the __databricks_internal catalog for several reasons:
Isolation: The __databricks_internal catalog is used to store system-generated tables, such as mater...
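Note that even though the storage lives in __databricks_internal, you still query the materialized view by the catalog and schema you declared in the pipeline settings (the names below are hypothetical):

```python
# Hypothetical catalog/schema/view as declared in the DLT pipeline settings.
spark.sql("SELECT * FROM my_catalog.my_schema.my_materialized_view").show()
```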