Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
If I were to stop a rather large job run, say halfway through execution, will any actions performed on our Delta tables persist, or will they be rolled back? Are there any other risks that I need to be aware of in terms of cancelling a job run halfway t...
Hi All, I have created three parameters in an SQL query in Databricks. If no value is entered for a parameter, I would like the query to retrieve all values for that particular column. Currently, I'm getting an error message: "Missing selection for Pa...
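A common workaround for this pattern, for anyone who lands here, is to treat an empty parameter value as "no filter" inside the WHERE clause. A minimal PySpark sketch, assuming a hypothetical sales table with a region column (named parameter markers via spark.sql's args are available in recent runtimes):

query = """
    SELECT *
    FROM sales
    WHERE (:region = '' OR region = :region)
"""

all_rows = spark.sql(query, args={"region": ""})       # empty value: no filter, all rows
emea_rows = spark.sql(query, args={"region": "EMEA"})  # non-empty value: filter applied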
I'm creating this query with parameters in the SQL Editor in Databricks and added it to the SQL Dashboard. Do we need to create a Widget while creating parameters in the SQL Editor? When I tried creating a widget in the SQL editor, I'm getting a syntax error near Widget...
Natural language queries provided by Genie are really powerful and compelling. Is there any way to execute these natural language queries through the REST API to integrate them into in-house applications?
@Gusman wrote: Natural language queries provided by Genie are really powerful and compelling. Is there any way to execute these natural language queries through the REST API to integrate them into in-house applications? While there's no direct RES...
Hello, I am currently using table_lineage from system.access.table_lineage. It is a great feature, but I am experiencing missing data. After some searching I have seen that "Because lineage is computed on a one-year rolling window, lineage collected more ...
Hi @Clara, I don't think so. But you can build such history tables yourself: design an ETL process that extracts data from the system tables and stores it in your own tables.
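For illustration, a minimal sketch of such an ETL step, run on a schedule; the 7-day window and the target table name are hypothetical choices:

# Append the most recent week of lineage events to a user-owned history table.
# Deduplicate downstream if job runs overlap.
snapshot = spark.sql("""
    SELECT *
    FROM system.access.table_lineage
    WHERE event_date >= date_sub(current_date(), 7)
""")

(snapshot.write
    .format("delta")
    .mode("append")
    .saveAsTable("main.ops.table_lineage_history"))  # hypothetical target table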
Dear All, Is it possible to create a Materialized View on a view and a table (joining a view and a table)? I suspect it is not possible. Please suggest. Also, please provide the best way to schedule the refresh of a Materialized View. Regards, Surya
Trying out Databricks for the first time and followed the Get Started steps. I managed to successfully create a cluster and ran the simple SQL tutorial to query data from a notebook. However, got the following error: Query: DROP TABLE IF EXISTS diamond...
It looks like you're doing well with your Databricks setup, but this sort of error can be related to a few possible issues. Based on the details you've shared, here are a few things you should check: Cluster setup: ...
Hi @ashraf1395,
The term "rate" refers to a special source in Apache Spark's Structured Streaming that generates data at a specified rate. This source is primarily used for testing and benchmarking purposes. When you use spark.readStream.format("rate...
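For illustration, a minimal sketch of the rate source feeding the console sink; the source emits a timestamp column and a monotonically increasing value column, and rowsPerSecond is the main knob:

stream = (spark.readStream
    .format("rate")
    .option("rowsPerSecond", 10)  # generate 10 rows per second
    .load())

# Console sink for a quick smoke test; stop it later with query.stop().
query = (stream.writeStream
    .format("console")
    .option("truncate", "false")
    .start())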
Can anyone tell me the correct syntax for applying a column tag to a specific table? These are what I tried: ALTER TABLE accounts_and_customer.bronze.BB1123_loans ALTER/CHANGE COLUMN loan_number SET TAGS ('classification' = 'confidential') I got thi...
Hi there @Takuya-Omi, I agree, the syntax was correct. I was facing some completely different problems with schemas, and I solved it. Thanks though, or I would have spent hours banging my head to find the reason for the error.
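For anyone landing on this thread later: the ALTER COLUMN form from the question is the one the thread confirms works on Unity Catalog tables. A sketch using the table name from the thread (assumes you have permission to apply tags):

spark.sql("""
    ALTER TABLE accounts_and_customer.bronze.BB1123_loans
    ALTER COLUMN loan_number
    SET TAGS ('classification' = 'confidential')
""")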
Hi team, We are currently working on loading a CDF table using data events from Kafka. The table is going to hold data across geographies. When we tried partitioning, it slowed down the ingestion time. But without partitioning, the downstream application...
1. Instead of using many small partitions (e.g., country or region), opt for larger partitions, such as continent or time-based partitions (e.g., weekly or monthly). This will reduce the number of partitions and improve performance (see the sketch after this list).
2. Write data to ...
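As a rough sketch of tip 1, here is one way to derive a coarse month key from the event timestamp and partition the Delta table by it instead of by geography; all table and column names below are hypothetical:

from pyspark.sql import functions as F

events = spark.table("main.bronze.kafka_events_staging")  # hypothetical staging table

(events
    .withColumn("event_month", F.date_format("event_time", "yyyy-MM"))  # coarse monthly key
    .write
    .format("delta")
    .mode("append")
    .partitionBy("event_month")
    .saveAsTable("main.silver.events_cdf"))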
Hi All, Have a VARIANT column with the following data:

CREATE TABLE unpivot_valtype AS
SELECT parse_json(
  '{
    "Id": 1234567,
    "Result": {
      "BodyType": "NG",
      "ProdType": "Auto",
      "ResultSets": [
        {
          "R1": {
            "AIn...
Hi @binsel, You need to use the variant_explode function. Here is the working code:

WITH first_explode AS (
  SELECT
    uv.rowData:Id AS Id,
    uv.rowData:Result:BodyType AS BodyType,
    uv.rowData:Result:ProdType AS ProdType,
    v.value AS result_se...
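Since the snippet above is cut off, here is a minimal self-contained variation of the same idea, assuming the rowData VARIANT column from the thread and a runtime recent enough to support variant_explode:

# Each element of the ResultSets array comes back in v.value.
result = spark.sql("""
    SELECT
      uv.rowData:Id::BIGINT AS Id,
      uv.rowData:Result:BodyType::STRING AS BodyType,
      v.value AS result_set
    FROM unpivot_valtype AS uv,
      LATERAL variant_explode(uv.rowData:Result:ResultSets) AS v
""")
result.show(truncate=False)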
Hi, We are offering data products through a central catalog for our users. To minimize data duplication and to display relationships between tables, we use shallow clones to provide access to the data. However, since implementing this approach, we occa...
Hi @schluca, I've encountered an issue where an error occurred when trying to reference a table after deleting and recreating the source table of a shallow clone and then performing the shallow clone again. As a solution, try deleting the destinati...
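A minimal sketch of that drop-and-re-clone workaround; the table names are hypothetical:

spark.sql("DROP TABLE IF EXISTS main.shared.orders_clone")  # remove the stale clone
spark.sql("""
    CREATE TABLE main.shared.orders_clone
    SHALLOW CLONE main.internal.orders
""")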
Hi, I wanted to know if it is possible to edit the lineage that we see in Databricks, like the one shown below. Can I edit this lineage graph, like add other ETL tools (at the start of the tables) that I have used to get data into AWS and then into Databri...
This would be extremely beneficial. We have certain use cases where we do not leverage Spark in our pipelines and lose the lineage. I would prefer to set an extra parameter when writing a table to specify the lineage.
I am trying to run a notebook job with a git repo hosted on GitLab. I have linked my GitLab account using a GitLab token. Yet I am getting the following error on running the job. How do I resolve this?
Hi @vinitkhandelwal,
Looks like the token could be missing required permissions for the operation. Please refer to:
You can clone public remote repositories without Git credentials (a personal access token and a username). To modify a public remote r...
Hi, Databricks community, I recently encountered an issue while using the 'azure.identity' Python library on a cluster set to the personal compute policy in Databricks. In this case, Databricks successfully returns the Azure Databricks managed user id...
I'm having a similar problem; my aim is to invoke an Azure Data Factory pipeline from an Azure Databricks notebook. I created an Access Connector for Azure Databricks to which I gave Data Factory Contributor permissions. Using these lines of Python: from azu...
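For comparison, a minimal sketch of triggering a Data Factory pipeline run with azure.identity plus the azure-mgmt-datafactory package; every name and ID below is a placeholder:

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()  # resolves managed identity, CLI login, etc.
adf = DataFactoryManagementClient(credential, "<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="my-resource-group",
    factory_name="my-data-factory",
    pipeline_name="my-pipeline",
    parameters={"run_date": "2024-01-01"},  # pipeline parameters, if any
)
print(run.run_id)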
Hello Community, I want to pass parameters to my Databricks job through the DABs CLI. Specifically, I'd like to be able to run a job with parameters directly using the command: databricks bundle run -t prod --params [for example: table_name="client"...
Hi @jeremy98, You can pass parameters using the CLI in the following way: databricks bundle run -t ENV --params Param1=Value1,Param2=Value2 Job_Name. And in your yml file you should define parameters in a similar way to the following: You can find more info in the follo...