Hi Databricks experts, I have a table in Snowflake that tracks newly added items, and a downstream data processing workflow that needs to be triggered whenever new items are added. I'm currently using Lakehouse Federation to query the Snowflake tables...
I need to automatically trigger a Databricks job whenever a new row is inserted into a Snowflake table. Additionally, I need the job to receive the exact details of the newly inserted row as parameters. What are the best approaches to achieve this? I’m ...
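One common pattern for the second requirement is to call the Databricks Jobs API's `POST /api/2.1/jobs/run-now` endpoint from whatever detects the insert, passing the row's columns as `job_parameters`. A minimal sketch of building that request body (the job ID, column names, and row values here are hypothetical, and the actual HTTP call is omitted):

```python
import json

def build_run_now_payload(job_id, row):
    """Build a run-now request body that passes one row's columns as job parameters."""
    return {
        "job_id": job_id,
        # Job parameter values are strings, so serialize each column value.
        "job_parameters": {col: str(val) for col, val in row.items()},
    }

# Hypothetical newly inserted row from the Snowflake tracking table.
new_row = {"item_id": 42, "item_name": "widget", "created_at": "2024-05-01T12:00:00Z"}
payload = build_run_now_payload(1234, new_row)
print(json.dumps(payload))
```

Inside the job, the parameters are then available to tasks via parameter references, so the run knows exactly which row triggered it.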
Hi, I would like to configure near-real-time streaming on Databricks to process data as soon as new data finishes processing on Snowflake, e.g. with DLT pipelines and Auto Loader. Which option would be better for this setup? Option A) Export the Snowpark...
Hi, I have the following error, but the job kept running. Is that normal? { "message": "The service at /api/2.0/jobs/runs/get?run_id=899157004942769 is temporarily unavailable. Please try again later. [TraceId: -]", "error_code": "TEMPORARILY_U...
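This error comes from the status-poll endpoint (`/api/2.0/jobs/runs/get`), not from the job itself, so the run continuing is expected; the usual response is to retry the poll with backoff. A minimal sketch, assuming `fetch_run_status` stands in for whatever function actually calls the API:

```python
import time

def poll_with_backoff(fetch_run_status, max_attempts=5, base_delay=1.0):
    """Retry a status poll, backing off exponentially on transient errors."""
    for attempt in range(max_attempts):
        response = fetch_run_status()
        # Anything other than the transient error is returned to the caller.
        if response.get("error_code") != "TEMPORARILY_UNAVAILABLE":
            return response
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("status endpoint still unavailable after retries")
```

The key design point is that a TEMPORARILY_UNAVAILABLE poll failure should never be treated as a job failure; only repeated exhaustion of retries warrants surfacing an error.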
Hey @Brahmareddy, I ended up creating a Delta table as a mirror of the source Snowflake table (accessed via Lakehouse Federation). I set up logic to append only new records to the Delta table based on a timestamp column, so only records where the time...
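The timestamp-based incremental append described above can be sketched in plain Python (standing in for the actual Spark/SQL job; the `updated_at` column name and row shapes are assumptions): keep a watermark, append only rows strictly newer than it, then advance the watermark to the newest timestamp seen.

```python
from datetime import datetime

def select_new_records(source_rows, watermark):
    """Return rows newer than the watermark, plus the advanced watermark."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    if new_rows:
        # Advance the watermark so the next run skips these rows.
        watermark = max(r["updated_at"] for r in new_rows)
    return new_rows, watermark

# Hypothetical federated source rows.
rows = [
    {"id": 1, "updated_at": datetime(2024, 5, 1, 10)},
    {"id": 2, "updated_at": datetime(2024, 5, 1, 12)},
]
new_rows, wm = select_new_records(rows, datetime(2024, 5, 1, 11))
print(len(new_rows), wm)  # only id=2 is newer than the watermark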
Hey @Brahmareddy, I ended up creating a Delta table as a mirror of the source Snowflake table (accessed via Lakehouse Federation). I set up logic to append only new records to the Delta table based on a timestamp column—so only records where the time...
what do you mean by "set file trigger for the location where the mv table writes and then create your job whenever anything will be written in the mv table your job will trigger." ? Isn't the file trigger for external location such as s3? It will wor...