Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Trigger on a table

elgeo
Valued Contributor II

Hello! Is there an equivalent of CREATE TRIGGER on a table in Databricks SQL? Something like the following SQL Server syntax:

CREATE TRIGGER [schema_name.]trigger_name
ON table_name
AFTER {[INSERT],[UPDATE],[DELETE]}
[NOT FOR REPLICATION]
AS
{sql_statements}

Thank you in advance!

1 ACCEPTED SOLUTION

AdrianLobacz
Contributor

You can try Auto Loader. It supports two modes for detecting new files: directory listing and file notification.

Directory listing: Auto Loader identifies new files by listing the input directory. Directory listing mode lets you start Auto Loader streams quickly, with no permission configuration beyond access to your data on cloud storage. In Databricks Runtime 9.1 and above, Auto Loader can automatically detect whether files arrive in lexical order in your cloud storage and significantly reduce the number of API calls it needs to make to detect new files.

File notification: Auto Loader can automatically set up a notification service and queue service that subscribe to file events from the input directory. File notification mode is more performant and scalable for large input directories or a high volume of files, but requires additional cloud permissions to set up.

Refer to https://learn.microsoft.com/en-us/azure/databricks/spark/latest/structured-streaming/auto-loader
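
For illustration, here is a minimal Auto Loader sketch in PySpark. The paths, the JSON source format, and the bronze.events target table are placeholders you would replace with your own; the commented cloudFiles.useNotifications option switches to file notification mode.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incrementally pick up new files landing in a cloud storage directory.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                          # source file format (assumed JSON here)
    .option("cloudFiles.schemaLocation", "/mnt/schemas/events")   # where Auto Loader stores the inferred schema
    # .option("cloudFiles.useNotifications", "true")              # enable file notification mode instead of directory listing
    .load("/mnt/landing/events")                                  # input directory to watch (placeholder path)
)

# Write the new records to a Delta table; availableNow processes what has
# arrived and stops (on older runtimes, trigger(once=True) is the equivalent).
(
    df.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .trigger(availableNow=True)
    .toTable("bronze.events")
)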


2 REPLIES

Anonymous
Not applicable

There currently aren't stored procedures or triggers in Databricks SQL.

