Hi Pratikmsbsvm,
How are you doing today? Great question. For error logging in your Bronze-to-Silver pipeline, yes, you can absolutely store logs in a Delta table, ideally in your Silver layer on ADLS Gen2.

A good approach is to create a separate Delta table such as error_logs that captures useful details for each failure:

- timestamp
- table name
- pipeline step
- error message
- source file
- optionally, a JSON column holding the problematic row

Wrap each step of your PySpark notebooks in try-except blocks and append any errors to this log table; a sketch of this pattern is below.

As for orchestration, Databricks Workflows is a solid built-in option: you can schedule jobs, chain tasks, and set up alerts and retries. You don't need an extra tool unless your org requires one.

Finally, keep a clean folder structure, such as /logs/errors/, and organize logs by date or pipeline. This setup will keep your pipeline more transparent and easier to monitor.
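To make this concrete, here is a minimal sketch of the pattern, assuming a Databricks notebook where `spark` is already defined. The ADLS paths, the exact error_logs columns, and the transform_customers() function are placeholders to swap for your own:

```python
# Minimal sketch of try/except error logging to a Delta table.
# Assumes a Databricks notebook (spark is predefined); paths and
# transform_customers() are placeholders.
import traceback
from datetime import datetime, timezone

from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Schema for the error_logs Delta table.
ERROR_LOG_SCHEMA = StructType([
    StructField("log_timestamp", TimestampType(), False),
    StructField("table_name", StringType(), True),
    StructField("pipeline_step", StringType(), True),
    StructField("error_message", StringType(), True),
    StructField("source_file", StringType(), True),
    StructField("bad_row_json", StringType(), True),  # the offending row, if available
])

# Placeholder ADLS Gen2 path under the Silver container.
ERROR_LOG_PATH = "abfss://silver@<storage_account>.dfs.core.windows.net/logs/errors/error_logs"

def log_error(table_name, pipeline_step, exc, source_file=None, bad_row_json=None):
    """Append one error record to the error_logs Delta table."""
    record = (
        datetime.now(timezone.utc),
        table_name,
        pipeline_step,
        "".join(traceback.format_exception_only(type(exc), exc)).strip(),
        source_file,
        bad_row_json,
    )
    (spark.createDataFrame([record], schema=ERROR_LOG_SCHEMA)
          .write.format("delta")
          .mode("append")
          .save(ERROR_LOG_PATH))

# Usage: wrap each Bronze-to-Silver step.
bronze_path = "abfss://bronze@<storage_account>.dfs.core.windows.net/customers"  # placeholder
silver_path = "abfss://silver@<storage_account>.dfs.core.windows.net/customers"  # placeholder
try:
    bronze_df = spark.read.format("delta").load(bronze_path)
    silver_df = transform_customers(bronze_df)  # your transformation logic
    silver_df.write.format("delta").mode("overwrite").save(silver_path)
except Exception as e:
    log_error("customers", "bronze_to_silver", e, source_file=bronze_path)
    raise  # re-raise so the Workflows task is marked failed
```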
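Note the final raise: logging and then re-raising lets Databricks Workflows mark the task as failed, so whatever retry and alert settings you configure on the job still apply, while the error details are already persisted. You can then monitor failures with a simple query over the table, for example reading ERROR_LOG_PATH and filtering log_timestamp to the current day.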
Regards,
Brahma