Hi, I would like to configure near-real-time streaming on Databricks to process data as soon as new data finishes processing on Snowflake, e.g. with DLT pipelines and Auto Loader. Which option would be better for this setup?
Option A)
Export the Snowpark DataFrame from Snowflake to external cloud storage (e.g. S3 as Parquet) and ingest it into Databricks from there.
Option B)
Use Apache Iceberg with the Polaris catalog and configure Databricks to read that data.
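
For context, this is roughly what I have in mind for Option A on the Databricks side — a minimal Auto Loader sketch (bucket paths and table name are placeholders, and this only runs on a Databricks cluster since `cloudFiles` is Databricks-specific):

```python
# Sketch for Option A: Auto Loader incrementally picks up Parquet files
# that Snowflake exported to S3. Paths/table names below are placeholders.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    # Auto Loader needs a schema location to track inferred schema over time
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/snowflake_export/")
    .load("s3://my-bucket/snowflake-exports/")
)

(
    df.writeStream
    # Checkpoint tracks which files have already been ingested
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/snowflake_export/")
    .trigger(availableNow=True)  # process all new files, then stop
    .toTable("bronze.snowflake_export")
)
```

My main question is whether this file-drop approach is worth it versus skipping the copy entirely and reading the Iceberg tables through Polaris (Option B).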