Hi Zavi,
One workaround is to create a separate DLT pipeline for each target, since each pipeline publishes its tables to a single target schema. By configuring each pipeline with its own target, you can distribute the output data across multiple schemas.
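As a rough illustration, each pipeline's settings would specify a different `target`. The names and notebook path below are placeholders, not values from your workspace:

```json
{
  "name": "sales_pipeline_bronze",
  "target": "bronze_schema",
  "libraries": [
    { "notebook": { "path": "/Repos/team/dlt/sales_bronze" } }
  ],
  "continuous": false
}
```

A second pipeline would use the same structure but point `target` at a different schema (e.g. `silver_schema`).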
To manage these pipelines efficiently, you can use Databricks Workflows. A single job can orchestrate all of the pipelines as tasks, running them in sequence or in parallel with dependencies between them, so they stay coordinated and your data processes remain consistent.
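For example, a Workflows job can run each pipeline as a `pipeline_task` and chain them with `depends_on`. This is a sketch of a Jobs API 2.1 job definition; the task keys and pipeline IDs are placeholders you would replace with your own:

```json
{
  "name": "run_all_dlt_pipelines",
  "tasks": [
    {
      "task_key": "bronze_pipeline",
      "pipeline_task": { "pipeline_id": "<bronze-pipeline-id>" }
    },
    {
      "task_key": "silver_pipeline",
      "depends_on": [ { "task_key": "bronze_pipeline" } ],
      "pipeline_task": { "pipeline_id": "<silver-pipeline-id>" }
    }
  ]
}
```

You can also build the same job in the Workflows UI by adding a "Pipeline" task per DLT pipeline and setting the dependencies there.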
Best regards, Rafael Ribeiro