I am relatively new to Databricks, and from my experience so far it appears that at every step of a DLT pipeline, each live table (streaming or not) is defined to pull data from upstream.
I have yet to see an implementation where an upstream table pushes its data downstream, say, where I could create a bronze table and configure, in its definition, the silver tables it pushes data into.
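For reference, this is roughly the pull pattern I keep seeing; a minimal sketch assuming a Python DLT notebook, where the broker address, topic, and table names are placeholders I made up:

```python
import dlt
from pyspark.sql.functions import col

# Bronze: a streaming table that pulls raw events from Kafka.
# Broker address and topic name are placeholders.
@dlt.table(name="bronze_events")
def bronze_events():
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

# Silver: pulls FROM the bronze table. The downstream table declares
# its upstream dependency; bronze knows nothing about silver.
@dlt.table(name="silver_events")
def silver_events():
    return dlt.read_stream("bronze_events").select(
        col("value").cast("string").alias("payload")
    )
```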
A push model would be especially useful, I think, when ingesting data from Kafka where different topics carry different payload (message) schemas and I would like to segregate the messages by topic, that is, route each topic to its own table.
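With the pull model, the only way I can see to do that is to generate one downstream table per topic, each filtering a shared bronze table on the `topic` column that the Kafka source exposes. A sketch of what I mean, with hypothetical topic names and a made-up helper function:

```python
import dlt
from pyspark.sql.functions import col

# One bronze table subscribing to several topics at once.
@dlt.table(name="bronze_multi_topic")
def bronze_multi_topic():
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "orders,payments")  # hypothetical topics
        .load()
    )

# Generate one silver table per topic, each pulling from bronze
# and filtering on Kafka's built-in `topic` column.
def make_topic_table(topic):
    @dlt.table(name=f"silver_{topic}")
    def t():
        return dlt.read_stream("bronze_multi_topic").where(col("topic") == topic)
    return t

for topic in ["orders", "payments"]:
    make_topic_table(topic)
```

What I would prefer is to declare that routing once, on the bronze side, instead of repeating a filter in every downstream table definition.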