Interesting question that (from my point of view) implies a change of mindset between the SQL transactional/relational and lakehouse worlds. I have ended up working in both, but I came from the SQL relational world, so my first piece of advice is this: do not treat delta tables as if they were relational tables. I could explain in more detail if needed, but for now I'll go straight to the most important points.
In SQL it is very common to have multiple tables to store current or historical data, materialized views to split data by date (or other fields), tables partitioned across different filegroups, etc., as a way of keeping daily transactional operations healthy and balancing writes against reads. So it makes a lot of sense to move historical data to other tables, databases, etc. when scalability is compromised. Those tables can keep roughly the same schemas, and from them you can even run ETLs to feed data warehouses.
With delta tables, scalability is one of the main advantages, as long as you define partitioning or liquid clustering correctly. I mean, there shouldn't be a need to separate data into different tables unless your business has specific requirements. Databricks also offers very useful options for dealing with historical changes, such as the change data feed (for change data capture, CDC), time travel, etc.; take a look at the docs.
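For instance, here's a rough sketch of what that looks like on Databricks (PySpark in a notebook, where `spark` is already available; the catalog, table and column names are just made up for illustration):

```python
# Liquid clustering instead of classic partitioning: cluster on the columns you
# filter by most and let Databricks manage the file layout as the table grows.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.sales_events (
        event_id    BIGINT,
        customer_id BIGINT,
        event_date  DATE,
        amount      DECIMAL(18, 2)
    )
    USING DELTA
    CLUSTER BY (event_date, customer_id)
    TBLPROPERTIES (delta.enableChangeDataFeed = true)  -- needed for the change feed below
""")

# Time travel: query the table as it was at an earlier version (or timestamp).
old_snapshot = spark.sql(
    "SELECT * FROM main.sales.sales_events VERSION AS OF 3"
)

# Change data feed: read only the rows that changed since a given version.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 3)
    .table("main.sales.sales_events")
)
```

So instead of physically splitting history out, you keep one table and read old states or incremental changes from it when you need them.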
Having said that, if you do want to isolate data into separate data objects, I'd go with separate managed tables, or even separate catalogs, so you can back that data with different storage accounts. Then you can define views to join, union or otherwise consolidate it, as in the sketch below. It's true that the less data a table holds, the less time queries or OPTIMIZE jobs against it will take, but the penalty is far smaller in lakehouse scenarios than in the SQL relational world.
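A minimal sketch of that pattern (again, the catalog and table names are hypothetical; the `archive` catalog could point to a different storage account through its managed location):

```python
# One view consolidates current and historical data; consumers don't need to
# know (or care) that the underlying tables live in different catalogs.
spark.sql("""
    CREATE OR REPLACE VIEW main.sales.orders_all AS
    SELECT * FROM main.sales.orders_current
    UNION ALL
    SELECT * FROM archive.sales.orders_history
""")

# Queries go against the view, regardless of where each slice is stored.
recent = spark.sql("""
    SELECT COUNT(*) AS n
    FROM main.sales.orders_all
    WHERE order_date >= '2023-01-01'
""")
```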
I hope this helps. Happy to hear other opinions.