Assuming this is an ongoing request, there are a few ways you can tackle it:
Anti joins - you'll need to keep a table of identifiers whose associated data shouldn't be stored. Before writing out each batch, do an `anti join` against that table so only the remaining rows are kept. This works great if you can broadcast the table; if not, only apply it when writing the silver table, as it'll slow down streams that are hard to recover/retry if they crash.
Also consider how this ID table gets updated. If it's refreshed hourly, the lookup you join against needs to be refreshed with the same frequency rather than broadcast once at startup.
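A minimal sketch of that pattern using foreachBatch, assuming made-up names (bronze_events, suppression_ids, silver_events, a user_id column) and an existing spark session and checkpoint path. Re-reading the suppression table inside the batch function is what picks up the hourly updates:

```python
from pyspark.sql.functions import broadcast

def write_filtered_batch(batch_df, batch_id):
    # Re-read the suppression table every micro-batch so hourly updates get picked up.
    suppress = spark.table("suppression_ids").select("user_id")

    # Left anti join keeps only the rows whose user_id is NOT in the suppression table.
    # broadcast() is just a hint -- only use it if the lookup table is small enough.
    kept = batch_df.join(broadcast(suppress), on="user_id", how="left_anti")

    kept.write.format("delta").mode("append").saveAsTable("silver_events")

(spark.readStream.table("bronze_events")
    .writeStream
    .foreachBatch(write_filtered_batch)
    .option("checkpointLocation", "/tmp/checkpoints/silver_events")  # placeholder path
    .start())
```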
Using hashes if your ID is PII - say, for example, your ID is a credit card number. You can't keep a plain table of credit card numbers because that list is arguably PII itself. Instead, store a hash of each value, hash the incoming IDs the same way, and do the anti join on the hashes.
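Something along these lines, again with hypothetical names (a card_number column, a suppression_hashes table) and assuming a Databricks secret scope holds a pepper value. Adding a secret pepper before hashing matters here because the raw card-number space is small enough to brute-force a bare hash:

```python
from pyspark.sql import functions as F

# Hypothetical secret scope/key -- keeps the pepper out of code and config files.
PEPPER = dbutils.secrets.get("pii", "hash-pepper")

def with_id_hash(df, id_col="card_number"):
    # Hash the sensitive ID with the secret pepper; store/compare only id_hash downstream.
    return df.withColumn("id_hash", F.sha2(F.concat(F.col(id_col), F.lit(PEPPER)), 256))

# Anti join on hashes instead of the raw identifiers.
suppress_hashes = spark.table("suppression_hashes").select("id_hash")
kept = with_id_hash(batch_df).join(suppress_hashes, on="id_hash", how="left_anti")
```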
Assuming you're retroactively dropping data:
Delta's ACID compliance and optimistic concurrency mean you should be able to drop rows without rewriting data, especially if deletion vectors are enabled on the table. Keep in mind the underlying parquet files will still exist until vacuum runs (manually or through Predictive Optimisation), so the data isn't truly deleted until that step. You can use the MERGE INTO syntax to make this happen.
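For example, a MERGE that deletes every row matching the suppression list (table and column names here are placeholders); the vacuum only removes the underlying files once the table's retention window has passed:

```python
# Delete matching rows; with deletion vectors enabled, untouched files aren't rewritten.
spark.sql("""
    MERGE INTO silver_events AS t
    USING suppression_ids AS s
    ON t.user_id = s.user_id
    WHEN MATCHED THEN DELETE
""")

# The rows are gone from the table's current version, but the parquet files remain
# until the retention period lapses and VACUUM runs (manually or via Predictive Optimisation).
spark.sql("VACUUM silver_events")
```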
Both
You could use these approaches together as a two-step process, but make sure you're not trying to update the same parquet files at the same time: catch the first set of rows with an ANTI JOIN, then a day later delete any that slipped through due to operational lag in the lookup table.
This can be a tricky thing to design, and depending on your operational requirements it's going to look different for everyone. The easiest case is 'compliance within x days'; something like COPPA adherence can be much stricter. I highly recommend drawing the whole flow out before starting any coding. I did this with a customer team and it took us two days of workshops before we were happy with the design.