Hello, if you're planning on building your own open source stack of Spark + Iceberg, Iceberg can be a good choice.
If you're on Databricks, however, you're going to miss out on a *lot* of Delta features that are baked into the platform, specifically compute- and storage-level performance optimisations and Unity Catalog (UC) integrations. Delta is ACID compliant, works beautifully with large datasets, and gives you plenty of performance options via liquid clustering or legacy Z-ordering (rough sketch below).
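Roughly what that looks like in a Databricks notebook, just as a sketch (table and column names are made up, `spark` is the notebook's pre-created SparkSession):

```python
# Liquid clustering: declare clustering keys when creating the Delta table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.events (
        event_id   BIGINT,
        event_date DATE,
        country    STRING
    )
    CLUSTER BY (event_date, country)
""")

# OPTIMIZE incrementally (re)clusters the table's data files.
spark.sql("OPTIMIZE sales.events")

# Legacy alternative on an existing Delta table: Z-ordering.
spark.sql("OPTIMIZE sales.legacy_events ZORDER BY (event_date)")
```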
If you're integrating with other systems that are only Iceberg compatible, check out UniForm, which writes out additional Iceberg metadata alongside the Delta table so those systems can read from it: https://docs.databricks.com/en/delta/uniform.html
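A minimal sketch of enabling it on a new UC-managed table, assuming the table properties described in the UniForm docs linked above (table name is hypothetical):

```python
# Enable UniForm so Iceberg clients can read this Delta table's metadata.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.events_uniform (
        event_id   BIGINT,
        event_date DATE,
        country    STRING
    )
    TBLPROPERTIES (
        'delta.enableIcebergCompatV2' = 'true',
        'delta.universalFormat.enabledFormats' = 'iceberg'
    )
""")
```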