A lot of questions!
Concerning the use of serverless clusters in databricks.yml: assuming you want your jobs to run on serverless compute, you configure that directly in the job definition rather than as a separate cluster resource. Take a look here: https://github.com/databricks/bundle-examples/tree/main/knowledge_base/serverless_job Notice how there is no explicit reference to an "existing all-purpose" cluster or a classic "jobs compute" cluster (see the sketch below).
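As a rough illustration (not the exact example from that repo), here's a minimal sketch of a serverless job in a bundle; the bundle name, job name, and notebook path are placeholders I made up:

```yaml
# Minimal sketch of a Databricks Asset Bundle with a serverless job.
# Names and paths below are placeholders, not values from the linked example.
bundle:
  name: serverless_job_sketch

resources:
  jobs:
    my_serverless_job:
      name: my_serverless_job
      tasks:
        - task_key: run_notebook
          notebook_task:
            notebook_path: ../src/my_notebook.ipynb
          # No job_cluster_key, new_cluster, or existing_cluster_id here:
          # omitting the compute reference is what makes the task run on
          # serverless compute.
```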
Concerning the configuration needed to access the private storage accounts backing your Unity Catalog managed tables: you must configure the storage firewall to allow serverless access, otherwise serverless clusters are not allowed to reach them. The same applies to serverless jobs/notebook compute and to serverless SQL Warehouses. Take a look here: https://docs.databricks.com/aws/en/security/network/serverless-network-security/serverless-firewall
If you switch to all-purpose or jobs compute, there are pros and cons. I really like that serverless compute starts workloads very fast, while jobs compute takes minutes to spin up. In my case that delay is completely unacceptable, so I use already-running all-purpose clusters and/or serverless compute, depending on the type of workload. There is a lot to say about the pros and cons, and I'm not going to copy/paste content from ChatGPT xDD Take a look here: https://docs.databricks.com/gcp/en/compute/choose-compute The sketch below shows what the classic alternatives look like in a bundle.
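For contrast with the serverless sketch above, here's a rough sketch of the same kind of job pinned to classic compute; the Spark version, node type, cluster ID, and names are placeholders and depend on your cloud and workspace:

```yaml
# Sketch of the classic alternatives; all values below are placeholders.
resources:
  jobs:
    my_classic_job:
      name: my_classic_job
      job_clusters:
        - job_cluster_key: main_cluster
          new_cluster:                       # classic "jobs compute": created per run,
            spark_version: 15.4.x-scala2.12  # so expect a startup delay of minutes
            node_type_id: i3.xlarge          # cloud-specific; pick one for your provider
            num_workers: 2
      tasks:
        - task_key: run_on_jobs_compute
          job_cluster_key: main_cluster
          notebook_task:
            notebook_path: ../src/my_notebook.ipynb
        - task_key: run_on_all_purpose
          existing_cluster_id: 1234-567890-abcdefgh   # an already-running all-purpose cluster
          notebook_task:
            notebook_path: ../src/my_other_notebook.ipynb
```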
I would recommend using service principals to run DAB deployments from CI/CD pipelines, and even when running them manually via the Databricks CLI while you're learning how it works.
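For example, a CI/CD step can authenticate the CLI as a service principal through environment variables. This is a rough sketch assuming GitHub Actions and a bundle target named prod; the secret names and the target are placeholders:

```yaml
# Sketch of a CI/CD deploy step using a service principal (OAuth M2M auth).
# Workspace URL, client ID/secret names, and the "prod" target are placeholders.
deploy_bundle:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: databricks/setup-cli@main        # installs the Databricks CLI
    - name: Deploy the bundle as the service principal
      run: databricks bundle deploy -t prod
      env:
        DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
        DATABRICKS_CLIENT_ID: ${{ secrets.SP_CLIENT_ID }}
        DATABRICKS_CLIENT_SECRET: ${{ secrets.SP_CLIENT_SECRET }}
```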