While you can limit or cap resource utilization with "classic" compute (self-hosted) by setting cluster policies, it still takes a focus on FinOps and end-user enablement to truly manage costs. The same is true of serverless budget policies.
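To make the classic-compute side concrete, here is a minimal sketch of a cluster policy definition. The attribute names (`autoscale.max_workers`, `autotermination_minutes`, `node_type_id`) and policy types (`range`, `fixed`, `allowlist`) follow the Databricks cluster policy schema; the specific limits and node types are illustrative assumptions, not recommendations:

```json
{
  "autoscale.max_workers": { "type": "range", "maxValue": 10 },
  "autotermination_minutes": { "type": "fixed", "value": 30, "hidden": true },
  "node_type_id": { "type": "allowlist", "values": ["i3.xlarge", "i3.2xlarge"] }
}
```

A policy like this caps cluster size, enforces auto-termination, and restricts instance choices, but as noted above, guardrails alone don't replace actively monitoring spend.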
The big draw of serverless compute is getting out of that infrastructure management. You shouldn't have to manage cores, memory, or concurrency; it should just work. In that sense, I don't view it as a limitation, and to answer your explicit question, I'm not aware of any reference that describes it as one. Keep in mind you still have controls, for example, to automatically terminate long-running jobs. Another big benefit is that serverless enables user-level attribution, whereas classic compute took some creative reporting to attribute specific user behavior to a cost.
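As an example of one such control, the Jobs API supports a `timeout_seconds` setting that kills a run that exceeds the limit. This sketch assumes a hypothetical job name and an illustrative two-hour limit:

```json
{
  "name": "nightly-etl",
  "timeout_seconds": 7200
}
```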
That being said, we have heard from many customers who have expressed a similar desire for more control over their serverless costs, and I fully expect to see it on the roadmap sometime soon. Remember, even if serverless becomes your default, Databricks is committed to providing you choice: classic compute, with a bit more control but more overhead, and serverless compute, with fewer knobs and levers but some potentially serious performance and efficiency gains.