You can tweak the default value of 200 by changing the spark.sql.shuffle.partitions configuration to match your data volume. Here is a sample Python snippet for calculating the value.
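This is a minimal sizing sketch, assuming you can estimate the size of the largest shuffle stage (for example from the Spark UI) and that you target roughly 128-200 MB of shuffle data per task; the variable names and the 400 GB figure are illustrative, not prescriptive.

largest_shuffle_stage_gb = 400          # estimated size of the largest shuffle stage, in GB (hypothetical value)
target_partition_size_mb = 200          # rule of thumb: aim for roughly 128-200 MB of shuffle data per task

shuffle_partitions = int((largest_shuffle_stage_gb * 1024) / target_partition_size_mb)

# `spark` is the SparkSession already available in a Databricks notebook
spark.conf.set("spark.sql.shuffle.partitions", shuffle_partitions)
print(f"spark.sql.shuffle.partitions set to {shuffle_partitions}")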
However, if you have multiple workloads with different data volumes, instead of manually specifying the configuration for each of them, it is worth looking at AQE and Auto-Optimized Shuffle.
AQE adjusts the shuffle partition number automatically at each stage of the query, based on the size of the map-side shuffle output. So as data size grows or shrinks over different stages, the task size will remain roughly the same, neither too big nor too small. However, AQE does not change the initial partition number by default, so if you are seeing spilling in your jobs you can enable auto-optimized shuffle by setting <db_prefix>.autoOptimizeShuffle.enabled to true.
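As a minimal sketch of what that looks like in a notebook (the <db_prefix> placeholder is kept from above because the exact configuration prefix depends on your Databricks runtime; spark.sql.adaptive.enabled is the standard AQE switch):

# Make sure AQE is on (it is enabled by default in recent Spark and Databricks runtimes)
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Enable auto-optimized shuffle; <db_prefix> is a placeholder for the
# runtime-specific configuration prefix mentioned above
spark.conf.set("<db_prefix>.autoOptimizeShuffle.enabled", "true")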
More details at https://databricks.com/blog/2020/10/21/faster-sql-adaptive-query-execution-in-databricks.html