Hi,
You're not putting it in the wrong place; it's just that Databricks doesn't allow certain configs, because they are managed by Databricks for you. For example, the core Spark config you've shown above won't be recognised, as this is set by the selected compute type. So rather than specifying the number of cores in your Spark config, you would select a compute type with the desired number of cores. In a serverless scenario it should autoscale to the optimal number of cores.
For the log.level setting, I've just tested it with my own job and it does affect the run. I set it to warn, and after the run you can view it in the Spark UI for the job: my environment setting shows as WARN.
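If it helps, this is roughly the setting I used. I'm assuming a recent runtime here: the `spark.log.level` config key exists in Spark 3.3 and later; on older runtimes you'd call `spark.sparkContext.setLogLevel("WARN")` from a notebook cell instead.

```
# Spark config entry (sketch, assumes Spark 3.3+)
spark.log.level WARN
```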
I hope this helps.
Many Thanks,
Emma