If you're looking to learn a new language for Data Engineering/Science or Machine Learning work in general, the choice between R and Python really comes down to which syntax you find more intuitive. Keep in mind that, based on public usage stats, Python is more popular, since R was originally designed by statisticians for statisticians.
When it comes to Spark for analyzing large amounts of data: at a lower level, Spark's engine is written in Scala and runs on the JVM. Both the Python and R APIs are just convenient front ends to that engine: the Catalyst optimizer produces the same optimized execution plan regardless of which API language you write in, with minor/negligible language-specific overhead. So if you're used to high-level interpreted languages and don't want to worry about what's happening at a lower level, Python or R would be the way to go. While I personally like the Python API, I do encourage you to learn the basics of Scala for multi-threading: if you have a lot of existing SQL-based workloads, then with minimal Scala knowledge you can multi-thread the execution of those queries to improve performance and resource usage.
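As a minimal sketch of that multi-threading idea, the snippet below fires several queries concurrently using Scala `Future`s. The `runQuery` helper here is a hypothetical stand-in that just simulates latency; in a real Databricks notebook you'd call `spark.sql(q)` in its place (assuming an active `SparkSession` named `spark`), so independent queries run in parallel instead of one after another.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ParallelQueries {
  // Hypothetical stand-in for spark.sql(q) — simulates a query that
  // takes some time to run. In a notebook, replace the body with
  // spark.sql(q) (assumption: an active SparkSession named `spark`).
  def runQuery(q: String): String = {
    Thread.sleep(100) // simulate query latency
    s"done: $q"
  }

  def runAll(queries: Seq[String]): Seq[String] = {
    // Launch every query concurrently; each Future runs on the
    // global thread pool rather than blocking the others.
    val futures = queries.map(q => Future(runQuery(q)))
    // Wait for all of them to complete before returning.
    Await.result(Future.sequence(futures), 1.minute)
  }

  def main(args: Array[String]): Unit = {
    val queries = Seq("query_a", "query_b", "query_c")
    runAll(queries).foreach(println)
  }
}
```

With sequential execution those three simulated queries would take ~300 ms; run concurrently they finish in roughly the time of the slowest one. The same pattern applies when the bodies are real `INSERT`/`SELECT` statements, as long as the queries don't depend on each other's results.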
To conclude, when it comes to using Databricks there's no real language winner. The value and beauty of the platform is the ability to move between all of them and get the best out of each, maximizing the efficiency of your code (e.g. leveraging Scala multi-threading on top of SQL queries).
Hope this helps!