From Spark's documentation, dynamic resource allocation can be used on more than just YARN. Is it possible to use it on Databricks as well, and how does serverless work under the hood in this regard?
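For reference, this is what enabling dynamic allocation typically looks like on a cluster manager that supports it (a sketch of standard open-source Spark configuration keys, not Databricks-specific settings; `my_app.py` is a placeholder application):

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  my_app.py
```

The `shuffleTracking` setting (available since Spark 3.0) is what lets dynamic allocation work without an external shuffle service, which is relevant on managers other than YARN.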
But isn't that a hard disadvantage compared to YARN clusters? And the way I understood workflows (and the team behind the UI component, among other things), we clearly should reuse the same compute cluster and run parallel tasks. If I were to run spark-sub...
Hello, in the past I used rdd.mapPartitions(lambda ...) to call functions that access third-party APIs, like Azure AI Translator, to batch-call the API and return the batched results. How would one do this now?
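For context, the pattern described above usually looks something like this (a minimal sketch; `translate_batch` is a hypothetical stand-in for the real API client, and the batching logic is plain Python so it works inside any partition iterator):

```python
def translate_batch(texts):
    # Hypothetical stand-in for a third-party API call (e.g. a translation
    # service) that accepts a list of strings and returns one result per input.
    # Placeholder behavior for illustration only:
    return [t.upper() for t in texts]

def process_partition(rows, batch_size=50):
    """Accumulate rows from a partition into batches and call the
    external API once per batch instead of once per row."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield from translate_batch(batch)
            batch = []
    if batch:  # flush the final, possibly smaller, batch
        yield from translate_batch(batch)

# On an RDD this would be applied as:
#   translated = rdd.mapPartitions(process_partition)
```

Since `process_partition` consumes and yields plain iterators, the same function can be reused outside Spark, which makes it easy to test.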