I wish there were a configuration to control upscaling behavior. I want clusters to scale up only when the bottleneck is approaching 70% memory usage. Currently, autoscaling is based only on CPU, not memory (RAM).
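For reference, this is roughly how autoscaling is configured today — a minimal sketch using the Databricks Python SDK (the cluster name, runtime version, and node type below are placeholders). The only knobs exposed are the min/max worker counts; there is no memory-threshold setting like the 70% one described above:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute

w = WorkspaceClient()

# Today's autoscale config only bounds the worker count; there is no
# memory-utilization threshold (e.g. "scale up at 70% RAM") to set here.
w.clusters.create(
    cluster_name="autoscale-demo",        # placeholder name
    spark_version="14.3.x-scala2.12",     # placeholder runtime
    node_type_id="i3.xlarge",             # placeholder node type
    autoscale=compute.AutoScale(min_workers=2, max_workers=8),
    autotermination_minutes=60,
)
```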
@Hubert-Dudek Hi, thanks for the detailed tutorial. With slight tweaks to the init script I was able to make Selenium work on a single-node cluster. However, I haven't had much luck with shared clusters on DB Runtime 14.0. Btw, I'm using Volumes to st...
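In case it helps anyone, here is roughly what my driver-side code looks like on the single-node setup — just a sketch, assuming the init script has already installed Chrome and chromedriver; the chromedriver path and the Volumes download path are placeholders from my own config:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

# Assumptions from my setup -- adjust to wherever your init script
# installs chromedriver, and to your own Volume path.
CHROMEDRIVER_PATH = "/usr/local/bin/chromedriver"
DOWNLOAD_DIR = "/Volumes/my_catalog/my_schema/my_volume/downloads"

options = Options()
options.add_argument("--headless=new")        # no display on the cluster
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
# Send browser downloads straight to a Unity Catalog Volume
options.add_experimental_option(
    "prefs", {"download.default_directory": DOWNLOAD_DIR}
)

driver = webdriver.Chrome(service=Service(CHROMEDRIVER_PATH), options=options)
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()
```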
Try this:
# Change column_name to the actual column name:
placeholder_list = spark.sql("select column_name from table").collect()
desired_list = [row.column_name for row in placeholder_list]
print(desired_list)
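Note: since only one column is selected, you can also index the rows by position — `[row[0] for row in placeholder_list]` — which avoids hard-coding the column name twice.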