Hello,
I would like to set "spark.driver.maxResultSize" from a notebook on my cluster. I know I can set it in the cluster's Spark configuration, but is there a way to set it in code?
I also know how to set it when creating a Spark session, but in my case I load the data directly from the Feature Store and then want to convert my PySpark DataFrame to pandas.
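For reference, this is the session-creation approach I mean (a minimal sketch; "8g" is just a placeholder value):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.driver.maxResultSize", "8g")  # placeholder value
    .getOrCreate()
)

On Databricks, though, the notebook already comes with a running session, so this does not help me. My current notebook code is: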
from databricks import feature_store
import pandas as pd
import pyspark.sql.functions as f
from os.path import join

fs = feature_store.FeatureStoreClient()
# NAME is a placeholder for the feature table name
prediction_data = fs.read_table(name=NAME)
# This collects all rows to the driver and can exceed spark.driver.maxResultSize
prediction_data_pd = prediction_data.toPandas()
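What I would like is to do something like the following directly in the notebook (a sketch, assuming spark is the notebook's pre-created session; I am not sure a driver property like this can still take effect once the driver is already running):

spark.conf.set("spark.driver.maxResultSize", "8g")  # placeholder value

Is that possible, or can this setting only be changed in the cluster configuration before startup?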