With the recommended autoscaling setting (see https://docs.databricks.com/clusters/cluster-config-best-practices.html), is it possible to dynamically fine-tune a Spark job, given that the number of executors could change at any time?
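One hedged approach, as a sketch rather than a definitive answer: instead of hand-tuning per-executor settings, lean on Spark confs that adapt at runtime as autoscaling changes the executor count. The settings below are standard Spark/Databricks confs; whether they suit a given workload is an assumption to test.

```python
# Sketch: runtime-adaptive settings that don't assume a fixed executor count.
# AQE (Adaptive Query Execution) re-plans shuffles at runtime, so it tolerates
# autoscaling better than a hand-picked static partition count.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
# On Databricks runtimes, "auto" lets AQE choose the shuffle partition count;
# if your runtime rejects "auto", fall back to an explicit number.
spark.conf.set("spark.sql.shuffle.partitions", "auto")
```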
I want to try incorporating these options into my Databricks cluster:

`spark.driver.extraJavaOptions -XX:+UseG1GC -XX:+G1SummarizeConcMark`
`spark.executor.extraJavaOptions -XX:+UseG1GC -XX:+G1SummarizeConcMark`

If I put them under Compute -> Cluster -> Co...
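In case it helps: settings like these normally go in the cluster's Spark config box (Compute > your cluster > Edit > Advanced options > Spark), one `key value` pair per line, and only take effect after a restart. Below is a sketch of the same settings expressed as a `spark_conf` block for the Databricks Clusters API, plus a runtime check using the plain SparkConf getters.

```python
# Sketch: the two JVM options as they would appear in a Clusters API spec.
cluster_spec = {
    "spark_conf": {
        "spark.driver.extraJavaOptions": "-XX:+UseG1GC -XX:+G1SummarizeConcMark",
        "spark.executor.extraJavaOptions": "-XX:+UseG1GC -XX:+G1SummarizeConcMark",
    }
}

# After the cluster restarts, confirm the flags actually landed:
conf = spark.sparkContext.getConf()
print(conf.get("spark.driver.extraJavaOptions", "not set"))
print(conf.get("spark.executor.extraJavaOptions", "not set"))
```

One caveat worth checking: `-XX:+G1SummarizeConcMark` is a diagnostic flag (it needs `-XX:+UnlockDiagnosticVMOptions` on JDK 8 and was removed in later JDKs), so watch the driver logs for an unrecognized-flag failure after restart.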
I am trying to read in files via the COPY INTO command, but lately I am getting this error for a certain subset of the data:

`Error while reading file: Schema conversion error: cannot convert Parquet type INT64 to Photon type double`

These are my option...
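The error suggests the newer files carry an INT64 column where the target table (or earlier files) declared DOUBLE. One hedged workaround, assuming that diagnosis is right, is to cast inside COPY INTO's select list; the table name, path, and column names below are placeholders.

```python
# Hypothetical sketch: cast the mismatched INT64 column to DOUBLE during load.
# 'target_table', the source path, and the column names are placeholders.
spark.sql("""
  COPY INTO target_table
  FROM (
    SELECT id, CAST(amount AS DOUBLE) AS amount
    FROM '/mnt/landing/my_data/'
  )
  FILEFORMAT = PARQUET
  COPY_OPTIONS ('mergeSchema' = 'true')
""")
```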
I have a notebook that runs many notebooks in order, along the lines of:

```
%python
notebook_list = ['Notebook1', 'Notebook2']

for notebook in notebook_list:
    print(f"Now on Notebook: {notebook}")
    try:
        dbutils.notebook.run(f'{notebook}', 3600)
    e...
```
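Since the snippet is cut off, here is a sketch of how the loop might be completed; the `except` branch is my assumption, not the original poster's code.

```python
# Assumed completion of the truncated loop above: run each notebook in order,
# catch failures, and continue with the rest of the list.
notebook_list = ['Notebook1', 'Notebook2']

for notebook in notebook_list:
    print(f"Now on Notebook: {notebook}")
    try:
        # dbutils.notebook.run(path, timeout_seconds) blocks until the child finishes.
        dbutils.notebook.run(notebook, 3600)
    except Exception as e:
        print(f"Notebook {notebook} failed: {e}")
```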
I found the problem. Even if a notebook creates and fully specifies a widget, the notebook run process, e.g., dbutils.notebook.run('notebook'), will not know how to use it. If I replace my widget with a non-widget-provided value, the process works fine...
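For anyone hitting the same thing: dbutils.notebook.run takes an arguments map as its third parameter, and the documented pattern is to pass widget values through it so the child notebook can read them with dbutils.widgets.get. A minimal sketch, where the widget name 'my_widget' is a placeholder:

```python
# Parent notebook: pass the value explicitly; the key must match the
# child notebook's widget name ('my_widget' is a placeholder here).
result = dbutils.notebook.run('notebook', 3600, {'my_widget': 'some_value'})

# Child notebook ('notebook'):
dbutils.widgets.text('my_widget', 'default_value')  # define with a default
value = dbutils.widgets.get('my_widget')            # 'some_value' when launched via run()
```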