Hi Team,
We are currently planning to run Databricks notebook cells in parallel using the Python threading library. We would like to understand how cluster resources are consumed and allocated under this approach. Are there any potential resource implications or challenges we should be aware of if we proceed with this method?
Below is the code snippet for your reference.
import threading

def table_creation(sql_statement):
    spark.sql(sql_statement)

s1 = """CREATE TABLE a1(time timestamp)"""
s2 = """CREATE TABLE b1(time timestamp)"""

# Note: an exception raised inside a thread is not propagated to the
# caller by join(), so this try/except only covers thread creation,
# start(), and join() themselves.
try:
    notebook_a_thread = threading.Thread(target=table_creation, args=(s1,))
    notebook_b_thread = threading.Thread(target=table_creation, args=(s2,))
    notebook_a_thread.start()
    notebook_b_thread.start()
    notebook_a_thread.join()
    notebook_b_thread.join()
except Exception as e:
    print(e)
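For comparison, a variant of the same idea using concurrent.futures is sketched below. This is only an illustrative sketch: run_statement is a hypothetical stand-in for spark.sql (which is only available inside a Databricks/Spark session), but the pattern itself is the point, since Future.result() re-raises any exception that occurred inside a worker thread, whereas Thread.join() silently swallows it.

    from concurrent.futures import ThreadPoolExecutor

    def run_statement(sql_statement):
        # Placeholder for spark.sql(sql_statement); in a Databricks
        # notebook you would call spark.sql here instead.
        return f"executed: {sql_statement}"

    statements = [
        "CREATE TABLE a1(time timestamp)",
        "CREATE TABLE b1(time timestamp)",
    ]

    with ThreadPoolExecutor(max_workers=2) as executor:
        # submit() schedules each statement on a worker thread
        futures = [executor.submit(run_statement, s) for s in statements]
        # result() blocks until done and re-raises worker exceptions,
        # so failures in parallel SQL calls are not lost
        results = [f.result() for f in futures]

A pool also makes it easy to cap concurrency via max_workers, which may matter for the cluster-resource question above, since each concurrent spark.sql call competes for the same driver and executor resources.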
Regards,
Janga