@uzair mustafa : Using a ThreadPoolExecutor to parallelize the execution of notebooks may not be enough to distribute the load across your cluster. With a ThreadPoolExecutor, all threads run on the same driver node, which can also run out of memory -- probably not the result you want.
To tackle your problem, can you try running each notebook as a separate process and creating a Spark context within that process? Please try using the "subprocess" module in Python to spawn a new process for each notebook, as in the sketch below.
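
Here is a minimal sketch of that idea, assuming each notebook's logic has been exported to a standalone Python script that builds its own SparkSession (the script names below are placeholders, not real files):

```python
import subprocess

# Placeholder scripts; each one is assumed to create its own SparkSession/SparkContext.
notebook_scripts = ["run_notebook_a.py", "run_notebook_b.py"]

# Spawn one OS process per notebook so each gets its own Python interpreter
# and Spark context, instead of sharing a single driver process via threads.
procs = [subprocess.Popen(["python", script]) for script in notebook_scripts]

# Wait for every notebook process to finish and report any non-zero exit codes.
for proc in procs:
    proc.wait()
    if proc.returncode != 0:
        print(f"{proc.args} exited with code {proc.returncode}")
```

Because each script runs in its own process, a failure or memory spike in one notebook does not take down the others, and each Spark context can be configured independently.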