02-03-2023 06:15 AM
What is the problem?
I am getting the error "The spark driver has stopped unexpectedly and is restarting" every time I run a Python notebook from my Repo in Databricks.
Background
The notebook where I am getting the error creates a dataframe, and its last step writes that dataframe to a Delta table that already exists in Databricks.
The dataframe created has approximately 16,000,000 records.
In the notebook I don't have any display(), print(), or similar commands; it only builds this dataframe from other intermediate dataframes.
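For reference, the final write step looks roughly like this (the table name and write mode below are placeholders, not my actual code):

```python
# Last step of the notebook: write the assembled dataframe to an
# existing Delta table (table name and mode are placeholders).
df.write.format("delta").mode("overwrite").saveAsTable("my_schema.my_target_table")
```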
This notebook was working a few days ago with the same number of records, but now I am getting that error. I have been reading other discussions in this forum and have seen that it could be a memory problem, so I have taken the following steps:
Could you help me? I don't know whether the problem comes from the cluster configuration or from somewhere else, because a few days ago I was able to run the notebook without any problem.
Thank you so much in advance; I look forward to hearing from you.
02-03-2023 09:56 AM
Hi @Sara Corral, the issue happens when the driver is under memory pressure. It is difficult to tell from the provided information what is causing the driver to be under memory pressure, but here are some options you can try:
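Separately, as a hypothetical illustration of what typically causes driver memory pressure (not specific to your notebook; the table name is made up): operations that pull data back to the driver are the usual culprits, while distributed operations such as the Delta write itself run on the executors.

```python
# Operations like these materialize the whole dataframe on the DRIVER
# and are a common cause of driver memory pressure:
rows = df.collect()    # brings every row into driver memory
pdf = df.toPandas()    # converts the full dataframe on the driver

# By contrast, a distributed write runs on the executors; the driver
# only coordinates the job (table name is illustrative):
df.write.format("delta").mode("append").saveAsTable("my_schema.my_table")
```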
02-07-2023 04:29 AM
Thank you for your quick response.
Unfortunately, I have tried all the changes you mentioned, but it still doesn't work. I have also been reading more about the cluster size recommendations in your documentation (https://docs.databricks.com/clusters/cluster-config-best-practices.html) and have followed them (I tried a smaller number of workers with larger instance sizes), but the result is still the same error.
This is the current configuration I have in my cluster.
I have also tried taking a sample of only 5 records (so the dataframe now has 5 records instead of 15,000,000), and surprisingly I'm getting this error after more than an hour and a half of running:
Fatal error: The Python kernel is unresponsive.
During the modification of our original dataframe there is no problem; I get the error when I try to copy that data into the table (df.write.format("delta"). ...).
To check whether the problem was the write to the Delta table, I replaced that step with display(df) (which now shows only 5 records), and I get exactly the same error.
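For clarity, the two variants I tested look roughly like this (the table name and write mode are illustrative, not my exact code):

```python
# Variant 1: the original write to the existing Delta table
# (this is where the error first appeared; name and mode are made up):
df.write.format("delta").mode("overwrite").saveAsTable("my_schema.my_table")

# Variant 2: the write replaced with a display of the 5-record sample
# -- this fails with the same "Python kernel is unresponsive" error:
display(df)
```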
Any idea what might be going on?
Thank you so much in advance.
02-09-2023 05:21 AM
Please set the configuration spark.databricks.python.defaultPythonRepl pythonshell for the "Fatal error: The Python kernel is unresponsive" error.
However, I expect that the original issue, i.e. "Error: The spark driver has stopped unexpectedly and is restarting", will not be resolved by this. But you can give it a try.
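Since this configuration affects how the Python REPL starts, it belongs in the cluster's Spark config (applied before the cluster starts) rather than in a notebook cell; a sketch of the entry, assuming the usual one key-value pair per line:

```
spark.databricks.python.defaultPythonRepl pythonshell
```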
09-07-2023 01:10 PM
Hi! I'm on 12.2 LTS and getting a similar error. When I try this solution, I get the error: ERROR PythonDriverWrapper: spark.databricks.python.defaultPythonRepl no longer supports pythonshell using ipykernel instead.
Any idea how I can fix this?
04-10-2023 01:55 AM
Hi @Sara Corral,
Thank you for posting your question in our community! We are happy to assist you.
To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?
This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance!