11-20-2023 01:36 AM
11-20-2023 03:20 AM
@Retired_mod But the same code was working 5 days ago.
11-20-2023 04:43 AM
11-20-2023 10:02 PM
@Retired_mod Yeah, one dataset has slightly more data points; the schemas are the same. When Spark crashed, I checked the memory usage — it was around 50%.
05-15-2025 03:14 AM - edited 05-15-2025 03:16 AM
The error is caused by overlapping connectors or instances, if you see an error like the one below:
In the cluster list you can see multiple clusters with the same name. This happens when notebook_1 is run on its attached cluster, and notebook_2 is then re-run with %run (as a sub-notebook) against that same cluster. That produces the scenario below: the same cluster attached under two resources.
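For reference, the pattern described above can be sketched roughly as follows. This is a hypothetical illustration of the Databricks %run usage (the notebook names are placeholders, not the actual notebooks from this thread):

```python
# notebook_2 — attached to the SAME cluster that notebook_1 is already attached to.

# Databricks magic command: runs notebook_1 inline, on notebook_2's cluster.
# If notebook_1 is also directly attached to that cluster, the cluster ends up
# referenced by two resources, which can surface in the UI as multiple clusters
# with the same name.
%run ./notebook_1
```

A related API with a similar effect is dbutils.notebook.run("notebook_1", timeout_seconds=600), which launches notebook_1 as a separate child run rather than inline.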
Solution:
Detach the cluster from notebook_1 and notebook_2.
Refresh the web page and re-attach the cluster to notebook_2. Run cells individually, one by one, instead of Run All. Once two or three cells have run successfully, use Run All and test — it will work as below.