<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Received Fatal error: The Python kernel is unresponsive. in Machine Learning</title>
    <link>https://community.databricks.com/t5/machine-learning/received-fatal-error-the-python-kernel-is-unresponsive/m-p/36920#M1922</link>
    <description>&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;I am running a databricks job on a cluster and I keep running into the following issue (pasted below in bold)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;The job trains a machine learning model on a modestly sized dataset (~ half GB). Note that I use pandas dataframes for the data, sklearn for ML modeling, and optuna for hyper-parameter optimization. I am not using pyspark or MLlib for this job.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;The job is run on a single node cluster.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;The DB Runtime is 13.1ML.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;Previously, I was able to run this job with the same cluster. Ever since, I updated the sklearn version, I am not able to run the job anymore. 
My hunch is that version of joblib, which is used by sklearn for parallelization, may have changed as well and so that may have contributed to the issue.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;I used one of the advices given in this thread&amp;nbsp;&lt;A href="https://community.databricks.com/t5/data-engineering/fatal-error-the-python-kernel-is-unresponsive/td-p/9808" target="_self"&gt;fatal-error-the-python-kernel-is-unresponsive:&amp;nbsp;&lt;/A&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;I added&amp;nbsp;&lt;SPAN&gt;spark.databricks.python.defaultPythonRepl pythonshell to the configuration but that did not resolved the issue.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;I also played around with resizing the memory/number of cpus in the cluster and that did not yield any resolution either.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;STRONG&gt;&lt;SPAN class=""&gt;Fatal error&lt;/SPAN&gt;: The Python kernel is unresponsive.&lt;/STRONG&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;STRONG&gt;&lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN class=""&gt;The Python process exited with exit code 134 (SIGABRT: Aborted).&lt;/SPAN&gt; &lt;SPAN class=""&gt;The last 10 KB of the process's stderr and stdout can be found below. 
See driver logs for full logs.&lt;/SPAN&gt; &lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN class=""&gt;Last messages on stderr:&lt;/SPAN&gt; &lt;SPAN class=""&gt;one&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 510 in dispatch_queue&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/usr/lib/python3.10/asyncio/events.py", line 80 in _run&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/usr/lib/python3.10/asyncio/base_events.py", line 1896 in _run_once&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/usr/lib/python3.10/asyncio/base_events.py", line 600 in run_forever&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 199 in start&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 712 in start&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python_shell/scripts/db_ipykernel_launcher.py", line 126 in &amp;lt;module&amp;gt;&lt;/SPAN&gt; &lt;SPAN class=""&gt;Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, psutil._psutil_linux, psutil._psutil_posix, zmq.backend.cython.context, zmq.backend.cython.message, zmq.backend.cython.socket, zmq.backend.cython._device, zmq.backend.cython._poll, zmq.backend.cython._proxy_steerable, zmq.backend.cython._version, zmq.backend.cython.error, zmq.backend.cython.utils, tornado.speedups, _pydevd_bundle.pydevd_cython, ujson, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, simplejson._speedups, charset_normalizer.md, yaml._yaml, 
google._upb._message, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.tslib, pandas._libs.lib, pandas._libs.hashing, pyarrow.lib, pyarrow._hdfsio, pandas._libs.ops, numexpr.interpreter, pyarrow._compute, pandas._libs.arrays, pandas._libs.index, pandas._libs.join, pandas._libs.sparse, pandas._libs.reduction, pandas._libs.indexing, pandas._libs.internals, pandas._libs.writers, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.tslibs.strptime, pandas._libs.groupby, pandas._libs.testing, pandas._libs.parsers, pandas._libs.json, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._isolve._iterative, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.linalg._flinalg, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, numpy.linalg.lapack_lite, scipy.spatial._ckdtree, scipy._lib.messagestream, 
scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize.__nnls, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._statlib, scipy.stats._sobol, scipy.stats._qmc_cy, 
scipy.stats._mvn, scipy.stats._rcont.rcont, sklearn.__check_build._check_build, sklearn.utils._isfinite, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, Cython.Plex.Actions, Cython.Plex.Scanners, Cython.Compiler.Scanning, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn._loss._loss, sklearn._isotonic, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.preprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._base, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distances_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_fast, sklearn.utils._random, sklearn.utils._seq_dataset, sklearn.linear_model._cd_fast, sklearn.utils.arrayfuncs, sklearn.svm._liblinear, sklearn.svm._libsvm, sklearn.svm._libsvm_sparse, sklearn.utils._weight_vector, sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast, sklearn.utils._fast_dict, sklearn.cluster._hierarchical_fast, sklearn.cluster._k_means_common, sklearn.cluster._k_means_elkan, sklearn.cluster._k_means_lloyd, sklearn.cluster._k_means_minibatch, sklearn.neighbors._partition_nodes, sklearn.neighbors._ball_tree, sklearn.neighbors._kd_tree, sklearn.decomposition._online_lda_fast, sklearn.decomposition._cdnmf_fast, sklearn.cluster._dbscan_inner, sklearn.cluster._hdbscan._tree, sklearn.cluster._hdbscan._linkage, sklearn.cluster._hdbscan._reachability, sklearn.tree._utils, sklearn.tree._tree, sklearn.tree._splitter, sklearn.tree._criterion, sklearn.neighbors._quad_tree, sklearn.manifold._barnes_hut_tsne, sklearn.manifold._utils, 
scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, sklearn.datasets._svmlight_format_fast, sklearn.feature_extraction._hashing_fast, sklearn.ensemble._gradient_boosting, sklearn.ensemble._hist_gradient_boosting.common, sklearn.ensemble._hist_gradient_boosting._gradient_boosting, sklearn.ensemble._hist_gradient_boosting._binning, sklearn.ensemble._hist_gradient_boosting._bitset, sklearn.ensemble._hist_gradient_boosting.histogram, sklearn.ensemble._hist_gradient_boosting._predictor, sklearn.ensemble._hist_gradient_boosting.splitting, sklearn.ensemble._hist_gradient_boosting.utils, _fastcluster, scipy.cluster._vq, scipy.cluster._hierarchy, scipy.cluster._optimal_leaf_ordering, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, shaperone._cext, sqlalchemy.cyextension.collections, sqlalchemy.cyextension.immutabledict, sqlalchemy.cyextension.processors, sqlalchemy.cyextension.resultproxy, sqlalchemy.cyextension.util, greenlet._greenlet, statsmodels.robust._qn, scipy.signal._sigtools, scipy.signal._max_len_seq_inner, scipy.signal._upfirdn_apply, scipy.signal._spline, scipy.signal._sosfilt, scipy.signal._spectral, scipy.signal._peak_finding_utils, statsmodels.tsa._innovations, statsmodels.nonparametric._smoothers_lowess, statsmodels.nonparametric.linbin, statsmodels.tsa.statespace._smoothers._conventional, statsmodels.tsa.statespace._smoothers._univariate, statsmodels.tsa.statespace._smoothers._univariate_diffuse, statsmodels.tsa.statespace._smoothers._classical, statsmodels.tsa.statespace._smoothers._alternative, statsmodels.tsa.statespace._kalman_smoother, statsmodels.tsa.statespace._filters._conventional, statsmodels.tsa.statespace._filters._univariate, 
statsmodels.tsa.statespace._filters._univariate_diffuse, statsmodels.tsa.statespace._filters._inversions, statsmodels.tsa.statespace._kalman_filter, statsmodels.tsa.statespace._tools, statsmodels.tsa.statespace._representation, statsmodels.tsa.statespace._initialization, statsmodels.tsa.statespace._simulation_smoother, statsmodels.tsa.statespace._cfa_simulation_smoother, statsmodels.tsa.innovations._arma_innovations, statsmodels.tsa.exponential_smoothing._ets_smooth, scipy.fftpack.convolve, statsmodels.tsa.stl._stl, statsmodels.tsa.holtwinters._exponential_smoothers, statsmodels.tsa.regime_switching._hamilton_filter, statsmodels.tsa.regime_switching._kim_smoother, _cffi_backend, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, grpc._cython.cygrpc (total: 323)&lt;/SPAN&gt; &lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN class=""&gt;Last messages on stdout:&lt;/SPAN&gt; &lt;SPAN class=""&gt;NOTE: When using the `ipython kernel` entry point, Ctrl-C will not work.&lt;/SPAN&gt; &lt;SPAN class=""&gt;To exit, you will have to explicitly quit this process, by either sending&lt;/SPAN&gt; &lt;SPAN class=""&gt;"quit" from a client, or using Ctrl-\ in UNIX-like environments.&lt;/SPAN&gt; &lt;SPAN class=""&gt;To read more about this, see &lt;A href="https://github.com/ipython/ipython/issues/2049" target="_blank" rel="noopener"&gt;https://github.com/ipython/ipython/issues/2049&lt;/A&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
    <pubDate>Tue, 04 Jul 2023 09:55:41 GMT</pubDate>
    <dc:creator>HappyScientist</dc:creator>
    <dc:date>2023-07-04T09:55:41Z</dc:date>
    <item>
      <title>Received Fatal error: The Python kernel is unresponsive.</title>
      <link>https://community.databricks.com/t5/machine-learning/received-fatal-error-the-python-kernel-is-unresponsive/m-p/36920#M1922</link>
      <description>&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;I am running a databricks job on a cluster and I keep running into the following issue (pasted below in bold)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;The job trains a machine learning model on a modestly sized dataset (~ half GB). Note that I use pandas dataframes for the data, sklearn for ML modeling, and optuna for hyper-parameter optimization. I am not using pyspark or MLlib for this job.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;The job is run on a single node cluster.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;The DB Runtime is 13.1ML.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;Previously, I was able to run this job with the same cluster. Ever since, I updated the sklearn version, I am not able to run the job anymore. 
My hunch is that version of joblib, which is used by sklearn for parallelization, may have changed as well and so that may have contributed to the issue.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;I used one of the advices given in this thread&amp;nbsp;&lt;A href="https://community.databricks.com/t5/data-engineering/fatal-error-the-python-kernel-is-unresponsive/td-p/9808" target="_self"&gt;fatal-error-the-python-kernel-is-unresponsive:&amp;nbsp;&lt;/A&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;I added&amp;nbsp;&lt;SPAN&gt;spark.databricks.python.defaultPythonRepl pythonshell to the configuration but that did not resolved the issue.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;I also played around with resizing the memory/number of cpus in the cluster and that did not yield any resolution either.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;STRONG&gt;&lt;SPAN class=""&gt;Fatal error&lt;/SPAN&gt;: The Python kernel is unresponsive.&lt;/STRONG&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;&lt;STRONG&gt;&lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN class=""&gt;The Python process exited with exit code 134 (SIGABRT: Aborted).&lt;/SPAN&gt; &lt;SPAN class=""&gt;The last 10 KB of the process's stderr and stdout can be found below. 
See driver logs for full logs.&lt;/SPAN&gt; &lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN class=""&gt;Last messages on stderr:&lt;/SPAN&gt; &lt;SPAN class=""&gt;one&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 510 in dispatch_queue&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/usr/lib/python3.10/asyncio/events.py", line 80 in _run&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/usr/lib/python3.10/asyncio/base_events.py", line 1896 in _run_once&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/usr/lib/python3.10/asyncio/base_events.py", line 600 in run_forever&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 199 in start&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 712 in start&lt;/SPAN&gt; &lt;SPAN class=""&gt;File "/databricks/python_shell/scripts/db_ipykernel_launcher.py", line 126 in &amp;lt;module&amp;gt;&lt;/SPAN&gt; &lt;SPAN class=""&gt;Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, psutil._psutil_linux, psutil._psutil_posix, zmq.backend.cython.context, zmq.backend.cython.message, zmq.backend.cython.socket, zmq.backend.cython._device, zmq.backend.cython._poll, zmq.backend.cython._proxy_steerable, zmq.backend.cython._version, zmq.backend.cython.error, zmq.backend.cython.utils, tornado.speedups, _pydevd_bundle.pydevd_cython, ujson, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, simplejson._speedups, charset_normalizer.md, yaml._yaml, 
google._upb._message, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.tslib, pandas._libs.lib, pandas._libs.hashing, pyarrow.lib, pyarrow._hdfsio, pandas._libs.ops, numexpr.interpreter, pyarrow._compute, pandas._libs.arrays, pandas._libs.index, pandas._libs.join, pandas._libs.sparse, pandas._libs.reduction, pandas._libs.indexing, pandas._libs.internals, pandas._libs.writers, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.tslibs.strptime, pandas._libs.groupby, pandas._libs.testing, pandas._libs.parsers, pandas._libs.json, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._isolve._iterative, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.linalg._flinalg, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, numpy.linalg.lapack_lite, scipy.spatial._ckdtree, scipy._lib.messagestream, 
scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize.__nnls, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._statlib, scipy.stats._sobol, scipy.stats._qmc_cy, 
scipy.stats._mvn, scipy.stats._rcont.rcont, sklearn.__check_build._check_build, sklearn.utils._isfinite, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, Cython.Plex.Actions, Cython.Plex.Scanners, Cython.Compiler.Scanning, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn._loss._loss, sklearn._isotonic, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.preprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._base, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distances_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_fast, sklearn.utils._random, sklearn.utils._seq_dataset, sklearn.linear_model._cd_fast, sklearn.utils.arrayfuncs, sklearn.svm._liblinear, sklearn.svm._libsvm, sklearn.svm._libsvm_sparse, sklearn.utils._weight_vector, sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast, sklearn.utils._fast_dict, sklearn.cluster._hierarchical_fast, sklearn.cluster._k_means_common, sklearn.cluster._k_means_elkan, sklearn.cluster._k_means_lloyd, sklearn.cluster._k_means_minibatch, sklearn.neighbors._partition_nodes, sklearn.neighbors._ball_tree, sklearn.neighbors._kd_tree, sklearn.decomposition._online_lda_fast, sklearn.decomposition._cdnmf_fast, sklearn.cluster._dbscan_inner, sklearn.cluster._hdbscan._tree, sklearn.cluster._hdbscan._linkage, sklearn.cluster._hdbscan._reachability, sklearn.tree._utils, sklearn.tree._tree, sklearn.tree._splitter, sklearn.tree._criterion, sklearn.neighbors._quad_tree, sklearn.manifold._barnes_hut_tsne, sklearn.manifold._utils, 
scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, sklearn.datasets._svmlight_format_fast, sklearn.feature_extraction._hashing_fast, sklearn.ensemble._gradient_boosting, sklearn.ensemble._hist_gradient_boosting.common, sklearn.ensemble._hist_gradient_boosting._gradient_boosting, sklearn.ensemble._hist_gradient_boosting._binning, sklearn.ensemble._hist_gradient_boosting._bitset, sklearn.ensemble._hist_gradient_boosting.histogram, sklearn.ensemble._hist_gradient_boosting._predictor, sklearn.ensemble._hist_gradient_boosting.splitting, sklearn.ensemble._hist_gradient_boosting.utils, _fastcluster, scipy.cluster._vq, scipy.cluster._hierarchy, scipy.cluster._optimal_leaf_ordering, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, shaperone._cext, sqlalchemy.cyextension.collections, sqlalchemy.cyextension.immutabledict, sqlalchemy.cyextension.processors, sqlalchemy.cyextension.resultproxy, sqlalchemy.cyextension.util, greenlet._greenlet, statsmodels.robust._qn, scipy.signal._sigtools, scipy.signal._max_len_seq_inner, scipy.signal._upfirdn_apply, scipy.signal._spline, scipy.signal._sosfilt, scipy.signal._spectral, scipy.signal._peak_finding_utils, statsmodels.tsa._innovations, statsmodels.nonparametric._smoothers_lowess, statsmodels.nonparametric.linbin, statsmodels.tsa.statespace._smoothers._conventional, statsmodels.tsa.statespace._smoothers._univariate, statsmodels.tsa.statespace._smoothers._univariate_diffuse, statsmodels.tsa.statespace._smoothers._classical, statsmodels.tsa.statespace._smoothers._alternative, statsmodels.tsa.statespace._kalman_smoother, statsmodels.tsa.statespace._filters._conventional, statsmodels.tsa.statespace._filters._univariate, 
statsmodels.tsa.statespace._filters._univariate_diffuse, statsmodels.tsa.statespace._filters._inversions, statsmodels.tsa.statespace._kalman_filter, statsmodels.tsa.statespace._tools, statsmodels.tsa.statespace._representation, statsmodels.tsa.statespace._initialization, statsmodels.tsa.statespace._simulation_smoother, statsmodels.tsa.statespace._cfa_simulation_smoother, statsmodels.tsa.innovations._arma_innovations, statsmodels.tsa.exponential_smoothing._ets_smooth, scipy.fftpack.convolve, statsmodels.tsa.stl._stl, statsmodels.tsa.holtwinters._exponential_smoothers, statsmodels.tsa.regime_switching._hamilton_filter, statsmodels.tsa.regime_switching._kim_smoother, _cffi_backend, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, grpc._cython.cygrpc (total: 323)&lt;/SPAN&gt; &lt;SPAN class=""&gt;---------------------------------------------------------------------------&lt;/SPAN&gt; &lt;SPAN class=""&gt;Last messages on stdout:&lt;/SPAN&gt; &lt;SPAN class=""&gt;NOTE: When using the `ipython kernel` entry point, Ctrl-C will not work.&lt;/SPAN&gt; &lt;SPAN class=""&gt;To exit, you will have to explicitly quit this process, by either sending&lt;/SPAN&gt; &lt;SPAN class=""&gt;"quit" from a client, or using Ctrl-\ in UNIX-like environments.&lt;/SPAN&gt; &lt;SPAN class=""&gt;To read more about this, see &lt;A href="https://github.com/ipython/ipython/issues/2049" target="_blank" rel="noopener"&gt;https://github.com/ipython/ipython/issues/2049&lt;/A&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Tue, 04 Jul 2023 09:55:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/received-fatal-error-the-python-kernel-is-unresponsive/m-p/36920#M1922</guid>
      <dc:creator>HappyScientist</dc:creator>
      <dc:date>2023-07-04T09:55:41Z</dc:date>
    </item>
    <item>
      <title>Re: Received Fatal error: The Python kernel is unresponsive.</title>
      <link>https://community.databricks.com/t5/machine-learning/received-fatal-error-the-python-kernel-is-unresponsive/m-p/83118#M3581</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/84316"&gt;@HappyScientist&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Can you increase the memory size of your cluster and try again?&lt;/P&gt;</description>
      <pubDate>Thu, 15 Aug 2024 16:50:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/received-fatal-error-the-python-kernel-is-unresponsive/m-p/83118#M3581</guid>
      <dc:creator>KumaranT</dc:creator>
      <dc:date>2024-08-15T16:50:27Z</dc:date>
    </item>
  </channel>
</rss>