<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: [CANNOT_OPEN_SOCKET] Can not open socket: [&amp;quot;tried to connect to ('127.0.0.1', 45287) in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134140#M50031</link>
    <description>&lt;P&gt;Thanks for the details.&lt;/P&gt;&lt;P&gt;How can we roll back to 15.4.24?&lt;/P&gt;&lt;P&gt;We configure the cluster only in a YAML file, not the runtime version:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;job_clusters:
  - job_cluster_key: default
    new_cluster:
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_D64s_v3
      autoscale:
        min_workers: 1
        max_workers: 5
      enable_elastic_disk: true
      data_security_mode: SINGLE_USER
      spark_conf:
        spark.databricks.pip.ignoreSSL: true
        spark.sql.inMemoryColumnarStorage.compressed: true
        spark.sql.adaptive.enabled: true
        spark.sql.adaptive.coalescePartitions.enabled: true
        spark.databricks.delta.schema.autoMerge.enabled: true
        spark.databricks.adaptive.autoOptimizeShuffle.enabled: true
        spark.executor.heartbeatInterval: 300000
        spark.network.timeout: 320000
        spark.sql.codegen: true&lt;/LI-CODE&gt;&lt;P&gt;Greetings&lt;/P&gt;</description>
    <pubDate>Wed, 08 Oct 2025 05:30:39 GMT</pubDate>
    <dc:creator>timo82</dc:creator>
    <dc:date>2025-10-08T05:30:39Z</dc:date>
    <item>
      <title>[CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134032#M49993</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;after Databricks updated the runtime from release 15.4.24 to release 15.4.25, all of our jobs fail with the error:&lt;/P&gt;&lt;P&gt;[CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)&lt;/P&gt;&lt;P&gt;What can we do here?&lt;/P&gt;&lt;P&gt;Greetings&lt;/P&gt;</description>
      <pubDate>Tue, 07 Oct 2025 08:56:45 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134032#M49993</guid>
      <dc:creator>timo82</dc:creator>
      <dc:date>2025-10-07T08:56:45Z</dc:date>
    </item>
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134068#M50009</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/189352"&gt;@timo82&lt;/a&gt;!&lt;/P&gt;
&lt;P&gt;Can you try adding 'spark.databricks.pyspark.useFileBasedCollect': 'true' to your Spark config?&lt;/P&gt;</description>
      <pubDate>Tue, 07 Oct 2025 13:35:29 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134068#M50009</guid>
      <dc:creator>Advika</dc:creator>
      <dc:date>2025-10-07T13:35:29Z</dc:date>
    </item>
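The flag suggested above would go into the original poster's job-cluster YAML (shown later in this thread). A minimal fragment, assuming the bundle layout from that post; only the last spark_conf key is new, and whether it resolves the 15.4.25 socket error is unverified:

```yaml
# Sketch: the suggested flag added to the existing job-cluster spark_conf.
# All keys except the last one are from the original post; the new flag
# is Advika's suggestion and is an assumption to verify on 15.4.25.
job_clusters:
  - job_cluster_key: default
    new_cluster:
      spark_version: 15.4.x-scala2.12
      spark_conf:
        spark.databricks.pyspark.useFileBasedCollect: "true"
```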
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134076#M50011</link>
      <description>&lt;P&gt;Hey&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/189352"&gt;@timo82&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;This error indicates that the Python workers cannot communicate with the JVM after the maintenance update. Since it is affecting all jobs after the upgrade to 15.4.25, try these steps:&lt;/P&gt;&lt;P&gt;--&amp;gt; Completely restart the cluster (stop, then start, not just restart) to reinitialize the socket listeners&lt;BR /&gt;--&amp;gt; Check init scripts: temporarily remove any cluster init scripts and test whether jobs succeed without them, as maintenance updates can introduce incompatibilities&lt;BR /&gt;--&amp;gt; Review Spark configurations: check the driver logs for deprecated or conflicting Spark configs that may have changed between 15.4.24 and 15.4.25&lt;/P&gt;&lt;P&gt;Code workarounds:&lt;BR /&gt;--&amp;gt; Add a warmup operation: insert a simple action such as df.limit(1).collect() at the start of your jobs, before the main processing, to establish the connection&lt;BR /&gt;--&amp;gt; Implement retry logic: wrap the initial Spark actions in try/except blocks, as socket errors can be transient during startup&lt;/P&gt;&lt;P&gt;These workarounds address the timing and initialization issues that cause the socket error between the Python workers and the JVM.&lt;/P&gt;&lt;P&gt;If jobs are still failing:&lt;BR /&gt;--&amp;gt; Check the cluster access mode: verify you are using the appropriate access mode (Shared or Single User) for your workload&lt;BR /&gt;--&amp;gt; Increase cluster resources: scale up memory if errors are intermittent under load&lt;BR /&gt;--&amp;gt; Roll back to 15.4.24: if this is blocking production, temporarily revert while investigating further&lt;BR /&gt;--&amp;gt; Contact Databricks support: since this affects all jobs after a maintenance update, there may be a regression in 15.4.25&lt;/P&gt;</description>
      <pubDate>Tue, 07 Oct 2025 14:10:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134076#M50011</guid>
      <dc:creator>HariSankar</dc:creator>
      <dc:date>2025-10-07T14:10:10Z</dc:date>
    </item>
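The two code workarounds above (warmup plus retry) can be sketched as follows. This is a minimal, Spark-free sketch: retry_on_socket_error and flaky_collect are illustrative names, and on Databricks the warmup action would be a real Spark call such as df.limit(1).collect().

```python
# Sketch of the suggested workarounds: retry logic around the first
# (warmup) action. On Databricks the `action` would be e.g.
# `lambda: df.limit(1).collect()`; a plain callable stands in here
# so the sketch runs anywhere.
import time


def retry_on_socket_error(action, attempts=3, delay_s=5.0):
    """Retry `action` on transient socket/connection errors."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except (OSError, ConnectionError):
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay_s)


# Stand-in for a flaky first Spark action: fails twice, then succeeds.
calls = {"n": 0}

def flaky_collect():
    calls["n"] += 1
    if calls["n"] >= 3:
        return "ok"
    raise ConnectionError("tried to connect to ('127.0.0.1', 45287)")

result = retry_on_socket_error(flaky_collect, attempts=3, delay_s=0)
```

The same wrapper can also guard the job's first real action, since the error is reported as transient during startup.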
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134140#M50031</link>
      <description>&lt;P&gt;Thanks for the details.&lt;/P&gt;&lt;P&gt;How can we roll back to 15.4.24?&lt;/P&gt;&lt;P&gt;We configure the cluster only in a YAML file, not the runtime version:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;job_clusters:
  - job_cluster_key: default
    new_cluster:
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_D64s_v3
      autoscale:
        min_workers: 1
        max_workers: 5
      enable_elastic_disk: true
      data_security_mode: SINGLE_USER
      spark_conf:
        spark.databricks.pip.ignoreSSL: true
        spark.sql.inMemoryColumnarStorage.compressed: true
        spark.sql.adaptive.enabled: true
        spark.sql.adaptive.coalescePartitions.enabled: true
        spark.databricks.delta.schema.autoMerge.enabled: true
        spark.databricks.adaptive.autoOptimizeShuffle.enabled: true
        spark.executor.heartbeatInterval: 300000
        spark.network.timeout: 320000
        spark.sql.codegen: true&lt;/LI-CODE&gt;&lt;P&gt;Greetings&lt;/P&gt;</description>
      <pubDate>Wed, 08 Oct 2025 05:30:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134140#M50031</guid>
      <dc:creator>timo82</dc:creator>
      <dc:date>2025-10-08T05:30:39Z</dc:date>
    </item>
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134141#M50032</link>
      <description>&lt;P&gt;&lt;SPAN&gt;spark_version: 15.4.x-scala2.12&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;to&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;spark_version: 15.4.24-scala2.12&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Correct?&lt;/P&gt;</description>
      <pubDate>Wed, 08 Oct 2025 05:33:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134141#M50032</guid>
      <dc:creator>timo82</dc:creator>
      <dc:date>2025-10-08T05:33:47Z</dc:date>
    </item>
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134144#M50033</link>
      <description>&lt;P&gt;Yes, exactly. Changing from&amp;nbsp;15.4.x-scala2.12&amp;nbsp;to&amp;nbsp;15.4.24-scala2.12&amp;nbsp;will pin your cluster to the 15.4.24 patch and prevent it from auto-upgrading to the problematic 15.4.25 release.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Oct 2025 05:50:13 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134144#M50033</guid>
      <dc:creator>HariSankar</dc:creator>
      <dc:date>2025-10-08T05:50:13Z</dc:date>
    </item>
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134231#M50057</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/121576"&gt;@HariSankar&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;Using Bundles doesn't seem to allow providing a fixed patch version:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Error: cannot update job: INVALID_PARAMETER_VALUE: Invalid spark version 15.4.24-scala2.12.
  with databricks_job.pdv-partnerbul-dbxservice-housekeeping,
  on bundle.tf&lt;/LI-CODE&gt;</description>
      <pubDate>Wed, 08 Oct 2025 13:31:48 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134231#M50057</guid>
      <dc:creator>Hansjoerg</dc:creator>
      <dc:date>2025-10-08T13:31:48Z</dc:date>
    </item>
    <item>
      <title>Re: [CANNOT_OPEN_SOCKET] Can not open socket: ["tried to connect to ('127.0.0.1', 45287)</title>
      <link>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134235#M50058</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/37802"&gt;@Hansjoerg&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Apologies for the confusion earlier. You are right: Bundles don't allow pinning to a specific patch version such as 15.4.24.&lt;/P&gt;&lt;P&gt;Your best option is to skip Bundles for now and use the regular Databricks Jobs setup (via the UI or the Jobs API), where you can specify exactly 15.4.24-scala2.12&lt;BR /&gt;and avoid the broken 15.4.25 release.&lt;/P&gt;&lt;P&gt;This will let you roll back to the working version while Databricks fixes the socket issue in 15.4.25.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Oct 2025 14:16:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/cannot-open-socket-can-not-open-socket-quot-tried-to-connect-to/m-p/134235#M50058</guid>
      <dc:creator>HariSankar</dc:creator>
      <dc:date>2025-10-08T14:16:33Z</dc:date>
    </item>
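The Jobs API route suggested above can be sketched as follows. A hedged sketch only: JOB_ID is a hypothetical placeholder, the cluster settings mirror the YAML posted earlier in this thread, and whether a given workspace still accepts "15.4.24-scala2.12" is an assumption to verify first.

```python
# Sketch: building a Jobs API update payload that pins the runtime patch,
# instead of using Bundles. JOB_ID is hypothetical; whether the workspace
# still exposes "15.4.24-scala2.12" must be checked before relying on this.
import json

JOB_ID = 123  # hypothetical job id

payload = {
    "job_id": JOB_ID,
    "new_settings": {
        "job_clusters": [
            {
                "job_cluster_key": "default",
                "new_cluster": {
                    "spark_version": "15.4.24-scala2.12",
                    "node_type_id": "Standard_D64s_v3",
                    "autoscale": {"min_workers": 1, "max_workers": 5},
                },
            }
        ]
    },
}

# This body would be POSTed to https://YOUR-WORKSPACE/api/2.1/jobs/update
# with a bearer token; only the payload construction is shown here.
body = json.dumps(payload)
```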
  </channel>
</rss>

