Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Databricks report error: unexpected end of stream, read 0 bytes from 4 (socket was closed by server)

SolaireOfAstora
New Contributor

Has anyone encountered this error and knows how to resolve it?

"Unexpected end of stream, read 0 bytes from 4 (socket was closed by server)."

This occurs in Databricks while generating reports.

I've already increased wait_timeout to 28,800, and both net_read_timeout and net_write_timeout to 31,536,000 (the maximum values), on the database side (MySQL).

Are there any other configurations I should modify, either on the database side or in Databricks?

1 REPLY

mark_ott
Databricks Employee

Yes, other Databricks users have hit the "Unexpected end of stream, read 0 bytes from 4 (socket was closed by server)" error when generating reports against MySQL. Setting the major MySQL timeout parameters to their maximums is a solid first step, but there are additional settings and potential causes to consider.

Additional Settings and Fixes

  • Databricks SQL Statement Timeout: Databricks has its own SQL timeout parameter (STATEMENT_TIMEOUT). The default is 172,800 seconds (2 days), and it can be set at the workspace or session level. If your queries are long-running, increasing this value may help. Adjust it via:

    SET STATEMENT_TIMEOUT = [your_value];

    Or through the admin UI under the SQL warehouse configuration settings.

  • JDBC Connection Socket Timeout: If connecting via JDBC, the driver and many intermediary layers (Databricks SQL warehouses, Privacera, etc.) expose a socket timeout setting. Make sure the JDBC connection from Databricks is configured with a socket timeout long enough that the client doesn't disconnect during long queries.
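    To illustrate, here is a minimal sketch of building a MySQL JDBC URL with explicit timeouts for a Spark JDBC read. The host, database, and table names are placeholders; socketTimeout and connectTimeout are MySQL Connector/J properties, specified in milliseconds.

    ```python
    # Sketch: MySQL JDBC URL with explicit client-side timeouts.
    # socketTimeout=0 means no client-side socket timeout; connectTimeout
    # bounds only the initial connection attempt. Values in milliseconds.

    def mysql_jdbc_url(host, port, database,
                       socket_timeout_ms=0, connect_timeout_ms=60000):
        return (
            f"jdbc:mysql://{host}:{port}/{database}"
            f"?socketTimeout={socket_timeout_ms}"
            f"&connectTimeout={connect_timeout_ms}"
        )

    url = mysql_jdbc_url("mysql.example.com", 3306, "reports")

    # In a Databricks notebook this URL would be used roughly like:
    # df = (spark.read.format("jdbc")
    #       .option("url", url)
    #       .option("dbtable", "report_source")
    #       .option("user", "...").option("password", "...")
    #       .load())
    ```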

  • Network/Firewall Issues: Sometimes, firewalls or network interruptions close sockets unexpectedly. Check for intermediate network devices between Databricks and MySQL that might drop idle or long-lived connections. If possible, review firewall logs and consult networking teams.

  • max_allowed_packet: Though less common, if your reports transmit large result sets, increasing MySQL's max_allowed_packet can avoid disconnections due to packets exceeding the limit. Try setting it higher than the default.
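    For example, on the MySQL side you might check and raise the value like this (256 MB shown as an illustrative value; persist it in my.cnf so it survives restarts):

    ```sql
    -- Check the current value (bytes)
    SHOW VARIABLES LIKE 'max_allowed_packet';

    -- Raise it for the running server (example: 256 MB)
    SET GLOBAL max_allowed_packet = 268435456;
    ```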

  • MySQL Version and Drivers: In rare cases, certain driver/database version combinations cause compatibility issues (especially with SSL or newer MySQL/MariaDB versions). Trying an alternative JDBC driver or verifying compatibility can help.

Diagnostic Tips

  • Enable ODBC/JDBC trace logging to pinpoint the exact call and timing of the error.

  • Test with simpler queries or smaller result sets to isolate whether it's query complexity, size, or timing that triggers termination.

  • If running inside a Databricks notebook, check for resource exhaustion (memory, compute quotas) that might lead to abrupt process termination on either side.
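For the trace-logging tip above, one option is to turn on the driver's own query tracing via URL properties. A minimal sketch, assuming the MySQL Connector/J driver (profileSQL and logger are Connector/J properties; verify against the driver version bundled with your cluster):

```python
# Sketch: append Connector/J logging properties to an existing JDBC URL
# so the driver traces each statement and its timing to the driver log.

def with_trace_logging(jdbc_url):
    sep = "&" if "?" in jdbc_url else "?"
    return jdbc_url + sep + "profileSQL=true&logger=com.mysql.cj.log.StandardLogger"

traced = with_trace_logging("jdbc:mysql://mysql.example.com:3306/reports")
```

The log output then shows which statement was in flight when the socket closed, which distinguishes a server-side kill from a network drop.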

Summary Table

Setting                                             | Where to Configure          | Suggested Value/Action
wait_timeout / net_write_timeout / net_read_timeout | MySQL server                | Maximum (already set)
STATEMENT_TIMEOUT                                   | Databricks SQL              | Up to 172,800 seconds
JDBC socket timeout                                 | JDBC connection / Privacera | Desired duration in seconds
max_allowed_packet                                  | MySQL server                | Increase beyond default
Network/firewall settings                           | Network/firewall config     | Review and increase idle/max timeouts
Driver version                                      | Databricks connection       | Test alternate drivers

Applying these additional configurations and checking the network path should help resolve, or at least further isolate, the socket-closed error in Databricks. If the problem persists after these adjustments, reviewing query design or splitting very large result sets may be required.
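If you do end up splitting large result sets, a minimal sketch of one common approach: have Spark's JDBC reader partition the query so no single socket carries the whole report. partitionColumn, lowerBound, upperBound, and numPartitions are standard Spark JDBC options; the table and column names below are placeholders.

```python
# Sketch: options for a partitioned Spark JDBC read. Spark issues one query
# per partition, bounded on partition_column, instead of one huge result set.

def partitioned_read_options(url, table, partition_column,
                             lower, upper, num_partitions):
    return {
        "url": url,
        "dbtable": table,
        "partitionColumn": partition_column,  # must be numeric, date, or timestamp
        "lowerBound": str(lower),
        "upperBound": str(upper),
        "numPartitions": str(num_partitions),
    }

opts = partitioned_read_options(
    "jdbc:mysql://mysql.example.com:3306/reports",
    "report_source", "id", 1, 10_000_000, 8,
)

# In Databricks: spark.read.format("jdbc").options(**opts).load()
```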