01-27-2022 02:29 AM
We have a Denodo big data platform hosted on Databricks. Recently we have been facing an exception with the message '[Simba][SparkJDBCDriver](500550)', which interrupts the Databricks connection after a certain time interval (usually between 12 and 15 minutes). The SQL query itself returns results fine, but when integrating with other platforms, the connection gets closed with this exception.
Any guidance you have will be much appreciated.
QUERY [VIRTUAL] [ERROR]
QUERY [JDBC WRAPPER] [ERROR]
QUERY [JDBC ROUTE] [ERROR] Received exception with message '[Simba][SparkJDBCDriver](500550);
Error fetching next row]
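For reference, below is a minimal sketch of how a JDBC connection to Databricks via the Simba Spark driver is typically set up. The hostname, HTTP path, and token are placeholders, and the SocketTimeout / RowsFetchedPerBlock properties and fetch-size hint are illustrative knobs to check against the driver documentation, not a confirmed fix:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class DatabricksConnectionSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder workspace host and HTTP path -- replace with your own values.
        String url = "jdbc:spark://dbc-example.cloud.databricks.com:443/default"
                + ";transportMode=http;ssl=1"
                + ";httpPath=sql/protocolv1/o/0/0000-000000-example000"
                + ";AuthMech=3";

        Properties props = new Properties();
        props.put("UID", "token");
        // Requires the DATABRICKS_TOKEN environment variable to hold a personal access token.
        props.put("PWD", System.getenv("DATABRICKS_TOKEN"));

        // Assumed Simba Spark JDBC driver properties for long-running fetches;
        // verify the exact names and units against your driver version's documentation.
        props.put("SocketTimeout", "900");          // seconds before a socket read times out
        props.put("RowsFetchedPerBlock", "10000");  // rows requested per fetch round trip

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(10_000); // standard JDBC hint; the driver may or may not honor it
            try (ResultSet rs = stmt.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}
```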
01-27-2022 02:48 PM
Hello, @Karthikeyan Gunasekaran! My name is Piper, and I'm one of the Databricks moderators. It's nice to meet you. Thank you for bringing your question to us. Let's give it a bit longer to give the other members of the community a chance to respond before we circle back around to this.
02-07-2022 08:27 AM
@Karthikeyan Gunasekaran - We are looking for someone to help you.
02-11-2022 02:14 PM
Hi @Karthikeyan Gunasekaran, the problem might be with the file format. Can you please check it?
02-12-2022 08:00 AM
@Karthikeyan Gunasekaran
Could you please enable logging and see if you can collect more information on this? (https://simba.wpengine.com/resources/drivers/enable-logging-odbc-driver/)
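For a JDBC connection, logging can usually be switched on through connection properties rather than the ODBC settings described on that page. A minimal sketch, assuming the Simba Spark JDBC driver accepts LogLevel and LogPath (please verify the property names and levels against your driver version's install guide):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EnableDriverLogging {
    public static void main(String[] args) throws Exception {
        // Reuse the same placeholder host/httpPath as the failing connection.
        String url = "jdbc:spark://dbc-example.cloud.databricks.com:443/default"
                + ";transportMode=http;ssl=1"
                + ";httpPath=sql/protocolv1/o/0/0000-000000-example000"
                + ";AuthMech=3";

        Properties props = new Properties();
        props.put("UID", "token");
        // Requires the DATABRICKS_TOKEN environment variable to hold a personal access token.
        props.put("PWD", System.getenv("DATABRICKS_TOKEN"));

        // Assumed Simba JDBC logging properties (LogLevel 0-6, 6 = most verbose);
        // confirm the names and the log directory against your driver's install guide.
        props.put("LogLevel", "6");
        props.put("LogPath", "/tmp/simba-jdbc-logs");

        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected: " + !conn.isClosed()
                    + " -- driver logs should land under /tmp/simba-jdbc-logs");
        }
    }
}
```

The trace-level logs around the time the fetch aborts are usually the most useful part to share here.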
03-01-2022 06:12 PM
Hi @Karthikeyan Gunasekaran,
Did you enable the extra logging and narrow down the issue? Do you still need help?
03-03-2022 06:58 AM
@Karthikeyan Gunasekaran, please let us know if you got a chance to go through our earlier comment.
08-02-2022 11:10 AM
Hi All,
We are also experiencing the same behavior:
[Simba][SimbaSparkJDBCDriver] (500550) The next rowset buffer is already marked as consumed. The fetch thread might have terminated unexpectedly. Foreground thread ID: xxxx. Background thread ID: yyyy.
It does not happen all the time, so we are really not sure what may be happening here.
Wondering if @Karthikeyan Gunasekaran was able to fix it, or at least understand it?
Thanks
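Edit: since the failure is intermittent, one stopgap worth considering is simply re-running the query when the driver aborts mid-fetch. A minimal sketch in plain JDBC (the retry count, back-off, and single-column read are illustrative; this is a workaround, not a root-cause fix):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class RetryingFetch {
    // Re-run the whole query when the driver aborts mid-fetch
    // (e.g. the Simba 500550 "fetch thread terminated" error).
    static List<String> fetchWithRetry(Connection conn, String sql, int maxAttempts)
            throws SQLException, InterruptedException {
        SQLException lastError = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                List<String> rows = new ArrayList<>();
                while (rs.next()) {
                    rows.add(rs.getString(1)); // first column only, for illustration
                }
                return rows; // fetched the full result set without a mid-fetch failure
            } catch (SQLException e) {
                lastError = e;
                Thread.sleep(5_000L * attempt); // simple linear back-off between attempts
            }
        }
        throw lastError;
    }
}
```

Checking the exception message for the 500550 code before retrying would make this more targeted, so unrelated SQL errors are not retried.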