Dear Databricks and community,
I have been struggling with a bug that shows up when using Go (golang) together with the Databricks ODBC driver.
It turns out that `SQLDescribeColW` consistently returns 256 as the length of `string` columns. However, in Spark, strings can be much longer than that.
This is normally not an issue unless the ODBC library actively uses this information when allocating and reading data. That is the case for alexbrainman/odbc, which allocates a 256-byte buffer per string column and segfaults when it reads cells whose strings are longer than 256 bytes.
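For anyone who wants to reproduce this, here is a minimal sketch. The DSN name `Databricks` is a placeholder for whatever ODBC data source you have configured; with the unpatched library, reading the 1000-character string crashes the process instead of returning an error:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/alexbrainman/odbc" // registers the "odbc" driver
)

func main() {
	// "Databricks" is a placeholder DSN pointing at the Spark/Databricks ODBC driver.
	db, err := sql.Open("odbc", "DSN=Databricks")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// SQLDescribeColW reports a length of 256 for this string column, so the
	// library sizes its buffer accordingly. The actual value is 1000 bytes,
	// which overruns the buffer and segfaults instead of returning an error.
	var s string
	if err := db.QueryRow("SELECT repeat('x', 1000)").Scan(&s); err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(s))
}
```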
Both the issue and the fix are described in detail here: https://github.com/alexbrainman/odbc/issues/165
As far as I know, the driver is developed by Magnitude (https://www.magnitude.com/drivers/spark-jdbc-odbc), so I will see if there is a way for me to let them know about this bug. Still, since the driver is distributed by Databricks, I felt a post/bug report would be beneficial in this repository as well.
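Separately from the library fix above, there may also be a driver-side mitigation. If I read the Simba/Magnitude driver documentation correctly, the driver exposes a `DefaultStringColumnLength` setting that controls the length it reports for string columns, so raising it should at least make the reported length cover longer data. A hypothetical odbc.ini entry (driver path and host are placeholders):

```ini
[Databricks]
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=example.cloud.databricks.com
Port=443
; Assumed setting from the Simba driver docs; the default is reportedly 255.
DefaultStringColumnLength=65535
```

This only papers over the symptom, though; the robust fix is for the library to handle values longer than the described column length gracefully.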
Thank you!
Håkon