Hello all!
I couldn't find anything definitive related to this issue, so I hope I'm not duplicating another topic :).
I have imported an R repository that normally runs on another machine and uses an ODBC driver to issue Spark SQL commands to a compute (let's call it the main compute). No issues there; everything works flawlessly.
Now I would like to turn that repo into a Databricks-hosted Shiny app, so we've created another compute to host it. I tried to use the same ODBC connection string to send SQL from the app's compute to the main compute, but it fails (both computes are in the same workspace). Rewriting this code is currently not an option.
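For context, the working connection looks roughly like this (a minimal sketch; the driver path, host, HTTP path, and token variable are hypothetical placeholders, not my real values):

```r
library(DBI)
library(odbc)

# Sketch of the connection that works on the original machine.
# Driver path, Host, and HTTPPath below are hypothetical placeholders.
con <- DBI::dbConnect(
  odbc::odbc(),
  .connection_string = paste0(
    "Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so;",
    "Host=adb-1234567890123456.7.azuredatabricks.net;",
    "Port=443;",
    "SSL=1;",
    "ThriftTransport=2;",  # HTTP transport
    "AuthMech=3;",         # username/password auth; Databricks expects UID=token
    "HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcde123;",
    "UID=token;",
    "PWD=", Sys.getenv("DATABRICKS_TOKEN")  # personal access token from the environment
  )
)

DBI::dbGetQuery(con, "SELECT 1")  # smoke test
```

Running the same code from the app's compute is what produces the error below.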
The error I get (both from R and from isql) is:
Error from ThriftHiveClient: No more data to read
With the isql command I sometimes also got SASL errors.
I tried many things:
- Using both the preinstalled drivers and one installed manually from the download site
- Connecting to localhost
- Experimenting with various connection string parameters
- Customising the odbc.ini and odbcinst.ini files (a sketch of what I tried is below)
- Using the app's compute as the target of the ODBC connection
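For reference, my odbc.ini / odbcinst.ini customisations looked roughly like this (a sketch only; the driver location, host, and HTTP path are placeholders, not my exact values):

```ini
# /etc/odbcinst.ini -- registers the Simba Spark driver (path is an assumption)
[Simba Spark ODBC Driver]
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so

# /etc/odbc.ini -- DSN pointing at the main compute (host/path are placeholders)
[MainCompute]
Driver=Simba Spark ODBC Driver
Host=adb-1234567890123456.7.azuredatabricks.net
Port=443
SSL=1
ThriftTransport=2
AuthMech=3
HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcde123
UID=token
```

I then tested it with something like `isql -v MainCompute token <my-token>`, which is where the "No more data to read" and SASL errors showed up.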
In theory, such a scenario should work; at worst I would expect slow but functional communication between the clusters. My admin took a look at the networking side but couldn't find anything problematic (although this is a new scenario for him).
Is there anything additional that is required for such a scenario to work? I'd appreciate any input! Thank you!