How do you properly read database files (.db) with Spark in Python after the JDBC update?

jomt
New Contributor III

I have a set of database files (.db) that I need to read into my Python notebook in Databricks. This worked fairly simply until July, when an update to the SQLite JDBC library was introduced.

Up until now, I have read the files in question with this (modified) code:

    df = spark.read.format("jdbc").options(url='<url>',
                                           dbtable='<tablename>',
                                           driver="org.sqlite.JDBC").load()
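For context, the url option is a standard SQLite JDBC connection string pointing at the file; the path below is a placeholder, not my actual location:

    url = 'jdbc:sqlite:/dbfs/path/to/file.db'  # placeholder path to the .db file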
 
However, after the update, the data being read in is completely wrong (e.g. numeric columns that should contain only non-negative numbers suddenly contain negative numbers, very different from the actual values in the files).
 
Is there a better way to read in the .db files with the new SQLite JDBC 3.42.0.0 driver?
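In the meantime, the only workaround I can think of is to bypass the JDBC driver entirely and read the file with Python's built-in sqlite3 module, then convert the result to a Spark DataFrame. This is just a sketch with placeholder path and table name, and it pulls the whole table onto the driver, so it only works for tables that fit in memory:

    import sqlite3
    import pandas as pd

    # Read the SQLite file directly with the stdlib driver (no JDBC involved).
    # Path and table name are placeholders.
    con = sqlite3.connect("/dbfs/path/to/file.db")
    pdf = pd.read_sql_query("SELECT * FROM <tablename>", con)
    con.close()

    # Convert the pandas DataFrame into a Spark DataFrame.
    df = spark.createDataFrame(pdf)

This would at least let me check whether the values in the file itself are correct and the problem is in the JDBC read path, but it doesn't scale, so I'd still like a proper JDBC-based solution.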