08-09-2023 06:00 AM
I have a set of database files (.db) that I need to read into my Python notebook in Databricks. I managed to do this fairly simply until July, when an update to the SQLite JDBC library was introduced.
Up until now, I have read the files in question with this (slightly modified) code:
```python
# Read the SQLite file through Spark's JDBC data source.
df = (spark.read.format("jdbc")
      .options(url="<url>", dbtable="<tablename>", driver="org.sqlite.JDBC")
      .load())
```
However, after the update the data being read in is completely wrong: for example, numeric columns that should only contain non-negative numbers suddenly contain negative values very different from the actual values in the files.
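The pattern (large negative values where only non-negative ones should appear) makes me suspect an integer overflow or a changed column-type mapping in the new driver. As a stopgap, I am considering pinning the previous driver release (org.xerial:sqlite-jdbc 3.41.2.2) as a cluster library. Below is a rough sketch using the Libraries API; the workspace URL, token, and cluster ID are placeholders, and I am assuming the pinned jar would actually take precedence over the runtime's bundled driver:

```python
import requests

# Placeholders: fill in the workspace URL, a personal access token, and the cluster ID.
host = "https://<workspace-url>"
token = "<personal-access-token>"

resp = requests.post(
    f"{host}/api/2.0/libraries/install",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "cluster_id": "<cluster-id>",
        # Pin the last sqlite-jdbc release before the 3.42.0.0 upgrade.
        "libraries": [{"maven": {"coordinates": "org.xerial:sqlite-jdbc:3.41.2.2"}}],
    },
)
resp.raise_for_status()
```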
Is there a better way to read the .db files now that SQLite JDBC has been upgraded to 3.42.0.0?
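In case it helps, one alternative I have been considering is to bypass JDBC entirely and read each file with Python's built-in sqlite3 module plus pandas, then convert to a Spark DataFrame. A minimal sketch (the path and table name are placeholders, and it assumes the .db file is reachable on the driver's local filesystem, e.g. through the /dbfs FUSE mount):

```python
import sqlite3
import pandas as pd

# Placeholders, matching the JDBC snippet above.
conn = sqlite3.connect("/dbfs/<path-to-file>.db")
try:
    pdf = pd.read_sql_query("SELECT * FROM <tablename>", conn)
finally:
    conn.close()

# `spark` is the SparkSession that Databricks notebooks provide by default.
df = spark.createDataFrame(pdf)
```

This loads everything through the driver node, though, so it would only work for files that fit in driver memory.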