When the values in a table are very large (millions or billions) or very small (e.g. 1e-15), the SQLite JDBC driver may not import them accurately. To work around this, pass `customSchema` in the options and declare those columns as `DECIMAL` with a high precision (or a high scale when the values are very small).
```python
df = (
    spark.read.format("jdbc")
    .options(
        url="<url>",
        dbtable="<tablename>",
        driver="org.sqlite.JDBC",
        customSchema="<col1> DECIMAL(38, 0), <col2> DECIMAL(38, 0), <col3> DECIMAL(38, 0)",
    )
    .load()
)
```
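To see why a plain double is not enough: IEEE 754 doubles carry only 53 bits of mantissa, so integers above 2**53 silently lose precision, while a `Decimal` with sufficient precision keeps the exact value. A minimal illustration in plain Python (independent of Spark, just showing the underlying precision issue):

```python
from decimal import Decimal, getcontext

# Allow up to 38 significant digits, matching DECIMAL(38, 0) above.
getcontext().prec = 38

big = 9223372036854775807  # max signed 64-bit integer, well above 2**53

# Converting to a double rounds the value, so the round-trip no longer matches.
assert float(big) != big

# A Decimal with enough precision preserves the exact integer.
assert Decimal(big) == big
```

For very small values, the analogous fix is a high scale, e.g. `DECIMAL(38, 18)` instead of `DECIMAL(38, 0)`, so the fractional digits are kept rather than truncated.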