Hi everyone,
I'm trying to read data from an Oracle database into Databricks using JDBC with timestamp-based partitioning. However, it seems that the partitioning doesn't work as expected when I specify lowerBound and upperBound using timestamp strings (for Azure SQL this works fine).
Here's a simplified version of my code:
df = (
    spark.read.format("jdbc")
    .option("url", jdbcUrl)
    .option("dbtable", query)
    .options(**connectionProperties)
    .option("partitionColumn", "TIJDSTEMPEL")
    .option("lowerBound", "2025-07-04 17:00:00")
    .option("upperBound", "2025-07-14 16:00:00")
    .option("numPartitions", 4)
    .load()
)
The TIJDSTEMPEL column is a TIMESTAMP in Oracle. Without partitioning, the query runs fine. But as soon as I add .option("partitionColumn", ...) with the bounds, it fails with: ORA-01861: literal does not match format string
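One workaround I've seen suggested (but haven't confirmed yet) is to use Spark's sessionInitStatement JDBC option to set the Oracle session's NLS formats, assuming the ORA-01861 comes from Oracle parsing the generated bound literals with a session NLS_TIMESTAMP_FORMAT that doesn't match Spark's "YYYY-MM-DD HH24:MI:SS" style. A small sketch, with a helper function name of my own invention:

```python
def oracle_partition_options(lower, upper, num_partitions=4):
    """Build the partitioning-related JDBC options as a dict.

    sessionInitStatement is executed by Spark once per JDBC connection,
    before any query runs, so the session should parse the lowerBound /
    upperBound literals with the format set here. (Untested sketch.)
    """
    return {
        "partitionColumn": "TIJDSTEMPEL",
        "lowerBound": lower,
        "upperBound": upper,
        "numPartitions": str(num_partitions),
        "sessionInitStatement": (
            "ALTER SESSION SET "
            "NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS' "
            "NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS'"
        ),
    }

# Intended usage (same reader as above, options merged in):
# df = (
#     spark.read.format("jdbc")
#     .option("url", jdbcUrl)
#     .option("dbtable", query)
#     .options(**connectionProperties)
#     .options(**oracle_partition_options("2025-07-04 17:00:00",
#                                         "2025-07-14 16:00:00"))
#     .load()
# )
```

No idea yet if this is the right fix for Oracle; happy to hear if anyone has this working in production.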
Has anyone successfully loaded Oracle data using a TIMESTAMP column for partitioning? Any advice or working examples would be appreciated!
Thanks in advance!