In Unity Catalog, I have a connection to a SQL Server database. When I try to filter on a datetime column using a timestamp with fractional seconds, Databricks gives me this error:
Job aborted due to stage failure: com.microsoft.sqlserver.jdbc.SQLServerException: An error occurred during the current command (Done status 0). Conversion failed when converting date and/or time from character string.
On the Databricks side, I use Azure Databricks, and I have tried this both with Serverless compute (Environment version 3) and with classic compute on 16.4 LTS (includes Apache Spark 3.5.2, Scala 2.12). On the SQL Server side, I have seen it with both Enterprise Edition: Core-based Licensing (64-bit) and SQL Azure.
Here's a minimal case to replicate. In SQL Server:
CREATE TABLE test_datetimes (dt datetime);
INSERT INTO test_datetimes (dt) VALUES ('2025-07-17 14:33:21');
INSERT INTO test_datetimes (dt) VALUES ('2025-07-17 15:53:35');
INSERT INTO test_datetimes (dt) VALUES ('2025-07-17 16:36:33');
INSERT INTO test_datetimes (dt) VALUES ('2025-08-07 13:41:27');
INSERT INTO test_datetimes (dt) VALUES ('2025-08-07 15:41:51');
INSERT INTO test_datetimes (dt) VALUES ('2025-08-07 15:46:22');
INSERT INTO test_datetimes (dt) VALUES ('2025-08-14 11:07:32');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-08 15:04:08');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-08 20:57:18');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-08 21:40:42');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-08 22:24:11');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-09 10:49:18');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-09 11:18:32');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-10 13:47:41');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-29 15:59:32');
INSERT INTO test_datetimes (dt) VALUES ('2025-09-29 23:03:27');
In Databricks:
from pyspark.sql import functions as sf
from datetime import datetime
# No error here:
ts_no_fraction = datetime(2025, 9, 28, 13, 51, 37)
spark.table('my_sql_server_catalog.dbo.test_datetimes').filter(sf.col('dt') >= sf.lit(ts_no_fraction)).display()
# We get the error here:
ts_with_fraction = datetime(2025, 9, 28, 13, 51, 37, 10)
spark.table('my_sql_server_catalog.dbo.test_datetimes').filter(sf.col('dt') >= sf.lit(ts_with_fraction)).display()
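As a sketch of a workaround (assuming the sub-second precision is not actually needed for the comparison), truncating the microseconds before building the filter reduces it to the first case above, which does not error:
# Workaround sketch: drop the fractional seconds so the pushed-down literal
# has no sub-second part (same shape as ts_no_fraction above).
ts_truncated = ts_with_fraction.replace(microsecond=0)
spark.table('my_sql_server_catalog.dbo.test_datetimes').filter(sf.col('dt') >= sf.lit(ts_truncated)).display()
I would still like to understand why the fractional-seconds literal fails against the SQL Server datetime column.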