01-14-2026 06:27 AM - edited 01-14-2026 06:31 AM
I created an external connection to an Oracle Database 12c Standard Edition Release 12.1.0.2.0.
The connection works; I have successfully run many queries with different filters and joins. But I could not find a way to make any date filter pushdown work. When I run a simple query with a date filter, I get this error:
select dt_updated
from external_conn.my_schema.my_table
where dt_updated > cast('2026-01-01' as date)

ORA-17041: Missing IN or OUT parameter at index: 1
https://docs.oracle.com/error-help/db/ora-17041/
The "dt_updated" column is a timestamp field, and Databricks recognizes it as such in the catalog.
I tried many different syntaxes:
- where dt_updated > '2024-01-01'
- where dt_updated > DATE '2024-01-01'
- where dt_updated > TIMESTAMP('2024-01-01 00:00:00')
- where dt_updated > TO_DATE('2024-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
- where dt_updated > TIMESTAMP '2024-01-01 00:00:00'
- where dt_updated > CAST('2024-01-01 00:00:00' AS TIMESTAMP)
- where dt_updated > TO_DATE('2024-01-01', 'YYYY-MM-DD')
But all of them compile to the same external engine query, which I can see when I run EXPLAIN FORMATTED:
External engine query:
SELECT "DT_UPDATED"
FROM "MY_SCHEMA"."MY_TABLE"
WHERE ("DT_UPDATED" IS NOT NULL)
AND ("DT_UPDATED" > 2026-01-01T00:00:00Z)

It looks like the engine is not rendering the date literal correctly =/
I tried "datediff(dt_updated, cast('2026-01-01' as date)) > 0" and the query ran, but the performance is terrible: the table is very large and this function does not push down to Oracle.
Am I doing something wrong?
Here are the Oracle federated query docs:
https://docs.databricks.com/aws/en/query-federation/oracle
01-14-2026 08:37 AM
I'm checking on this internally. Will loop back.
01-14-2026 08:52 AM
Hi @diogofreitaspe ,
Could you try one more option? Something like this:
WHERE CAST(dt_updated AS DATE) > DATE '2026-01-01'
01-15-2026 04:13 AM
Same error, same external engine query =/
But you gave me an idea that works a little better, because the cast is pushed down to Oracle:
WHERE cast(dt_updated as string) > '2026-01-01'
01-14-2026 11:48 AM
Can you spin up a DBR 18+ all-purpose compute cluster with this Spark config set to true:
spark.databricks.connector.oracle.properlyCompileJava8Values true
and then re-run your tests?
If this works, then I think it might be a known issue, and we can find a path forward for DBSQL/serverless compute (where you won't be able to set this config yourself).
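For quick notebook testing, the same config could also be tried at session scope. This is only a sketch under the assumption that the connector honors the setting at session level, which I have not verified; the cluster-level Spark config described above is the safer path:

```python
# Hypothetical session-scope equivalent of the cluster Spark config above.
# Whether the Oracle connector reads this at session scope is an assumption;
# if it does not take effect, set it in the cluster's Spark config instead.
# (`spark` is the SparkSession that Databricks notebooks provide as a global.)
spark.conf.set(
    "spark.databricks.connector.oracle.properlyCompileJava8Values",
    "true",
)
```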
01-15-2026 04:39 AM - edited 01-15-2026 04:46 AM
Interesting... It worked perfectly with 18.0 Beta and 16.4 LTS clusters, but I can only use this in notebooks, not in SQL queries and dashboards, which is very limiting for my use case.
I tried all the SQL Warehouses: Classic/Serverless/Pro and Preview/Current and none worked.
I also enabled all the preview features and tried to run the raw query. It worked, but it is very inconvenient to ask my team to write all this code in every query:
select *
from remote_query(
my_conn,
service_name => 'my_svc',
query =>
'''
SELECT *
FROM my_schema.my_table
WHERE dt_updated > DATE\'2026-01-01\'
'''
);
03-08-2026 05:54 PM
Hi @diogofreitaspe,
This is a known behavior with how the Lakehouse Federation Oracle connector serializes date and timestamp literals during predicate pushdown. When Databricks pushes the filter down to Oracle, the date value gets rendered as a bare ISO 8601 literal (e.g. 2026-01-01T00:00:00Z) rather than being properly bound as a JDBC parameter or wrapped in Oracle-compatible date syntax. Oracle does not recognize that format, which triggers the ORA-17041 error.
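To make the failure mode concrete, here is a small illustrative sketch. The function names are hypothetical and this is not the connector's actual code; it only mimics the two literal-rendering strategies described above:

```python
from datetime import date


def render_bare_iso(column: str, d: date) -> str:
    # Mimics the broken pushdown: the date is emitted as a bare ISO-8601
    # token, which Oracle cannot parse as a DATE (hence ORA-17041).
    return f'"{column}" > {d.isoformat()}T00:00:00Z'


def render_oracle_literal(column: str, d: date) -> str:
    # Mimics a correct pushdown: the value is wrapped in an
    # Oracle-compatible TO_DATE expression.
    return f"\"{column}\" > TO_DATE('{d.isoformat()}', 'YYYY-MM-DD')"


print(render_bare_iso("DT_UPDATED", date(2026, 1, 1)))
# "DT_UPDATED" > 2026-01-01T00:00:00Z   <- Oracle rejects this
print(render_oracle_literal("DT_UPDATED", date(2026, 1, 1)))
# "DT_UPDATED" > TO_DATE('2026-01-01', 'YYYY-MM-DD')
```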
RECOMMENDED FIX (DBR 18+)
If you are on Databricks Runtime 18.0 or later, set this Spark configuration on your cluster or SQL warehouse:
spark.databricks.connector.oracle.properlyCompileJava8Values true
This config corrects how Java 8 date/time values are compiled into the pushed-down query, so Oracle receives a properly formatted and bound date parameter. After setting this, re-run your original query:
SELECT dt_updated
FROM external_conn.my_schema.my_table
WHERE dt_updated > CAST('2026-01-01' AS DATE)
If you are running on a SQL warehouse (serverless), you may not be able to set arbitrary Spark configs. In that case, try the workarounds below while the fix propagates to the serverless runtime.
WORKAROUNDS FOR SERVERLESS OR OLDER RUNTIMES
1. Use the ANSI DATE literal syntax:
SELECT dt_updated FROM external_conn.my_schema.my_table WHERE CAST(dt_updated AS DATE) > DATE '2026-01-01'
The DATE 'YYYY-MM-DD' literal is part of the ANSI SQL standard and may be pushed down differently than CAST expressions.
2. Use a pass-through string comparison (if the column permits it):
SELECT dt_updated FROM external_conn.my_schema.my_table WHERE dt_updated > '2026-01-01'
This sometimes avoids the date-specific pushdown path entirely, though behavior depends on Oracle's implicit type conversion settings.
3. Disable filter pushdown temporarily to confirm the root cause:
SET spark.databricks.connector.oracle.pushDownFilter = false;
SELECT dt_updated
FROM external_conn.my_schema.my_table
WHERE dt_updated > CAST('2026-01-01' AS DATE)
With pushdown disabled, Databricks will pull all rows and apply the filter locally. This is not ideal for large tables, but it confirms whether the issue is specifically in the pushdown layer.
ADDITIONAL NOTES
- Oracle DATE columns are mapped to TimestampType (or TimestampNTZType depending on your spark.sql.timestampType setting) in Databricks. This means your filter comparisons should use timestamp-compatible syntax.
- The supported pushdown operations for Oracle federation include filters, projections, limit, aggregates, offset, cast, and string operations. Date filter pushdown is supported, but the literal serialization issue you are hitting is the specific problem addressed by the Spark config above.
- If you are on Oracle 11.2.0.3.0 or later and see ORA-01882 timezone errors as well, check the spark.databricks.connector.oracle.timezoneAsRegion config documented here: https://docs.databricks.com/en/query-federation/oracle.html
REFERENCES
- Lakehouse Federation for Oracle: https://docs.databricks.com/en/query-federation/oracle.html
- Lakehouse Federation overview: https://docs.databricks.com/en/query-federation/index.html
* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review each draft for obvious issues and to monitor system reliability, and I update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand-new features.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.