Hello @aonurdemir,
Could you please re-run your pipeline now and check? The issue should be mitigated. It was caused by a recent internal bug that mishandled file paths containing special characters.
You should set ignoreMissingFile...
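If the setting being referenced is Spark's missing-file handling, a minimal sketch — assuming `spark.sql.files.ignoreMissingFiles` (a real Spark SQL conf) is the one meant here:

```python
# Assumption: the truncated option above refers to Spark's
# spark.sql.files.ignoreMissingFiles conf, which makes Spark skip files
# that disappear between listing and reading instead of failing the job.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")
```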
Hello @EricCournarie,
I believe this is a JDBC driver limitation. The Databricks JDBC driver serializes complex types (STRUCT/ARRAY) to a JSON-like string but doesn’t always quote DATE/TIMESTAMP (and some characters) correctly, so rs.getObject()/rs....
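To see why an unquoted DATE breaks client-side parsing, here is a small illustration — the literal strings below are invented for illustration, not captured from the real driver:

```python
import json

# An unquoted date inside an otherwise JSON-like STRUCT string (invented
# example of the kind of output described above) vs. a properly quoted one.
bad = '{"id": 1, "d": 2024-01-05}'
good = '{"id": 1, "d": "2024-01-05"}'

def parses(s: str) -> bool:
    """Return True if s is valid JSON."""
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False

print(parses(bad), parses(good))  # -> False True
```

A common workaround is to make the server emit real JSON, e.g. wrapping the complex column in `to_json(...)` in the query, so the client only ever receives valid JSON strings.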
Hello @Akshay_Petkar,
%run can break in Jobs because of how notebook paths are resolved; use relative paths (e.g., %run ./lib/utils) and keep all targets in the same Repo.
Don’t rely on widget state. Define widgets only in the entry notebook and pass values exp...
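The "pass values explicitly" advice can be sketched without any notebook machinery; `build_path` below is a hypothetical helper standing in for whatever the %run-included notebook defines:

```python
# Hypothetical helper that would live in ./lib/utils and be brought in
# with %run ./lib/utils from the entry notebook.
def build_path(base: str, run_date: str) -> str:
    # Pure function: everything it needs arrives as an argument,
    # so it never reads widget state itself.
    return f"{base}/date={run_date}"

# In the entry notebook: read the widget ONCE, then pass the value down.
# run_date = dbutils.widgets.get("run_date")  # entry notebook only
run_date = "2024-01-05"                        # stand-in for the widget value
print(build_path("/mnt/raw/sales", run_date))  # -> /mnt/raw/sales/date=2024-01-05
```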
Hello @Dhruv-22 ,
No—mergeSchema doesn’t auto-widen an incoming INT column to a table’s BIGINT (nor does it auto-cast). mergeSchema mainly helps add new columns (and historically only a tiny set of numeric upcasts), but it won’t change an existing co...
Hello @anusha98,
You’re hitting a real limitation of Structured Streaming: non-time window functions (like row_number() over (...)) aren’t allowed on streaming DFs.
Instead, use a grouped aggregation, e.g. groupBy(...).agg(max(...)), to get the "latest value per key":
@dlt.table(name="temp_...