Hi TENDK,
That is expected behavior and does not necessarily mean something is wrong. Here is what is happening:
When you run nslookup from inside a Databricks notebook, the notebook is executing on a cluster that sits inside the Databricks-managed V...
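To see what the cluster-side resolver actually returns, here is a minimal sketch you could run in a notebook cell. The hostname shown is a placeholder — substitute the endpoint you are testing:

```python
import socket

def resolve(hostname: str) -> str:
    """Return the first IPv4 address the cluster's DNS resolver reports."""
    return socket.gethostbyname(hostname)

# Substitute your own endpoint hostname here, e.g.:
# resolve("my-endpoint.cloud.databricks.com")
print(resolve("localhost"))  # sanity check against the local resolver
```

Because this executes on the cluster, the answer reflects the DNS configuration inside the Databricks-managed network, not your laptop's.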
Hi excavator-matt,
Thanks for the follow-up and glad to hear you got Option C (PAT-based) working with Copilot and VSCode, and that you have moved to Claude Code with the official skills.
Regarding the issues with Options A and B:
Option A (OAuth U2M...
Hi @Anish_2,
Looking at your pipeline DAG, the issue is that two separate APPLY CHANGES INTO flows are both targeting the same silver table (ag_vlc_hist): one from ag_swt_vlchistory_historical and one from ag_swt_vlchistory. When you define mult...
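One common way around this is to union the two CDC sources into a single view and run a single APPLY CHANGES into the target. The sketch below is only illustrative — the key and sequencing columns are assumptions you would replace with your actual schema:

```python
import dlt
from pyspark.sql.functions import col

# Sketch: combine both sources, then apply changes once into the silver table.
@dlt.view()
def vlchistory_combined():
    hist = dlt.read_stream("ag_swt_vlchistory_historical")
    live = dlt.read_stream("ag_swt_vlchistory")
    return hist.unionByName(live)

dlt.create_streaming_table("ag_vlc_hist")

dlt.apply_changes(
    target="ag_vlc_hist",
    source="vlchistory_combined",
    keys=["id"],                  # assumed primary key column
    sequence_by=col("event_ts"),  # assumed ordering column
)
```

This keeps a single flow writing to ag_vlc_hist, which avoids the conflicting-flows problem entirely.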
Hi @antgei ,
Thanks for sharing your experience and the workaround. You raise a valid point -- the platform should ensure task files are fully synced before attempting execution, regardless of API rate limiting on the backend. When a job has hundreds...
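Until the platform handles this, one defensive pattern is to poll for the files with exponential backoff before kicking off the task. This is a generic sketch, not a Databricks API — the `check` callable is a placeholder you would wire up to whatever verifies your files are present (e.g. a Workspace list call):

```python
import time

def wait_for_files(check, timeout=60.0, initial_delay=1.0, backoff=2.0):
    """Poll check() with exponential backoff until it returns True or the
    timeout expires. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay *= backoff
    return check()  # one last attempt at the deadline
```

The backoff also plays nicely with rate-limited backends, since retries get progressively less frequent.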
Hi IM_01,
You can set pipelines.reset.allowed as a table property directly in your pipeline definition. The approach depends on whether you are using Python or SQL:
Python:
@dlt.table(
    table_properties={"pipelines.reset.allowed": "true"}
)
def my...
def my...