Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to force Delta Live Tables legacy execution mode?

Erik_L
Contributor II

We've been running Delta Live Tables for some time with Unity Catalog and it's as slow as a sloth on a Hawaiian vacation.

Anyway, DLT had three consecutive failures (due to the data source being unreliable) and then the logs printed: "MaxRetryThreshold reached. Flow <my_flow> fallback to legacy execution mode. No user action required." After this, the pipelines ran like 3-4x faster. How can I force Delta Live Tables to use legacy execution mode?

1 REPLY

Kaniz
Community Manager

Hi @Erik_L

  • Legacy execution mode in DLT is a backward-compatibility mode that can be useful in certain scenarios: it runs flows with the older execution behaviour, which can improve performance in specific cases.
  • If your pipelines ran significantly faster after falling back to legacy execution mode, it's worth exploring how to enforce that mode.
  • DLT processes new data as it arrives in the data sources to keep tables throughout the pipeline fresh; the execution mode is independent of the type of table being computed.
  • Both materialized views and streaming tables can be updated in either execution mode.
  • To force DLT to use legacy execution mode, work through the following pipeline settings (a configuration sketch follows this list):
    • Cluster Policy: Select a cluster policy for your DLT pipeline.
    • Source Code Libraries: Configure any necessary source code libraries.
    • Storage Location: Specify a storage location for your pipeline output tables.
    • Target Schema: Define a target schema for the output tables.
    • Compute Settings: Configure your compute settings, including instance types.
    • Autoscaling: Consider using autoscaling to increase efficiency and reduce resource usage.
    • Delay Compute Shutdown: Optionally delay compute shutdown.
    • Single Node Cluster: Create a single-node cluster if needed.
    • Note that DLT doesn’t directly support specifying a “catalog” in these settings. However, you can import data to a target schema in your Hive metastore and then “upgrade” that schema to Unity Catalog (see the SYNC sketch below).
    • The originating pipeline, still pointing at the old schema, can then pipe data into the appropriate place in Unity Catalog.
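
Since the settings in the list above map onto fields of the pipeline settings JSON, here is a minimal sketch of that payload as a Python dict. The field names (storage, target, libraries, configuration, clusters, development) follow the standard pipeline spec, but the pipelines.legacyExecution.enabled key is purely hypothetical: no documented configuration flag pins legacy execution mode, so treat it only as a placeholder for where such a toggle would live.

```python
# Minimal sketch of DLT pipeline settings as a Python dict mirroring the
# pipeline JSON spec. All names and paths are placeholders.
import json

pipeline_settings = {
    "name": "my_pipeline",
    "storage": "dbfs:/pipelines/my_pipeline",  # storage location for output tables
    "target": "my_target_schema",              # target schema for output tables
    "libraries": [
        {"notebook": {"path": "/Repos/me/my_pipeline/transforms"}}  # source code
    ],
    "configuration": {
        # HYPOTHETICAL key -- DLT does not document a flag that forces
        # legacy execution mode; this only illustrates where a toggle would go.
        "pipelines.legacyExecution.enabled": "true",
    },
    "clusters": [
        {
            "label": "default",
            # Enhanced autoscaling between 1 and 4 workers.
            "autoscale": {"min_workers": 1, "max_workers": 4, "mode": "ENHANCED"},
        }
    ],
    "development": True,  # development mode delays compute shutdown between updates
    "continuous": False,  # triggered (batch) updates rather than continuous
}

print(json.dumps(pipeline_settings, indent=2))
```

You can paste the printed JSON into the pipeline settings JSON editor in the UI, or pass it to the Pipelines REST API when creating or editing the pipeline.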
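
For the Hive-metastore-to-Unity-Catalog “upgrade” mentioned in the last two bullets, Databricks provides the SYNC command. This is a sketch assuming placeholder catalog/schema names and external tables that are eligible for SYNC; run the DRY RUN variant first to preview what would be upgraded.

```python
# Sketch: upgrade a Hive metastore schema to Unity Catalog with SYNC.
# "main" and "my_target_schema" are placeholder names.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Preview which tables are eligible for the upgrade (no changes made).
spark.sql(
    "SYNC SCHEMA main.my_target_schema "
    "FROM hive_metastore.my_target_schema DRY RUN"
).show(truncate=False)

# Perform the upgrade; each eligible external table is registered in UC.
spark.sql(
    "SYNC SCHEMA main.my_target_schema "
    "FROM hive_metastore.my_target_schema"
).show(truncate=False)
```

After the upgrade, the originating pipeline can keep writing to the Hive metastore schema while consumers read the upgraded tables from the Unity Catalog schema, matching the hand-off described above.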

If you have any further questions or need additional assistance, feel free to ask! 😊
