I created a simple DLT pipeline that creates one table. When I delete the pipeline, the table is dropped as well. That's not really desired behavior. As I recall, there was always a strong distinction between data (stored in tables) and processing (Spark). It's unexpected that when I delete/recreate my job definition, all the associated data is gone as well. I'd expect that after recreating the pipeline it would pick up where it left off: existing tables would be imported rather than recreated from scratch.
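For reference, the pipeline is roughly this minimal sketch (the table name and source path are placeholders, not my actual names):

```python
import dlt

# "my_table" and the source path are illustrative placeholders.
# "spark" is provided implicitly by the DLT runtime.
@dlt.table(name="my_table", comment="Single table created by this pipeline")
def my_table():
    return spark.read.format("json").load("/path/to/source")
```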
Has anyone faced a similar issue?
Is there any workaround to make sure that tables are kept intact even when the pipeline is deleted?