Hey,
I'm trying to create a DLT pipeline that reads from a JDBC source, and the code I'm using looks something like this in Python:
import dlt

@dlt.table
def table_name():
    driver = 'oracle.jdbc.driver.OracleDriver'
    url = '...'
    query = 'SELECT ... FROM ...'
    # username and password are defined elsewhere in the notebook
    df = spark.read.format("jdbc")\
        .option("driver", driver)\
        .option("url", url)\
        .option("user", username)\
        .option("password", password)\
        .option("query", query)\
        .load()
    return df
This works perfectly fine outside of the DLT pipeline, so I know for sure that the "df" DataFrame is created successfully.
In the DLT pipeline, however, the run fails at the "Setting up tables" stage, and the event logs show: "Failed to resolve flow: 'table_name'".
As a sanity check, I also created a basic DLT pipeline that just reads from an existing Delta table (one that was populated from the same JDBC source, but outside of the DLT pipeline), and that works, so I know my environment setup is fine. It looked roughly like the sketch below.
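For reference, that sanity-check pipeline was essentially this (the function and table names here are just placeholders, not my real ones):

import dlt

# Minimal DLT table that only reads an existing Delta table;
# "my_schema.jdbc_staging_table" is a placeholder name.
@dlt.table
def basic_table():
    return spark.read.table("my_schema.jdbc_staging_table")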
Can anyone pinpoint what is going wrong here?