Monday - last edited Monday
Hello guys,
I'm building an ETL pipeline and need to access the HANA data lake file system. To do that, I need the sap-hdlfs library in the compute environment; the library is available in the Maven repository.
My job will have multiple notebook tasks and an ETL pipeline task. From what I've researched, the notebook tasks will use the same compute as the job, but the ETL pipeline will have its own compute, and in the UI I cannot see where to add a library to it.
Could anyone confirm whether my understanding is correct, and how to add a library to the ETL pipeline compute?
Thanks in advance.
yesterday
Hey @anhnnguyen, you can add libraries a few ways when building a notebook-based ETL pipeline:
The best-practice, scalable approach for adding libraries across multiple workloads or clusters is to use policy-scoped libraries: any compute that uses the cluster policy you define will install the policy's dependencies on the cluster at runtime (see the sketch at the end of this post). Check this: Policy-scoped libraries
If you only need to add libraries to a single workload or cluster, use compute-scoped libraries.
Check this: Compute-scoped Libraries
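For reference, a policy with the library attached would look roughly like the request body below for the Cluster Policies API (assuming the libraries field described in the policy-scoped libraries docs). It's only a sketch: the policy name and definition are placeholders, and the Maven coordinates are taken from the other reply in this thread, so check them against the SAP release you're using.
{
  "name": "sap-hdlfs-policy",
  "definition": "{\"spark_version\": {\"type\": \"unlimited\"}}",
  "libraries": [
    {
      "maven": {
        "coordinates": "com.sap.hana.hadoop:sap-hdlfs:<version>"
      }
    }
  ]
}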
Monday
Hi @anhnnguyen ,
Unfortunately, Scala and Java libraries are not supported in Lakeflow Declarative Pipelines (ETL pipelines). So if you want to install Maven dependencies, you need to use a regular job instead (a rough job-spec sketch follows below).
Manage Python dependencies for pipelines | Databricks on AWS
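For illustration, attaching the Maven library to a regular job task would look something like the Jobs API spec below. It's only a sketch: the job name, task key, notebook path, node type, and cluster key are hypothetical placeholders, and the coordinates and version should be verified against the SAP release you use.
{
  "name": "hdlfs-etl-job",
  "job_clusters": [
    {
      "job_cluster_key": "etl_cluster",
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "<node-type>",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "load_from_hdlfs",
      "notebook_task": {
        "notebook_path": "/Workspace/etl/load_from_hdlfs"
      },
      "job_cluster_key": "etl_cluster",
      "libraries": [
        {
          "maven": {
            "coordinates": "com.sap.hana.hadoop:sap-hdlfs:<version>"
          }
        }
      ]
    }
  ]
}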
yesterday
DLT doesn’t have a UI for library installation, but you can:
Use libraries configuration in the pipeline JSON or YAML spec:
{
  "libraries": [
    {
      "maven": {
        "coordinates": "com.sap.hana.hadoop:sap-hdlfs:<version>"
      }
    }
  ]
}
Or, if you're using Python, add the dependency in your requirements.txt and reference it in the pipeline settings.
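If you go the Python route, the documented pattern (see the "Manage Python dependencies for pipelines" doc shared earlier in the thread) is a %pip cell at the top of one of the pipeline's source notebooks rather than a cluster library. A minimal sketch, with a hypothetical workspace path for the requirements file:
# First cell of a pipeline source notebook; %pip cells run before the rest of the pipeline code.
# The path below is a placeholder; point it at wherever your requirements.txt lives.
%pip install -r /Workspace/etl/requirements.txt
Note this covers Python packages only; it won't pull in a Maven/JVM library like sap-hdlfs.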
yesterday
I tried that and it doesn't work. After saving the config, Databricks reverts it back.
The only way that seems possible is loading the library from an init script, but as @szymon_dybczak mentioned, it's not a good approach since it will cause unexpected behavior.
yesterday
thanks @XP, it worked like a charm
actually I did try a policy before, but the one I tried was a usage policy, so I couldn't find where to add the library lol