Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Adding a Maven dependency to an ETL pipeline

anhnnguyen
New Contributor

Hello guys,

I'm building an ETL pipeline and need to access the HANA data lake file system. To do that, I need the sap-hdlfs library in the compute environment; the library is available in the Maven repository.

My job will have multiple notebook tasks and an ETL pipeline. From what I've researched, the notebook tasks will use the same compute as the job, but the ETL pipeline will have its own compute, and in the UI I cannot see where to add a library to it.


Could anyone confirm whether my understanding is correct and explain how to add a library to the ETL pipeline compute?

Thanks in advance.

 

1 ACCEPTED SOLUTION

XP
Databricks Employee

Hey @anhnnguyen, you can add libraries a few ways when building a notebook-based ETL pipeline:

The best-practice, scalable approach for adding libraries across multiple workloads or clusters is to use policy-scoped libraries. Any compute that uses the cluster policy you define will install the declared dependencies on the cluster at runtime (a rough sketch follows below). Check this: Policy-scoped libraries

If you only need to add libraries to a single workload or cluster, use compute-scoped libraries.
Check this: Compute-scoped Libraries 
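
For illustration, here's a minimal sketch of a cluster policy with the Maven package attached as a policy-scoped library, created via the Cluster Policies API. The policy name, the definition contents, and the exact coordinates below are only assumptions for this example; see the linked docs for the authoritative request format:

{
  "name": "sap-hdlfs-policy",
  "definition": "{\"spark_version\": {\"type\": \"unlimited\"}}",
  "libraries": [
    {
      "maven": {
        "coordinates": "com.sap.hana.hadoop:sap-hdlfs:<version>"
      }
    }
  ]
}

Any cluster created under this policy should then install the library at startup.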


5 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @anhnnguyen,

Unfortunately, Scala and Java libraries are not supported in Lakeflow Declarative Pipelines (ETL pipelines), so you need to use a regular job if you want to install Maven dependencies.

Manage Python dependencies for pipelines | Databricks on AWS
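
To make the regular-job route concrete, here is a rough sketch of attaching the Maven dependency at the task level in a Jobs API-style spec. The job name, task key, and notebook path are made-up placeholders, and the coordinates are only illustrative:

{
  "name": "sap-etl-job",
  "tasks": [
    {
      "task_key": "load_from_hdlfs",
      "notebook_task": {
        "notebook_path": "/Workspace/Users/<you>/etl_notebook"
      },
      "libraries": [
        {
          "maven": {
            "coordinates": "com.sap.hana.hadoop:sap-hdlfs:<version>"
          }
        }
      ]
    }
  ]
}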


 

nayan_wylde
Esteemed Contributor

DLT doesn’t have a UI for library installation, but you can:

Use the libraries configuration in the pipeline JSON or YAML spec:

{
  "libraries": [
    {
      "maven": {
        "coordinates": "com.sap.hana.hadoop:sap-hdlfs:<version>"
      }
    }
  ]
}

Or, if you’re using Python, add the dependency in your requirements.txt and reference it in the pipeline settings.
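
As a minimal sketch of that Python route (one common pattern, not the only one; the requirements file path is just an example), you can put a %pip cell at the top of the pipeline's source notebook that installs from the requirements file:

%pip install -r /Workspace/Shared/my_pipeline/requirements.txt

Keep in mind this only covers Python packages; a JVM library like sap-hdlfs still needs one of the job- or cluster-level approaches discussed above.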

 

anhnnguyen
New Contributor

I tried that and it does not work. After saving the config, Databricks reverts it.

The only way that seems possible is loading the library from an init script, but as @szymon_dybczak mentioned, that's not a good approach since it can cause unexpected behavior.


anhnnguyen
New Contributor

Thanks @XP, it worked like a charm.

Actually, I did try a policy before, but the one I tried was a usage policy, so I could not find where to add the library lol

 
