To integrate your CI/CD pipeline with Databricks, you have a couple of options:
1. Using intermediate storage (GCS or S3): Databricks does not currently support referencing JARs directly from Google Artifact Registry URLs. However, you can stage your JAR files in an intermediate store such as Google Cloud Storage (GCS) or AWS S3, since Databricks can reference JAR files from these locations. Modify your CI/CD pipeline to publish the artifacts to GCS or S3 after they are built, then reference them in your Databricks jobs.
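If it helps, here is a minimal Python sketch of that flow. The bucket name, file paths, and library snippet are placeholders, and it assumes the `google-cloud-storage` client library and GCS credentials are already available in your CI environment:

```python
# Publish the built JAR to a GCS bucket so Databricks can reference it.
# All names and paths below are placeholders -- adapt them to your pipeline.
from google.cloud import storage

JAR_LOCAL = "target/my-app-assembly-1.0.0.jar"   # artifact produced by the build
BUCKET = "my-artifact-bucket"                    # placeholder GCS bucket
JAR_REMOTE = "jars/my-app-assembly-1.0.0.jar"    # object key inside the bucket

storage.Client().bucket(BUCKET).blob(JAR_REMOTE).upload_from_filename(JAR_LOCAL)

# The Databricks job or cluster can then point at the uploaded artifact in
# its library settings, e.g.:
#   {"libraries": [{"jar": "gs://my-artifact-bucket/jars/my-app-assembly-1.0.0.jar"}]}
```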
2. Directly uploading JAR files to DBFS: Another approach is to upload your JAR files directly to the Databricks File System (DBFS) from your CI/CD pipeline. You can achieve this using the Databricks CLI or the REST API; a rough sketch of the upload step appears below, followed by a high-level overview of how you can do this:
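For example, the upload step could look like the following (a minimal sketch using the DBFS REST API; the workspace URL, token, and paths are placeholders, and the file is streamed in 1 MB blocks because that is the per-request limit):

```python
# Upload a built JAR to DBFS from CI using the DBFS REST API
# (create / add-block / close). Workspace URL, token, and paths are placeholders.
import base64
import requests

HOST = "https://<workspace-url>"                 # placeholder workspace URL
TOKEN = "<databricks-token>"                     # e.g. injected as a CI secret
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

JAR_LOCAL = "target/my-app-assembly-1.0.0.jar"   # artifact produced by the build
JAR_DBFS = "/FileStore/jars/my-app-assembly-1.0.0.jar"

def dbfs(endpoint: str, payload: dict) -> dict:
    resp = requests.post(f"{HOST}/api/2.0/dbfs/{endpoint}", headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()

# Open a streaming handle, push the file in <=1 MB blocks, then close it.
handle = dbfs("create", {"path": JAR_DBFS, "overwrite": True})["handle"]
with open(JAR_LOCAL, "rb") as f:
    while chunk := f.read(1024 * 1024):
        dbfs("add-block", {"handle": handle, "data": base64.b64encode(chunk).decode()})
dbfs("close", {"handle": handle})

# A job can then reference the uploaded file as a library, e.g.:
#   {"libraries": [{"jar": "dbfs:/FileStore/jars/my-app-assembly-1.0.0.jar"}]}
```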