Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Read Files from Adobe and Push to Delta table ADLS Gen2

Pratikmsbsvm
Contributor

The upstream system is sending two files with different schemas.

The storage account has private endpoints; there is no public access.

No Public IP (NPIP) = yes.

How can this be designed using only Databricks?

1. A Databricks job or API call to read the data file from Adobe and push it to an ADLS container.

2. Pulling the new data file whenever it is available (polling or pull-based).

[Attached image: Pratikmsbsvm_0-1756741451588.png]

3. I want to replace Event Grid and the Function App with Databricks. Please help with how to do that.

Thanks

 

 

2 REPLIES

Khaja_Zaffer
Contributor

Hello @Pratikmsbsvm 

Good day

Here is a high-level design for your requirements.

Recommended Architecture (High-Level View)

[ SAP / Salesforce / Adobe ]
             │
             ▼
Ingestion Layer (via ADF / Synapse / Partner Connectors / REST API)
             │
             ▼
┌────────────────────────┐
│ Azure Data Lake Gen2   │  (storage layer - centralized)
│ + Delta Lake for ACID  │
└────────────────────────┘
             │
             ▼
Azure Databricks (Primary Workspace)
  ├─ Bronze: Raw Data
  ├─ Silver: Cleaned & Transformed
  └─ Gold: Aggregated / Business Logic Applied
             │
             ├──> Load to Hightouch / Mad Mobile (via REST APIs / Hightouch Sync)
             └──> Share curated Delta tables to the other Databricks workspace (via Delta Sharing or External Table Mount)

Key Components & Patterns

1. Ingestion Options

  • Use Azure Data Factory or partner connectors (like Fivetran, which we often use in our projects) to ingest data from:

    • SAP → via OData / RFC connectors

    • Salesforce → via REST / Bulk API

    • Adobe → via API or S3 data export

2. Storage & Processing Layer

  • Store all raw and processed data in ADLS Gen2, with Delta Lake format

  • Organize Lakehouse zones:

    • Bronze: Raw ingested files

    • Silver: Cleaned & de-duplicated

    • Gold: Ready for consumption (BI / API sync)
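
To make the zone layout concrete, here is a minimal PySpark sketch of the bronze → silver → gold flow, run from a Databricks notebook (where the spark session is provided); the table and column names (adobe_events, event_id, event_date) are placeholders rather than your actual schema:

from pyspark.sql import functions as F

# Bronze: raw ingested data (loaded elsewhere, e.g. by Auto Loader)
bronze = spark.read.table("bronze.adobe_events")

# Silver: cleaned and de-duplicated, with an audit column
silver = (
    bronze
    .dropDuplicates(["event_id"])                       # hypothetical business key
    .withColumn("ingested_at", F.current_timestamp())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.adobe_events")

# Gold: aggregated for consumption (BI / API sync)
gold = silver.groupBy("event_date").count()             # hypothetical aggregate
gold.write.format("delta").mode("overwrite").saveAsTable("gold.adobe_event_daily_counts")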

Cross-Workspace Databricks Access (this is your core challenge and the most important part)

Delta Sharing (Recommended if in different orgs/subscriptions)

  • Securely share Delta tables from one workspace to another without copying data

  • Works across different cloud accounts
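
If both workspaces are on Unity Catalog, the recipient side can query the shared data like any other table once an admin creates a catalog from the share. A minimal sketch (the three-level name below is a placeholder):

# Read a table exposed through Delta Sharing in the recipient workspace.
# "shared_adobe_catalog" is a hypothetical catalog created from the share.
df = spark.read.table("shared_adobe_catalog.curated.adobe_events")
df.show(5)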

 

Governance / Security Recommendations

  • Use Unity Catalog (if available) for fine-grained access control

  • Encrypt data at rest (ADLS) and in transit

  • Use service principals or managed identities for secure access between services
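
For the service-principal route, the classic pattern is to set the ABFS OAuth options on the cluster or in the notebook (prefer Unity Catalog storage credentials / external locations where available). A sketch, with the storage account name and secret scope/keys as placeholders:

# Authenticate to ADLS Gen2 with an Azure AD service principal (OAuth 2.0 client credentials).
storage_account = "<storage-account>"                                    # placeholder
client_id     = dbutils.secrets.get("adls", "sp-client-id")              # hypothetical secret scope/keys
client_secret = dbutils.secrets.get("adls", "sp-client-secret")
tenant_id     = dbutils.secrets.get("adls", "tenant-id")

spark.conf.set(f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")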

 

Summary Visual (Simplified)

Sources →             Ingestion →    Delta Lakehouse →         Destinations
[SAP, SFDC, Adobe]    [ADF, APIs]    [Bronze, Silver, Gold]    [Hightouch, Mad Mobile, Other DBX]
                                               ▲
                                               │
                      Cross-Workspace Access (Delta Sharing / Mounting / Jobs)

Let me know if this helps 

 
Do you have any details about the Adobe APIs, or any existing code, to connect from Adobe to Databricks?

szymon_dybczak
Esteemed Contributor III

Hi @Pratikmsbsvm ,

Okay, since you're going to use Databricks compute for data extraction and you wrote that your workspace is deployed with the secure cluster connectivity (no public IP, NPIP) option enabled, you first need to make sure that you have a stable egress IP address.

Assuming that your workspace uses VNET injection (and not a managed VNET), to add explicit outbound methods for your workspace, use an Azure NAT gateway or user-defined routes (UDRs):

  • Azure NAT gateway: Use an Azure NAT gateway to provide outbound internet connectivity for your deployments with a stable egress public IP. Configure the gateway on both of the workspace's subnets to ensure that all outbound traffic to the Azure backbone and public network transits through it. Clusters have a stable egress public IP, and you can modify the configuration for custom egress needs. You can configure this using either an Azure template or from the Azure portal.
  • UDRs: Use UDRs if your deployments require complex routing requirements or your workspaces use VNet injection with an egress firewall. UDRs ensure that network traffic is routed correctly for your workspace, either directly to the required endpoints or through an egress firewall. To use UDRs, you must add direct routes or allowed firewall rules for the Azure Databricks secure cluster connectivity relay and other required endpoints listed at User-defined route settings for Azure Databricks.

Once you have the stable egress IP issue sorted out, you will then need to write code to fetch the data from Adobe and save it to ADLS.
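
For the fetch step, here is a minimal notebook sketch, assuming Adobe exposes an HTTPS export endpoint with a bearer token (the URL, secret scope, and paths below are placeholders - adapt them to the actual Adobe API you are licensed for):

import requests

export_url  = "https://adobe.example.com/exports/latest"          # placeholder endpoint
adobe_token = dbutils.secrets.get("adobe", "api-token")           # token stored in a secret scope

resp = requests.get(export_url,
                    headers={"Authorization": f"Bearer {adobe_token}"},
                    timeout=300)
resp.raise_for_status()

# Land the raw payload in the ADLS landing zone; Auto Loader (described below) picks it up from there.
# dbutils.fs.put is fine for modest payloads; for large exports stream to the abfss path instead.
landing_file = "abfss://adobe-landing@<storage-account>.dfs.core.windows.net/raw/export_latest.json"
dbutils.fs.put(landing_file, resp.text, True)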
If your source data is in one of the following formats, I recommend using Auto Loader:

  • avro : Avro files

  • binaryFile : Binary files

  • csv : CSV files

  • json : JSON files

  • orc : ORC files

  • parquet : Parquet files

  • text : TXT files

  • xml : XML files

Auto Loader incrementally and efficiently processes new data files as they arrive in cloud storage. It provides a Structured Streaming source called cloudFiles. To keep it simple: it automatically detects that new files have arrived in the data lake and processes only those new files (with exactly-once semantics).
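
A minimal Auto Loader sketch for the bronze layer (paths, file format, and table name are placeholders):

landing_path    = "abfss://adobe-landing@<storage-account>.dfs.core.windows.net/raw/"
checkpoint_path = "abfss://adobe-landing@<storage-account>.dfs.core.windows.net/_checkpoints/adobe_bronze/"

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")                   # match the format Adobe delivers
    .option("cloudFiles.schemaLocation", checkpoint_path)  # schema tracking / evolution
    .load(landing_path)
    .writeStream
    .option("checkpointLocation", checkpoint_path)         # checkpoint gives exactly-once processing
    .trigger(availableNow=True)                            # process whatever is new, then stop
    .toTable("bronze.adobe_events"))                       # hypothetical bronze table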

You can combine Auto Loader with a file arrival trigger. When new files arrive in the storage account, an event is generated that automatically starts the workflow, which then processes the new files using the Auto Loader mechanism described above. This is what replaces Event Grid and the Function App in your original design.

Trigger jobs when new files arrive - Azure Databricks | Microsoft Learn
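
As a sketch of wiring this up programmatically, the job can be created with the Jobs 2.1 REST API and a file_arrival trigger pointing at the landing path. Field names follow the public Jobs API docs - double-check them against your workspace, and note the monitored path generally needs to be a Unity Catalog external location or volume; all names below are placeholders:

import requests

host  = "https://<workspace-url>"        # placeholder
token = "<pat-or-aad-token>"             # placeholder

payload = {
    "name": "adobe-bronze-ingest",
    "tasks": [{
        "task_key": "autoloader_ingest",
        "notebook_task": {"notebook_path": "/Repos/ingest/adobe_bronze"},   # hypothetical notebook
        "job_cluster_key": "ingest_cluster",
    }],
    "job_clusters": [{
        "job_cluster_key": "ingest_cluster",
        "new_cluster": {
            "spark_version": "15.4.x-scala2.12",
            "node_type_id": "Standard_D4ds_v5",
            "num_workers": 1,
        },
    }],
    "trigger": {
        "pause_status": "UNPAUSED",
        "file_arrival": {"url": "abfss://adobe-landing@<storage-account>.dfs.core.windows.net/raw/"},
    },
}

resp = requests.post(f"{host}/api/2.1/jobs/create",
                     headers={"Authorization": f"Bearer {token}"},
                     json=payload)
resp.raise_for_status()
print(resp.json())   # returns the new job_id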
