- 2196 Views
- 2 replies
- 0 kudos
We are implementing a lakehouse architecture and using Notebooks to transform data from object storage. Most of the time our source is a database, for which there is one folder per table in object storage. We have a structure like the one below for the various notebooks: GO...
Latest Reply
Adding @Vidula Khanna and @Kaniz Fatma for visibility to help with your request
- 1021 Views
- 0 replies
- 0 kudos
Hi, I have a solution design question on which I am looking for some help. We have two environments in Azure (dev and prod); each env has its own ADLS storage account, with a different name of course. Within the Databricks code we are NOT leveraging the mou...
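One common way to handle per-environment ADLS accounts without mount points is to build the `abfss://` URI from an environment parameter. A minimal sketch, assuming hypothetical account and container names and an `env` value that a notebook would typically read from a widget or cluster tag:

```python
# Hypothetical sketch: resolving the ADLS account per environment without mounts.
# The account names, container, and "env" parameter are placeholders for illustration.

ACCOUNTS = {
    "dev": "mydatalakedev",    # hypothetical dev storage account
    "prod": "mydatalakeprod",  # hypothetical prod storage account
}

def abfss_path(env: str, container: str, relative_path: str) -> str:
    """Build a direct abfss:// URI for the given environment."""
    account = ACCOUNTS[env]
    return f"abfss://{container}@{account}.dfs.core.windows.net/{relative_path}"

# In a Databricks notebook the env could come from a widget, e.g.:
#   env = dbutils.widgets.get("env")
print(abfss_path("dev", "raw", "sales/orders"))
# -> abfss://raw@mydatalakedev.dfs.core.windows.net/sales/orders
```

Because the path is computed rather than mounted, the same notebook code runs unchanged in both environments; only the `env` parameter differs between the dev and prod job configurations.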
- 1675 Views
- 1 replies
- 4 kudos
I am building an ETL pipeline which reads data from a Kafka topic (data is serialized in Thrift format) and writes it to a Delta table in Databricks. I want to have two layers:
Bronze Layer -> which has raw Kafka data
Silver Layer -> which has deserializ...
Latest Reply
@John Constantine,
"Bronze Layer -> which has raw Kafka data": If you use confluent.io, you can also utilize a direct sink to Data Lake Storage for the bronze layer.
"Silver Layer -> which has deserialized data": Then use Delta Live Tables to process it to del...
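A rough sketch of that bronze/silver split as a Delta Live Tables pipeline (this is an assumption about the setup, not the poster's actual code; the broker address, topic name, and the Thrift deserializer stub are placeholders, since the real one would use the generated Thrift class for the payload):

```python
import dlt
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

@dlt.table(comment="Bronze: raw Kafka records, value kept as bytes")
def bronze_events():
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")                     # placeholder topic
        .load()
    )

# Placeholder deserializer: a real pipeline would decode the Thrift bytes
# here using the generated Thrift struct for this topic.
@udf(StringType())
def deserialize_thrift(value):
    raise NotImplementedError("decode Thrift payload with the generated class")

@dlt.table(comment="Silver: deserialized Thrift payloads")
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .withColumn("payload", deserialize_thrift(col("value")))
    )
```

This fragment only runs inside a Databricks Delta Live Tables pipeline (which provides `dlt` and `spark`); it is meant to show the shape of the two layers, with the silver table reading incrementally from the bronze one.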