Recommended way to integrate MongoDB as a streaming source

amichel
New Contributor III

Current state:

  • Data is stored in MongoDB Atlas, which is used extensively by all services
  • The data lake is hosted in the same AWS region and is connected to MongoDB over a private link

Requirements:

  • Streaming pipelines that continuously ingest, transform/analyze, and serve data with the lowest possible latency
  • Downstream, processed data is aggregated and stored in the data lake; it must also be available as a stream to external subscribers (potentially via AWS MSK)

Question: what is the recommended (and reliable) way to ingest data from MongoDB Atlas as a stream?

Option 1: Use MongoDB change streams with Kafka Connect and a Kafka topic as a proxy between Mongo and Databricks, so that Databricks only sees Kafka topics (a rough sketch of the Databricks side is included after option 2).

Option 2: Connect to Mongo directly using the mongo-spark connector and watch the collection explicitly. This might require some binding via an in-memory queue (or something similar that can be observed from Scala), as well as managing checkpoints, etc.
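Roughly what I have in mind for the Databricks side of option 1 (broker address, topic name, checkpoint path, and target table below are just placeholders):

```python
# Hypothetical sketch: Structured Streaming read of the Kafka topic that the
# MongoDB Kafka source connector feeds. `spark` is the notebook's SparkSession.
from pyspark.sql.functions import col

raw = (
    spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "b-1.msk.example.amazonaws.com:9092")  # assumed MSK endpoint
        .option("subscribe", "mongo.mydb.mycollection")                            # assumed topic name
        .option("startingOffsets", "earliest")
        .load()
)

# The connector publishes each change-stream event in the message value.
events = raw.select(col("value").cast("string").alias("change_event"))

(events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://my-bucket/checkpoints/mongo_events")  # placeholder
    .toTable("bronze.mongo_events"))
```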

Any other ideas? Any feedback from someone who has implemented this in production would be appreciated.

1 ACCEPTED SOLUTION


robwma
New Contributor III

Another option, if you'd like to use Spark for ingestion, is the new Spark Connector V10.0, which supports Spark Structured Streaming: https://www.mongodb.com/developer/languages/python/streaming-data-apache-spark-mongodb/

If you use Kafka, the MongoDB Connector, when used as a source, creates a change stream under the covers and has a flag, "copy.existing", which copies the existing data first and then starts streaming changes.
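As a rough sketch of what a streaming read with the v10 connector (format "mongodb") could look like, based on the linked article. The connection URI, database, collection, schema, and target table are placeholders, and exact option names may vary slightly between connector versions:

```python
# Sketch: streaming read from a MongoDB collection with the Spark Connector v10.
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Placeholder schema for the collection being watched.
schema = StructType([
    StructField("_id", StringType()),
    StructField("status", StringType()),
    StructField("updatedAt", TimestampType()),
])

stream = (
    spark.readStream
        .format("mongodb")
        .option("spark.mongodb.connection.uri", "mongodb+srv://<user>:<password>@cluster.example.net")  # placeholder
        .option("spark.mongodb.database", "mydb")            # placeholder
        .option("spark.mongodb.collection", "mycollection")  # placeholder
        .option("spark.mongodb.change.stream.publish.full.document.only", "true")  # emit full documents
        .schema(schema)
        .load()
)

(stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/mongo_v10")  # placeholder
    .toTable("bronze.mongo_stream"))
```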


5 REPLIES

dbarrundiag
New Contributor II

We went with pretty much approach No. 1 that you outlined above. One thing I would recommend is setting up a schema registry and leveraging Avro, so that the messages going to the Kafka topic are Avro messages that comply with a schema in the registry; your Databricks streaming service can then validate incoming messages against that registry before ingesting them.
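A minimal sketch of the consuming side of that setup, decoding Avro values from the Kafka topic in Databricks. The Avro schema is inlined here for brevity (in practice it would come from the schema registry), and broker/topic names are placeholders. Note that if your producers use the Confluent wire format, the value carries a schema-id prefix, so registry-aware deserialization is needed rather than plain from_avro.

```python
# Sketch: decoding Avro-encoded change events from the Kafka topic.
from pyspark.sql.avro.functions import from_avro
from pyspark.sql.functions import col

# Placeholder Avro schema; normally fetched from the schema registry.
value_schema = """
{
  "type": "record",
  "name": "MongoChangeEvent",
  "fields": [
    {"name": "_id", "type": "string"},
    {"name": "operationType", "type": "string"},
    {"name": "fullDocument", "type": ["null", "string"], "default": null}
  ]
}
"""

raw = (
    spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "b-1.msk.example.amazonaws.com:9092")  # placeholder
        .option("subscribe", "mongo.mydb.mycollection")                            # placeholder
        .load()
)

decoded = (
    raw.select(from_avro(col("value"), value_schema).alias("event"))
       .select("event.*")
)
```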


Kaniz
Community Manager

Hi @Alex Michel​, we haven't heard from you since the last response from the community members, and I was checking back to see whether you have a resolution yet. If you do, please share it with the community, as it can be helpful to others. Otherwise, we will respond with more details and try to help.

amichel
New Contributor III
Hi,
Eventually we agreed on a solution: use the MongoDB Atlas $out feature to export to S3 and ingest the files as a stream with Databricks Auto Loader.
https://www.mongodb.com/developer/products/atlas/automated-continuous-data-copying-from-mongodb-to-s...
We will need to try the new connector too, as suggested here.
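For anyone finding this later, the Auto Loader side of that setup looks roughly like the sketch below; it assumes $out writes JSON files to an S3 prefix, and the bucket, prefix, schema location, and table name are placeholders rather than our actual setup.

```python
# Sketch: streaming ingestion with Auto Loader of the files that Atlas $out exports to S3.
stream = (
    spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")  # assumes $out writes JSON
        .option("cloudFiles.schemaLocation", "s3://my-bucket/schemas/mongo_out")  # placeholder
        .load("s3://my-bucket/mongo-exports/")  # placeholder export prefix
)

(stream.writeStream
    .option("checkpointLocation", "s3://my-bucket/checkpoints/mongo_out")  # placeholder
    .toTable("bronze.mongo_export"))
```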

Kaniz
Community Manager

Awesome. Thanks for the update @Alex Michel​!

Would you mind choosing a best answer for us?
