Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Issue with Reading MongoDB Data in Unity Catalog Cluster

naveenprasanth
New Contributor

I am encountering an issue while trying to read data from MongoDB in a Unity Catalog Cluster using PySpark. I have shared my code below:

 

from pyspark.sql import SparkSession

database = "cloud"
collection = "data"
Scope = "XXXXXXXX"
Key = "XXXXXX-YYYYYY-ZZZZZZ"
connectionString = dbutils.secrets.get(scope=Scope, key=Key)

spark = (
    SparkSession.builder.config("spark.mongodb.input.uri", connectionString)
    .config("spark.mongodb.output.uri", connectionString)
    .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:3.2.0")
    .getOrCreate()
)

# Reading from MongoDB
df = (
    spark.read.format("mongo")
    .option("uri", connectionString)
    .option("database", database)
    .option("collection", collection)
    .load()
)

 

However, I am encountering the following error:

 

org.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find data source: mongo. Please find packages at `https://spark.apache.org/third-party-projects.html`.

 

I have already included the necessary MongoDB Spark Connector package, but it seems like Spark is unable to find the data source. Can someone please help me understand what might be causing this issue and how I can resolve it? Any insights or suggestions would be greatly appreciated. Thank you!

1 REPLY

Wojciech_BUK
Valued Contributor III

A few points:

1. Check that you installed exactly the same driver version that you reference in your code (2.12:3.2.0); it has to match 100 percent:

org.mongodb.spark:mongo-spark-connector_2.12:3.2.0

2. I have seen people configure the connection to Atlas in two ways.

Option 1:

Back in Databricks, in your cluster configuration under Advanced Options (bottom of the page), paste the connection string for both the spark.mongodb.output.uri and spark.mongodb.input.uri variables. Please populate the username and password fields appropriately. This way, all the notebooks you run on the cluster will use this configuration.

Option 2:

Alternatively, you can explicitly set the option when calling the API, e.g. spark.read.format("mongo").option("spark.mongodb.input.uri", connectionString).load() (see the sketch after this list). If you configured the variables on the cluster, you don't have to set the option.

3. Try a single user cluster (if you are using a shared cluster).

4. Check the MongoDB driver's compatibility with the Spark version you are using.
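To make Option 2 concrete, here is a minimal sketch of an explicit read, assuming the connector (org.mongodb.spark:mongo-spark-connector_2.12:3.2.0) is installed as a cluster library and reusing the placeholder scope, key, database, and collection names from the original post:

from pyspark.sql import SparkSession

# On Databricks the Spark session already exists, so getOrCreate() just returns it;
# the connector should be installed on the cluster (point 1) rather than through
# spark.jars.packages at this point.
spark = SparkSession.builder.getOrCreate()

# Placeholder secret scope and key from the original post
connectionString = dbutils.secrets.get(scope="XXXXXXXX", key="XXXXXX-YYYYYY-ZZZZZZ")

# Pass the connection details explicitly on the read call (Option 2);
# "mongo" is the short name registered by the 3.x connector. With Option 1
# (cluster-level spark.mongodb.input.uri / output.uri), the uri option can be omitted.
df = (
    spark.read.format("mongo")
    .option("spark.mongodb.input.uri", connectionString)
    .option("database", "cloud")
    .option("collection", "data")
    .load()
)

display(df)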

 

Hope that helps.
