Unable to read data from Elasticsearch with Spark in Databricks.
07-20-2022 09:18 PM
When I try to read data from Elasticsearch with Spark SQL, it throws an error like:
RuntimeException: Error while encoding: java.lang.RuntimeException: scala.collection.convert.Wrappers$JListWrapper is not a valid external type for schema of string
Caused by: RuntimeException: scala.collection.convert.Wrappers$JListWrapper is not a valid external type for schema of string
It looks like the schema generated by Spark does not match the data received from Elasticsearch.
Could you let me know how I can read the data from Elastic, in either CSV or Excel format?
07-21-2022 03:33 AM
How are you reading data from Elasticsearch?
Are you exporting data from ES in JSON or CSV format and then reading it via Spark, or connecting to ES directly?
If you're connecting directly, then you can use the following snippet:
# Read an index directly via the elasticsearch-spark connector
df = (spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", hostname)
      .option("es.port", port)
      .option("es.net.ssl", ssl)
      .option("es.nodes.wan.only", "true")
      .load(f"index/{index}")
)
display(df)
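Since your original question mentioned CSV, once the DataFrame loads correctly you can persist it with Spark's standard CSV writer. A minimal sketch (the DBFS output path is just a placeholder):
# Write the loaded DataFrame out as CSV; the output path is a placeholder.
(df.write
   .mode("overwrite")
   .option("header", "true")
   .csv("dbfs:/tmp/elasticsearch_export"))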
If you're exporting in, say, JSON format using elasticdump, then use the following code snippet:
df = spark.read.json("<dbfs_path>/*.json").select("_id","_source.*")
This is because your file is exported as follows:
_id:string
_index:string
_score:long
_source:struct
col_1:<data_type>
col_2:<data_type>
col_3:<data_type>
col_4:<data_type>
col_n:<data_type>
All your columns are nested inside _source.
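If Spark's inferred schema doesn't match the documents (the same symptom you described), you can also pass an explicit schema when reading the export. A minimal sketch with hypothetical column names:
from pyspark.sql.types import StructType, StructField, StringType, ArrayType
# Hypothetical schema: declare any field that can hold multiple values as ArrayType
source_schema = StructType([
    StructField("col_1", StringType()),
    StructField("col_2", ArrayType(StringType())),
])
schema = StructType([
    StructField("_id", StringType()),
    StructField("_source", source_schema),
])
df = spark.read.schema(schema).json("<dbfs_path>/*.json").select("_id", "_source.*")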
Hope this helps.
07-21-2022 07:45 AM
Hi @Aman Sehgal
I am trying to read Elastic data by connecting to it directly.
I am using the below snippet:
df = (spark.read.format("org.elasticsearch.spark.sql")
      .option("es.read.metadata", "false")
      .option("spark.es.nodes.discovery", "true")
      .option("es.net.ssl", "false")
      .option("es.index.auto.create", "true")
      .option("es.field.read.empty.as.null", "no")
      .option("es.read.field.as.array.exclude", "true")
      .option("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .option("es.nodes", "*")
      .option("es.nodes.wan.only", "true")
      .option("es.net.http.auth.user", elasticUsername)
      .option("es.net.http.auth.pass", elasticPassword)
      .option("es.resource", "indexname")
      .load()
)
But I am getting a runtime error:
RuntimeException: Error while encoding: java.lang.RuntimeException: scala.collection.convert.Wrappers$JListWrapper is not a valid external type for schema of string
Caused by: RuntimeException: scala.collection.convert.Wrappers$JListWrapper is not a valid external type for schema of string
Do you have a solution for it?
Note: I think the error is because the schema generated by Spark does not match the schema present in Elastic.
Thanks
07-24-2022 01:44 PM
I believe this could be a known bug reported on the Elasticsearch Spark connector for Spark 3.0.
This connector is maintained by the open-source community and we don't have an ETA on the fix yet.
Bug details:
https://github.com/elastic/elasticsearch-hadoop/issues/1635
You can look for the latest connector that supports Spark 3.0 in the Maven repository.
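Until a fixed connector is available, one workaround that has helped with this particular error is to tell the connector explicitly which fields are multi-valued, since the JListWrapper exception typically comes from an array field being inferred as a string. A minimal sketch, where "tags" stands in for whichever of your fields actually hold arrays (hostname, port, elasticUsername and elasticPassword are the placeholders from the earlier snippets):
# Workaround sketch: list multi-valued fields so the connector maps them as
# arrays instead of strings. "tags" and "my-index" are placeholders.
df = (spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", hostname)
      .option("es.port", port)
      .option("es.nodes.wan.only", "true")
      .option("es.net.http.auth.user", elasticUsername)
      .option("es.net.http.auth.pass", elasticPassword)
      .option("es.read.field.as.array.include", "tags")
      .load("my-index")
)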
What is the DBR version that you are using for the cluster?
09-05-2022 05:16 AM
Hi there @KARTHICK N
Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!

