- 2361 Views
- 3 replies
- 1 kudos
I have an always-on job cluster triggering Spark Streaming jobs. I would like to stop this streaming job once a week to run table maintenance. I was looking to leverage the foreachBatch function to check a condition and stop the job accordingly.
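A minimal sketch of the approach described above. The stream and table names are illustrative, and the Spark calls are commented out since they need a live cluster; only the window check runs standalone. Note that calling `query.stop()` from inside `foreachBatch` is often reported to hang, so this sketch polls from the driver loop instead.

```python
# Sketch: stop a streaming query once a week for maintenance (assumed
# window: Sundays after 02:00 UTC). `target_table` is a placeholder name.
import datetime

def maintenance_due(now=None):
    """True inside the assumed weekly maintenance window (Sun >= 02:00 UTC)."""
    now = now or datetime.datetime.utcnow()
    return now.weekday() == 6 and now.hour >= 2

def process_batch(batch_df, batch_id):
    # Normal per-batch work; no stop logic needed in here.
    batch_df.write.mode("append").saveAsTable("target_table")

# query = df.writeStream.foreachBatch(process_batch).start()
# while query.isActive:
#     query.awaitTermination(60)     # wake up every minute
#     if maintenance_due():
#         query.stop()               # stop from the driver, run maintenance
```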
Latest Reply
Hi @Nolan Lavender, how is it going? Were you able to resolve your problem?
- 1148 Views
- 3 replies
- 3 kudos
Specifically for writing and reading streaming data to HDFS, S3, etc. For an IoT-specific scenario, how does it perform on time-series transactional data? Can we consider a Delta table a time-series table?
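A Delta table is not a dedicated time-series store, but an append-only table partitioned by date is a common pattern for IoT data. A rough sketch, with the path, checkpoint, and column names all assumed for illustration (the Spark part is commented out since it needs a cluster with Delta Lake configured):

```python
# Sketch: daily partitions let time-range queries prune files efficiently.
import datetime

def partition_key(event_time: datetime.datetime) -> str:
    """Daily partition value for a time-series sink."""
    return event_time.strftime("%Y-%m-%d")

# In PySpark, the same idea looks roughly like:
# (events
#     .withColumn("event_date", F.to_date("event_time"))
#     .writeStream.format("delta")
#     .partitionBy("event_date")                # prune by time range on read
#     .option("checkpointLocation", "/chk/iot") # assumed path
#     .start("/delta/iot_events"))              # assumed path
```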
Latest Reply
Hi @Arindam Halder, how is it going? Were you able to resolve your problem?
- 1992 Views
- 3 replies
- 1 kudos
Hi, I have set up a streaming process that consumes files from an HDFS staging directory and writes them to a target location. The input directory continuously receives files from another process. Let's say the file producer produces 5 million records and sends them to the HDFS sta...
Latest Reply
If it helps, you can try running a left-anti join between the source and the sink to identify missing records, and then check whether each missing record matches the provided schema.
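The left-anti join check suggested above, in PySpark, would be something like `source_df.join(sink_df, on="id", how="left_anti")` (the join key `id` is an assumption). The same idea in plain Python, for clarity: keep the source rows whose key never reached the sink.

```python
# Illustrative stand-in for a left-anti join: rows in the source
# that are absent from the sink. Keys and values are made up.
source = [{"id": 1}, {"id": 2}, {"id": 3}]
sink = [{"id": 1}, {"id": 3}]

sink_ids = {row["id"] for row in sink}
missing = [row for row in source if row["id"] not in sink_ids]
print(missing)  # the records that never arrived at the sink
```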
- 5066 Views
- 1 replies
- 0 kudos
We are streaming JSON data from a Kafka source, but some column names contain a dot (.). Streaming JSON data:
df1 = df.selectExpr("CAST(value AS STRING)")
{"pNum":"A14","from":"telecom","payload":{"TARGET":"1","COUNTRY":"India"...
Latest Reply
Hi @Mithu Wagh, you can use backticks to enclose the column name: df.select("`col0.1`")
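Backticks let you select a dotted column without Spark treating the dot as a struct accessor, e.g. `df.select("` + "`payload.TARGET`" + `")`. An alternative (an assumption, not from the thread) is to rename the dots away before writing downstream; the renaming helper itself is plain Python:

```python
# Sketch: replace dots in column names so later consumers don't need
# backtick escaping. Column names are taken from the sample payload.
def sanitize(name: str) -> str:
    """Make a column name safe by replacing dots with underscores."""
    return name.replace(".", "_")

cols = ["pNum", "payload.TARGET", "payload.COUNTRY"]
print([sanitize(c) for c in cols])

# In PySpark (assumed DataFrame `df`), applied to every column:
# df = df.toDF(*[sanitize(c) for c in df.columns])
```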