I have a use case where I need to process streaming data and create categorical tables (about 500 tables). I'm using a concurrent thread pool to parallelize the whole process, but looking at the Spark UI, my code doesn't utilize all the workers. Cluster configuration: Standard_E8ads instances for both driver and workers, with 4 workers (32 GB memory and 4 cores each). I'm using 4 threads.
The code sometimes executes on the driver and sometimes on a worker, and I never see utilization above 40 to 45% for 5 million records.
The function I call from the thread pool contains all the Spark code.
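Roughly, my setup looks like the sketch below (names like `process_category` are illustrative placeholders, not my actual code; the Spark calls are shown as comments so the structure is clear):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_category(category):
    # Placeholder for the per-table Spark work. In the real job this
    # filters the stream DataFrame by category and writes one table, e.g.:
    #   df.filter(col("category") == category).write.saveAsTable(category)
    return category

categories = [f"table_{i}" for i in range(500)]

# 4 driver-side threads, each submitting independent Spark jobs
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_category, c) for c in categories]
    results = [f.result() for f in as_completed(futures)]

print(len(results))
```

So there are ~500 independent tasks being fed through 4 threads on the driver.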
Any help on this issue would be highly appreciated. Thanks in advance.