Single node cluster CPU not fully used
08-08-2024 02:42 AM
Hello community,
I use a cluster (single node: Standard_F64s_v2 · DBR 14.3 LTS, which includes Apache Spark 3.5.0 and Scala 2.12) for a job. In this job I don't use Spark for parallel processing. Instead, I treat the single node cluster as a plain VM and use Python multiprocessing to finish my job.
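Roughly, the pattern looks like the sketch below (a minimal illustration only; `process_chunk` and the input data are placeholders, not my actual workload):

```python
import multiprocessing as mp
import os

def process_chunk(chunk):
    # Placeholder for the real CPU-bound work done per chunk
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split the work into many independent chunks
    chunks = [list(range(i, i + 10_000)) for i in range(0, 1_000_000, 10_000)]
    # Fan the chunks out across all cores of the driver VM
    with mp.Pool(processes=os.cpu_count()) as pool:
        results = pool.map(process_chunk, chunks)
    print(len(results), "chunks processed")
```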
Before 2024-08-06 04:30 (CET), the job could use the CPU of the single node cluster almost fully and finished in less than one hour. After 2024-08-06 04:30 (CET), the CPU can no longer be fully used; utilization stays under 25%, even though I didn't make any changes.
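To check whether the Python process can actually still see all 64 cores, a quick standard-library diagnostic like the following can be run on the driver (this is only an assumption about where to look, not a confirmed cause):

```python
import os
import multiprocessing as mp

print("os.cpu_count():", os.cpu_count())
print("multiprocessing.cpu_count():", mp.cpu_count())

# On Linux, sched_getaffinity shows how many cores this process is allowed to use
if hasattr(os, "sched_getaffinity"):
    print("usable cores:", len(os.sched_getaffinity(0)))
```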
Does anyone know why this is happening? Were any changes made on Databricks around 2024-08-06 04:30 (CET), such as an update or something similar?
Thanks for any advice; I really appreciate any help.
Best regards,
Narsu
12-09-2024 12:07 AM
Hi,
Yes, there was a maintenance release on 7th August which might have caused this issue.
If you are still experiencing this issue, please file a support ticket.

