Hi Kumar,
How are you? If your jobs run on ALL_PURPOSE_COMPUTE, please check how they show up in the system.billing.usage table. For all-purpose (interactive) clusters, usage is billed against the cluster itself rather than the individual jobs running on it, so the job-level fields in the usage records are typically empty and you won't get a per-job cost breakdown out of the box.

To work around this, you can cross-reference cluster usage with your job runs, for example via the job run history or the cluster event logs, and map the ALL_PURPOSE_COMPUTE cost of each cluster back to the jobs it was supporting (see the example below). Alternatively, you can explore Databricks' cost management tooling or integrate with an external billing tool to get a more granular view of job-level costs on this compute type.
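Something along these lines might work from a notebook. Treat it as a rough sketch only: I'm assuming the usage_metadata.cluster_id field in system.billing.usage and the compute_ids / period_start_time columns in system.lakeflow.job_run_timeline are available in your workspace, so please verify the exact column names against your system tables first.

# Billed usage for all-purpose clusters; the billing record carries the
# cluster id, not the job id, which is why we join on the cluster.
usage = spark.sql("""
    SELECT
        usage_metadata.cluster_id AS cluster_id,
        usage_date,
        sku_name,
        usage_quantity
    FROM system.billing.usage
    WHERE sku_name LIKE '%ALL_PURPOSE%'
      AND usage_metadata.cluster_id IS NOT NULL
""")

# Job runs and the clusters they ran on (assumed schema of
# system.lakeflow.job_run_timeline; adjust column names if yours differ).
runs = spark.sql("""
    SELECT
        job_id,
        run_id,
        explode(compute_ids) AS cluster_id,
        date(period_start_time) AS usage_date
    FROM system.lakeflow.job_run_timeline
""").dropDuplicates()

# Attribute each day's cluster usage to the jobs active on that cluster that day.
# If several jobs share one all-purpose cluster, the same DBUs will appear
# against each of them, so this is an approximation, not an exact split.
attribution = (
    usage.join(runs, ["cluster_id", "usage_date"], "left")
         .groupBy("job_id", "cluster_id", "sku_name")
         .sum("usage_quantity")
)

display(attribution)

Since the billing record is per cluster, this only tells you which jobs ran on a given cluster; if several jobs share it, you would still need to decide how to apportion the cost between them (e.g., by run duration).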
Give it a try and let me know how it goes.
Regards,
Brahma