AutoML Dataset too large
03-13-2024 06:58 AM
Hello community,
I have the following problem: I am using AutoML to solve a regression problem, but during preprocessing my dataset is sampled down to ~30% of its original size.
I am using Databricks Runtime 14.2 ML.
Driver: Standard_DS4_v2, 28 GB memory, 8 cores
Worker: Standard_DS4_v2, 28 GB memory, 8 cores (min 1, max 2)
I already set spark.task.cpus = 8, but my dataset is still downsampled 😞
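For context on why this happens: AutoML downsamples when its estimate of the dataset's in-memory size exceeds what a single node can hold, and one-hot encoding can inflate that estimate dramatically. A rough back-of-envelope sketch (the 8-bytes-per-cell figure and the example row/column counts are my assumptions, not values from this thread):

```python
def estimated_bytes(n_rows: int, n_cols: int, bytes_per_cell: int = 8) -> int:
    """Rough memory footprint of a dense numeric matrix (assumed 8 B/cell)."""
    return n_rows * n_cols * bytes_per_cell

# Hypothetical example: 1M rows whose categorical features were
# one-hot encoded into 50k columns.
gb = estimated_bytes(1_000_000, 50_000) / 1e9
print(f"{gb:.0f} GB")  # far beyond a 28 GB worker
```

Even a modest row count can blow past a 28 GB node once the encoded column count gets large, which would explain aggressive sampling regardless of `spark.task.cpus`.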
03-15-2024 01:56 AM
Thank you for your detailed answer. I followed your suggestions with the following result:
- Repartitioning the data didn't change anything.
- I checked the worker metrics and the memory is indeed nearly fully used (~10 GB used, nearly 17 GB cached).
- I do not fully understand why my relatively small dataset creates such a big memory demand; maybe it results from the number of categorical features, since one-hot encoding can create many "extra columns".
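The column-explosion effect is easy to quantify: each categorical feature expands into one column per distinct value. A minimal sketch (the sample rows and column names are hypothetical, not from the thread):

```python
# Estimate how many columns one-hot encoding would create per feature:
# one column for each distinct value in that feature.
rows = [
    {"city": "Berlin", "ts": "2024-03-13 06:58:01"},
    {"city": "Munich", "ts": "2024-03-13 06:58:02"},
    {"city": "Berlin", "ts": "2024-03-13 06:58:03"},
]

def one_hot_width(rows: list[dict], column: str) -> int:
    """Number of columns a one-hot encoding of `column` would produce."""
    return len({row[column] for row in rows})

print(one_hot_width(rows, "city"))  # 2 distinct cities -> 2 columns
print(one_hot_width(rows, "ts"))    # every timestamp unique -> 3 columns
```

A near-unique column (like a second-precision timestamp) yields roughly one encoded column per row, which multiplies the dataset's width by its height.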
03-19-2024 04:50 AM
I am pretty sure I know what the problem was: I had a timestamp column (with second precision) as a feature. If it gets one-hot encoded, the dataset can get very large.
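One way to avoid this (my suggestion, not something stated in the thread) is to convert the timestamp into a numeric feature before handing the data to AutoML, so it never goes through categorical encoding. A minimal sketch, assuming timestamps arrive as `"YYYY-MM-DD HH:MM:SS"` strings in UTC:

```python
from datetime import datetime, timezone

def to_epoch_seconds(ts: str) -> int:
    """Convert a 'YYYY-MM-DD HH:MM:SS' string to Unix epoch seconds (UTC assumed)."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

print(to_epoch_seconds("2024-03-13 06:58:00"))
```

A single numeric column replaces what would otherwise be one one-hot column per distinct second; alternatively, truncating to coarser units (hour, day) keeps the cardinality small if a categorical treatment is genuinely wanted.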