We observe the following behavior when we keep adding new runs to an experiment:
- At first, the runs are displayed correctly in the UI.
- Once the total number of runs exceeds a certain threshold, the following bug occurs in the UI:
  - The experiment no longer shows any runs ("No runs yet.").
  - If you change the sort order of the runs, some of them are displayed again.
  - When you then scroll down to view the runs further down, the UI breaks: instead of the experiment, you only see the message "Something went wrong".
- We have not yet been able to pin down the exact number of runs above which the UI breaks, but it appears to be in the low three-digit range.
- Regardless of the number of runs, all of them are correctly stored in the underlying storage, in our case an S3 bucket, so it seems to be a UI bug (one way to check the bucket contents is sketched below).
- Furthermore, the metadata of the last 100 runs can still be retrieved via the "Download CSV" button.
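
For illustration, a run count in the bucket can be obtained with a listing along the following lines. The bucket name, prefix, and the one-prefix-per-run layout are assumptions for the sketch, not the actual values from our setup:

```python
import boto3

# Placeholder values -- replace with the real bucket and experiment prefix.
BUCKET = "my-tracking-bucket"
EXPERIMENT_PREFIX = "experiments/123/"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Assuming each run is stored under its own prefix, every CommonPrefix
# returned with Delimiter="/" corresponds to one run directory.
run_prefixes = []
for page in paginator.paginate(Bucket=BUCKET, Prefix=EXPERIMENT_PREFIX, Delimiter="/"):
    run_prefixes.extend(p["Prefix"] for p in page.get("CommonPrefixes", []))

print(f"Runs found in the bucket: {len(run_prefixes)}")
```

A count like this stays consistent with the number of runs we logged even after the UI stops showing them.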
Is this behavior already known? Is a fix planned for this issue?