@royinblr11
You're absolutely right to question the answer: for an application that needs an open-source foundation LLM with a large context window, the correct model is DBRX.
Why DBRX is the Best Fit:
- It is a foundation model, designed for generation tasks.
- It supports a 32K-token context window out of the box, ideal for handling long documents, chats, or code.
- It is open-source and production-ready for enterprise-level generative AI tasks.
So, if the goal is:
Build a generative application using an open-source model with a large context window,
Then the best answer is DBRX, not MPT-30B (which is limited to an 8K context window).
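As a quick sanity check before sending a long document to a 32K-context model like DBRX, you can estimate the token count with the common ~4-characters-per-token heuristic. This is a rough sketch only; the exact count depends on the model's actual tokenizer, and the function name and parameters here are illustrative, not from any library:

```python
def fits_context(text: str, context_window: int = 32_768,
                 chars_per_token: float = 4.0,
                 reserve_for_output: int = 1_024) -> bool:
    """Rough estimate of whether `text` fits a model's context window.

    Assumes ~4 characters per token (a heuristic; use the model's real
    tokenizer for exact counts) and reserves room for the model's reply.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= (context_window - reserve_for_output)

# A ~100K-character document (~25K estimated tokens) fits a 32K window
doc = "x" * 100_000
print(fits_context(doc))  # True
```

An 8K-window model such as MPT-30B would reject the same document under this estimate, which is exactly why the context-window requirement drives the model choice here.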