Q: 3
A Generative AI Engineer is tasked with developing an application that is based on an open source large language model (LLM). They need a foundation LLM with a large context window.
Which model fits this need?
Options
Discussion
D. A similar question showed up on another practice set, and DBRX is known for its massive context window compared to the rest of the options.
Official docs and the Databricks practice exam both point to D for the largest context window. Worth double-checking recent release notes just in case, but pretty sure D is the safest pick based on the published specs.
C or D? Both are open source LLMs, but official docs say DBRX has a much larger context window. I'd review the latest Databricks release notes and hit the official guide just to be sure.
C vs D
Official docs and whitepapers both mention DBRX has a 32k-token context window, which is way beyond Llama2-70B's 4k. But since specs change fast, I'd double-check the latest release notes and maybe hit up the official Databricks practice sets just to be safe. Not totally sure, but leaning D for now.
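If you'd rather verify the specs than trust memory, one quick way is to read each model's config off the Hugging Face Hub. Minimal sketch below, assuming these Hub IDs and config attribute names; both repos are gated, so you'd need approved access and a login token first.

from transformers import AutoConfig

# Assumed Hub IDs; both repos are gated, so run `huggingface-cli login` first.
# Native DBRX support needs a reasonably recent transformers version.
MODELS = ["databricks/dbrx-base", "meta-llama/Llama-2-70b-hf"]

for model_id in MODELS:
    cfg = AutoConfig.from_pretrained(model_id)
    # DBRX-style configs report the context length as `max_seq_len`;
    # Llama-style configs use `max_position_embeddings`. Try both.
    ctx = getattr(cfg, "max_seq_len", None) or getattr(cfg, "max_position_embeddings", None)
    print(f"{model_id}: ~{ctx} tokens of context")

Printing the configured max sequence length side by side should make the context window gap obvious without having to download any weights.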
C/D? The official practice exam and the Databricks docs both highlight the context window differences; I'd cross-check the specs for each model.
C imo; I think Llama2-70B is also designed for big use cases. The trap here is overlooking the context window size requirement.