Q: 19
A research company implemented a chatbot by using a foundation model (FM) from Amazon
Bedrock. The chatbot searches for answers to questions from a large database of research papers.
After multiple prompt engineering attempts, the company notices that the FM is performing poorly
because of the complex scientific terms in the research papers.
How can the company improve the performance of the chatbot?
Options
Discussion
I’d say A for this one. Few-shot prompting might help the FM better understand how to interpret those scientific terms, since it gives concrete examples. Not 100 percent but seems reasonable to try before fine-tuning.
Not totally sure but I'd pick B here. Fine-tuning makes sense when the model struggles with domain-specific language like scientific terms.
I don’t think A works here. B is the right call since domain adaptation actually helps with those scientific terms, few-shot prompting won’t be enough.
A or C? Few-shot prompting (A) often helps with tricky questions and unusual words, so I think it might improve the model's handling of scientific terms too. Not convinced fine-tuning is always needed. Open to other views if I'm missing something.
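For what it's worth, here is a rough sketch of what option A (few-shot prompting) would look like in practice. The example Q/A pairs and the helper name are made up for illustration; the idea is just to prepend worked examples so the FM sees how domain terms should be handled before the real question.

```python
# Hypothetical few-shot examples; the Q/A content is invented for illustration.
FEW_SHOT_EXAMPLES = [
    ("What does 'epitope' mean in this paper?",
     "An epitope is the specific region of an antigen that an antibody binds to."),
    ("Define 'allosteric inhibition' as used in the methods section.",
     "Allosteric inhibition is when a molecule binds an enzyme at a site other "
     "than the active site and reduces its activity."),
]

def build_few_shot_prompt(question, examples=FEW_SHOT_EXAMPLES):
    """Prepend worked Q/A examples to the user's question."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in examples]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt("What is 'substrate promiscuity' in this context?")
```

The resulting string would then be sent to the Bedrock model as the prompt. The debate in this thread is whether a handful of examples like this can really teach the model a whole field's vocabulary, which is where B comes in.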
B, not A
Why pick A over B? Domain adaptation fine-tuning seems way more effective for those scientific terms than just more prompts.
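To make option B concrete: Bedrock supports fine-tuning via a model-customization job. Below is a minimal sketch of the request payload such a job takes; every name here (bucket URIs, role ARN, job and model names) is a placeholder, not a real resource, and hyperparameter values are illustrative only.

```python
# Sketch of a Bedrock model-customization (fine-tuning) request payload.
# All ARNs, S3 URIs, and names below are placeholders for illustration.
customization_job = {
    "jobName": "research-terms-finetune",
    "customModelName": "chatbot-domain-adapted",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",  # placeholder
    "baseModelIdentifier": "amazon.titan-text-express-v1",
    # Training data would be JSONL prompt/completion pairs built from the papers.
    "trainingDataConfig": {"s3Uri": "s3://example-bucket/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://example-bucket/output/"},
    "hyperParameters": {"epochCount": "2", "learningRate": "0.00001"},
}

# With real AWS credentials, a boto3 'bedrock' client would submit it:
#   client = boto3.client("bedrock")
#   client.create_model_customization_job(**customization_job)
```

The point of B is that the training data itself comes from the research papers, so the model learns the scientific vocabulary rather than just seeing a few examples at inference time.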
Maybe A, since few-shot prompting can help the FM handle unusual or complex queries by showing more examples. I think this could boost its performance with those scientific terms, but not totally sure if it's the best fit here.
A, seems like few-shot prompting would help with better answers in complex cases. B is a classic trap here.
B, this is covered in the official AWS docs and practice tests on adapting models to specialized terminology.
A, the official AWS study guide and some hands-on labs usually cover prompt engineering first.