Q: 5
Which technique breaks a complex task into smaller subtasks that are sent sequentially to a large
language model (LLM)?
Options
Discussion
Yeah, that's prompt chaining, so B fits. It's all about breaking a bigger problem down and sending the smaller prompts to the LLM sequentially, with each step's output feeding the next. RAG (D) comes up when external data or retrieval is involved, which isn't mentioned here. I've seen similar wording on practice exams, but open to other takes if someone has evidence otherwise.
B, not C. Tree of Thoughts is tempting, but that's more about exploring branching reasoning paths than sending subtasks in sequence. Pretty sure prompt chaining fits the description best.
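For anyone unsure what "sequential subtasks" means in practice, here's a minimal prompt-chaining sketch. `call_llm` is a hypothetical stub standing in for a real model API call (not any specific SDK), just so the example runs:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; it just echoes
    # the prompt so the chaining structure is visible and runnable.
    return f"answer({prompt})"

def prompt_chain(task: str, subtasks: list[str]) -> str:
    """Send subtasks to the model one at a time, feeding each
    answer into the next prompt -- the core of prompt chaining."""
    context = task
    for sub in subtasks:
        prompt = f"{sub}\nContext so far: {context}"
        context = call_llm(prompt)
    return context

result = prompt_chain(
    "Summarize and translate a report",
    ["Extract the key points", "Summarize the key points", "Translate the summary"],
)
print(result)
```

The point is the loop: one complex task, several small prompts, each sent in order with the previous answer carried forward. No retrieval of external documents is involved, which is what separates this from RAG.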
B. not D
B tbh
Option B
Hmm, I'd actually say D for this one. Sequential tasks could involve RAG if each subtask pulls in new context.
B
I don’t think it’s D. B is right since prompt chaining handles those sequential subtasks, not external data like RAG does.
Seen similar in practice tests, pretty sure it's B. Also suggest checking the official AWS study guide for prompt chaining details.
Option D. RAG involves bringing in external info to help answer questions, so I think it's about enriching the model's knowledge, not breaking up tasks. Then again, sequential subtasks sound a bit like how RAG retrieves multiple docs step by step. Not 100% sure though, open to corrections.
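To show why D doesn't match, here's a minimal RAG sketch for contrast. Retrieval enriches a single prompt with external context; it doesn't split the task into sequential subtasks. `call_llm` and the toy keyword-overlap scorer are assumptions made up for illustration:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"answer({prompt})"

# Toy "external knowledge" store.
DOCS = [
    "Prompt chaining sends subtasks to an LLM in sequence.",
    "RAG retrieves external documents to ground the answer.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Toy relevance score: count of lowercase words shared with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def rag_answer(question: str) -> str:
    # One retrieval step, one prompt: the task itself is never decomposed.
    context = "\n".join(retrieve(question, DOCS))
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

print(rag_answer("What does RAG retrieve?"))
```

Note the shape: retrieve once, answer once. The question asks about breaking a task into subtasks sent sequentially, which is chaining (B), not retrieval (D).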