I think B is best here. A custom retriever lets you filter by last-updated date, so the AI stops pulling in stale docs. Option A would mean switching the entire data source, which the scenario doesn't say is possible. Pretty sure it's B, but open if anyone sees a catch.
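To make the recency-filter idea concrete, here's a tiny sketch in plain Python — note this is my own illustration, not the platform's actual retriever API; the `Doc` type and `retrieve_fresh` function are hypothetical names:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Doc:
    title: str
    updated: date  # last-updated date used for the freshness filter

def retrieve_fresh(docs, query_match, max_age_days=90, today=None):
    """Return only matching docs updated within the freshness window."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [d for d in docs if query_match(d) and d.updated >= cutoff]

docs = [
    Doc("Old pricing guide", date(2022, 1, 1)),
    Doc("Current pricing guide", date(2024, 5, 1)),
]
# With a 90-day window, only the recently updated doc survives.
fresh = retrieve_fresh(docs, lambda d: "pricing" in d.title.lower(),
                       today=date(2024, 6, 1))
```

Swapping the data source (option A) wouldn't help here, because staleness lives inside the same source; the filter is what changes.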
Option A for me. LLMs usually surface similar topics and the actions you might need, but execution order feels more like system logic than the model's job. Maybe I'm missing some nuance, but A matches what I've seen in the docs. Disagree?
I picked A since the LLM usually finds similar topics and returns related actions. Not sure if it also figures out the order of execution every time. Anyone see a reason why A wouldn't work here?
Honestly, I don't think it's C here. Prompt Builder is mainly for generating natural language content, so creating a draft newsletter (A) is the key use case. Predicting churn or calculating CLV usually needs analytics or ML tools instead, not prompt-driven generative AI. Pretty sure A is correct but open if someone has seen a scenario where C fits.
I don’t think A is right for this one. It’s C, since Action Instructions exist specifically to tell the LLM when to use the action, not to shape the user experience directly. A sounds good but it’s more of a distractor because it focuses on UI, not the AI side. Pretty sure about C, but open if someone sees it differently.
Pretty sure C is right. Action Instructions are mainly for the LLM to figure out when and how to use the action, not for end users directly. A feels more like a UI/UX thing, not what happens under the hood. Let me know if you see it differently though.