Q: 1
An AI development company is working on an AI-assisted chatbot for a customer, which happens to
be an online retail company. The goal is to create an assistant that can best answer queries regarding
the company policies as well as retain the chat history throughout a session. Considering the
capabilities, which type of model would be the best?
Options
Discussion
Option B. RAG lets the chatbot pull company-specific policy data, not just general info. Pretty sure that's needed for accuracy here.
B tbh, RAG is built for these use cases.
C/D? LLMs give good responses and can keep session context, so it seems either one could fit. Not totally sure which is better.
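To make the RAG argument concrete, here's a toy sketch of the pattern: retrieve a relevant policy snippet, then prepend it (plus session history) to the prompt. The documents and the keyword-overlap "retriever" are invented placeholders, not a real vector store or LLM call.

```python
# Toy RAG sketch: retrieve relevant policy text, then build a grounded prompt.
# POLICY_DOCS and the word-overlap retriever are illustrative stand-ins only.

POLICY_DOCS = [
    "Returns: items may be returned within 30 days with a receipt.",
    "Shipping: standard shipping takes 3-5 business days.",
    "Privacy: customer data is never sold to third parties.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, history: list[str]) -> str:
    """Combine retrieved policy context with the session chat history."""
    context = retrieve(query, POLICY_DOCS)
    return f"Context: {context}\nHistory: {' | '.join(history)}\nUser: {query}"

history = ["User: hi", "Bot: hello, how can I help?"]
prompt = build_prompt("Can I return items for a refund?", history)
print(prompt)
```

The retrieved context is what keeps answers tied to company policy instead of the model's general knowledge, which is the whole point of picking RAG here.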
Be respectful. No spam.
Q: 2
What is the purpose of embeddings in natural language processing?
Options
Discussion
C imo. Had something like this in a mock; it's always about numerical representations that capture semantic meaning in NLP.
C tbh, embeddings map words or phrases to vectors so the model can actually understand context and similarities. Not about compression or translation. Pretty sure that's what Oracle wants here, unless I'm missing something subtle.
It's C, embeddings basically turn words or phrases into vectors that capture their meaning and relationships. Pretty standard NLP stuff.
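To see why "vectors that capture meaning" matters, here's a tiny sketch with hand-made 3-d embeddings and cosine similarity. The numbers are invented for illustration; real models produce vectors with hundreds of dimensions.

```python
import math

# Hand-made 3-d "embeddings"; the values are invented for illustration.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.12],
    "apple": [0.10, 0.20, 0.95],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Semantically close words get a higher score than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # high
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

That similarity structure is exactly what the compression/translation distractors don't capture.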
Q: 3
When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?
Options
Discussion
Option A
It's B; I saw a similar question on practice exams. Fine-tuning is for when prompt engineering isn't enough.
Q: 4
In the simplified workflow for managing and querying vector data, what is the role of indexing?
Options
Discussion
B tbh, indexing is all about making the search faster by organizing the vectors so you don’t have to brute force everything. D looks tempting because you do sometimes categorize data, but that's not what 'indexing' specifically means here. I’ve seen similar questions in other AI/vector DB practice sets. If anyone’s picking C for compression, pretty sure that’s a distraction.
It's B, indexing creates a structure so you can search vectors way faster. Is the question asking for "the main" role though, or could it be about storage too? If storage size were the requirement, C would make sense instead.
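A toy illustration of why an index beats brute force: bucket the vectors by a coarse key so a query scans only one bucket instead of everything. Real vector indexes (IVF, HNSW, etc.) are far more sophisticated; this just shows the idea of organizing vectors to narrow the search.

```python
# Brute force scans every vector; an "index" narrows the search to one bucket.
vectors = [(i, [float(i), float(i % 10)]) for i in range(1000)]

def bucket_key(vec):
    """Coarse quantization: group vectors by dim 0, 100 units per bucket."""
    return int(vec[0] // 100)

# Build the index once: bucket id -> list of (id, vector).
index = {}
for vid, vec in vectors:
    index.setdefault(bucket_key(vec), []).append((vid, vec))

def search(query, idx):
    """Scan only the query's bucket (~100 vectors) instead of all 1000."""
    candidates = idx.get(bucket_key(query), [])
    return min(candidates,
               key=lambda item: sum((a - b) ** 2 for a, b in zip(item[1], query)))

best_id, best_vec = search([437.0, 7.0], index)
print(best_id)
```

Same nearest-neighbor answer, roughly a tenth of the distance computations, which is why B (faster search via organization) is the indexing role and C (compression) is the distractor.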
Q: 5
When should you use the T-Few fine-tuning method for training a model?
Options
Discussion
Option C is right here. T-Few is designed for cases where you only have a small dataset, like a few thousand samples or less; saw this in practice exam reports. Pretty sure D is too large for T-Few.
D, since with big data setups (hundreds of thousands+), you usually need scalable fine-tuning. Not totally sure though, maybe I'm missing a constraint here.
C, not D. T-Few is for small datasets, hundreds of thousands is way too big. Seen similar logic in practice stuff.
Q: 6
Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
Options
Discussion
Practice test matched this as C, but I'd skim the Oracle docs before final review.
Not B, C. Saw similar on practice exam, double check official docs to be sure.
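For intuition on what the snippet in the question does, here's a minimal stand-in that mimics the contract: the declared input_variables fill the placeholders in the template when you format it. This is a hypothetical toy class for illustration, not the real LangChain PromptTemplate, so check the official docs for the actual behavior.

```python
# Hypothetical stand-in for a prompt template, for illustration only.
class ToyPromptTemplate:
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        """Require a value for every declared variable, then fill the template."""
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise KeyError(f"missing variables: {missing}")
        return self.template.format(**kwargs)

template = "You are an assistant for {city}. Answer this: {human_input}"
prompt = ToyPromptTemplate(["human_input", "city"], template)
result = prompt.format(human_input="What are the store hours?", city="Austin")
print(result)
```

The key point the question is probing: the variables declared in input_variables are the ones the template expects to be supplied at format time.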
Q: 7
Which technique involves prompting the Large Language Model (LLM) to emit intermediate
reasoning steps as part of its response?
Options
Discussion
C/B? I know C can trick you since it does break down problems, but B is the one that always outputs visible reasoning steps. Still, with these questions, easy to mix them up.
B, Chain-of-Thought makes the LLM show its reasoning in steps. Pretty sure that's what's being asked here, since options like Step-Back or In-Context don't specifically force the model to explain thinking. Agree?
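For reference, Chain-of-Thought is usually just a prompt that demonstrates or requests intermediate steps. A minimal sketch of building such a prompt (the worked example is invented):

```python
# Chain-of-Thought prompting: the prompt itself elicits visible reasoning steps.
def cot_prompt(question: str) -> str:
    # One worked example showing step-by-step reasoning before the answer.
    example = (
        "Q: A shop sells pens at 3 for $2. How much do 9 pens cost?\n"
        "A: 9 pens is 3 groups of 3. Each group costs $2, so 3 * $2 = $6. "
        "The answer is $6.\n"
    )
    # "Let's think step by step" nudges the model to emit its reasoning.
    return example + f"Q: {question}\nA: Let's think step by step."

print(cot_prompt("A train travels 60 km in 1.5 hours. What is its speed?"))
```

The visible intermediate steps in the response are what distinguish CoT from techniques like in-context learning, which supplies examples but doesn't specifically ask for reasoning.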
Q: 8
How does the structure of vector databases differ from traditional relational databases?
Options
Discussion
Option C. Official Oracle docs and some hands-on lab examples make it clear: vector DBs focus on similarities in high-dimensional space, not just rows and columns like relational databases.
Not B, C. Quick question: are we looking for what fundamentally separates vector from relational databases or just querying style? If the question was about data format only, I'd consider A.
C or D. Kinda tempted by D since both store rows, but pretty sure vector DBs are mainly about the vector space and finding data by similarity, which is C. Saw a similar one on another practice set that tripped me up though.
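To make the contrast concrete: a relational query matches exact values, while a vector query ranks by distance in an embedding space, so a query vector equal to nothing in the store still finds its nearest match. Toy sketch with invented data:

```python
import math

# Relational-style lookup: exact match on a column value.
rows = [{"id": 1, "word": "cat"}, {"id": 2, "word": "dog"}]
exact = [r for r in rows if r["word"] == "cat"]

# Vector-style lookup: nearest neighbor by distance, no exact match needed.
items = {"cat": [1.0, 0.9], "dog": [0.95, 1.0], "car": [0.1, 0.05]}

def nearest(query, store):
    """Return the key whose vector is closest to the query vector."""
    return min(store, key=lambda k: math.dist(query, store[k]))

# [0.99, 0.92] equals no stored vector, yet similarity search still resolves it.
print(exact[0]["id"], nearest([0.99, 0.92], items))
```

That similarity-in-high-dimensional-space behavior is the fundamental difference option C points at; row storage (D) is incidental.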
Q: 9
What does in-context learning in Large Language Models involve?
Options
Discussion
C
C tbh, matches what I saw in the official guide and some practice tests.
Probably C here. In-context learning is about feeding the model instructions or demos as part of the prompt, not retraining it (so A and B are out). D is tempting but adding layers is just architecture stuff, doesn't relate to how LLMs adapt at inference time. Pretty sure it's C, unless I'm missing some edge case.
Tricky if you haven't seen the term, but it's about using prompts to guide the model so C.
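Option C in practice: in-context learning means packing demonstrations into the prompt at inference time, with no weight updates. A toy few-shot sentiment prompt (examples invented):

```python
# Few-shot / in-context learning: the "training" lives entirely in the prompt.
demos = [
    ("The food was amazing!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def few_shot_prompt(text: str) -> str:
    """Prepend labeled demonstrations, then leave the last label for the model."""
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in demos)
    return f"{shots}\nReview: {text}\nSentiment:"

print(few_shot_prompt("Pretty decent overall."))
```

Nothing about the model changes, which is why the retraining options (A/B) and the architecture option (D) are out.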
Q: 10
What does the Loss metric indicate about a model's predictions?
Options
Discussion
B
C/D? I saw a similar question in an exam report, not sure which one fits better here.
It's B.
Probably B, since loss tells you how far off the predictions are from the actual values. It's not about counting right answers or total predictions. Pretty sure about this, but let me know if I'm missing something.
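A quick numeric illustration of B: loss measures how far predictions are from the targets, not how many are right. Mean squared error is used here for simplicity; classification models typically use cross-entropy, but the idea is the same (the numbers are made up).

```python
# Loss quantifies prediction error; lower loss = predictions closer to targets.
def mse(preds, targets):
    """Mean squared error over paired predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

targets = [3.0, 5.0, 7.0]
good = [3.1, 4.9, 7.2]   # close predictions -> small loss
bad  = [0.0, 9.0, 2.0]   # far-off predictions -> large loss

print(mse(good, targets), mse(bad, targets))
```

Counting correct answers would be accuracy, a different metric entirely, which is why the "number of right/total predictions" options are distractors.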