Yeah, C makes sense here since Oracle's docs mention fine-tuning clusters use 2 units. Easy to overthink and pick A if you miss that detail. Pretty sure it's always 2 for fine-tune unless specified otherwise.
Q: 11
You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?
Options
Discussion
C tbh, trap is A if you miss the 2 units per cluster detail from Oracle docs.
C, similar question came up in my practice set. Oracle's default fine-tuning clusters have 2 units, so 2 x 10 hours = 20 unit hours. If it wasn't the default config it might be different, but C fits here.
Option D
That works out to 20 unit hours, so C is right.
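The arithmetic the thread agrees on is simple enough to sketch. Note that `unit_hours` below is a made-up helper for illustration, not an Oracle SDK call; the 2-units-per-fine-tuning-cluster figure is the default the commenters cite from the docs:

```python
# Hypothetical helper illustrating the unit-hour math discussed above.
def unit_hours(units_per_cluster: int, active_hours: int) -> int:
    """Total unit hours = units in the cluster x hours it stays active."""
    return units_per_cluster * active_hours

# Fine-tuning dedicated AI clusters default to 2 units (per the thread),
# so 10 active hours costs 2 * 10 = 20 unit hours.
print(unit_hours(2, 10))  # -> 20
```

If the cluster were provisioned with a different unit count, the same multiplication would apply, which is why the "default config" caveat above matters.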
Be respectful. No spam.
Q: 12
How are prompt templates typically designed for language models?
Options
Discussion
I thought it was A, since templates sometimes use logic, but maybe I'm overthinking.
B imo, saw this phrasing a few times in exam reports for Oracle AI. Templates are basically recipes to standardize prompts.
B, saw this recipe-style phrasing in the official Oracle guide and some practice exams. Templates guide prompt structure, they're not really complex algorithms. Pretty sure that's what they're looking for here.
B, not sure why people lean toward A here since prompt templates are more recipe-like, not compiled algorithms.
Check the official study guide and Oracle's practice tests, both cover this recipe-style prompt template concept for B.
Pretty sure B, prompt templates are just structured recipes for LLM prompts. None of the rest really match how they're typically used.
Oracle questions love to overcomplicate stuff. A.
Curious why some keep picking A here, isn’t B the one that fits? The others look like distractors (numerical data, no modification). Am I missing something?
B tbh, most resources describe prompt templates as like predefined recipes or blueprints for prompts. They're meant to guide structure and can be reused with different inputs. Saw similar wording in the official study guide and practice tests, so I'm pretty sure it's B. Anyone see anything else in the docs?
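To make the "predefined recipe" idea concrete, here's a minimal sketch using Python's standard `string.Template`; the prompt text and the `text` variable name are invented for illustration, not from any Oracle material:

```python
from string import Template

# A prompt template is a reusable "recipe": fixed instructions plus
# variable slots that get filled with different inputs at run time.
summary_prompt = Template(
    "You are a helpful assistant.\n"
    "Summarize the following text in one sentence:\n"
    "$text"
)

# The same template can be reused with any input text.
filled = summary_prompt.substitute(text="LLMs generate text token by token.")
print(filled)
```

The point the B voters are making: there's no algorithmic logic here, just structure that standardizes how prompts are built.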
Q: 13
What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?
Options
Discussion
Option C is tempting if you only focus on the term, but it's actually D that's closer, since hallucination is about generating false info not visualizing images.
Yeah, it's definitely D. Hallucination is when the model makes stuff up and acts like it's true.
Careful, if the scenario mentioned multimodal LLMs with image output then C might fit, but standard usage points to D.
C/D? If the question means making up wrong info, then D fits but if they sneak in something about image generation in the scenario I'd worry about C. Not sure since wording feels a bit open.
Q: 14
What is the purpose of frequency penalties in language model outputs?
Options
Discussion
A
B, frequency penalty is all about discouraging repeats. It lowers the probability for tokens that have already been picked a lot, so you don't get stuck with the same word over and over. I think B best matches how it's handled in gen AI APIs I've worked with, but if someone reads this differently, let me know.
I think it's B since frequency penalties are used to cut down on repeated tokens and avoid the model spamming the same words. At least that's what I've always seen in practice, but open to being corrected.
B for frequency penalty you want to discourage repeated tokens not boost them. Makes the output less repetitive.
B, seen similar wording in the official guide and some practice exams.
B
B tbh
It's A, since tokens that appear more frequently should get more weight, right? Unless I'm missing a trick in the penalty logic.
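A rough sketch of the behavior the B voters describe, assuming the common scheme where each token's score is reduced in proportion to how often it has already appeared (`apply_frequency_penalty` is a toy function, not any vendor's API):

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty=0.5):
    """Lower each token's logit in proportion to how many times it has
    already been generated, discouraging the model from repeating itself."""
    counts = Counter(generated_tokens)
    return {tok: logit - penalty * counts.get(tok, 0)
            for tok, logit in logits.items()}

logits = {"the": 2.0, "cat": 1.5, "sat": 1.0}
history = ["the", "the", "cat"]   # "the" already picked twice

adjusted = apply_frequency_penalty(logits, history)
# "the" drops by 0.5 * 2 = 1.0, "cat" by 0.5, "sat" is untouched.
```

Scores go down for repeated tokens, never up, which is why A has the direction backwards.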
Q: 15
What happens if a period (.) is used as a stop sequence in text generation?
Options
Discussion
D
Annoying how Oracle words these, but D imo
Yeah, D makes sense for this one. With a period as the stop sequence, text gen will halt as soon as it hits the first sentence end, token limit doesn't matter. Seen this happen with OpenAI API too, but open to correction if anyone's had another case.
D imo, had something like this in a mock exam and it stopped at the first sentence when period was set as stop sequence.
C or D? I think it's D because using a period as a stop sequence will cause the model to stop at the first sentence end. That's how most APIs handle it, but if anyone has seen exceptions let me know.
Probably D here. If you set the stop sequence to a period, the model will end output right after the first full sentence, even if it could keep going. Pretty sure that's how most LLM APIs treat stop sequences.
It's D, period as stop sequence means generation halts right after the first sentence ends.
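A toy version of the stop-sequence behavior described above (`apply_stop_sequence` is invented for illustration; real APIs differ on whether the stop string itself is kept in the output, this sketch keeps it):

```python
def apply_stop_sequence(text, stop="."):
    """Truncate generated text at the first occurrence of the stop
    sequence; if the stop sequence never appears, return text unchanged."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx + len(stop)]

# Even though the model "generated" three sentences, output ends at the
# first period, regardless of any remaining token budget.
generated = "The sky is blue. It is also vast. Birds fly in it."
print(apply_stop_sequence(generated))  # -> The sky is blue.
```

This is the point several commenters make: the stop sequence wins over the token limit, the first sentence boundary ends generation.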