DATABRICKS-GENERATIVE-AI-ENGINEER-ASSOCI…
Q: 1
A Generative AI Engineer is building a system that will answer questions on currently unfolding news
topics. As such, it pulls information from a variety of sources including articles and social media
posts. They are concerned about toxic posts on social media causing toxic outputs from their system.
Which guardrail will limit toxic outputs?
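A minimal sketch of the kind of output guardrail this question points to: a filter applied to the model's response before it reaches the user. The banned-term lexicon and fallback message below are illustrative placeholders; a production system would use a trained toxicity classifier rather than a word list.

```python
# Placeholder lexicon; a real guardrail would use a toxicity classifier.
TOXIC_TERMS = {"idiot", "stupid", "hate"}

def filter_toxic_output(text: str, fallback: str = "[response withheld]") -> str:
    """Return the model output, or a fallback if it contains toxic terms."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & TOXIC_TERMS:
        return fallback
    return text
```

The key design point is that the check runs on the generated output, not just the retrieved sources, so toxicity absorbed from social-media context is still caught.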
Q: 2
A Generative AI Engineer is developing a patient-facing healthcare-focused chatbot. If the patient’s
question is not a medical emergency, the chatbot should solicit more information from the patient to
pass to the doctor’s office and suggest a few relevant pre-approved medical articles for reading. If
the patient’s question is urgent, the chatbot should direct the patient to call their local emergency services.
Given the following user input:
“I have been experiencing severe headaches and dizziness for the past two days.”
Which response is most appropriate for the chatbot to generate?
Q: 3
What is an effective method to preprocess prompts using custom code before sending them to an
LLM?
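A sketch of one way custom preprocessing code can wrap an LLM call. The `llm` callable here is a hypothetical stand-in for whatever client the application uses; the point is that the prompt is normalized by custom code before the request is sent.

```python
def preprocess_prompt(prompt: str) -> str:
    """Normalize whitespace and prepend a fixed instruction."""
    cleaned = " ".join(prompt.split())  # collapse runs of whitespace/newlines
    return f"Answer concisely.\n\n{cleaned}"

def ask(prompt: str, llm) -> str:
    # llm is injected so the wrapper is testable without a real model call
    return llm(preprocess_prompt(prompt))
```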
Q: 4
A Generative AI Engineer is tasked with improving RAG quality by addressing its inflammatory
outputs.
Which action would be most effective in mitigating the problem of offensive text outputs?
Q: 5
A Generative AI Engineer has developed an LLM application to answer questions about internal
company policies. The Generative AI Engineer must ensure that the application doesn’t hallucinate
or leak confidential data.
Which approach should NOT be used to mitigate hallucination or confidential data leakage?
Q: 6
A Generative AI Engineer has successfully ingested unstructured documents and chunked them by
document sections. They would like to store the chunks in a Vector Search index. The current format
of the dataframe has two columns: (i) original document file name (ii) an array of text chunks for
each document.
What is the most performant way to store this dataframe?
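A sketch of the transformation this question describes: flattening the array-of-chunks column into one row per chunk with a unique id, which is the layout a vector index expects. Pandas is used here as a runnable stand-in; on Databricks the same reshaping would use `pyspark.sql.functions.explode` on a Spark DataFrame.

```python
import pandas as pd

# Two-column input: document file name plus an array of text chunks.
df = pd.DataFrame({
    "file_name": ["policy.pdf"],
    "chunks": [["chunk one text", "chunk two text"]],
})

# One row per chunk, with a unique primary key per row.
flat = df.explode("chunks").reset_index(drop=True)
flat["chunk_id"] = flat.index
```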
Q: 7
A Generative AI Engineer is developing a chatbot designed to assist users with insurance-related
queries. The chatbot is built on a large language model (LLM) and is conversational. However, to
maintain the chatbot’s focus and to comply with company policy, it must not provide responses to
questions about politics. Instead, when presented with political inquiries, the chatbot should
respond with a standard message:
“Sorry, I cannot answer that. I am a chatbot that can only answer questions around insurance.”
Which framework type should be implemented to solve this?
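A minimal sketch of a topic guardrail that intercepts off-limits questions and returns the mandated refusal instead of calling the model. The keyword list is a placeholder; a real system would use a classifier or an LLM-based topic check rather than string matching.

```python
REFUSAL = ("Sorry, I cannot answer that. I am a chatbot that can only answer "
           "questions around insurance.")

# Placeholder terms; a production guardrail would use a topic classifier.
POLITICAL_KEYWORDS = {"election", "senator", "vote"}

def guarded_reply(question: str, llm) -> str:
    """Refuse political questions; otherwise delegate to the LLM."""
    if any(k in question.lower() for k in POLITICAL_KEYWORDS):
        return REFUSAL
    return llm(question)
```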
Q: 8
A Generative AI Engineer is building an LLM-based application that has an
important transcription (speech-to-text) task. Speed is essential for the success of the application.
Which open Generative AI models should be used?
Q: 9
A Generative AI Engineer is tasked with developing a RAG application that will help a small internal
group of experts at their company answer specific questions, augmented by an internal knowledge
base. They want the best possible quality in the answers, and neither latency nor throughput is a
huge concern given that the user group is small and they’re willing to wait for the best possible answer.
Q: 10
A Generative AI Engineer has created a RAG application to look up answers to questions about a
series of fantasy novels that are being asked on the author’s web forum. The fantasy novel texts are
chunked and embedded into a vector store with metadata (page number, chapter number, book
title), retrieved with the user’s query, and provided to an LLM for response generation. The
Generative AI Engineer used their intuition to pick the chunking strategy and associated
configurations but now wants to more methodically choose the best values.
Which TWO strategies should the Generative AI Engineer take to optimize their chunking strategy
and parameters? (Choose two.)
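A sketch of what "methodically choosing" chunking parameters could look like: sweeping candidate chunk sizes and overlaps and keeping the configuration with the best evaluation score. `evaluate_rag` is hypothetical; in practice it would rebuild the index with the given settings and score generated answers against a labeled evaluation set.

```python
from itertools import product

def sweep_chunking(evaluate_rag, sizes=(256, 512, 1024), overlaps=(0, 64)):
    """Return the (chunk_size, overlap) pair with the best eval score."""
    results = {
        (size, overlap): evaluate_rag(chunk_size=size, overlap=overlap)
        for size, overlap in product(sizes, overlaps)
    }
    return max(results, key=results.get)
```

The essential idea is that chunking quality is measured end to end, on the same query/answer pairs each time, rather than chosen by intuition.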