Q: 16
Security researchers investigating a newly deployed internal developer assistant notice
that the model frequently suggests non-existent libraries and deprecated cryptographic
functions when asked to generate secure Python code. The system currently relies on a
Large Language Model (LLM) without any external data connections, leading to confident
but factually incorrect outputs that could introduce vulnerabilities into the production
pipeline.
Which of the following architectural changes would BEST mitigate the risk of these
hallucinations while ensuring the model provides up-to-date security recommendations?
Options
Discussion
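The scenario describes the textbook setup that Retrieval-Augmented Generation (RAG) is meant to fix: a model with no external data connections confidently inventing libraries and recommending deprecated cryptography. Grounding each query in retrieved, vetted reference material (and instructing the model to answer only from it) directly targets both the hallucination risk and the "up-to-date" requirement in the stem. Below is a minimal, self-contained Python sketch of that pattern. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for the deployment's real inference client, the in-memory `DOCS` list stands in for a curated corpus of current security guidance, and the keyword-overlap retriever is a placeholder for a proper vector search.

```python
"""Minimal retrieval-augmented generation (RAG) sketch.

Illustration only: `call_llm`, `DOCS`, and the retriever below are
hypothetical placeholders for a real inference client, a curated
security knowledge base, and a production vector search.
"""

from dataclasses import dataclass


@dataclass
class Doc:
    title: str
    text: str


# Hypothetical curated knowledge base of up-to-date security guidance.
DOCS = [
    Doc("hashing", "Use hashlib.sha256 or bcrypt for passwords; MD5 and "
                   "SHA-1 are deprecated for security-sensitive use."),
    Doc("tls", "Use ssl.create_default_context(); never disable "
               "certificate verification in production code."),
]


def retrieve(query: str, docs: list[Doc], k: int = 2) -> list[Doc]:
    """Rank docs by naive keyword overlap with the query.

    A real system would use embedding similarity over a vector index;
    keyword overlap keeps this sketch dependency-free.
    """
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.text.lower().split())))
    return scored[:k]


def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in the actual inference client."""
    return f"[model response grounded in a prompt of {len(prompt)} chars]"


def answer(query: str) -> str:
    # Ground the model: inject retrieved references and constrain the
    # answer to them, so it cannot invent libraries or stale crypto advice.
    context = "\n".join(f"- {d.title}: {d.text}" for d in retrieve(query, DOCS))
    prompt = (
        "Answer using ONLY the reference material below; "
        "if it does not cover the question, say so.\n"
        f"References:\n{context}\n\n"
        f"Question: {query}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("How should I hash passwords in Python?"))
```

The key design point is the prompt construction in `answer`: the model is told to answer only from retrieved references, so refreshing the knowledge base (rather than retraining the model) is what keeps recommendations current, and ungrounded claims about non-existent libraries have nothing in the context to anchor to.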