Q: 10
Which of the following NVIDIA compute platforms is best suited for deploying AI workloads at the
edge with minimal latency?
Options
Discussion
My pick: D. Jetson is actually designed for edge AI use cases, not Tesla or RTX.
Option D fits: Jetson is built for edge AI, it's small, and it handles on-device inference fast. Tesla (B) is datacenter gear with way too much power draw for typical edge use cases. D is the go-to here unless they change what "edge" means. Open to other views if anyone has counter experience.
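To make the latency point concrete, here's a minimal, framework-agnostic timing harness for measuring on-device inference latency. Note this is just an illustrative sketch: the `infer` callable and the dummy workload are stand-ins for a real model call (on a Jetson that would typically be a TensorRT engine or similar), which isn't something we can reproduce in a forum post.

```python
import time

def measure_latency(infer, warmup=5, runs=50):
    """Return mean seconds per call of `infer`.

    infer: a zero-argument callable wrapping one forward pass of the
    deployed model. Warmup runs are discarded so one-time costs
    (allocation, caching) don't skew the average.
    """
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    return (time.perf_counter() - start) / runs

# Illustrative stand-in for a real model call:
avg = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(f"avg latency: {avg * 1e3:.3f} ms")
```

The same harness works on any hardware, which is exactly why it's useful for the edge-vs-datacenter comparison: you measure end-to-end latency on the box where the model actually runs.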
Nah, not B for true edge cases. Jetson (D) is built for low-power, real-time AI at the edge, while Tesla is more for data centers and needs too much power. Pretty sure D is right unless "edge" means something weird here.
Had something like this in a mock, went with B back then.
Tricky wording, but it has to be Jetson, option D. Tesla is great for raw compute, but edge deployments need low power and low latency, which only Jetson is really built for. Unless the question is really about datacenter deployments.
Option D. Reports from exam practice and the official guide suggest Jetson is built exactly for low-latency edge AI.
B. Had something like this in a mock and picked Tesla.
D
B could work since Tesla boards are super powerful for AI inference, so latency should be low. Not 100% sure though because edge usually means smaller hardware. Let me know if I missed something.
I don't think it's B. D is built for edge stuff, super low latency.