Q: 9
In an AI data center, you are working with a professional administrator to optimize the deployment
of AI workloads across multiple servers. Which of the following actions would best contribute to
improving the efficiency and performance of the data center?
Options
Discussion
B is wrong; A fits better. Distributing loads across servers and using DPUs for networking/storage aligns with NVIDIA's current best practices for AI data centers. Saw a similar scenario in a recent mock exam. Let me know if you disagree.
A makes more sense here. Splitting AI workloads across GPU servers and letting DPUs handle networking/storage boosts overall throughput and reduces CPU bottlenecks, especially as you scale out. Pretty sure that matches how NVIDIA recommends designing modern AI data centers.
C or A? If the question is asking for the best way to improve efficiency and performance, I’d go with A since distributing workloads can scale better and DPUs help with networking. But if there are specific constraints on hardware (like only one high-performance server is available), maybe B could make sense. Does the scenario assume you have multiple GPU servers and DPUs available?
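To make the "distribute across servers" idea in A concrete, here is a toy Python sketch (not an NVIDIA API; the server and job names are hypothetical) of round-robin placement of jobs across multiple GPU servers. The DPU angle is captured only as a comment: the point is that each server's DPU would absorb networking/storage I/O so the host CPUs stay free for compute.

```python
# Toy sketch, not an NVIDIA API: round-robin distribution of AI jobs
# across multiple GPU servers. In option A's design, each server's DPU
# handles networking/storage I/O so the host CPU isn't a bottleneck.
from itertools import cycle


def distribute_jobs(jobs, servers):
    """Assign each job to a GPU server in round-robin order."""
    assignments = {}
    server_cycle = cycle(servers)
    for job in jobs:
        assignments[job] = next(server_cycle)
    return assignments


servers = ["gpu-node-1", "gpu-node-2", "gpu-node-3"]  # hypothetical names
jobs = [f"train-job-{i}" for i in range(6)]
plan = distribute_jobs(jobs, servers)

# With 6 jobs and 3 servers, each server ends up with 2 jobs.
counts = {s: list(plan.values()).count(s) for s in servers}
```

Real schedulers (Slurm, Kubernetes with the NVIDIA GPU Operator, etc.) weigh GPU memory, locality, and interconnect topology rather than plain round-robin, but the sketch shows why spreading work beats piling everything onto one box.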