Q: 20
During routine monitoring of your AI data center, you notice that several GPU nodes are consistently
reporting high memory usage but low compute usage. What is the most likely cause of this situation?
Options
Discussion
D. When you see high memory usage but low compute, it's almost always data just sitting in GPU memory without enough ops to keep the cores busy. C's a trap because small models don't use tons of memory. Pretty sure D is what they want here, unless someone has seen otherwise?
D. High GPU memory with low compute usually means the data's loaded in but not much processing is actually happening. C trips people up, but it doesn't fit since small models don't drive high memory usage. Open to other thoughts if someone disagrees.
Probably D, that's what shows up when the GPU memory is packed but compute's barely touched.
D, seen this in official exam guides and practice labs as a typical cause.
Seen similar in practice tests, D is the right fit here.
Yeah this screams D for me. High memory with low compute almost always happens when big datasets are loaded but the GPU isn't actually crunching much, i.e. inefficient use of CUDA cores. Pretty sure that's what they're pointing to here, but I'll change my mind if someone has a better example.
C or D but pretty sure it's D for this one. High GPU memory with low compute usually means lots of data loaded up without enough processing on it. I've seen official practice tests hit this scenario, matches their explanations. Always good to check labs and monitoring docs too if unsure.
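For anyone who wants to sanity-check this on their own nodes: here's a minimal Python sketch that flags the "memory full, cores idle" pattern from utilization percentages. The thresholds (80% memory, 20% compute) are my own arbitrary picks, not anything official, and in practice you'd feed it real readings, e.g. parsed from `nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total` or from NVML, rather than hard-coded numbers.

```python
def flags_memory_heavy_idle(mem_used_pct: float, sm_util_pct: float,
                            mem_threshold: float = 80.0,
                            compute_threshold: float = 20.0) -> bool:
    """Return True when GPU memory is heavily used but the SMs are mostly
    idle, i.e. data is resident in memory without matching compute ops on
    the cores (the scenario in this question). Thresholds are illustrative."""
    return mem_used_pct >= mem_threshold and sm_util_pct <= compute_threshold

# Hypothetical readings from a monitoring dashboard:
print(flags_memory_heavy_idle(92.0, 7.0))   # memory packed, cores idle -> True
print(flags_memory_heavy_idle(92.0, 85.0))  # memory packed, cores busy -> False
print(flags_memory_heavy_idle(40.0, 5.0))   # low memory, low compute  -> False
```

Obviously a toy, but it's the same check a fleet-monitoring alert would run: memory utilization high AND compute utilization low at the same time is what distinguishes D from the other causes.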
D imo, but only if the workload is actually moving lots of data into GPU memory that doesn't need much processing. If compute ops were higher, it'd be something else. Seen exam reports flip to C if memory isn't spiked.
It's D. Seeing high GPU memory usage but low compute almost always points to big data being loaded without matching compute ops on the cores. Not 100% but I've run into this in labs before.
Had something like this in a mock, definitely D.