Yeah, I agree: A and D are the right picks here. Quantization mainly helps with power efficiency (A) and memory/cache savings (D), especially for running models on edge devices. C is tempting, but accuracy loss is usually minimal if done carefully. Open to other thoughts if I'm missing something.
Q: 6
Which of the following claims are correct about quantization in the context of Deep Learning? (Pick the 2 correct responses.)
Options
Discussion
C/D? My thinking is that quantization can sometimes cause noticeable accuracy drops (C), especially with aggressive bit reduction, and D about memory sounds right too. Not 100 percent, since it doesn't *always* wreck accuracy, but in lots of practical cases C shows up.
Definitely not C; quantization doesn't always wreck accuracy. For this I'd pick A and D, since the main benefits are lower power use and less memory needed. Pretty sure that's right, but open to other views if I missed something.
I think A and D, but if quantization was super aggressive then C could happen.
Not C, it’s A and D. C is a common trap since quantization doesn’t always ruin accuracy.
A and D make sense here. Quantization does help cut power and memory use, but it doesn't always destroy accuracy (C is too strong unless the model's really sensitive). If the question asked about extreme bit-width reduction, C might be right though.
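To make that bit-width point concrete, here's a toy numpy sketch (my own illustration, not any framework's real quantization API) that round-trips random weights through symmetric uniform quantization at a few bit widths. The round-trip error is tiny at 8 bits and blows up at 2 bits, which is exactly the regime where C would actually bite:

```python
import numpy as np

def quantize_dequantize(w, n_bits):
    # Symmetric uniform quantization: scale floats onto a signed
    # n-bit integer grid, round, clip, then map back to floats.
    qmax = 2 ** (n_bits - 1) - 1                   # e.g. 127 for int8
    scale = np.abs(w).max() / qmax                 # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                               # dequantized approximation

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)  # toy "weights"

for bits in (8, 4, 2):
    err = np.abs(w - quantize_dequantize(w, bits)).mean()
    print(f"{bits}-bit mean abs error: {err:.6f}")
```

The quantization step grows with the ratio of the integer ranges as you drop bits (127x going from 8-bit to 2-bit here), so the average error grows with it. That's why C only holds under aggressive quantization, not in general.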
Pretty sure it's C/D: sometimes quantization hurts accuracy, and it always helps memory.
A, D
Likely A and D. Quantization is really about lowering bit precision to save space and power, so both A and D fit. There isn't always substantial accuracy loss (sometimes it's barely noticeable). Correct me if I missed a scenario where C would apply.
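For the memory half of D, the arithmetic is easy to sanity-check (again just a numpy sketch, assuming plain per-tensor int8 storage plus one float32 scale): the same million parameters take a quarter of the bytes, which is also where the cache/bandwidth savings come from, and less data movement is a big part of the power win in A.

```python
import numpy as np

rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.05, size=1_000_000).astype(np.float32)

# Store the tensor as int8 plus a single float32 scale factor.
scale = np.abs(w_fp32).max() / 127
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

print(w_fp32.nbytes)  # 4_000_000 bytes (32 bits per parameter)
print(w_int8.nbytes)  # 1_000_000 bytes (8 bits per parameter), 4x smaller
```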
Probably C and E. Quantization usually means using fewer bits for parameters, so E seems right. And I've heard some people mention it can hurt accuracy a lot, so C sounds possible too. Not sure if I'm missing something subtle; let me know if I'm off.