Option D. If the question asked about visualizing instead of selecting data, would Tableau (B) be correct?
I don’t think D is right here. C is designed for multi-step reasoning, especially when the prompt needs to guide the model through logical steps. D’s great for fact retrieval but doesn’t guarantee stepwise breakdown, which is what the question wants.
Had something like this in a mock; chain-of-thought works best for detailed math reasoning prompts where the model has to show intermediate steps.
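To make the chain-of-thought idea concrete, here's a minimal sketch of wrapping a question in a stepwise-reasoning instruction. The wording and helper name are made up for illustration, not from any official guide:

```python
# Hypothetical example: turn a plain question into a
# chain-of-thought style prompt that asks for stepwise reasoning.

def make_cot_prompt(question: str) -> str:
    """Wrap a question with an instruction to reason step by step."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, showing each intermediate result "
        "before giving the final answer."
    )

prompt = make_cot_prompt(
    "A train travels 60 km in 40 minutes. What is its speed in km/h?"
)
print(prompt)
```

The key difference from a plain retrieval prompt is just that explicit instruction to break the problem into steps.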
Yeah, I agree: A and D are the right picks here. Quantization mainly helps with power efficiency (A) and memory/cache savings (D), especially for running models on edge devices. C is tempting, but accuracy loss is usually minimal if it's done carefully. Open to other thoughts if I'm missing something.
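Back-of-the-envelope numbers for the memory savings in D (the 7B parameter count is just an illustrative assumption, not from the question):

```python
# Rough memory footprint of model weights at different precisions.
# The parameter count below is a made-up example.

params = 7_000_000_000          # hypothetical model size
bytes_fp32 = params * 4         # 32-bit floats: 4 bytes per weight
bytes_int8 = params * 1         # 8-bit quantized: 1 byte per weight

print(f"fp32: {bytes_fp32 / 1e9:.1f} GB")         # 28.0 GB
print(f"int8: {bytes_int8 / 1e9:.1f} GB")         # 7.0 GB
print(f"reduction: {bytes_fp32 // bytes_int8}x")  # 4x
```

That 4x cut in weight memory is exactly why quantization matters so much on edge devices with small RAM and caches.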
I went with D because I figured image transformations like flipping and cropping could make the model cheaper to train by optimizing the images up front. I've seen practice exams mention resource efficiency with preprocessing, so that's where my head was at. Not totally confident though; maybe the official guides clarify this point.
Option D. I think these transformations help cut down on compute by making images more efficient before training, right? Not 100% sure though; maybe I'm missing something about how augmentation works.
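For what it's worth, augmentation like flipping or cropping is usually about giving the model more varied training examples (better generalization), not saving compute; it actually adds training work. A toy horizontal flip on a 2x2 "image" (pure Python; a real pipeline would use something like torchvision or PIL, but the idea is the same):

```python
# Toy horizontal flip: each row of the "image" is reversed,
# producing a new mirrored training example.

def hflip(image):
    """Flip a 2-D list-of-lists image left to right."""
    return [row[::-1] for row in image]

img = [[1, 2],
       [3, 4]]
print(hflip(img))  # [[2, 1], [4, 3]]
```

The original image and the flipped copy are both fed to the model, so the dataset effectively grows rather than getting cheaper to process.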
I saw a similar question in practice and picked D. I thought fine-tuning sometimes requires you to look at the architecture, like the number of layers, before training. Not totally sure, but that was my logic; maybe someone can confirm?
Options A and E fit here. Word2vec is an actual neural-network model for static embeddings, and BERT gives contextual embeddings using transformers. WordNet looks tempting because of the name, but it's just a lexical database, not a deep learning model. Pretty sure it's not C or D either, since those aren't related to NLP word representations at all. If anyone thinks otherwise, let me know!
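To illustrate what embeddings buy you: similar words end up with nearby vectors, which you can check with cosine similarity. The 3-D vectors below are made up for illustration; real Word2vec or BERT vectors have hundreds of dimensions and come from a trained model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up toy embeddings, purely for demonstration.
vecs = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.12],
    "apple": [0.10, 0.20, 0.95],
}

print(cosine(vecs["king"], vecs["queen"]))  # close to 1.0
print(cosine(vecs["king"], vecs["apple"]))  # much smaller
```

That geometric "nearby means related" property is what makes learned embeddings useful, and it's exactly what a lexical database like WordNet doesn't give you directly.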