Q: 10
In Natural Language Processing, there is a group of steps in problem formulation collectively known as word representations (also called word embeddings). Which of the following are Deep Learning models that can be used to produce these representations for NLP tasks? (Choose two.)
Options
Discussion
Options A and E fit here. Word2vec is a deep learning model that produces static embeddings, and BERT produces contextual embeddings using transformers. WordNet looks tempting because of the name, but it's a lexical database, not a deep learning model. I'm fairly sure it isn't C or D either, since those aren't related to NLP word representations at all. If anyone thinks otherwise, let me know!
I've seen a similar question on official practice sets. A and E are correct for deep learning word embeddings.
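To make the static-vs-contextual distinction concrete, here is a minimal toy sketch. The vectors and the "contextual" mixing rule are invented for illustration only; real Word2vec and BERT learn these representations with neural networks, but the behavioral difference is the same: a static table gives one vector per word, while a contextual model's output depends on the sentence.

```python
# Toy sketch: static (Word2vec-style) vs contextual (BERT-style) embeddings.
# All vectors and the mixing rule below are made-up illustrative values,
# not output from any real model.

# Static table: "bank" always maps to the same vector.
static_table = {
    "bank": [0.2, 0.7],
    "river": [0.1, 0.9],
    "money": [0.8, 0.1],
}

def static_embed(word):
    """Word2vec-style lookup: surrounding context is ignored."""
    return static_table[word]

def contextual_embed(word, sentence):
    """Crudely faked BERT-style behavior: mix in the neighbors' vectors."""
    vec = list(static_table[word])
    for other in sentence:
        if other != word and other in static_table:
            vec = [v + 0.5 * o for v, o in zip(vec, static_table[other])]
    return vec

s1 = ["river", "bank"]
s2 = ["money", "bank"]

print(static_embed("bank"))           # identical in any sentence
print(contextual_embed("bank", s1))   # shifted toward "river"
print(contextual_embed("bank", s2))   # shifted toward "money"
```

This is exactly why WordNet doesn't qualify: a lexical database can tell you "bank" has multiple senses, but it doesn't compute vectors at all.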
Yeah, A and E. Only those actually generate embeddings with deep learning models. Not 100 percent sure, but pretty confident this is right.
Definitely A and E. Both Word2vec and BERT use deep learning to create embeddings, while the others don't actually generate word representations this way. Pretty confident unless there's some trick here.
D. TensorRT gets tossed around in deep learning contexts, so I picked it along with A. A lot of NLP setups use it for inference acceleration, so I thought maybe it counted as a model for embeddings here. Not totally sure; could be a trap?
A and E. B (WordNet) is a trap since it's not deep learning based. I've seen a similar question on practice exams.
C and D won't fit. Has to be A and E for actual deep learning word embeddings.
Probably A and E. Had something like this in a mock before, and both Word2vec and BERT are actually deep learning models specifically for word embeddings. WordNet isn't a model, more of a lexical database, and the other options are unrelated to NLP embeddings. Anyone disagree?
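Worth adding why embeddings matter downstream: once words are vectors (whether from Word2vec, BERT, or any other model), similarity becomes simple geometry. A quick sketch with invented toy vectors (not real model output):

```python
# Why word representations are useful: similarity between words reduces to
# cosine similarity between their vectors. Vectors below are toy values.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.82, 0.15],
    "car":   [0.1, 0.2, 0.9],
}

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["car"]))    # low: unrelated words
```

That's the whole point of the "word representations" step in the question: models like Word2vec and BERT learn vectors where this geometry reflects meaning.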
A and E