1. Google Developers. (n.d.). Machine Learning Glossary: Embedding. "An embedding is a relatively low-dimensional space into which you can translate high-dimensional vectors. Embeddings make it easier to do machine learning on large inputs like sparse vectors representing words." Retrieved from Google's official developer documentation. (A minimal code sketch of this idea follows the reference list.)
2. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781. In the introduction (Section 1), the paper states: "we propose two new model architectures for computing continuous vector representations of words... The main contribution of this paper is a new technique that allows the training of high-quality word vectors from huge data sets." This foundational paper introduced the word2vec architectures (CBOW and skip-gram) that made learning such numerical representations efficient at scale.
3. Manning, C., Jurafsky, D., & Potts, C. (2023). CS224N: Natural Language Processing with Deep Learning. Stanford University. Lecture 2, "Word Vectors and Word Senses," explains that a word embedding w is a "dense vector for each word, chosen so that it is similar to vectors of words that appear in similar contexts" (Slide 11).
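
To make the quoted definitions concrete, here is a minimal NumPy sketch. It is illustrative only: the five-word vocabulary, the embedding dimension d = 3, and the hand-picked "trained" vectors are assumptions made for this example, not values from any cited source. It shows reference 1's view of an embedding as a translation from a high-dimensional sparse space into a low-dimensional dense one, and reference 3's point that trained vectors of words sharing contexts end up close together.

```python
# Toy illustration of the embedding ideas quoted in references 1-3.
# All names and numbers are made up for demonstration; real embedding
# matrices are learned from data (e.g. with word2vec, reference 2).
import numpy as np

vocab = ["cat", "dog", "car", "truck", "the"]   # V = 5 words
word_to_id = {w: i for i, w in enumerate(vocab)}

# Reference 1: an embedding maps a high-dimensional sparse input
# (a V-dimensional one-hot vector) into a low-dimensional dense space.
# Here V = 5 and d = 3; in practice V may be 10^5-10^6 and d 100-1000.
d = 3
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), d))            # embedding matrix, V x d

one_hot = np.zeros(len(vocab))
one_hot[word_to_id["cat"]] = 1.0

# Multiplying a one-hot vector by E is exactly a row lookup:
dense_via_matmul = one_hot @ E
dense_via_lookup = E[word_to_id["cat"]]
assert np.allclose(dense_via_matmul, dense_via_lookup)

# Reference 3: trained embeddings place words that occur in similar
# contexts near each other. These vectors are hand-picked stand-ins for
# what training would produce ("cat"/"dog" share contexts; "car" does not).
trained = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.9, 0.8]),
}

def cosine(u, v):
    """Cosine similarity, the standard closeness measure for embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(trained["cat"], trained["dog"]))   # high (~0.98)
print(cosine(trained["cat"], trained["car"]))   # low  (~0.08)
```

The assert highlights why embedding layers are implemented as table lookups: multiplying a one-hot vector by the embedding matrix selects exactly one row, so the full matrix product is never computed in practice.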
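The skip-gram and CBOW models of reference 2 are what libraries such as Gensim implement. The fragment below is a sketch of that usage, assuming Gensim 4.x is installed; the two-sentence corpus is a toy stand-in for the "huge data sets" the paper targets.

```python
# Sketch of training word vectors with Gensim's Word2Vec, which implements
# the models of reference 2. The corpus and hyperparameters here are toy
# values; real training uses large corpora and larger vector_size.
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)  # sg=1: skip-gram

print(model.wv["cat"])                    # learned 50-dimensional dense vector
print(model.wv.similarity("cat", "dog"))  # cosine similarity between two words
```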