Q: 13
In the transformer architecture, what is the purpose of positional encoding?
Options
Discussion
C. Transformers need positional encoding to know the order of tokens since parallel processing loses sequence info. D is tempting but importance is really handled by attention layers, not positional encoding. Seen similar confusion in practice sets.
C. Positional encoding is literally there so transformers can tell what position each token is in, since they have no built-in order tracking. Importance is handled by attention layers, not positional encoding. Pretty sure about this but open to other views if I missed something.
Option C. Had something like this in a mock exam; positional encoding is for order info, not importance.
Pretty sure it's C for this one. Positional encoding lets the model know where each token is in the sequence since transformers process everything in parallel. Without it, they'd have zero sense of order. If anyone thinks D makes sense here, let me know.
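To make the "zero sense of order" point concrete, here's a quick numpy sketch (weights omitted, so Q = K = V = the raw embeddings): if you shuffle the input tokens, plain self-attention just shuffles its outputs the same way, so without positional encoding the model literally cannot tell one ordering from another.

```python
import numpy as np

# Tiny single-head self-attention with no positional encoding.
def self_attention(X):
    # X: (seq_len, d) token embeddings; projections omitted for
    # simplicity, i.e. Q = K = V = X.
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))      # 4 tokens, embedding dim 8
perm = [2, 0, 3, 1]              # shuffle the token order

out = self_attention(X)
out_perm = self_attention(X[perm])

# Permuting the inputs just permutes the outputs identically:
# attention alone is permutation-equivariant, hence order-blind.
print(np.allclose(out[perm], out_perm))  # True
```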
C, because transformers don't know token order unless you add that info. D's a trap since token importance is handled by attention, not positional encoding. Seen this mixup in some exam discussions before.
Probably C, it's just about injecting position so the model knows token order. Importance gets handled later by attention, not positional encoding.
C or D? But I think C is correct since transformers process tokens in parallel, and need some way to know position in the sequence. Not 100% though.
D imo
I don’t think it’s C; D fits, since positional encoding highlights token importance in some setups.
C yeah, it's about letting the model know token order since transformers process everything at once. Not about importance here.
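For anyone who wants to see what "letting the model know token order" looks like in practice, here's a minimal sketch of the fixed sinusoidal positional encoding from the original transformer paper (one common scheme; learned embeddings are another). Each position gets a unique vector that is *added* to the token embedding before attention.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sin/cos position vectors, one row per position.

    Assumes d_model is even. Even dims get sin, odd dims get cos,
    with wavelengths spread geometrically from 2*pi to 10000*2*pi.
    """
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]      # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(seq_len=50, d_model=16)
print(pe.shape)  # (50, 16)

# In a real model you'd add this to the token embeddings,
# e.g. embeddings + pe, so order info flows into attention.
```

Note how position 0 is all zeros in the sin slots and ones in the cos slots; every other position differs, which is exactly the order signal attention would otherwise lack.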