4 Comments

Thanks for such a clear and dense article!

Really liked the illustration in "Representing concepts as big vectors" on how it evolved from circles on a whiteboard to n dimensions.


Regarding the last paragraph in the LSA section:

Even though I could make out what this section is pointing to at a very high level, I didn't quite get why you say so. Any inputs or references for the context required?


Here I was comparing "feature vectors" with embeddings. You can represent a word with a feature vector, where each number in the vector scores the word against a particular topic (is this word abstract or concrete? does it relate to a color? etc.). With a feature vector, every number is interpretable, but the onus is on human intuition to pick the right topics and then score each word against them. You can imagine someone picking 1,000 topics and still missing many important ones.

Embeddings computed by LSA or neural networks, on the other hand, have numbers that are hard to interpret, but they are computed automatically and provide a much more compact and comprehensive representation of meaning. Hope this makes sense.
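To make the contrast concrete, here is a minimal Python sketch. It is not from the article; the topic names, the word list, and the co-occurrence counts are all invented toy data. It builds a hand-picked feature vector, then derives LSA-style embeddings from a tiny word-document count matrix via truncated SVD:

```python
import numpy as np

# Hand-crafted feature vector: a human picks the topics and scores the
# word against each one. Every number is interpretable, but coverage
# depends entirely on which topics the human thought to include.
# (Toy scores, invented for illustration.)
feature_vector_for_banana = {
    "is_concrete": 1.0,
    "is_a_color": 0.3,   # "banana" evokes yellow, a little
    "is_a_food": 1.0,
    "is_abstract": 0.0,
    # ...someone could pick 1,000 topics and still miss important ones
}

# LSA-style embedding: start from raw word-document co-occurrence counts
# and let the SVD find the dimensions automatically. The resulting numbers
# are hard to interpret individually, but nobody had to choose topics.
# Rows = words, columns = documents (toy counts, invented for illustration).
words = ["banana", "apple", "yellow", "justice"]
counts = np.array([
    [3, 2, 0, 0],   # banana
    [2, 3, 0, 0],   # apple
    [1, 0, 2, 0],   # yellow
    [0, 0, 0, 4],   # justice
], dtype=float)

# Truncated SVD: keep the top k singular vectors as the embedding space.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
embeddings = U[:, :k] * S[:k]   # each row is a k-dimensional word embedding

for word, vec in zip(words, embeddings):
    print(f"{word:>8}: {np.round(vec, 2)}")
```

Running this, "banana" and "apple" land near each other in the 2-dimensional space because they co-occur in the same documents, even though no human defined a "fruit" topic.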


Thanks, Amit. This helped.
