14 Comments

Very clear, concise and understandable. Thanks for writing this. Good stuff!

Interesting writing, Amit. It took close to one hour to understand this; waiting for the next part… A few pictorial representations and more examples (like ‘Mango’) will help people like us.

Thanks for the feedback, Rahul. Any suggestions for where a picture might have helped? Also, the next part is already online (link at the bottom of the article).

Seems quite insightful, Amit! I’ll have to read the article again to understand it better, but I’d like to extend kudos 👍🏽

Eagerly waiting for the next one.

Hey Ranvir, thanks for reading! We published all 4 of them at the same time and they are linked. Here is the next one: https://amit.thoughtspot.com/p/what-is-chatgpt-and-how-does-it-work

Loved it. Keep 'em coming!

I am realizing it wasn't obvious that the next one is already out: https://amit.thoughtspot.com/p/what-is-chatgpt-and-how-does-it-work

This is a good start.

Thank you for this write-up. Very well done.

Thanks for such a clear and dense article!

Really liked the illustration in "Representing concepts as big vectors" showing how the idea evolves from circles on a whiteboard to n dimensions.

Regarding the last paragraph in the LSA section:

Even though I could make out what this section is pointing to at a very high level, I didn't quite get why you say so. Any inputs/references for additional context?

Here I was comparing "feature vectors" with embeddings. You can represent a word with a feature vector, where each number in the vector represents a particular topic (is this word abstract or concrete, does this word relate to a color, etc.). In the case of a feature vector, all the numbers are interpretable, but the onus is on human intuition to pick the right topics and then score each word against each topic. You can imagine someone picking 1,000 topics and still missing a lot of important ones. On the other hand, embeddings computed by LSA or neural networks have numbers that are hard to interpret, but they are computed automatically and provide a much more compact and comprehensive representation of meaning. Hope this makes sense.
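The contrast can be sketched in a few lines of Python. This is only a toy illustration, not code from the article: the words, the hand-picked topic names, and the tiny term-document counts are all made up, and LSA is reduced to a plain SVD of a 3×4 count matrix.

```python
import numpy as np

# Hypothetical hand-crafted feature vector: a human picks the topics
# and scores the word against each one. Every number is interpretable,
# but the topic list is limited by human intuition.
mango_features = {"is_fruit": 1.0, "is_color": 0.2, "is_abstract": 0.0}

# LSA-style embedding: start from a toy term-document count matrix
# (rows = words, columns = documents) and let SVD find the dimensions.
words = ["mango", "apple", "democracy"]
counts = np.array([
    [2, 1, 0, 0],   # "mango" appears in the fruit documents
    [1, 2, 0, 0],   # "apple" likewise
    [0, 0, 2, 1],   # "democracy" appears in different documents
], dtype=float)

U, S, Vt = np.linalg.svd(counts, full_matrices=False)
embeddings = U[:, :2] * S[:2]   # keep the 2 strongest latent dimensions

def cos(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# No human labeled any dimension, yet the fruit words end up close
# to each other and far from "democracy".
print(cos(embeddings[0], embeddings[1]))  # high (mango vs. apple)
print(cos(embeddings[0], embeddings[2]))  # near zero (mango vs. democracy)
```

The individual numbers in `embeddings` mean nothing on their own, which is the trade-off described above: interpretability is lost, but the representation is computed automatically from usage rather than curated by hand.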

Thanks, Amit. This helped.
