Unless you’ve been living under a rock with no internet access, you’ve no doubt heard examples of how people are using the platform, and prophecies of how ChatGPT is set to change the course of society as we know it.
Interesting writing, Amit. It took close to one hour to understand this; waiting for the next part … A few pictorial representations and more examples (‘Mango’) will help people like us.
Thanks for the feedback, Rahul. Any suggestion for where a picture may have helped? Also, the next part is already online (link at the bottom of the article).
Very clear, concise and understandable. Thanks for writing this. Good stuff!
Seems quite insightful, Amit! I’ll have to read the article again to understand it better, but would like to extend kudos 👍🏽
Waiting for next one eagerly.
Hey Ranvir, thanks for reading! We published all 4 of them at the same time and they are linked. Here is the next one: https://amit.thoughtspot.com/p/what-is-chatgpt-and-how-does-it-work
Loved it. Keep 'em coming!
I am realizing that it wasn't obvious that the next one is already out: https://amit.thoughtspot.com/p/what-is-chatgpt-and-how-does-it-work
This is a good start.
Thank you for this write up. Very well done
Thanks for such a clear and dense article!
Really liked the illustration in "Representing concepts as big vectors" of how it evolved from circles on a whiteboard to n dimensions.
For the last paragraph in the LSA section:
Even though I could make out what this section is pointing to at a very high level, I didn’t quite get ‘why’ you say so. Any inputs/references for any context required?
Here I was comparing "feature vectors" with embeddings. You can represent a word with a feature vector, where each number in the vector represents a particular topic (is this word abstract or concrete, does this word relate to a color, etc.). In the case of a feature vector, all the numbers are interpretable, but the onus is on human intuition to pick the right topics and then score each word against each topic. You can imagine someone picking 1,000 topics and still missing a lot of important ones. On the other hand, embeddings computed by LSA or neural networks have numbers that are hard to interpret, but they are computed automatically and provide a much more compact and comprehensive representation of meaning. Hope this makes sense.
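A tiny sketch of the contrast might help. All the topics and vector values below are made-up toy numbers for illustration, not real LSA or neural-network output:

```python
# Feature vector: each dimension is a human-chosen, interpretable topic.
# Someone has to invent these topics and score every word by hand.
feature_topics = ["is_fruit", "is_color", "is_abstract", "is_edible"]
mango_features = [1.0, 0.0, 0.0, 1.0]  # hand-scored; easy to read, easy to miss topics

# Embedding: dimensions are learned automatically (e.g. by LSA or a neural
# network). Individually uninterpretable, but they compactly encode meaning.
mango_embedding = [0.12, -0.87, 0.45, 0.33]   # made-up values
peach_embedding = [0.10, -0.80, 0.50, 0.30]   # a similar fruit (made up)
brick_embedding = [-0.60, 0.20, -0.40, 0.05]  # an unrelated word (made up)

def dot(u, v):
    """Similarity via dot product; with normalized vectors this is cosine similarity."""
    return sum(a * b for a, b in zip(u, v))

# Semantically similar words end up closer together in the embedding space.
print(dot(mango_embedding, peach_embedding) > dot(mango_embedding, brick_embedding))
```

The payoff of embeddings is the geometry: similarity between any two words falls out of a dot product, with no human having to enumerate topics in advance.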
Thanks Amit. This helped.