Hello, you've made it here, which statistically means two things:
Embeddings, vectors, semantic space — it sounds smart but honestly it's just numbers doing vibes. This page is the “make you sound smart at the dinner table” version. It's an easy read, promise.
Every word you add — “love,” “dog,” “existentialism,” whatever — gets turned into a giant list of numbers. Hundreds of numbers, not two or three. That list is a vector. The same concept applies to phrases: whether it's a single word or a chapter from a book, it comes out the other end as one vector of the same length.
The answer: vectors pointing in the same direction → similar meanings; vectors pointing in opposite directions → antonyms, more or less. That's it. You now understand embeddings better than 90% of LinkedIn.
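If you'd rather see that as code than take my word for it, here's a minimal sketch in Python. It is not this app's actual source; it assumes the sentence-transformers library and one of its public models, “all-mpnet-base-v2”, which happens to spit out 768 numbers per input.

```python
# A minimal sketch, not this app's source: turn words into vectors and
# check whether those vectors point in roughly the same direction.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed library choice

model = SentenceTransformer("all-mpnet-base-v2")  # one public 768-dimensional model
vectors = model.encode(["love", "affection", "existentialism"])

def cosine(a, b):
    # 1.0 = pointing the same way, 0 = unrelated, negative = pointing away
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # "love" vs "affection": high
print(cosine(vectors[0], vectors[2]))  # "love" vs "existentialism": lower
```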
Where it gets fun is with phrases. Try adding a few that all lean on the same word: say, three phrases that each use “blue” in a totally different way.
Same “blue”, three totally different vibes. The model has way more context to work with, so those phrases land in different spots. Embeddings don’t store dictionary definitions — they store relationships and patterns: which words keep showing up together, in what company, and with what tone.
A single word is a Lego brick. A whole sentence is a half-built spaceship with feelings. The coordinates are the model’s best guess at “where that belongs” in meaning space.
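Here's the same kind of sketch pointed at phrases, under the same assumptions as above. The three “blue” phrases are just stand-ins I picked; the point is that one word keeps very different company in each.

```python
# Same assumed setup: sentence-transformers and one of its public models.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

phrases = ["I'm feeling blue", "the deep blue sea", "blue screen of death"]
vecs = model.encode(phrases)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Every pair shares the word "blue", but the scores stay modest because
# the surrounding context pulls each phrase to a different spot.
for i in range(len(phrases)):
    for j in range(i + 1, len(phrases)):
        print(f"{phrases[i]!r} vs {phrases[j]!r}: {cosine(vecs[i], vecs[j]):.2f}")
```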
Under the hood, these vectors live in something like 768 dimensions. Your phone, laptop, brain, eyeballs — none of that can see 768D. If you think you can, I can’t fix that in one web page.
Your brain sees everything as 3D. Even a flat line on a screen gets turned into a 3D “scene” in your head. So this app cheats: it squishes the big 768D space down into 3D while trying to keep the important directions intact.
That squishing is called dimensionality reduction. Here it’s done with PCA (Principal Component Analysis), which is a fancy way of saying: find the directions where the points are most spread out, keep the top three, and throw away the rest.
It’s a projection, not a perfect map. Like flattening the Earth — Greenland isn’t actually that big, calm down — but the continents are still in roughly the right places.
So no, this isn’t the “true” 768D universe. But it’s a surprisingly honest sketch of how the model thinks things cluster together.
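If you want to see the squish itself, here's roughly what it looks like with scikit-learn's PCA. The 200-by-768 matrix below is random noise standing in for real embeddings, purely to show the shapes involved.

```python
# A sketch of the squish: many 768-dimensional vectors in, 3D points out.
import numpy as np
from sklearn.decomposition import PCA

embeddings = np.random.randn(200, 768)     # stand-in for 200 real embeddings
pca = PCA(n_components=3)                  # keep the 3 most "spread out" directions
points_3d = pca.fit_transform(embeddings)

print(points_3d.shape)                     # (200, 3): one dot per text
print(pca.explained_variance_ratio_)       # how much spread each kept axis preserves
```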
Because every embedding model is basically: a giant pile of training text plus a learned set of opinions about which patterns in that text matter.
Switch models → all your text gets re-embedded with a different brain → the whole layout shifts. Not wrong, not right — just a different definition of “similar.”
Same words, same phrases, new geometry. That’s the point. Meaning is not a single fixed map; it’s more like three cartographers arguing over where to put the weird little island labeled “vibes.”
Real-talk: the way the same inputs create vastly different vector spaces exposes the biases of each model.
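A quick way to watch two brains disagree, sketched with the same assumed library and two of its public models (not necessarily the ones this app uses): hand both the same pair of phrases and compare the similarity scores.

```python
# Same pair of phrases, two different "brains", two different similarity scores.
import numpy as np
from sentence_transformers import SentenceTransformer

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

pair = ["a hot dog is a sandwich", "the taxonomy of lunch foods"]

for name in ["all-MiniLM-L6-v2", "all-mpnet-base-v2"]:  # two public models
    model = SentenceTransformer(name)
    a, b = model.encode(pair)
    print(f"{name}: {cosine(a, b):.3f}")   # the number shifts with the model
```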
No.
Not at all. Neither are the “chat with me / write my essay / hallucinate a TED talk” apps you've used before. This app doesn’t generate text, feelings, predictions, or hot takes. It just takes the geometry of meaning that other models have already produced and puts it somewhere you can see it.
Under the hood, this thing is: an embedding model turning your text into numbers, PCA squishing those numbers down to three, and a 3D renderer drawing the dots.
This app is more like a calculator with nice lighting: a semantic correlation engine. It isn't thinking. It isn't voodoo. It's just some fairly sophisticated math.
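If you want the whole calculator in one breath, it's roughly this, again under the same stand-in assumptions rather than this app's actual source:

```python
# Text in, vectors out, PCA down to 3D, coordinates ready to plot.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

texts = ["love", "dog", "existentialism",
         "I'm feeling blue", "the deep blue sea"]

model = SentenceTransformer("all-mpnet-base-v2")
embeddings = model.encode(texts)                         # shape (5, 768)
points = PCA(n_components=3).fit_transform(embeddings)   # shape (5, 3)

for text, (x, y, z) in zip(texts, points):
    print(f"{text!r:>24} -> ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```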
It’s like giving a tourist a drone so they stop shouting, “WHERE IS THE BEACH??” and can just look.
The vectors don’t move randomly. You’re watching the shape of meaning — the way language groups, clusters, drifts apart, and occasionally contradicts itself.
Change the model and see how biases are baked into the training itself.
Once you’ve seen it a few times, it’s hard to unsee. You’ll start recognizing when people are actually talking about this kind of geometry — and when they’re just saying “embeddings” because it sounds spicy on a slide.
Then rotate the 3D view and tell me magic isn’t real.
On that wonderful, cursed November day when ChatGPT broke the internet, I was all in. “How did it just write a song about this picture?” was the only question that mattered for a while.
Then I went digging and got buried under the jargon: latent spaces, attention heads, emergent reasoning, hyperdimensional everything.
Half of it was real math. Half of it felt like cosplay for sounding smarter than everyone else.
This app came from wanting the opposite: a way to see what’s going on without pretending it’s mystical. No hype, no “disruption,” just: type some words, watch where they land, and poke at the shape.
This app isn’t selling anything, convincing you of anything, or pretending to be deeper than it is. It’s a toy — but if you’re curious or confused, it will quietly give you a BS detector for the next time someone says “AI lives in a 768-dimensional latent hyperspace of emergent reasoning” with a straight face.
If that still doesn’t impress you… I can’t help you, my friend. You’re welcome to keep believing in magic. But if you’re curious, you’ll find that the geometry is real, and the math is beautiful.