A guided exploration of how artificial intelligence actually "thinks" about words, meaning, and relationships
When you open the app and see those colorful arrows floating in 3D space, you're looking at something remarkable: a direct window into how artificial intelligence understands language. Each arrow represents a word as AI "sees" it—not as letters or sounds, but as a precise mathematical fingerprint that captures meaning, context, and relationships.
Think of it like this: if human memory works through networks of association—where "dog" might trigger thoughts of "pet," "loyalty," "bark," or specific memories—then AI works through networks of numbers. Every word becomes a list of 384 numbers that encode everything the AI has learned about that word from processing millions of texts. What makes Vector Space compelling is that it translates these abstract numbers into something our spatial intelligence can grasp.
When you type "happiness" into the input box and hit enter, something fascinating unfolds behind the scenes. The AI model—whether it's Microsoft's E5, the specialized MiniLM, or BAAI's BGE—doesn't look up "happiness" in a dictionary. Instead, it processes the word through layers of neural networks, each layer building increasingly sophisticated representations.
The first layers might recognize letter patterns and common prefixes. Middle layers begin understanding grammatical roles and syntactic relationships. The final layers capture something closer to meaning itself—the contexts where "happiness" typically appears, the emotional valence it carries, the concepts it relates to. This process produces a vector: 384 numbers that collectively represent everything the model has learned about happiness.
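If you're curious what that step looks like in code, here is a minimal sketch using the open-source sentence-transformers library. The exact checkpoints the app bundles aren't specified here, so all-MiniLM-L6-v2, one widely used 384-dimensional model, stands in as an example.

```python
# A minimal sketch of turning a word into a vector. The checkpoint is an
# assumption: all-MiniLM-L6-v2 is simply one 384-dimensional option.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# encode() runs the text through the transformer layers and pools the
# result into one fixed-length vector.
vector = model.encode("happiness")

print(vector.shape)  # (384,) -- one number per learned dimension
print(vector[:5])    # the first few of those 384 numbers
```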
But here's where it gets interesting. The moment that new vector enters the space, everything else has to shift. This isn't because the meanings of existing words have changed, but because the visualization is showing you relative relationships, and adding a new point changes the optimal way to display those relationships in three dimensions.
Imagine you're looking at a map of your neighborhood from above. You know exactly where everything is. Now imagine someone builds something so significant, say the first skyscraper in a suburban area, that the map needs to be redrawn from a different perspective to show all the important relationships clearly. That's what happens in vector space when you add a new word.
The technical process is called Principal Component Analysis (PCA), but think of it as finding the best camera angle. The system looks at all the vectors—including your new addition—and asks: "What's the best way to position a 3D camera so that the most important patterns and relationships are visible?" The "camera" might need to rotate, zoom, or shift to accommodate the new information.
This is why you'll see all the arrows smoothly gliding to new positions when you add a word. They're not randomly shuffling; they're finding the arrangement that best preserves the most important relationships while making room for the newcomer. Words that were close together tend to stay close together, but their absolute positions can shift dramatically.
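Here's a rough sketch of that re-fitting step, assuming a scikit-learn-style PCA; the word list and variable names are illustrative, not the app's actual code.

```python
# A sketch of the "best camera angle" step, assuming something like
# scikit-learn's PCA under the hood.
import numpy as np
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
vectors = model.encode(["dog", "cat", "car", "truck", "happiness"])  # (5, 384)

# Project everything currently in the space down to 3 dimensions.
points_3d = PCA(n_components=3).fit_transform(vectors)

# Adding a new word means re-fitting: the principal components (the
# "camera angle") are recomputed, so every existing point can move.
vectors = np.vstack([vectors, model.encode(["sadness"])])
points_3d = PCA(n_components=3).fit_transform(vectors)
print(points_3d.shape)  # (6, 3): every arrow gets a fresh 3D position
```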
One of the most revealing experiments you can perform is switching between the three AI models and watching the same words reorganize completely. Add "bank" using MiniLM, then switch to E5, then to BGE. You'll see the word move to entirely different positions relative to other concepts.
This isn't a flaw—it's a feature that reveals something profound about how context shapes meaning. MiniLM was trained primarily to detect paraphrases—sentences that mean the same thing said differently. So it might position "bank" closer to "financial institution" because it learned that these phrases often substitute for each other.
E5, trained on billions of web pages, might position "bank" closer to "money," "loan," and "deposit" because these words frequently appear together in online content. BGE, optimized for retrieval tasks, might create yet another arrangement based on how "bank" appears in search queries and relevant documents.
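You can run a version of that experiment outside the app too. The sketch below compares "bank" across three small, publicly available checkpoints; which MiniLM, E5, and BGE variants the app actually ships is an assumption here.

```python
# A sketch of comparing the same word across embedding models. The exact
# checkpoints are assumptions, standing in for MiniLM, E5, and BGE.
from sentence_transformers import SentenceTransformer, util

checkpoints = {
    "MiniLM": "sentence-transformers/all-MiniLM-L6-v2",
    "E5": "intfloat/e5-small-v2",       # E5 models expect a "query: " prefix in real use
    "BGE": "BAAI/bge-small-en-v1.5",
}

for name, checkpoint in checkpoints.items():
    model = SentenceTransformer(checkpoint)
    bank, money, river = model.encode(["bank", "money", "river"])
    # Each model has learned a different neighborhood for the same word.
    print(name,
          "bank~money:", float(util.cos_sim(bank, money)),
          "bank~river:", float(util.cos_sim(bank, river)))
```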
Each model is showing you a different slice of reality—different aspects of how "bank" relates to other concepts based on different learning experiences. This mirrors how human understanding works: an economist, a poet, and a fisherman might all have different mental maps for how "bank" connects to other ideas, based on their different experiences and contexts.
When you click on two vectors and see them light up with connecting lines and angle measurements, you're witnessing the mathematical foundation of how AI understands similarity. The angle between two vectors—measured in degrees—corresponds directly to cosine similarity, one of the most important concepts in AI.
Here's the beautiful part: when vectors point in nearly the same direction (small angle), the AI considers them very similar in meaning. When they point in nearly opposite directions (an angle approaching 180 degrees), the model treats them as maximally dissimilar. When they're perpendicular (90 degrees), they're unrelated in the model's understanding.
Watch what happens when you compare "king" and "queen." They likely form a small angle, indicating high similarity—the AI has learned they share contexts around royalty, power, and leadership. But compare "happy" and "mathematics," and you'll see a larger angle reflecting their semantic distance in most contexts.
The distance measurements tell a different story. While angle captures the direction of meaning, distance captures magnitude. Two words might point in similar directions but be different distances from the origin, suggesting they operate in similar semantic territory but with different intensities or specificities.
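Both measurements fall out of a few lines of linear algebra. This sketch reuses the same example checkpoint as above; the word pair is just an illustration.

```python
# A sketch of the measurements behind the comparison view: cosine
# similarity, the angle it corresponds to, and Euclidean distance.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
king, queen = model.encode(["king", "queen"])

# Cosine similarity is the cosine of the angle between the two vectors.
cosine = np.dot(king, queen) / (np.linalg.norm(king) * np.linalg.norm(queen))

# Convert back to an angle in degrees, as shown in the visualization.
angle_degrees = np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Euclidean distance also reflects magnitude, not just direction.
distance = np.linalg.norm(king - queen)

print(f"cosine similarity: {cosine:.3f}")
print(f"angle: {angle_degrees:.1f} degrees")
print(f"distance: {distance:.3f}")
```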
What you're exploring in this visualization is the same mathematical foundation that powers the AI systems you interact with daily. When ChatGPT understands that you're asking about "travel recommendations" even though you said "vacation ideas," it's using these same vector relationships to recognize that these phrases occupy similar positions in meaning space.
When Netflix recommends movies, when Google translates languages, when Spotify suggests music, when Amazon shows related products—all of these systems rely on similar vector representations to understand relationships between items, users, and contexts.
The revelation is that meaning, for AI, is geometry. Similarity is angle. Relatedness is distance. Context is the multidimensional space these vectors inhabit. What seems like understanding is actually sophisticated pattern matching in this numerical landscape.
As you add more words and watch the space evolve, you're witnessing something like the emergence of understanding. Each new concept doesn't just take its place in the space—it changes the space itself, creating new relationships and revealing new patterns.
Add "artificial intelligence," then "machine learning," then "neural networks." Watch how they cluster together, forming their own technological neighborhood. Then add "creativity," "consciousness," "intuition"—and observe whether they drift toward or away from the technical cluster. Different models will show different relationships based on their training data and objectives.
This reveals how AI's "understanding" is fundamentally statistical. The model hasn't been programmed with facts about which concepts are related. Instead, it has learned these relationships by processing vast amounts of text and identifying patterns in how words appear together, substitute for each other, and relate across different contexts.
As you explore the visualization, you'll discover both the power and limitations of how AI understands language. The spatial relationships can be startlingly insightful—"doctor" and "nurse" cluster together, "hot" and "cold" maintain semantic distance, "love" and "hatred" might show interesting angular relationships that suggest opposition within emotional space.
But you'll also notice peculiarities that reveal the system's alien nature. Words that seem obviously related to humans might be far apart in vector space if they rarely appeared together in the model's training data. Concepts that feel abstract to us might cluster tightly because they share statistical patterns the AI detected but humans don't naturally perceive.
The three-dimensional visualization is itself a compromise. The original 384-dimensional space contains far more nuanced relationships than can be preserved when squashing everything down to three dimensions. Some words that are actually very similar might appear distant simply because the dimensions that capture their relationship were among those discarded in the reduction.
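If the reduction is PCA-like, you can even quantify what gets lost: scikit-learn reports how much of the original variance each retained axis explains. A brief sketch, under that assumption:

```python
# A sketch of quantifying what a 3D projection throws away, assuming a
# PCA-based reduction like the one described above.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

words = ["king", "queen", "bank", "money", "happiness", "dog", "cat", "car"]
vectors = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2").encode(words)

pca = PCA(n_components=3).fit(vectors)

# The fraction of the original variance each retained axis preserves;
# whatever is left over is structure the 3D view simply cannot show.
print(pca.explained_variance_ratio_)
print("kept:", pca.explained_variance_ratio_.sum())
```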
Understanding vector spaces isn't just an academic exercise—it's becoming essential literacy for anyone working with AI systems. When you know that AI understands relationships through geometric positioning, you can better prompt language models, debug unexpected AI behavior, and design systems that leverage these mathematical relationships.
The visualization shows you that AI doesn't "think" the way humans do, but it also reveals that AI's approach to meaning is systematic, consistent, and surprisingly sophisticated. The mathematical relationships encode genuine insights about how concepts relate, even if they emerge from statistical processing rather than conscious understanding.
As AI systems become more powerful and ubiquitous, the vector space paradigm will likely extend beyond language. Images, sounds, user behaviors, scientific data—all of these can be encoded as vectors and positioned in meaningful spaces. The geometric understanding of relationships you're exploring with words will increasingly become the foundation for how AI systems understand and navigate reality itself.
Each time you add a word, switch models, or compare vectors, you're not just playing with a visualization—you're developing intuition about how artificial intelligence processes information. You're seeing the mathematical substrate that enables AI to perform what appears to be understanding, reasoning, and creativity.
Try adding a sequence of related words—perhaps emotions like "joy," "ecstasy," "contentment," "bliss"—and watch how they position themselves relative to each other. Experiment with opposites: "light" and "dark," "fast" and "slow," "artificial" and "natural." Observe how different models organize these relationships differently.
Consider adding technical terms from your field of expertise and seeing whether the AI's positioning matches your intuitive sense of how these concepts relate. You might discover that the model has learned relationships you hadn't consciously recognized, or you might spot limitations where the AI's understanding diverges from expert knowledge.
This is more than an educational exercise. You're developing the conceptual framework to understand, predict, and effectively collaborate with the AI systems that are increasingly shaping our world. You're learning to think in vectors—and in doing so, gaining insight into both the remarkable capabilities and fundamental limitations of artificial intelligence.