
Mental Models: Are You Living in a Simulation?

  • 11 min read

Mental models, cognitive schemas, and why two people can witness the same event and describe completely different things.



You Were at the Same Party. You Were Not at the Same Party.


You and a friend go to the same house party on a Saturday night. Same house. Same people. Same questionable playlist. Same guy in the corner doing too-confident magic tricks with a deck of cards.


On Sunday morning, you compare notes.


Your friend says: "That party was so loud. I couldn't hear anyone. Also, did you notice the host seemed kind of stressed?"


You say: "What? No, it was chill. Great vibe. I had three conversations about hiking. The host was super warm."


You stare at each other. One of you begins to suspect the other attended a different party, possibly in a different dimension.


This post is about how you were both right. And both wrong. Because neither of you experienced the party. You each experienced your brain's simulation of the party – a hastily assembled, deeply biased, suspiciously confident internal model built from your expectations, your past experiences, your mood, and whatever cognitive furniture you happened to have lying around at the time.


Welcome to the world of mental models.

Your perception of reality is about to get significantly less trustworthy.


Your Brain Is Not a Camera. It's a Paranoid Screenwriter.


Our brains don't work like recording devices. It'd be great if they did, though, right? Light comes in through the eyes, sound comes in through the ears, and somewhere inside your skull a faithful transcript of reality is produced.


This is wrong. So wrong.


What actually happens is closer to this: your brain receives a noisy, incomplete, low-resolution feed of sensory data, panics slightly, and then writes a screenplay about what's probably happening based on what has happened before. It fills in gaps. It smooths over contradictions. It makes confident predictions and presents them to your conscious mind as "what you saw." Your brain is not a camera. It is an underfunded film studio with a tight deadline and a very strong opinion about how the plot should go.


Mental Model

A mental model is an internal representation that your brain uses to understand a situation and predict what will happen.


The cognitive scientist Philip Johnson-Laird formalized this idea in what he called mental models: internal simulations that your working memory constructs on the fly to understand situations and anticipate what's going to happen next (Johnson-Laird, 1983). These aren't static snapshots. They're dynamic, running models. Mini theater productions staged in your head using whatever props are available, which is to say: your prior knowledge, your beliefs, your assumptions, and that one time something similar happened to you in 2014.


Mental models are built from both explicit information (what someone actually told you) and implicit knowledge (everything you assume because of who you are and where you've been). They're how you understand a sentence, navigate a new city, predict what your boss is going to say in a meeting, and figure out that when someone says "I'm fine" in that tone of voice, they are decidedly not fine.


The problem (and you knew there was going to be a problem) is that your mental model is not reality. It's a guess about reality. A useful guess, usually. But a guess nonetheless. And everyone's guess is different, because everyone is building their model from different materials.


Schemas: The IKEA Furniture of Your Mind


If mental models are the simulations your brain runs, then schemas are the prefabricated components it uses to build them. Think of schemas as flat-pack cognitive furniture: pre-assembled knowledge structures that your brain pulls off the shelf whenever it needs to quickly make sense of something.


Schema

A schema is a mental framework your brain uses to quickly organize and interpret information. It’s like a pre-existing template for “how this kind of thing usually works” that we use to make sense of the world fast.


The psychologist Richard Mayer, building on work by David Rumelhart, described schemas as organized knowledge structures that actively guide how we interpret incoming information (Rumelhart, 1980). This is a crucial distinction. You are not a passive receiver of data, sitting there with your mouth open while the universe pours information into your head like a funnel. You are an active, opinionated interpreter. Every piece of information that reaches you gets immediately grabbed by your existing schemas and wrestled into a shape that makes sense to you.


You have schemas for everything. You have visual schemas – you know what a "kitchen" looks like, which is why you can walk into a kitchen you've never been in and immediately find the sink. You have scripts – step-by-step schemas for common scenarios. Your "going to a restaurant" script probably includes: enter, wait to be seated, receive menu, order food, eat food, ask for the check, pay, leave. You don't have to figure this out from scratch every time. The schema handles it.


You have domain schemas for areas you know well – if you're a programmer, you have schemas for how code is structured; if you're a cook, you have schemas for how flavors combine. You have social schemas – prototypes for types of people ("class clown," "strict teacher"), role schemas for how people in certain positions are expected to behave, and interpersonal schemas for how your specific relationships work ("when my mother calls on a weekday, something is either very wrong or she saw a bird she wants to tell me about").


Schemas are prototypical (they represent the typical version of something), flexible (they can stretch to accommodate new info), nested (your "restaurant" schema contains sub-schemas for "ordering" and "paying"), and (importantly!) updatable. When new information fits neatly into an existing schema, that's assimilation: the schema absorbs it without changing. When new information is so different that the schema has to reshape itself, that's accommodation: the schema rewrites its own code. Jean Piaget gave us these terms in the 1930s, and they remain useful.
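If you like thinking in code, assimilation and accommodation can be caricatured as two update paths on a data structure. This is a playful sketch, not a cognitive model: the `Schema` class, its feature sets, and the subset test for "fitting" are all invented for illustration.

```python
# A toy caricature of Piaget's two update modes (illustrative only):
# assimilation files new info under an existing schema unchanged;
# accommodation restructures the schema itself.

class Schema:
    def __init__(self, name, expected_features):
        self.name = name
        self.expected = set(expected_features)

    def fits(self, observation):
        # "Fits" if everything observed is already expected.
        return observation <= self.expected

    def update(self, observation):
        if self.fits(observation):
            # Assimilation: absorb without changing the schema.
            return f"assimilated into '{self.name}'"
        # Accommodation: the schema reshapes itself to cover the new case.
        self.expected |= observation
        return f"accommodated: '{self.name}' restructured"

taxi = Schema("taxi ride", {"car", "driver", "meter", "direct route"})
print(taxi.update({"car", "driver", "meter"}))       # a familiar ride
print(taxi.update({"car", "driver", "shared ride"})) # the Athens surprise
```

The toy captures the asymmetry: assimilation is cheap (nothing changes), while accommodation is costly (the structure itself is rewritten), which is one way to see why brains prefer the former.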


The whole system works beautifully right up until the moment it doesn't.


The Ghost Story That Broke British Brains


In 1932, the psychologist Frederic Bartlett ran one of the most elegant experiments in the history of memory research. He didn't use word lists or nonsense syllables or any of the sterile laboratory materials that psychologists of his era were fond of. He used a story. Specifically, a Native American folk tale called "The War of the Ghosts."


The story is strange, nonlinear, and deeply embedded in Indigenous cultural logic. It involves two young men, a canoe trip, a battle that may or may not involve ghosts, a man who doesn't feel pain because he might already be dead, and something black coming out of someone's mouth. It's haunting and beautiful, and if you're a British university student in the 1930s, it makes approximately zero sense according to your existing schemas.


The Study

Bartlett, F. C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge University Press.


Participants read "The War of the Ghosts" and were asked to retell it, first after a short delay, then again at longer intervals.


Here's what happened: the British participants didn't just forget parts of the story. They rewrote it. Systematically. Predictably. In ways that revealed exactly how schemas work.


The supernatural elements (the ghosts, the black thing coming out of the mouth) got dropped or rationalized. The nonlinear structure got straightened into a conventional beginning-middle-end narrative. Unfamiliar cultural details were quietly replaced with familiar British equivalents. The story got shorter, simpler, and more "normal" with every retelling. It became, essentially, a British story wearing a thin disguise.


Bartlett's participants weren't being lazy or stupid. Their schemas were doing exactly what schemas do: taking unfamiliar information and reshaping it to fit existing knowledge structures. The cultural furniture in their heads only came in certain shapes, and by God, the story was going to fit those shapes or lose a few legs trying.


Every time you hear a story, read the news, or listen to someone describe their weekend, you are not recording. You are rewriting.


The Balloons That Made People Lose Their Minds


If Bartlett showed that schemas distort what we remember, Bransford and Johnson showed that without the right schema, we can barely understand anything at all.


In 1972, they gave participants a passage of text. Here's a version of it:


"The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities, that is the next step; otherwise you are pretty well set..."


It goes on. And on. And it makes almost no sense. Participants who read this passage without context rated it as incomprehensible and remembered almost none of it.


The Study

Bransford, J. D., & Johnson, M. K. (1972). "Contextual prerequisites for understanding: Some investigations of comprehension and recall." Journal of Verbal Learning and Verbal Behavior.


When participants were told beforehand that the passage was about doing laundry, comprehension and recall skyrocketed.



The passage didn't change. Not a word. The only thing that changed was that participants now had the right schema activated (the "doing laundry" script) and suddenly every sentence clicked into place like a key turning in a lock.


This is what schemas do at their most fundamental level. They don't just help you remember. They help you understand. Without the right schema, information is noise. With it, the same information is signal. The data doesn't change. Your interpretive framework does.


Think about how many times in your life you've sat in a meeting, a lecture, or a conversation and thought, "I have no idea what this person is talking about." It's tempting to blame the speaker. But often, the problem isn't that the information is bad. It's that you don't have the right shelf to put it on. The schema is missing, and without it, the words just... fall on the floor.


The Athens Taxi Problem


Here's a scenario that will make this visceral.


You're visiting Athens. You step off the curb, hail a taxi, and climb in. You give the driver your destination. So far, your "taxi" schema is running smoothly: car, driver, destination, meter, direct route, you arrive, you pay, you leave.


Then the driver pulls over and picks up another passenger.


And then another one.


Your schema is now on fire. This is not how taxis work. Except – it is how taxis work. In Athens. And in Cairo, and Merzouga, and dozens of other places around the world, shared taxis are completely standard. The driver is not being rude or running a scam. You are not being taken for a ride (well, you are, but in the intended sense). The service is operating exactly as designed.


The only thing that broke was your schema. Your internally constructed model of "what a taxi ride is," built entirely from your own cultural experience, assumed that certain features were universal when they were, in fact, local. Your simulation was running on incomplete data and presenting its conclusions with the unearned confidence of a man explaining wine at a dinner party.


This is the Athens taxi problem in miniature, but it's actually the everything problem in full size. Every schema you carry was built somewhere specific, by someone specific (you), from a specific set of experiences. And every schema carries with it the invisible assumption that the way you learned the world is the way the world is.


Why Two People See Two Different Realities


Now we can return to the party.


Your friend walked in with a schema that said "parties are loud, overwhelming social gauntlets where you have to perform extroversion for hours." You walked in with a schema that said "parties are casual spaces where interesting conversations happen organically." Same room, two different simulations, two different realities.


Your friend's mental model primed them to notice the noise level, the social pressure, the host's furrowed brow. Your mental model primed you to notice the interesting strangers, the relaxed atmosphere, the host's welcoming smile. Both of you saw real things. Both of you missed real things. And both of you walked away convinced that your experience was the experience.


Johnson-Laird (1983) was explicit about this: mental models are constructed from a combination of the situation and the model-builder's existing knowledge. Two different builders, two different models. Always.


Think about the implications. Every disagreement you've ever had about "what happened" (with a partner, a colleague, a friend, a sibling) was, at least in part, a collision between two different mental models running two different simulations of the same event. You weren't arguing about facts. You were arguing about whose janky simulation was less janky.


Mental Models: You Don't Know You're Simulating


The really tricky thing about mental models and schemas isn't that they're biased. It's that they're invisible. You don't experience your mental model as a model. You experience it as reality. The simulation is seamless. There's no watermark in the corner that says "THIS IS A RECONSTRUCTION." Your brain presents its best guess with the same confidence it would present a mathematical proof. There is no felt difference between "I saw that" and "my schema predicted that and filled it in for me."


This is why eyewitness testimony is unreliable. This is why two people can watch the same political debate and both be certain the other side lost. This is why you can read an email, be absolutely sure it was rude, and discover on rereading that it was perfectly neutral. Your social schema just painted hostility onto ambiguous words.


Bartlett's participants didn't know they were rewriting the ghost story. They thought they were remembering it. The participants in Bransford and Johnson's study didn't feel confused because they were doing something wrong; they felt confused because their brains couldn't find the right schema to activate, and without it, the entire system stalls.


This architecture trades accuracy for speed and coherence, without asking your permission.


Practical Takeaways (Or: What to Do Now That You Know You're Running a Janky Simulation)


1. Treat your first interpretation as a draft, not a final version.

The next time you're certain about what someone meant, what happened at a meeting, or what an email was "really saying" – pause. That certainty is your schema talking, and schemas are confident even when they're wrong. Ask yourself: "What would this look like if my first interpretation were incomplete?"


2. Seek the missing schema.

If something doesn't make sense to you (a person's behavior, a cultural practice, a colleague's decision) consider that you might be missing context. The Bransford and Johnson experiment proved it: the same information goes from nonsense to obvious when you activate the right schema. Before deciding something is stupid, ask whether you might just not have the right framework yet.


3. Name your scripts.

You run scripts constantly: for meetings, conversations, relationships, conflicts. Most of them were written years ago and never updated. Try to notice when a script is running. "Oh, I'm in my 'being criticized' script right now. Is this actually criticism, or does it just pattern-match to my schema for criticism?"


4. Assume other people's simulations are different from yours.

Not wrong. Different. Built from different materials, shaped by different experiences, optimized for different things. When someone describes the same event differently than you, your first question should be "What are they seeing that I'm not?" rather than "Why are they getting this wrong?"


5. Update deliberately.

Schemas update through assimilation (absorbing new info) and accommodation (restructuring the schema itself). Assimilation is easy. Accommodation is uncomfortable – it means admitting your existing framework was inadequate. Seek accommodation. It's where the real learning happens.


The Squeeze


Your brain does not show you reality. It shows you a simulation of reality, assembled in real time from incomplete data and old assumptions, rendered with the confidence of a documentary narrator and the accuracy of a guy who "definitely remembers the directions." This simulation is useful. It's fast. It's efficient. It is also, in important ways, wrong – or at least, incomplete.


Mental models and schemas are the invisible architecture behind every thought you have, every conversation you interpret, every event you "witness." Bartlett showed us in 1932 that we rewrite stories to fit our cultural furniture. Bransford and Johnson showed us in 1972 that without the right schema, we can't even understand a paragraph about laundry. Johnson-Laird showed us that every situation we encounter is not experienced directly but modeled, and different people build different models from the same raw materials.


You cannot turn this off. You cannot bypass it. You cannot achieve some pure, unfiltered perception of reality by trying harder or being smarter. What you can do is know it's happening. You can hold your interpretations a little more loosely. You can get curious about other people's simulations instead of dismissing them. You can treat your schemas like what they are: useful, outdated, improvised, and always in need of a software update.


The simulation is all you've got. But knowing it's a simulation is already an upgrade.


References:

  • Bartlett, F. C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge University Press.

  • Bransford, J. D., & Johnson, M. K. (1972). Contextual prerequisites for understanding: Some investigations of comprehension and recall. Journal of Verbal Learning and Verbal Behavior, 11, 717–726.

  • Johnson-Laird, P. N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press.

  • Mayer, R. E. (1992). Thinking, problem solving, cognition. W.H. Freeman.

  • Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical Issues in Reading Comprehension. Lawrence Erlbaum.



🍋 Science of Efficiency. Practical psychology for humans.


© 2026. Made with curiosity and mild irreverence.
