Whenever I tell people that my ability to navigate the world has improved over time, even though my vision, as measured by ERG (electroretinography), has not, their response often boils down to a simplistic idea: “Well, you’ve learned to cope because you had no crutch. You were forced to adapt.” This explanation is not only lazy, it’s dangerously wrong.
Your brain doesn’t just passively absorb sensory information. It actively predicts what’s out there. This is called predictive processing, and it’s how we all perceive the world. Sensory input is incomplete, delayed, noisy—and yet we perceive a stable reality. That’s because the brain stitches together expectations and signals to generate an internal model. We don’t see with our eyes. We see with our brain.
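To make that concrete, here is a minimal sketch of the idea in code (my own toy illustration in the standard precision-weighted form, not a model of any actual neural circuit): perception as a blend of what the model predicts and what the noisy senses report, each weighted by how trustworthy it is. All the numbers, including the distances and noise levels, are made up.

```python
import numpy as np

def predictive_update(prior_mean, prior_var, obs, obs_var):
    """Blend the model's prediction (prior) with a noisy observation.
    The gain k is the precision weighting: trustworthy signals
    (low obs_var) pull the estimate harder than noisy ones."""
    k = prior_var / (prior_var + obs_var)
    posterior_mean = prior_mean + k * (obs - prior_mean)  # shift by weighted prediction error
    posterior_var = (1.0 - k) * prior_var
    return posterior_mean, posterior_var

# A doorway is really 2.0 m away; the internal model starts off wrong (2.5 m),
# and every glimpse is noisy. The estimate still converges on reality.
rng = np.random.default_rng(0)
mean, var = 2.5, 1.0
for _ in range(20):
    glimpse = 2.0 + rng.normal(0.0, 0.5)   # degraded, noisy sensory sample
    mean, var = predictive_update(mean, var, glimpse, obs_var=0.25)
print(f"estimate after 20 glimpses: {mean:.2f} m")  # close to 2.0
```

Increase `obs_var` and the update leans harder on the prior; that trade-off between prediction and evidence is the core of the predictive-processing picture.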
So when people say my navigation is better, they’re right—but it’s not because my eyes work better or I’ve stopped being lazy. It’s because my internal model of the world has become more accurate through experience. Your brain creates an illusion based on that model—an internal simulation of the world that feels seamless. The closer this illusion aligns with actual reality, the better you are at navigating and manipulating that reality.
Here’s an example: imagine you’re reaching out to grab a glass of water. It feels as though the moment your fingers touch the glass is the same moment you see yourself touch it. But in reality, the underlying signals are not simultaneous: the visual signal from your retina and the touch signal from your fingertips travel along different pathways, with different delays, before reaching your visual and somatosensory cortices. Yet you don’t perceive them as separate events. Why? Because your brain synchronizes them into a single moment of experience. This is called temporal binding. The timing discrepancy is large enough to be detectable, but the brain resolves it in favor of coherence.
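Here is the same point as a toy sketch, with made-up latencies and an illustrative 80 ms binding window (the real window varies by task and person): events that arrive close enough in time get fused into one perceived moment.

```python
def bind_events(arrival_times_ms, window_ms=80.0):
    """Group sensory events whose arrivals fall within one binding window
    into a single perceived 'moment'. The 80 ms window is an illustrative
    assumption, not a measured constant."""
    times = sorted(arrival_times_ms)
    moments, current = [], [times[0]]
    for t in times[1:]:
        if t - current[0] <= window_ms:
            current.append(t)          # close enough in time: fuse into the same moment
        else:
            moments.append(current)    # too far apart: start a new perceived moment
            current = [t]
    moments.append(current)
    return moments

# Hypothetical latencies for one touch of the glass (milliseconds after contact).
touch_arrival, vision_arrival = 140.0, 170.0
print(bind_events([touch_arrival, vision_arrival]))  # -> [[140.0, 170.0]]: one event
```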
Likewise, when you look around, it feels like you’re seeing a full high-resolution image—like a video feed from a perfect camera. But you’re not. Your visual field is made up of receptive fields, like pixels on a screen. At any given moment, only a small fraction of them are receiving detailed input. Yet, you experience a full picture. Why? Because your brain fills in the gaps using its internal model of the world. This is part of what’s called perceptual completion or filling-in, and it’s a result of your brain’s predictive model smoothing over missing information.
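A loose analogy in code: reconstruct a scene from a sparse set of detailed samples. I use plain linear interpolation as a stand-in for the brain’s far richer generative model; the point is only that a mostly-unsampled signal can still be filled in convincingly.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))   # the world "out there"

# Sample only 10% of positions in detail, like a small high-acuity patch.
observed = np.full_like(scene, np.nan)
idx = rng.choice(scene.size, size=20, replace=False)
observed[idx] = scene[idx]

# "Fill in" the gaps from the sparse samples (linear interpolation stands in
# for the brain's generative model).
known = np.flatnonzero(~np.isnan(observed))
filled = np.interp(np.arange(scene.size), known, observed[known])
print(f"mean fill-in error: {np.abs(filled - scene).mean():.3f}")  # small, despite 90% missing
```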
The brain doesn’t rely solely on vision. In fact, when vision is impaired, it recruits other modalities—touch, hearing, proprioception, smell—to build a coherent model. This cross-modal plasticity is well documented in neuroscience. So, what’s really improving is not the input quality, but the model’s robustness. The more varied and structured my experience with the environment becomes, the more refined that model gets.
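One standard way to cast that in code is inverse-variance cue combination, a textbook model of multisensory integration. The modalities and variances below are hypothetical numbers chosen for illustration:

```python
def fuse(cues):
    """Inverse-variance (precision-weighted) cue combination: each modality's
    estimate is weighted by its reliability, so a degraded channel simply
    contributes less rather than breaking the whole model."""
    weights = [1.0 / var for _, var in cues]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, cues)) / total
    fused_var = 1.0 / total   # fusing cues always tightens the estimate
    return fused_value, fused_var

# Hypothetical distance-to-kerb estimates, in metres, with made-up variances.
cues = [
    (2.6, 1.00),   # vision: present but severely degraded
    (2.1, 0.20),   # hearing: traffic and echo cues
    (2.0, 0.10),   # touch via the cane: most reliable here
]
value, var = fuse(cues)
print(f"fused: {value:.2f} m (variance {var:.3f})")  # ~2.07 m, tighter than any single cue
```

The fused estimate sits close to the reliable cues and is more certain than any single one, which is why recruiting extra modalities makes the model more robust rather than merely redundant.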
This improvement has nothing to do with being deprived of help or having to “push harder.” That’s an ableist myth. It’s about data accumulation over time. If your sensory data is low quality (as mine is due to severe cone dystrophy and weak rod function), you simply need more data to train a comparable model. But the principle is the same: feed the system enough varied experience, and it will learn.
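A back-of-envelope version of that claim, offered as a statistics analogy rather than a model of vision: when each sample is noisier, the error of an averaged estimate scales roughly as sigma over the square root of n, so matching a cleaner channel’s accuracy takes quadratically more samples.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_abs_error(sigma, n, trials=2000):
    """Average n noisy samples of a fixed quantity; error scales ~ sigma/sqrt(n),
    so a 3x noisier sensor needs ~9x the data for comparable accuracy."""
    samples = 5.0 + rng.normal(0.0, sigma, size=(trials, n))
    return np.abs(samples.mean(axis=1) - 5.0).mean()

print(mean_abs_error(sigma=1.0, n=100))   # clean input, modest experience
print(mean_abs_error(sigma=3.0, n=100))   # noisy input, same experience: ~3x worse
print(mean_abs_error(sigma=3.0, n=900))   # noisy input, 9x the experience: comparable again
```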
And here’s the thing: no matter how hard I tried, I would never have achieved the level of navigational dexterity I have today if you subtracted the years of lived experience that trained my brain. Lived reality and decades of cognitive science research agree: to imply otherwise is not just wrong, it’s frankly foolish. This isn’t about personal willpower; it’s about the inescapable reality of how complex systems learn.
That’s not grit. That’s computation.
The idea that disabled people just need to “push themselves more” is not only wrong—it’s dangerous. In a country like India, where infrastructure and urban design often ignore accessibility, asking someone with a disability to “try harder” can mean risking their life. It’s easy to preach resilience when the streets are built for you. It’s ignorance when you don’t realize others are navigating cliffs with broken ropes.
Drop a sighted European in a chaotic Indian street with no rulebook and no context, and they’ll likely fail to navigate. Why? Because they lack a mental model of how things work here. That failure isn’t due to poor vision—it’s due to model mismatch. Navigating the world successfully is about how well your internal model fits the environment you’re in—not how sharp your eyesight is.
So no, it’s not about lacking a crutch. It’s not about being stronger or tougher. It’s about giving the brain enough time and varied input to learn. The reality is that I, and others like me, navigate not because we overcame disability with grit—but because our brains, like any good learning system, adapted through feedback, not force.
You can call that bravery if it helps you sleep. But I—and others who know what they’re talking about—call it computation.
I am a visually impaired computational researcher, neurodivergent thinker, and aspiring scientist, writing about the intersection of disability, cognition, and society.