Computational Neuroscience
A Beginner's Dive into the Brain's Code
“Brains are biology. But to understand them deeply, we must speak the language of math.”
Today, I expected to be tangled in equations and dense theory. With Dayan & Abbott’s classic open alongside videos, papers, and scattered notes, I was braced to be overwhelmed by technical jargon. But somewhere in the mix, something clicked. Not just a concept - a way of seeing. A more structured, intuitive lens for thinking about the most complex object in the known universe. That shift in perspective - that was the real takeaway.
Join me on this learning journey as I share my insights and discoveries chapter by chapter.
The Brain is a system - but not like anything else
We often hear the brain compared to a computer, but this analogy quickly falls apart under scrutiny. While computers are digital, deterministic, and lightning-fast, neurons are analog, probabilistic, and surprisingly slow.
A typical neuron might fire at most 100 times per second, which is painfully sluggish compared to modern CPUs running at gigahertz speeds. Yet somehow, this collection of “slow” cells orchestrates everything from breathing to composing symphonies.
The paradox - how such a system built from unreliable, noisy components achieves such remarkable computational feats - sits at the heart of theoretical neuroscience.
What is Theoretical Neuroscience anyway?
At its core, theoretical neuroscience uses mathematical and computational models to understand brain function. It’s where biology meets equations, where wet lab meets code.
Think of it as:
Physiology interpreted through physics
Neural data translated into mathematical patterns
Biological complexity distilled into testable models
Traditional neuroscience approaches - studying individual neurons under microscopes or measuring brain activity with fMRI - are essential but insufficient. They give us data points, but not the underlying principles.
Theoretical neuroscience aims to bridge this gap by constructing mathematical frameworks that can:
Explain experimental observations
Generate testable predictions
Reveal organizing principles hidden within neural complexity
Why do we need theory in Neuroscience?
Consider these staggering numbers:
~86 billion neurons in the human brain
~100 trillion synaptic connections
Signals transmitted in milliseconds
Information processed across multiple spatial scales
The complexity is simply overwhelming. Imagine trying to understand how a city functions by watching individual residents without any concept of economics, social structures, or infrastructure. You’d collect mountains of data but miss the organizing principles.
Similarly, we need theoretical frameworks to make sense of neural data. As physicist Richard Feynman famously said, “What I cannot create, I do not understand.” By building mathematical models of neural systems, we can test whether we truly understand them.
Theory helps us:
Extract patterns from noisy data
Link observations across different scales (from molecules to behaviour)
Formulate new questions that drive experimental design
Connect neuroscience to adjacent fields like physics, computer science, and psychology
The Three Pillars of Theoretical Neuroscience
1. Neuronal Biophysics - The Physics of Neural Circuits
This domain examines how neurons generate and transmit electrical signals. At this level, neurons function as sophisticated electrical circuits, and we can model them with differential equations.
The landmark Hodgkin-Huxley model (which earned Hodgkin and Huxley the 1963 Nobel Prize in Physiology or Medicine) describes how ion channels in neural membranes control action potentials - the brain’s basic information units. Simpler models like the “integrate-and-fire” neuron capture the essential dynamics while remaining computationally tractable.
These models let us ask: How do neurons transform input to output? What determines whether a neuron will fire? How do networks of neurons interact?
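To make that concrete, here’s a minimal simulation of a leaky integrate-and-fire neuron in Python. This is a sketch, not a definitive implementation: the membrane parameters and input current below are illustrative values I chose, not numbers from Dayan & Abbott.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron.
# Membrane equation: tau_m * dV/dt = -(V - V_rest) + R * I(t)
# All parameter values are illustrative choices, not taken from the book.
tau_m   = 20.0    # membrane time constant (ms)
V_rest  = -70.0   # resting potential (mV)
V_th    = -54.0   # spike threshold (mV)
V_reset = -80.0   # reset potential after a spike (mV)
R       = 10.0    # membrane resistance (MOhm)
dt      = 0.1     # integration step (ms)
T       = 200.0   # total simulation time (ms)

steps = int(T / dt)
V = np.full(steps, V_rest)
I = 2.0 * np.ones(steps)          # constant input current (nA)
spike_times = []

for t in range(1, steps):
    # Forward-Euler update of the membrane potential
    dV = (-(V[t-1] - V_rest) + R * I[t-1]) / tau_m
    V[t] = V[t-1] + dt * dV
    if V[t] >= V_th:              # threshold crossed: emit a spike, then reset
        spike_times.append(t * dt)
        V[t] = V_reset

print(f"{len(spike_times)} spikes in {T:.0f} ms "
      f"-> firing rate ~ {1000 * len(spike_times) / T:.1f} Hz")
```

Driving the same loop with different input currents traces out the neuron’s input-output curve - exactly the kind of transformation the questions above are asking about.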
2. Neural Coding
How does the brain represent information? This question drives the neural coding pillar.
When you see a red apple, specific patterns of neural activity represent “redness” and “apple-ness” in your brain. But exactly how is this transformation encoded? Possibilities include:
Rate coding: Information is encoded in the frequency of neural firing.
Temporal coding: Information is carried in the precise timing of spikes.
Population coding: Information is distributed across groups of neurons.
Sparse coding: Information is represented by a small subset of active neurons.
Understanding neural coding is like deciphering an alien language - we see the signals but must infer the mapping between neural activity and meaning.
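To give two of these schemes some texture, here is a small Python sketch: first a spike count converted into a rate, then a toy population of cosine-tuned neurons decoded with a population vector. The tuning curves, noise level, and neuron count are invented for illustration, not drawn from any dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Rate coding: estimate a firing rate by counting spikes in a window ---
spike_times = np.sort(rng.uniform(0, 1.0, size=40))  # 40 spikes in 1 s (toy data)
window = 1.0                                          # seconds
rate = len(spike_times) / window
print(f"estimated firing rate: {rate:.0f} Hz")

# --- Population coding: decode a direction from many tuned neurons ---
# Each neuron fires most for its "preferred" direction (cosine tuning,
# a common textbook idealization; the numbers here are made up).
preferred = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # preferred directions
true_dir = np.deg2rad(60)
rates = 30 * np.maximum(np.cos(true_dir - preferred), 0)    # tuning curves
rates += rng.normal(0, 2, size=rates.shape)                 # noisy responses

# Population-vector decoding: sum each neuron's preferred direction,
# weighted by how strongly it fired.
x = np.sum(rates * np.cos(preferred))
y = np.sum(rates * np.sin(preferred))
decoded = np.rad2deg(np.arctan2(y, x)) % 360
print(f"true direction: 60 deg, decoded: {decoded:.1f} deg")
```

Even with noisy single neurons, the population read-out lands close to the true stimulus - a small demonstration of why distributing information across many cells is so robust.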
3. Learning & Memory
The brain’s most remarkable property might be its adaptability. From infancy to old age, our neural circuits continuously rewire in response to experience.
This pillar explores the rules governing synaptic plasticity - how connections between neurons strengthen or weaken over time. The classic Hebbian rule (“neurons that fire together, wire together”) has evolved into sophisticated mathematical models that explain everything from perceptual learning to episodic memory.
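As a toy illustration, here is the Hebbian idea in a few lines of Python. Plain Hebbian updates grow without bound, so this sketch uses Oja’s rule, a standard normalized variant; the learning rate and input statistics are arbitrary choices of mine, purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Fire together, wire together": strengthen a weight in proportion to the
# correlation of pre- and postsynaptic activity.
#   plain Hebb : dw = eta * x * y          (weights grow without bound)
#   Oja's rule : dw = eta * y * (x - y*w)  (adds decay that normalizes w)
eta = 0.01
w = rng.normal(0, 0.1, size=2)

# Toy inputs with correlated components; Oja's rule should align w with the
# direction of greatest variance (the first principal component).
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
X = rng.multivariate_normal([0, 0], cov, size=5000)

for x in X:
    y = w @ x                     # linear postsynaptic response
    w += eta * y * (x - y * w)    # Oja's normalized Hebbian update

print("learned weights:", np.round(w, 3))   # ~ +/-[0.707, 0.707], the top PC
```

Run on correlated inputs, the weights converge (up to sign) to the direction of greatest variance - a first hint that simple local plasticity rules can extract structure from experience.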
Modern deep learning in AI draws heavily from these biological principles, creating a fascinating feedback loop between neuroscience and artificial intelligence research.
Why This Matters to Me (And Maybe You Too)
As someone coming from Electrical and Electronics Engineering, I’m drawn to theoretical neuroscience because it represents the perfect intersection of rigorous science and profound philosophical questions.
The mathematical tools developed here extend far beyond neuroscience. They help us understand complex systems in general - from financial markets to climate patterns to social networks.
Moreover, our models of the brain inevitably shape our understanding of ourselves. How we conceptualize memory, decision-making, and consciousness influences fields from education to mental health to ethics.
What’s coming next
As I continue through Theoretical Neuroscience, I’ll be sharing my journey into:
Dynamical systems theory and how it applies to neural circuits
Information theory and efficient coding in sensory systems
Computational models of learning and memory
The fascinating links between neural network theory and modern AI - and much more
I’ll also be implementing some models and simulations, sharing both code and insights about what these mathematical abstractions reveal about our inner workings.
Whether you’re a fellow neuroscience student, an AI enthusiast, or simply curious about the universe between your ears, I hope these posts will spark your curiosity and provide useful learning resources.
I’d love to hear your thoughts:
Have you encountered mathematical models in unexpected places?
What aspects of brain function do you find most puzzling or fascinating?
How has your understanding of your own mind changed through learning about neuroscience?
References:
Dayan, P., & Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4), 500-544.

