2025/11/25
The human mind is an emergent process, and the neuron is its basic building block. The neurons of a single human body form an immense, interconnected network of biological information-processing devices which, after many millions of years of evolution, are now able to do things like remember, and decide, and learn. And after some learning, they can solve calculus problems, invent direct-current electrical generators, or program a flight control computer to conduct a gravity well transit with minimal delta-V.
To understand the human mind, you must understand the neuron.
A neuron is a simple information-processing unit with one state variable: electrical charge, measured as the difference in potential (voltage) between the interior and exterior of the cell. For a neuron to fire, it has to accumulate enough positive charge to reach a voltage threshold, a threshold which is consistent within and between neurons. When the threshold is reached, the neuron 'fires,' activating for a moment and sending a signal to all its synaptic terminals.
Neurons transmit information to one another through synapses. Synapses connect one neuron to one other neuron, and most neurons have quite a few of them—hundreds, maybe thousands. A synapse is made up of a 'cleft' of empty space through which chemical signals travel, a presynaptic 'terminal'—a patch of cell membrane on the sending neuron, adjacent to the cleft—and a postsynaptic 'density', a patch of cell membrane on the receiving neuron, on the opposite side from the terminal.
The postsynaptic density is occupied by a large number of receptors. Chemicals acting as signals, when released from the terminal, diffuse very quickly through the cleft and attach to binding sites on the receptors, opening them up to form a selective channel into the cell through which charged ions travel. Some receptors are excitatory, meaning that when they get the right signal their channels admit positive charge into the postsynaptic cell, bringing it closer to the threshold; others, vice versa, are inhibitory. In most cases, any given cell will be either excitatory or inhibitory, all its terminals releasing the same chemical.
One excitatory synapse being activated once doesn't cause the next neuron to fire, but it slightly elevates the neuron's electrical potential. Repeated firing from one synapse—or, more feasibly, excitatory signals arriving across dozens or hundreds of the neuron's synapses—is required to reliably reach the threshold. And, of course, inhibitory signals act against this through an equal and opposite mechanism.
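The mechanism described above can be sketched as a toy integrate-and-fire simulation. All the numbers here (threshold, per-event charge, leak rate) are illustrative values, not physiological measurements.

```python
THRESHOLD = 10.0        # firing threshold (arbitrary units)
EXCITATORY_BUMP = 1.0   # charge added by one excitatory synaptic event
INHIBITORY_BUMP = -1.0  # charge removed by one inhibitory event
LEAK = 0.9              # fraction of accumulated charge retained each step

def simulate(events):
    """events: a list of per-step lists of +1/-1 synaptic inputs.
    Returns the steps on which the neuron fired."""
    potential = 0.0
    fired_at = []
    for step, inputs in enumerate(events):
        potential *= LEAK  # charge dissipates between inputs
        for sign in inputs:
            potential += EXCITATORY_BUMP if sign > 0 else INHIBITORY_BUMP
        if potential >= THRESHOLD:
            fired_at.append(step)  # the neuron fires...
            potential = 0.0        # ...and resets
    return fired_at

# One excitatory synapse firing repeatedly never reaches threshold:
print(simulate([[+1]] * 5))              # → []
# A burst across many synapses at once does:
print(simulate([[+1] * 12]))             # → [0]
# Inhibitory inputs cancel excitatory ones:
print(simulate([[+1] * 11 + [-1] * 5]))  # → []
```

Note how the leak term makes timing matter: the same total excitation reaches threshold only if it arrives close together, which is why a coordinated burst across many synapses succeeds where a slow trickle fails.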
Mediated by some complex systems that aren't necessary to expound on here, there is also a feedback system for excitatory synapses. Synapses that send an excitatory signal which then contributes to reaching the threshold are strengthened, via the manufacture and addition of more receptors into the postsynaptic membrane. And, vice versa, synapses that send an excitatory signal which doesn't lead to the cell activating, or that sit idle while the neuron fires without them, lose some of their receptors. If enough receptor channels are lost, the synapse may be 'pruned'—that is, removed completely, with the terminal retracting. Don't worry, though; neurons are constantly reaching out to form new synapses.
Neurons that fire together, wire together.
This process is known as Hebbian learning.
While the initial theory has gone through refinement in the many decades since, Hebb, for whom it's named, turned out to be extraordinarily correct. The process as I explained it is oversimplified (there's stuff like hormonal modulation and transcription-factor-based long-term memory processes, but those don't violate this basic principle, they just complexify it), but the more abstract rule—'correlated activation strengthens future activation, non-correlated activation weakens future activation'—accounts for the overwhelming majority of all neuronal learning activity.
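That abstract rule is simple enough to state as code. Here's a toy sketch of a single synapse's weight under it—strengthened when pre- and postsynaptic activity coincide, weakened when they don't, and pruned if it decays far enough. The learning rate, decay rate, and pruning cutoff are made-up illustrative values.

```python
LEARN_RATE = 0.1    # strengthening per correlated firing
DECAY_RATE = 0.05   # weakening per uncorrelated firing
PRUNE_BELOW = 0.01  # below this, the synapse retracts entirely

def hebbian_update(weight, pre_fired, post_fired):
    """Return the synapse's new weight, or None if it is pruned."""
    if pre_fired and post_fired:
        weight += LEARN_RATE   # fired together: wire together
    elif pre_fired != post_fired:
        weight -= DECAY_RATE   # one fired without the other: weaken
    if weight < PRUNE_BELOW:
        return None            # receptors gone; terminal retracts
    return weight

# Three correlated events against two uncorrelated ones: net strengthening.
w = 0.5
for pre, post in [(True, True), (True, True),
                  (True, False), (False, True),
                  (True, True)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # → 0.7
```

Nothing in the update rule looks at what the signals mean—only at whether they coincide. That indifference to content is what makes the same mechanism serve memory, motor habits, and trauma associations alike.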
Everything that you know, everything you've learned to do, every judgement call you've made, that time you hurt your foot and walked differently for a while and even after it was feeling better it took a bit to go back to walking normally, every time someone with PTSD is reminded of a horrible experience by some otherwise benign image or sound—every aspect of neuron-based cognition is at least partly explained by this process.
And most relevant: the meaning of every word you've ever learned.