The human brain is one of the most efficient information-processing systems ever observed: it consumes just 20 watts to power 86 billion neurons managing cognitive tasks of very high complexity.
With a mass of approximately 1.3–1.4 kg, just over 2% of body weight, the human brain accounts for approximately 20% of the body’s basal metabolism. In terms of power, this corresponds to a continuous average of ~20 watts.
This value is surprisingly low compared with the number of operations the brain carries out in parallel: motor control, sensory integration, autonomic regulation, learning, memory and decision making. The comparison with modern artificial intelligence systems highlights a fundamental difference: energy efficiency does not depend only on what is computed, but on how information is represented and transformed.
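The ~20-watt figure follows from simple arithmetic. A minimal sketch, assuming a typical adult basal metabolic rate of about 2,000 kcal/day (an illustrative round number, not a measurement):

```python
# Back-of-the-envelope check of the ~20 W brain-power figure.
# The 2000 kcal/day basal metabolic rate is an illustrative assumption.
KCAL_TO_JOULES = 4184
basal_kcal_per_day = 2000          # assumed whole-body basal metabolism
seconds_per_day = 24 * 3600

basal_watts = basal_kcal_per_day * KCAL_TO_JOULES / seconds_per_day
brain_watts = 0.20 * basal_watts   # the brain's ~20% share cited above

print(f"Whole-body basal power: {basal_watts:.0f} W")
print(f"Brain share (20%):      {brain_watts:.1f} W")
```

With these inputs the whole body runs at roughly 97 watts, and the brain’s 20% share lands right around 19–20 watts.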
How much the human brain really consumes
The brain’s energy consumption is surprisingly stable. Whether we are lying on the sofa, concentrating on a math problem or immersed in a conversation, the absorbed power remains around 20 watts.
This happens because the brain doesn’t “turn on” new parts when we think harder: it is always active, and cognitive work corresponds above all to a reorganization of neuronal activity patterns, not to a drastic increase in consumption. From the point of view of brain energy consumption, the most expensive processes are:
- Maintenance of ionic gradients (Na⁺, K⁺, Ca²⁺) across neuronal membranes.
- Synaptic transmission, particularly the release and recycling of neurotransmitters.
- Propagation of action potentials along the axons.
The main fuel is glucose, and experimental estimates indicate that most of the energy is spent on keeping the system ready to respond, i.e. in a dynamic state close to functional equilibrium. Keeping a network ready to respond at all times has a basic cost that cannot be eliminated, but which evolution has made extremely efficient.
A crucial aspect is that:
- the brain is event-driven: neurons consume energy mainly when they transmit signals;
- activity is sparse and asynchronous: not all neurons are active at the same time;
- processing is massively parallel.
This explains why total consumption remains almost constant between rest and complex cognitive tasks.
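The event-driven, sparse character of this activity can be made concrete with a toy cost model. All parameters below (baseline power, per-spike energy, firing rates) are illustrative assumptions, not physiological measurements; the point is only that when cost is dominated by a fixed readiness term plus sparse events, total power barely moves between rest and effort:

```python
# Toy cost model: fixed maintenance cost plus a price per emitted spike.
# Every number here is an illustrative assumption, not a measurement.
def brain_like_power(n_neurons, mean_rate_hz, joules_per_spike, baseline_w):
    """Total power = readiness baseline + cost of spikes actually fired."""
    return baseline_w + n_neurons * mean_rate_hz * joules_per_spike

N_NEURONS = 86e9        # neuron count from the text
E_SPIKE = 1e-10         # assumed joules per spike (order of magnitude only)
BASELINE = 10.0         # assumed watts for ionic gradients and upkeep

rest = brain_like_power(N_NEURONS, 0.5, E_SPIKE, BASELINE)  # sparse firing
task = brain_like_power(N_NEURONS, 0.6, E_SPIKE, BASELINE)  # slightly busier

print(f"rest: {rest:.1f} W   task: {task:.1f} W")
```

Raising the average firing rate by 20% moves total power by well under one watt in this sketch, which is the qualitative behavior the paragraph describes.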
How much artificial intelligence consumes
If we shift attention to artificial intelligence, the picture changes radically. Systems like ChatGPT are implemented on digital computers and based on deep artificial neural networks. Computationally, running these models amounts to a long sequence of explicit mathematical operations: multiplications and additions between numbers, organized in layers. Each time the model generates a response, these operations must be performed again.

Every single operation has a well-defined energy cost, because it requires the physical movement of electrical charge in transistors. Unlike the brain, which exploits physical dynamics already present in the neural network, artificial intelligence must recalculate everything step by step. The consumption of a single response may be relatively low, but the key point is that this cost is repeated for each request and is never amortized.
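A rough sketch of this “low per response, large in aggregate” pattern: a common rule of thumb for transformers is about 2 FLOPs per parameter per generated token. The model size, response length, hardware efficiency and traffic below are all illustrative assumptions, not figures for any specific system:

```python
# Illustrative energy estimate for one generated response, then in aggregate.
# Model size, response length, efficiency and traffic are all assumptions.
params = 70e9                    # assumed parameter count
tokens = 500                     # assumed tokens in one response
flops = 2 * params * tokens      # rule of thumb: ~2 FLOPs/parameter/token
joules_per_flop = 1e-12          # assumed hardware efficiency (1 TFLOP per joule)

energy_per_response_j = flops * joules_per_flop
daily_kwh = energy_per_response_j * 100e6 / 3.6e6   # at an assumed 100M requests/day

print(f"per response: {energy_per_response_j:.0f} J")
print(f"per day:      {daily_kwh:,.0f} kWh")
```

Under these assumptions one response costs on the order of tens of joules, a tiny amount on its own, yet repeated a hundred million times a day it adds up to thousands of kilowatt-hours.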
The hidden weight of infrastructure
The true energy consumption of AI emerges when you look beyond the single answer. Large models are trained on clusters of thousands of processors for weeks or months, with powers on the order of megawatts. The total energy spent in this phase can be up to hundreds of thousands or millions of kilowatt hours. Added to this is the permanent cost of data centers: servers always on, cooling systems, redundancy and reliability. Even when no one is querying the model, the infrastructure remains operational. It is continuous consumption, which does not directly depend on instantaneous use.
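The order of magnitude cited above is just sustained power times duration. A minimal sketch, assuming a cluster drawing an average of 1 MW for about six weeks (both numbers illustrative):

```python
# Order-of-magnitude training energy: sustained cluster power x duration.
# Both inputs are illustrative assumptions consistent with the text.
cluster_mw = 1.0        # assumed average cluster power draw, megawatts
training_days = 42      # assumed training duration (~6 weeks)

training_kwh = cluster_mw * 1000 * training_days * 24
print(f"{training_kwh:,.0f} kWh")
```

One megawatt for six weeks already exceeds a million kilowatt-hours, which is where the “hundreds of thousands or millions” range comes from.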
Architecture: where the difference really comes from
The fundamental difference between the brain and artificial intelligence is not only quantitative, but structural. In digital computers, memory and computation are separate. Data must be continuously transferred from memory to computing units and back. This movement of information through space is one of the most expensive operations from an energy point of view. In the human brain, by contrast, memory and computation coincide.
Information is embedded in synapses and in the organization of the network. Learning means modifying connections locally, not moving data from a central repository. From a physical perspective, this dramatically reduces the energy cost of computing.
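Widely cited per-operation energy estimates for ~45 nm CMOS make the cost of data movement vivid; the values below are order-of-magnitude figures from published hardware surveys, and exact numbers vary by process and generation:

```python
# Why data movement dominates: approximate per-operation energies for
# ~45 nm CMOS (order of magnitude only; exact values vary by process).
ENERGY_PJ = {
    "32-bit float add":       0.9,
    "32-bit float multiply":  3.7,
    "32-bit SRAM cache read": 5.0,
    "32-bit DRAM read":       640.0,
}

add_pj = ENERGY_PJ["32-bit float add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:<24} {pj:>7.1f} pJ  ({pj / add_pj:.0f}x a float add)")
```

Fetching one word from off-chip DRAM costs hundreds of times more than the arithmetic performed on it, which is exactly the penalty a memory-in-place architecture like the brain avoids.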
This difference becomes even more evident if we consider learning: artificial intelligence learns mainly during a dedicated phase, training, which is separate from use and extremely energy-intensive. Once training is complete, the model is used, but the actual learning has already occurred.
The human brain, by contrast, learns continually as it works. There is no phase in which it stops operating in order to train.
If we make a very rough estimate and assume an average consumption of 20 watts over the first 20 years of life, we obtain a total energy on the order of 3,500 kilowatt-hours. That is comparable to the electricity consumption of an apartment over a few months.
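The 3,500 kWh figure is straightforward arithmetic on the numbers given in the text:

```python
# 20 years of continuous 20 W operation, expressed in kilowatt-hours.
watts = 20
years = 20
hours = years * 365.25 * 24          # about 175,000 hours
lifetime_kwh = watts * hours / 1000

print(f"{lifetime_kwh:,.0f} kWh")
```

Twenty years of brain operation costs about as much energy as a few months of training a single large model.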
Underlying all this is a difference in history. The human brain is the result of millions of years of evolution, during which energy efficiency was a key selective pressure. Space and energy were limited resources.
Artificial intelligence, on the other hand, is the product of a few decades of technological development, in which energy has often been treated as a scalable resource: if more power is needed, new servers are added.
The comparison between the human brain and artificial intelligence does not tell us that one is “smarter” than the other. It shows us something deeper: intelligence is always embodied in a physical structure, and its energy cost depends on how that structure is made.
Artificial intelligence is a very powerful tool, but it pays for its power with high energy consumption. The human brain, by contrast, represents an extraordinary compromise between computational capacity, robustness and efficiency.
