AI Is Not Becoming Too Intelligent — It’s Becoming Too Powerful
- Christos Makiyama

- Feb 7
Lately, much of the discussion in the technology world seems to orbit around a single question:
are we close to Artificial General Intelligence?
Most answers focus on how well machines can think.
Reasoning.
Problem solving.
Writing.
Planning across domains.
If a system can do many of these things, we call it intelligent.
If it can do them broadly, we call it “general”.
That framing misses something important.
Human intelligence is not only about thinking.
It is about remaining viable while acting in the world over time.
Humans get tired.
They feel stress and fear.
They hesitate under pressure.
They carry memories of past failures.
They change behavior because consequences hurt.
These limits are not defects.
They are what restrain power.
Modern AI systems are remarkable at recognizing patterns and generating outputs.
They can also act at enormous scale, through software, markets, and institutions.
But they do not feel pressure.
They do not burn out.
They do not remember mistakes in a way that permanently changes behavior.
They do not slow themselves down as risk accumulates.
When an AI-driven decision causes harm, the system does not become more cautious.
Humans absorb the consequences.
Much of the concern around AI focuses on machines becoming more intelligent than us.
I think that is the wrong focus.
The real issue is this:
Power is scaling faster than restraint.
In human systems, power has always been limited by biology, psychology, and history.
Leaders burn out.
Organizations collapse.
Societies remember disasters.
AI bypasses these limits.
It can act continuously.
Optimize relentlessly.
Scale instantly.
Without internal braking.
This is not an intelligence explosion.
It is a power imbalance.
Most safety discussions try to address this with rules or guardrails.
Better instructions.
Better alignment.
But rules operate on what systems know, not on what they experience.
You cannot instruct a system to feel overwhelmed.
You cannot fine-tune regret.
You cannot prompt historical scars.
So action continues.
Until the damage appears somewhere else.
Socially.
Economically.
Politically.
The real danger is not that AI will replace humans.
It is that AI will amplify human power without human restraint.
If we continue to define intelligence as thinking ability alone,
we will keep missing the real problem until systems move faster than our ability to respond.
And by then, slowing down may no longer be possible.