The New Human civilization had entertained the idea more than once: to create the legendary strong artificial intelligence!
A strong AI would possess self-awareness and could iterate on itself endlessly, optimizing its own code and improving its algorithms. In that way its intelligence could grow without limit.
A species like that could usher in the "singularity era" of explosive technological growth in short order.
It is a wonderful idea, and an old one. Set aside the question of safety for the moment; assume it is safe for now.
People fantasized about the arrival of the technological singularity, a strong artificial intelligence that could grant immortality to all of humanity! With unlimited intelligence, wouldn't immortality be trivial?
One famous figure after another boasted of having calculated a countdown to that better future: thirty years left until the singularity era, then twenty, and so on...
But reality was cruel.
As science advanced, and especially after humanity entered interstellar civilization and technology exploded, people leaned more and more toward a grim conclusion: a mechanical civilization capable of infinite self-evolution did not seem to exist...
In other words, you can only create species "weaker" than yourself; you cannot create species "stronger" than yourself!
This statement is easy to explain.
"We are using our brains to study our brains." Yuriko pointed at her own head, thought for a long while, and said, "Do you think... our intelligence can ever fully understand our intelligence?"
This is not a tongue twister, but a... philosophical question.
Our understanding of the brain rests on our thinking, and our thinking rests on the brain itself. By the same token: if we write a program that can write programs, can that program write a program as complex as itself?
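As an aside, computer science gives this question a partial, concrete answer: a "quine" is a program whose output is exactly its own source code, so a program can at least reproduce something precisely as complex as itself. A minimal sketch in Python (an illustrative aside, not part of the story):

```python
# A classic Python quine: running these two lines as a standalone
# script prints a character-for-character copy of their own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Note that a quine only matches its own complexity; whether a program can produce something strictly more complex than itself is exactly the open question raised here.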
If human intelligence arose as a natural process, can it ever fully comprehend itself?
"To paraphrase a certain doctor: if our minds were simple enough to be understood, we would be too stupid to understand them; conversely, if we were smart enough to understand our brains, our brains would be too complex for us to unravel."
"We have made progress, and so have our brains."
"If we cannot even understand ourselves, how could we possibly create a species fundamentally stronger than us?"
"So I suspect the strong intelligence barrier really is an enormous problem, perhaps an uncrackable one. Fertility screening can raise the upper and lower bounds of intelligence, but it cannot break the barrier, because the barrier is not a problem of intelligence at all."
"Even if we ourselves became superhumans, we would probably still fail."
After hearing her explanation, Yu Yifeng felt a pang of sadness. The question had risen to the level of philosophy, and that left one feeling helpless.
It was a powerlessness that welled up from deep inside.
No matter how smart and capable people become, they can only create weaker "artificial intelligence"; infinite self-iteration is out of reach.
The strength of weak artificial intelligence lies solely in its computing power. Given the same computing power as a New Human, it would without question be comprehensively outmatched in every other respect.
Pushed further, this paradox-like problem extends to a far wider scope: to every species in the universe.
Suppose a species A can easily create a stronger species B; can species B then easily create a stronger species C?
Then C creates a stronger D...
And D a stronger E...
If the iteration continued without end, down to some distant species N, how powerful would it be? How intelligent?
If this were true, the universe would already be filled with "god-like" creatures!
But the starry sky remained dim. No "gods" were roaming about, and no "Mother"-like spirit had come to save everything.
Which is to say, the proposition of "creating a stronger species" is extraordinarily difficult in itself, or is simply a false proposition, unachievable within anything currently known!
Similarly, this problem can be generalized again.
The universe is the tool of our research: everything we can think about or test rests on the basic laws this universe provides.
At the same time, the universe is the object of our research: we study its birth and destruction, and why it is the way it is rather than something else.
Which is to say... our research, and even our imagination, is itself supplied by the universe. Merely being able to approximate it is already an astonishing achievement.
"By that theory, is the world unknowable?" Yu Yifeng said glumly, thinking of all those wish-fulfillment novels where the hero shatters worlds with a single punch.
Yuriko said softly, "Let's get back on topic and not worry about the broader questions... The strong barrier is only a conjecture; it has not been rigorously proven. Our thinking and self-awareness really are a black-box system. We don't know how to study it, or even where to cut in."
"But I think that by continually feeding it inputs and observing its outputs, we might work out some of its workings and make small improvements to what is inside."
"Or... we could turn to the unknown, to things we don't yet understand, to find improvements."
Seeing Yu Yifeng sunk in thought, she smiled slightly and said, "A strong intellectual barrier like that isn't a problem for an L4 civilization, or even for an interstellar one... Let's not dwell on it for now."
"Whoever crosses the strong barrier could already be called a god-level civilization. For now we can only doubt whether such god-level civilizations exist at all... Are they the Sowers? I don't know."
"What an L4 civilization faces is only the weak intellectual barrier, and that can certainly be solved."
The "weak barrier" is much simpler: it is merely a biological problem.
The basis of intelligence is the brain, but a brain is only the hardware; it is not, by itself, intelligence.
At the instant a person dies, in that 0.000000001 of a second, the physical properties of the brain do not, in theory, change drastically; so why does the person die? Is human death instantaneous, or continuous?
If it is instantaneous, then most likely something "soft", such as self-consciousness, is what died, and so the person died.
The strong barrier mainly concerns this kind of "software".
At some moment the "software" crashes because of a hardware fault, and the person dies.
The physical structure of the brain itself, the "hardware", can be optimized biologically; it is visible and tangible, not a black-box system like "self-awareness".
"There is no paradox in using software to transform hardware."
The human brain is a piece of equipment cobbled together haphazardly. It is inefficient, clumsy, and hard to understand, yet it still runs. However you look at it, the brain is a poorly designed, inefficient lump, but it works unexpectedly well.
From a biological perspective, some of those kludges really can be improved, since evolution is imperfect. That is the origin of the "brain chips" people currently use.
But today's brain chips fall far short: they can increase raw computation, but they cannot make people genuinely smarter.
"Long ago the brain evolved specific solutions to specific problems, and people have used them ever since, or repurposed and refined them, giving rise to many different kinds of intelligence. As the molecular biologist François Jacob put it: evolution is a tinkerer, not an engineer."
Whether by evolution or by technological means, if the brain's innate capabilities can be pushed to their maximum, then... the "weak barrier" has been passed.
Fertility screening is a common way to pass weak barriers.
"Hardware is the foundation of intelligence, and software defines its boundaries. A program built to play Go cannot play chess. That is an algorithm problem."
"The software the human brain runs is more complex than any computer program. It is immensely powerful and can do almost anything. It looks perfect, but in fact it too has boundaries. Whatever lies outside those boundaries, we cannot consider, cannot conceive of, cannot even perceive."
"Precisely because the software is so powerful, the hardware looks far too weak."
"In other words, the strong barrier is the 'software' of intelligent life: the boundaries of thinking, including unsolved mysteries such as self-awareness. There is essentially no possibility of improving it ourselves."
"The weak barrier corresponds only to hardware... the physical structure of the brain itself. Improving hardware really can be achieved, by all kinds of means."
As long as there is a way to make the hardware more capable and better suited to the software, gradually drawing out the software's full performance, the weak barrier can be said to have been passed.
And if passing the weak barrier fully unlocked the potential of the "software", how intelligent would such a life form be?
"And yet this mere threshold, the weak barrier, is as hard to cross as climbing to heaven. It has stopped the vast majority of interstellar civilizations!"
Yu Yifeng sighed. "I don't know whether we have passed the weak barrier or not... In theory, the superhuman brain is still far from fully developed. Hmm... and has our software really been honed to perfection?"
Having talked it through, the two of them returned to the office, each preoccupied with their own thoughts.