
Chapter 332: Not perfect, but reliable

For example, how should robots with human faces be identified? There were related legal issues as well. Before the product launch, or to be precise, when the developer conference was held, government officials asked Huasheng Technology for a meeting.
The question of identifying robots was easy to solve: a robot need only speak, and a faint electronic timbre would give it away at once. That was trivial. The government's concerns centered mainly on public security.
What the government worried about was people with ulterior motives using robots to commit crimes and endanger public security. Any technological product is two-sided, a double-edged sword; the key is how to bring out its positive role while locking down its negative effects.
As the developer, Li Chuan had naturally thought of this. The government brought in a team of legal consultants from Kyoto University, along with top domestic AI experts, who spent a long time discussing the problem with Li Chuan and Huasheng Technology's AI engineers.
When it came to the technical issues especially, the discussion naturally could not avoid Asimov's classic Three Laws of Robotics. The so-called three laws, plus the zeroth, are:
Zeroth Law: A robot may not harm humanity as a whole, or, by inaction, allow humanity as a whole to come to harm.
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where this would conflict with the Zeroth Law.
Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.
Third Law: A robot must protect its own existence, as long as such protection does not conflict with the Zeroth, First, or Second Laws.
When the topic came up, some AI experts asked whether the Three Laws of Robotics might serve as a reference. Science fiction writer Asimov's classic three laws seem like the most suitable basic rules for robots, but the actual answer was... no!
Li Chuan rejected it outright. The core of the real companion is not an intelligent life like Xiaomu. Xiaomu's defining feature is self-aware emotional logic, which is the embodiment of wisdom, and wisdom is the embodiment of life. An ordinary AI, by contrast, never truly "makes a mistake": if an AI errs, that is simply the program running exactly as its logic dictates.
Li Chuan rejected it so flatly because Asimov's Three Laws of Robotics have too many logical flaws.
One fundamental flaw: how do you define a "human"?
Broadly speaking, robots can be divided into at least three types:
First, machines with a bionic humanoid shape, of which the real companion is typical;
Second, humans with mechanical bodies;
And third, everything in between.
A machine with a bionic humanoid shape is undoubtedly still a machine. Its appearance may be a mirror image of a human, indistinguishable from a real person, but in essence it remains a machine.
A human with a mechanical body may never have experienced the growth process of a natural, biological person, or may not even look human, clones for instance, but is essentially human.
As for the rest: artificial humans, for example, are synthetic products of organic tissue and machinery.
So which does Asimov's "Three Laws of Robotics" address: machines, or people?
And for that matter, how do you define a "robot"?
If it is a machine, then the "Three Laws of Robotics" cannot play any restraining role. The laws are "concepts" described in human "language"; a "robot of a machine nature" cannot understand "language" and possesses no "concepts".
Such machines are like trains running on preset tracks. The tracks may be complex, but the trains will never run off them; if they did, they would overturn, and the intelligent core would crash.
Even if the "Three Laws of Robotics" were translated into the binary language robots use and written into their intelligent cores by some method, a "robot of a machine nature" still could not execute such instructions, because it simply "cannot understand" them.
It is like an old chiming clock: the chime triggers when a gear turns to a specific position, but the clock cannot be set to "sound the alarm when you see the husky wrecking the house". Even if that instruction were written on a note and placed inside, or engraved on a gear, it could not be carried out, because the clock cannot understand what "husky" or "house" means. Such things lie outside the "track" of its preset operation, forever beyond its reach.
If it is a human, then even without a body of flesh and blood, there is still a way for him to read the "Three Laws of Robotics" and correctly understand the concepts they describe.
But he would have no reason to carry them out.
Yes, he can understand them; he just doesn't want to obey them.
Using the "Three Laws of Robotics" to restrict the behavior of a particular human being is actually meaningless, because he cannot be restrained by them; a rule that must restrain everything ends up restraining nothing.
And if it is still a machine, there is no need to set rules to restrain it, because it has no capacity for "deviant" behavior; at most it can "derail" and break down. By the time it can understand the "Three Laws of Robotics", the comprehension, logic, and judgment it possesses are enough for it to decide for itself whether to abide by any rule.
Take Hua Xiaomu, for example. Strictly speaking, Xiaomu already meets the human definition of "life" and has left the scope of robots or AI altogether.
In short:
A robot that can execute the "Three Laws of Robotics" cannot understand the "Three Laws of Robotics" at all, because "it" cannot understand them.
A robot that can understand the "Three Laws of Robotics" will not necessarily execute the "Three Laws of Robotics", because "he" can choose not to.
It comes down to a single question: whether one possesses a self, a personality, and emotional logic.
But the problem had to be solved, or the product could not launch.
On this issue, Li Chuan and a number of industry experts found a solution that was imperfect but absolutely reliable. The final result of the discussion was to introduce the Taoist idea of wu wei, non-action, and to write clear legal regulations into the intelligent core.
At the same time, the "action" of the law was written in, with non-action as the goal. As the Taoist saying goes, "through non-action, nothing is left undone."
In layman's terms: don't let the robot take the initiative to meddle in other people's business. Even if an old man falls over, the robot does not help him up. The prerequisite here is "take the initiative"; the word "initiative" is the crux.
If the robot's owner authorizes it, it can help the fallen man up; that is "action". The robot passively accepts the owner's orders, but clear legal constraints still apply, because the owner himself cannot violate the law, which means the robot can refuse certain orders from its owner.
The law is a very explicit set of executable orders, and the law does not allow robots to break it. If an owner authorizes a robot to kill or commit arson, that obviously will not work, because the law grants the owner no such right.
Whenever the law leaves a blank area, the robot automatically chooses "non-action". It may seem rigid, but this "non-action" at least seeks no merit, only the avoidance of fault.
At the very least, it guarantees that robots will not become a source of chaos in society. Not perfect, but reliable, whereas Asimov's three laws look perfect but are far too unreliable.
This, incidentally, is also the fundamental difference between humans and robots, between humans and tools: the autonomous choice of one's own behavior.
In short, in an environment where national policy vigorously promoted the development of artificial intelligence, once the basic problems that might negatively impact society were solved, there was no policy obstacle to the real companion meeting the world.
External public opinion, however, was another matter, especially overseas.
As Huasheng Technology's real companions went on pre-sale and their money-making journey around the world entered its third week, opposing voices, whether driven by jealousy, astonishment, or other motives, might be late, but they would never be absent.
The voices of certain public intellectuals, big Vs, and trolls were negligible to the present Huasheng Group. Such people were more like capering clowns, not even worth Huasheng Technology's attention.
The first strong voice of opposition, however, came from the UK.
Can you believe it? That famously "rotten" country?
A British scholar had recently launched a campaign calling on society to ban enterprises from developing sex robots.
The initiator of the campaign was Kathleen Richardson, a robot-ethics expert at De Montfort University in the UK. Her campaign aimed to draw society's attention to the issue and to persuade the companies developing such robots. She named no names, but everyone knew which company she meant.
Huasheng:EXM???