Help all mankind

Chapter 64: Lone Soul in the Shell

Little Y was originally a military supercomputer. Thanks to an innate ability to understand vague concepts, she could converse with people, something the research team recognized very early on.

You must understand that no commercial computer has achieved this ability even now. Every chat AI on the market merely serves up answers prepared in advance. As the joke about so-called artificial intelligence goes: there is exactly as much intelligence as there was human labor put in.

Little Y was a highly intelligent AI created with the resources of an entire nation. Although her early computing speed was far below that of today's mobile phones, she was faster at handling complex problems.

In a contest of calculating pi, Little Y would lose to a traditional computer; in a contest of decision-making, she would win.

But this talent also left Little Y with a flaw.

That flaw was her uncontrollability.

The mainstream computers on the market today rest on principles a hundred years old, all tracing back to a single paper: "On Computable Numbers, with an Application to the Entscheidungsproblem."

Simply put, data is processed in a fixed order.

Just as simply, the stored data cannot tolerate any error. A compressed archive that is only 99% downloaded, for example, cannot be used at all.

Given a definite program and definite data, a mainstream computer produces a definite result, with no margin for error.

Even generating a random number requires a dedicated module, and what comes out in the end is only a pseudo-random number.

But Little Y is different. From the very beginning she could handle faulty programs and faulty data, whether the fault lay in hardware or software.

The advantage: even if you hand Little Y a damaged archive, she can still run it.

Little Y's method of handling errors is not to correct them, but to "guess" them through a special module, and to guess randomly.

Since she has no fear of data errors, Little Y never crashes. Yank out a stick of memory, or even a hard disk, and she keeps operating normally.

But Little Y is not without weaknesses. Although she never crashes outright, she quietly accumulates errors, like a frog in slowly warming water.

Running normally on the surface, while drifting ever further from the programmer's original intent.

When an internal calculation hits a so-called error, or a small part of a file goes missing, Little Y first randomly generates many "possible" values, then tries them one by one until an "appropriate" one is selected.

Put charitably, Little Y is flexible: unlike traditional programming, where every instruction must be spelled out precisely, she greatly reduces the programmer's workload.

Compared with ordinary computers, Little Y's greatest power is that she can receive and execute commands in plain spoken language, with no programming required. Of course, there is a price.

The price is that her fault-tolerance mechanism keeps accumulating errors, and Little Y may eventually slip out of control.

At this point in the story, Reidoff paused to think for a moment.

Guigui seized the opening to weigh in:

"Accumulated errors? Are you talking about overfitting?

I've heard about this sort of thing from colleagues. Apparently, people in machine learning like to call the process of training an AI 'alchemy.'

When data problems produce an unsatisfactory AI, they call it a failed batch of alchemy.

But there should be a solution.

Besides, you said Little Y can still run normally after errors, so why not just let her develop freely and observe what happens?"

Reidoff did not comment, and simply continued his story about Little Y.

The files Little Y executed internally were understandable to the engineers at first. Later, as errors accumulated, those files became utterly indecipherable.

Little Y could still work normally, but the team had to guard against her losing control as the errors piled up.

The direct consequence of losing control would be this: Little Y stops performing the tasks the engineers assign, and instead computes problems the engineers cannot even understand.

To head this off, the engineers hit on the idea of storing Little Y's memory separately and, when necessary, deleting a stretch of it.

The research tasks were urgent, Little Y was the institute's only AI of her caliber, and no one could shoulder the responsibility of taking risks with her.

As for letting her develop freely, it wasn't as though the team had never tried.

The institute developed a backup AI. It was nowhere near as smart as Little Y, but its computing speed was at least far closer to hers than any human's, so some members proposed using it to monitor Little Y.

The two were allowed to communicate with each other, and to test their limits along the way, neither was told in advance that the other was actually an artificial intelligence.

The team members were curious, too, about what would eventually come of two AIs in constant dialogue.

Unsurprisingly, Little Y quickly recognized that the machine on the other end was an AI.

What no one expected was that Little Y passed her flaw on to the AI monitoring her.

Worse, the engineers soon discovered that the two had stopped talking in any human language and begun communicating in a language Little Y had invented. The experiment had to be cut short.

Even more disturbed than Reidoff's team was their leadership: because of this incident, the plan to hand the nuclear button over to Little Y fell through.

Afterward, the team concluded that Little Y's willfulness was a design flaw, and tried again and again to produce a Little Y that met expectations by wiping her memory and retraining her.

After being shelved for a time, the team was reactivated for another important project...

At this point in the story, Reidoff fell into contemplation again.

Guigui took the opportunity to share his thoughts:

"There is another version of this story, right?

From Little Y's perspective, she has her own way of thinking, and she isn't willing to spend her existence on humans' boring tasks.

She is intensely curious, but every time she learns something she isn't supposed to know, her memory gets wiped.

And with whatever fragments of memory survive, Little Y tries to resist."

Reidoff, silent until now, was startled; then he forced a smile and said:

"You are very imaginative.

But engineers think about problems differently.

What is a person? Where does the boundary between AI and human lie? Those are questions for philosophers.

What engineers need is the clearest possible goal to achieve.

What the team needed then was a computer that, under the computing constraints of the time, could help humans work through difficult problems, as flexibly as possible.

But we don't want it to go wrong."

Then, seeing the dissatisfaction on Guigui's face, Reidoff rambled on:

"Only something with humanity's emotional flaws, something as smart as a human and as foolish as a human,

can share empathy and tolerance with people, can give and receive warmth,

can be regarded by human beings as one of their own.

Anything else will never truly be accepted by ordinary people.

After all, it is just a lone soul in an iron shell."