Ningdian shijie


Artificial intelligence cannot replace humans, nor should it replace humans
Release time: 2023-12-07    Source: Qingqiao


Turing, the father of computer science and artificial intelligence, once defined artificial intelligence with a simple test. He said: a computer has true artificial intelligence if, when it is hidden behind the scenes and converses with a person, that person cannot tell whether a human or a computer is on the other side.

The level of artificial intelligence research now far exceeds Turing's definition. The rise of Google's big data research in 2004 sparked the first wave of modern artificial intelligence research; in particular, in 2006, Geoffrey Hinton and other experts proposed deep neural network models for pattern recognition applications such as faces, fingerprints, characters, speech, and games. The representative algorithm of this stage is Google's TensorFlow, and typical applications include Google's Go player AlphaGo and Chinese artificial intelligence scientist Tang Xiaoou's 129-layer facial recognition network.

ChatGPT, the text-generation chatbot released on November 30, 2022, has launched a new wave of large-model construction technology. Artificial intelligence has spread from text generation into many industries, such as law, accounting, technology consulting, decision support, and economic analysis, and even into financial sectors such as stocks and futures.

What exactly is artificial intelligence, what is the current level of artificial intelligence research worldwide, and what impact will it have on our lives? In this issue of Ningdian Interview, we invited Professor Jin Sheng to share his views.

Jin Sheng:

Director of the Artificial Intelligence Expert Committee of the Beijing Zhongke Senior Expert Technology Center.

Formerly Associate Professor, Professor, and Chief Professor of Information Technology at the University of New South Wales, the University of Sydney, and the University of Newcastle in Australia, and Dean of Academic Affairs of the Australian Institute of Information Technology Enhancement (ITiC).

Served as a technology consultant at Motorola, CA, Supersoft, and many other world-renowned companies. Has published over three hundred technical papers, thirteen monographs, and eighteen patents, and has been invited more than sixty times to give keynote speeches at international conferences, government departments, universities, and research institutions.

Journalist: To understand something, the first thing to know is its development process. People had long fantasized about the emergence of artificial intelligence and proposed the concept, yet only now has it become widely known. What kind of development has it gone through?

The concept of artificial intelligence can be traced back to 1956, when the American scholar Marvin Minsky first introduced the term "artificial intelligence."

The development of artificial intelligence roughly passed through an early period, a low period, and a period of high-speed development. To understand these, it is necessary to understand the development history of computers.

In the 1940s, people were studying hardware and how to build a computer. After computers were built in the 1950s, research focused on algorithms: for example, equations of fifth order or higher have no formula that expresses their solutions, and the workload of such computations was very large, so people were studying how to improve calculation speed.

In the 1960s, high-level computer languages emerged (C itself appeared in the early 1970s). Programs written in a high-level language read like an English article that people can understand.

In the 1970s, computers entered the era of databases. In 1972, Michael Stonebraker of the University of California, Berkeley proposed the Ingres database, which allowed computers to perform abstract queries.

In the 1980s, Intel's microprocessors (the 8080 and its successors) made the computer miniaturized into a desktop machine the size of a small suitcase, with the price reduced to about $2,000, bringing computers into thousands of households.

In the 1990s, as we all know, the Internet emerged and connected computers all over the world. In the twenty-first century, a new field called multimedia appeared, and computers progressed from processing numbers and characters to processing images, speech, and video.

The last milestone is the concept of big data, which originated from Google in 2005. Only then did the software and hardware conditions for the development of artificial intelligence mature, leading to the resurgence of the artificial intelligence wave around 2010.

Reporter: Judging from the development of computers described above, the concept of artificial intelligence was first proposed in 1956, but it spent fifty years in obscurity before being taken up again, and scientists must have worked hard throughout that process. What kind of efforts did they go through?

In 1956, at the Dartmouth conference, Marvin Minsky proposed the neural network: an algorithmic mathematical model that mimics the behavioral characteristics of animal neural networks and performs distributed parallel information processing. In short, a neuron can accept many inputs, not just one; its dendrites collect these electrical signals and combine them into a stronger signal.

If the combined signal is strong enough to exceed a threshold, the neuron emits a signal along its axon; the signal reaches the axon terminals and is transmitted to the dendrites of the next neuron. From this model, the concept of artificial intelligence developed.
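The threshold neuron described above can be sketched in a few lines of code. This is a minimal illustration, not any particular library's implementation; the weights, inputs, and threshold are made-up values chosen only to show the firing behavior.

```python
# A minimal sketch of a threshold neuron: inputs arrive on the
# "dendrites", are combined as a weighted sum, and the neuron
# fires only if the combined signal exceeds a threshold.

def neuron(inputs, weights, threshold):
    signal = sum(x * w for x, w in zip(inputs, weights))
    return 1 if signal > threshold else 0   # 1 = fire, 0 = stay silent

# One weak input alone does not fire the neuron...
print(neuron([1, 0, 0], [0.4, 0.4, 0.4], 0.5))  # 0
# ...but two inputs combined push the signal over the threshold.
print(neuron([1, 1, 0], [0.4, 0.4, 0.4], 0.5))  # 1
```

The key point mirrors the biology in the text: no single input decides the outcome; it is the combined signal crossing the threshold that makes the neuron fire.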

Under the limited research conditions of the time, it was eventually concluded that a single-layer network of neurons computes only a linear function, which cannot solve even the most common logical operation, XOR. As a result, the scientists studying neural networks all gave up. The thirty years from 1956 to 1986 are thus known as the "dark period" of neural networks, which was also the dark period of artificial intelligence. In 1986, backpropagation neural networks were born, proving that neural networks can solve non-linear computations like XOR. This brought a revival of neural network research, and many scientists devoted themselves to it.
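The XOR limitation above can be made concrete. A single threshold unit draws one linear boundary, so no choice of weights makes it output XOR; but two stacked layers can. The sketch below uses hand-chosen weights purely for illustration (the 1986 breakthrough was that backpropagation could *learn* such weights automatically).

```python
# Why XOR needs more than one linear unit: a single threshold
# neuron separates its inputs with one straight line, but XOR's
# true cases (0,1) and (1,0) cannot be separated from (0,0) and
# (1,1) by any single line. Two layers solve it.

def step(x):
    return 1 if x > 0 else 0

def two_layer_xor(a, b):
    h1 = step(a + b - 0.5)        # hidden unit 1: fires if a OR b
    h2 = step(a + b - 1.5)        # hidden unit 2: fires only if a AND b
    return step(h1 - h2 - 0.5)    # output: OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", two_layer_xor(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The hidden layer first computes two *linear* decisions (OR, AND), and the output layer combines them, which is exactly the non-linear composition a single layer lacks.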

Without the support of big data, training a neural network was a very troublesome task. There were no formulas or rules; it was entirely a matter of experience and luck. With good luck, results appeared quickly after training; with bad luck, the calculation looped endlessly.

After big data emerged in 2004, it was used to train neural networks, and because its statistical properties were so good, training could quickly produce results. A famous example is AlphaGo, the deep-learning Go player that in 2016 defeated Lee Sedol, one of the world's top Go players.

In 2017, Google made an algorithm improvement on AlphaGo called reinforcement learning, giving birth to AlphaGo Zero.

As I mentioned in the first question, in the 1950s scientists were researching algorithms. Once computing power was available, the next technical challenge was the algorithms themselves, and scientists spent thirty years on that research.

The earliest neural network models were mainly used for pattern recognition, such as facial recognition, character recognition, speech recognition, and playing Go. Later they were extended from pattern recognition to many other industries.

Because the data of different industries and application scenarios differ, neural networks need to build different models for each. After the birth of ChatGPT, with its large-model construction technology, the current wave of research emerged.

Reporter: So there are many factors constraining the development of artificial intelligence. Which among them is the most influential?

The biggest factor constraining the development of artificial intelligence is computing power. Whether TensorFlow-style deep neural networks or large-model algorithms, their performance far exceeds that of humans; but for computers to have the ability to learn by themselves, and the possibility of surpassing human intelligence, an enormous amount of computation is required.

Current artificial intelligence is based on big data, but it cannot be said to rely on big data alone; more precisely, it is a model built on big data. For example, facial recognition is very fast nowadays. Why? Because the algorithms have already been trained in advance on millions of faces, so recognition is just a judgment against existing intelligence, which is very fast. A large amount of the actual work is the training done beforehand, and that is what requires very high computing power.
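The split between slow up-front training and fast queries can be sketched as follows. This is a toy illustration, not a real face recognition system: the "feature vectors" are made-up numbers standing in for the embeddings a trained network would produce, and the names are hypothetical.

```python
# A toy sketch of the idea above: the expensive work (turning faces
# into feature vectors) is done once, in advance; recognition at
# query time is then only a fast nearest-neighbour comparison.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Precomputed during enrolment (the slow, training-heavy step):
database = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def recognise(query_features):
    # Fast at query time: one similarity score per enrolled face.
    return max(database, key=lambda name: cosine(database[name], query_features))

print(recognise([0.85, 0.15, 0.25]))  # alice
```

The design point matches the interview: the query-time cost is tiny because all the heavy computation was paid once, up front, on big data.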

In the past, training a facial recognition algorithm, for example, took several months. If the final result was problematic, the program had to be modified and then trained again for several more months. Such time costs could make it impossible for research to continue.

Now, with NVIDIA's GPUs, algorithm training that used to take months can be completed in just a few days. However, algorithms are also developing and becoming increasingly complex, demanding ever more computing power. Today even GPUs are no longer enough, so if artificial intelligence is to make another leap, it must wait for the next hardware upgrade.

Reporter: Many fields now claim that their products have adopted "artificial intelligence" technology. What counts as true "artificial intelligence"? And where does China's artificial intelligence stand in the world?

I often travel around the country helping enterprises improve their technology. They want to do intelligent manufacturing, and I visit all of their "intelligent manufacturing" lines. In the end I tell them that these cannot be considered intelligent manufacturing; they are only automated manufacturing.

Although production is done by robots, the process is programmed and consists of repeating mechanical actions. The core difference between automation and intelligence is that automation cannot handle unexpected situations; it simply executes the program mechanically.

Intelligence is a closed-loop system. The Go player AlphaGo was trained on the game records of all the Go masters. By AlphaGo Zero, the approach was reinforcement deep learning, which requires no training on masters' records: it understands that every point on the board has two states, which can combine into countless positions, and states that are invalid are automatically excluded. In this way it acquired the ability to learn by itself, and AlphaGo Zero later defeated AlphaGo.

Our initial calculations are usually exhaustive: list all the possibilities, which can extend indefinitely like a big tree, and verify one by one which possibility is correct. A real algorithm prunes this exhaustive tree: it judges in advance whether a branch can still produce a better result than what is already known, and if not, cuts it off. That way the tree never grows too large, and in the end the computation follows the fastest path to the result. In the future, computers will design such algorithms themselves, searching for the patterns on their own.
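One classical form of the branch-cutting idea described above is alpha-beta pruning, the technique long used in game-tree search (the interview does not name a specific algorithm, so this is offered as one representative instance). The sketch below searches a tiny two-ply game tree and skips any branch that can no longer change the outcome.

```python
# A minimal sketch of tree search with pruning (alpha-beta): before
# fully exploring a branch, check whether it can still beat what we
# already have; if not, cut it off instead of growing the tree.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """node is either a number (a leaf's score) or a list of child nodes."""
    if isinstance(node, (int, float)):       # leaf: return its score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # this branch cannot help: prune
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # prune symmetric case
                break
        return value

# A tiny two-ply game tree: the maximizer picks a branch, then the
# minimizer picks the worst leaf inside that branch.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, True))  # 3
```

In the second and third branches, the very first leaf already proves the branch cannot beat the current best (3), so the remaining leaves are never examined: that is the "cut it off" in the text.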

At present, the United States and China lead other countries in the field of artificial intelligence: the United States leads in basic technology, and China leads in application fields. Artificial intelligence is now widely applied in China to face, fingerprint, mapping, and speech recognition, and China leads in these areas. For example, facial recognition networks can reach 129 layers.

Journalist: There is a saying that in any field, artificial intelligence will turn some knowledge into common sense, so that people compete only on the special cases. How should we understand this? Can artificial intelligence replace humans?

Common sense means knowledge with universal rules, which can be written into the program in advance, and training then follows those rules.

However, some special cases are so unique that computers have difficulty finding patterns in them. So human intelligence is still needed to identify those features before handing them to the computer for application.

In this regard, artificial intelligence cannot replace humans, and there are also uniquely human forms of intelligence and emotion that artificial intelligence cannot replace. Some scientists believe that with the rise of artificial intelligence, professions like law and accounting will be eliminated because their patterns are so obvious. But artificial intelligence cannot solve the problem of two people falling in love; personal emotions cannot be explained clearly.

We used to work on an intelligent medical system for computer-aided diagnosis. An experienced doctor at a top-tier hospital, combining many symptoms, can think of at most twenty-odd diseases, but the computer can instantly identify more than twenty thousand possible diseases and sort them by similarity.

Although the computer can list cases, ethically it cannot diagnose patients and cannot bear responsibility. Ultimately it is the doctor who makes the diagnosis; a doctor must make the final judgment.

Reporter: What demands does the development of artificial intelligence place on the ability structure of future talents? And what do you think of the potential ethical risks of artificial intelligence mentioned earlier?

In the future, people in all walks of life should master artificial intelligence technology, because, as mentioned before, Turing only defined the threshold of artificial intelligence, and the application of the technology varies across fields. Whatever field future talents work in, they need to understand how artificial intelligence is developing in that industry and build the relevant knowledge reserves.

At a minimum, you need to know the basic principles of artificial intelligence and how deep learning is accomplished. It is not necessary to know how a neural network is trained, but you should at least know the basic conditions for training one: for example, when a deep neural network is used, the training data must be large enough, and you must know to what precision a solution is required.

At the same time, you need to know how artificial intelligence draws conclusions: based on which significant features, and then, among the relevant results, inferring the maximum possibility. You should understand that the conclusion obtained is only a probability, not an absolute. Just as a doctor given a ranked list of possible conditions cannot look only at the one with the maximum probability, but must also weigh the other possibilities in the list to avoid mistakes.
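The point about probabilistic conclusions can be sketched in a few lines. The condition names and scores below are entirely made up for illustration, and the 0.1 ambiguity margin is an arbitrary example threshold, not a clinical rule.

```python
# A model's output is a ranked list of probabilities, not a single
# certain answer. A sensible consumer of that output looks past the
# top entry instead of acting on the maximum alone.

scores = {"condition_a": 0.46, "condition_b": 0.41, "condition_c": 0.13}

ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
top, runner_up = ranked[0], ranked[1]
print(ranked)

# If the runner-up is nearly as likely as the top answer, flag the
# result as ambiguous rather than trusting the maximum blindly.
if top[1] - runner_up[1] < 0.1:
    print("ambiguous: review", top[0], "and", runner_up[0])
```

Here the top answer (0.46) barely beats the runner-up (0.41), so the code flags the case for human review, which is exactly the doctor's role described in the interview.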

Reporter: That means, in fact, artificial intelligence cannot ultimately replace humans?

Artificial intelligence cannot replace humans, nor should it. Let me give an example: after the advent of cloning technology, every country banned human cloning. Why? Because of ethics, and ethics does not always rest on what we can ordinarily imagine.

It is just like our computers. Microsoft developed automatic updates very early on: after a computer has been used for a while, a prompt pops up to update it automatically. This has been widely criticized by experts in the computer field. Why? Because it homogenizes all computers.

What is the problem with homogenization? If a virus attacks a computer and everyone's updates are homogeneous, that virus can quickly make the whole world's computers crash at once. Similarly, if everyone had clones, cloning would lead to homogenization; if a deadly virus were to come along, it could cause large numbers of people worldwide to die of illness, and even wipe out humanity.

So if all technologies are trained uniformly by artificial intelligence, the results of that training will become homogeneous, and once a problem arises the consequences are unimaginable. Between artificial intelligence and humans there is no replacing one another and no superiority or inferiority; it is simply a division of labor in social development, and we have gained one more tool of production.

Artificial intelligence is responsible for the part governed by rules and regularities, while humans are responsible for creating the differentiated part. This, in fact, should be the way to drive the world's progress.

(The pictures in the article are provided by the interviewee)


