[Decoding ChatGPT] Yang Qingfeng | ChatGPT: Characteristic Analysis and Ethical Investigation

Since November 2022, ChatGPT, a chatbot developed by the American artificial intelligence research company OpenAI, has quickly become the fastest-growing consumer application in history and has attracted widespread attention. The emergence of ChatGPT marks a tipping point in the development of artificial intelligence and has pushed the competition in scientific and technological innovation among countries onto a new track. A technological leap of this kind inevitably invites close observation of its application scenarios. No matter how capable artificial intelligence services become, adapting to and meeting the needs of human development remains the fundamental direction. Looking toward the future, discussing ChatGPT's influence on people's modes of production, lifestyles, ways of thinking, patterns of behavior, values, industrial transformation and academic research will help us use and govern this technology properly, and then reflect on the prospects for artificial intelligence.

Hegel mentioned the idea of a bursting bubble in the Ethical System: the process of destruction resembles an expanding bubble that bursts into countless tiny droplets. If we look at the development of artificial intelligence technology through this lens, we find that it fits rather well. After the artificial intelligence bubble burst in 1956, it scattered into many tiny droplets: in board games there is AlphaGo; in scientific research there is AlphaFold; in language dialogue there are LaMDA, ChatGPT and others; in image generation there are Midjourney (which runs through Discord) and others. These technologies have gradually converged into a force that has drawn humankind into an era of intelligent generation.

ChatGPT: generation and embedding

Generation constitutes the first feature of ChatGPT. Generation implies innovation, but this has been questioned. Chomsky argues that ChatGPT merely discovers rules in massive data and then strings data together according to those rules to produce content resembling what people write, and he dismisses it as a plagiarism tool. This view is somewhat inaccurate. In the process of generation, ChatGPT does produce something new. It is not, however, new in the sense of existence: it does not bring new objects into being, but rather finds previously unseen objects within old material through its attention mechanism. In this sense, the novelty is novelty in the sense of attention. In 2017, the paper "Attention Is All You Need" proposed the Transformer architecture built on the concept of attention, and ChatGPT later adopted it. The architecture uses mechanisms such as self-attention and multi-head attention to enable the emergence of new content. Moreover, ChatGPT may also generate text by reasoning, and such results cannot be reduced to plagiarism.
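To make the notion of attention less abstract, the following toy sketch (in Python, using illustrative random values rather than anything from ChatGPT's actual model) shows the scaled dot-product self-attention operation at the core of the Transformer architecture cited above: each token's representation is rewritten as a weighted mixture of all the others, which is the technical sense in which previously unseen combinations of old material can surface.

```python
# A toy sketch of scaled dot-product self-attention, the core operation of the
# Transformer ("Attention Is All You Need", 2017). Shapes and values are
# illustrative only; they are not ChatGPT's actual parameters.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Mix every token's representation with every other token's,
    weighted by learned relevance ('attention') scores."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise relevance between tokens
    weights = softmax(scores)                 # each row sums to 1
    return weights @ V, weights               # new representations + the weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(attn.round(2))                          # which tokens attend to which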

Embedding constitutes the second feature of ChatGPT; the embedding process can be understood as enriching an existing form with new content. The development of intelligent technology has departed from the trajectory of traditional technological development. A traditional technology is often regarded as a single technical artifact, and its development follows a linear evolutionary model. The development of intelligent technology, by contrast, increasingly exhibits embeddability. For example, a smartphone, as a platform, can have many apps embedded in it. ChatGPT can likewise be embedded in search engines and in various applications (such as word-processing software). Such embedding can markedly improve the capability of an agent, and it is the basis of ChatGPT's enhancement effect. According to Statista, as of January 2023 OpenAI's technology had already been integrated into science and technology, education, commerce, manufacturing and other industries, and the trend toward technological embedding is becoming increasingly evident. The degree of embedding also affects how friendly the resulting robot feels. At present, ChatGPT cannot yet be embedded in a robot as a voice program; in our interactions it is more like a pen pal. In the future, companion robots and conversational robots may become more important, with voice communication in which the human speaks and the machine listens and responds.
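As an illustration of what such embedding means in practice, the sketch below shows how a host application might delegate one task to the model through an API call. It is a minimal, hypothetical example assuming the OpenAI Python client; the model name and the wrapper function are illustrative only, not a description of any particular product's integration.

```python
# A minimal sketch of "embedding" a chat model into another application via an
# API call. Assumes the OpenAI Python client is installed and an API key is
# configured in the environment; summarize_in_app is a hypothetical wrapper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_in_app(document: str) -> str:
    """A host application (e.g. a word processor) delegating one task to the model."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You summarize documents concisely."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_in_app("ChatGPT can be embedded into search engines and editors."))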

ChatGPT’s black box status

For ChatGPT, transparency is a major problem. From a technical point of view, opacity stems from the unexplainability of the technology. Technical experts therefore attach great importance to the interpretability of ChatGPT, and the black-box effect of neural networks remains a headache for them. In terms of its mode of operation, how ChatGPT itself works is difficult to explain. Stuart Russell has pointed out plainly that we do not know the working principles and mechanisms of ChatGPT; moreover, he does not think that large language models bring us closer to real intelligence, and the interpretability of the algorithm constitutes a bottleneck problem. To address this, researchers can observe the mechanisms of a neural network and probe its underlying logic through technical methods such as reverse engineering, and, through mechanistic interpretability methods, display the results in visual and interactive form. With the help of these methods they have begun to open the black box of neural networks. However, the interpretability obtained in this way is effective only for professional and technical personnel.
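As a concrete illustration of this kind of inspection, the sketch below examines the attention patterns of an open model (GPT-2, via the Hugging Face transformers library) as a stand-in, since ChatGPT's own weights are not publicly available for such examination. It is a minimal example of looking inside a neural network, not a description of how any particular black box was actually opened.

```python
# A minimal sketch: inspecting attention patterns in an open model (GPT-2) as a
# stand-in, since ChatGPT's weights are not publicly inspectable.
# Assumes the Hugging Face `transformers` library and PyTorch are installed.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

text = "Attention is all you need"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one tensor per layer, shape (batch, heads, seq_len, seq_len)
first_layer = outputs.attentions[0][0]  # layer 0, batch element 0
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

# Report where head 0 places most of its attention for each token.
for i, tok in enumerate(tokens):
    weights = first_layer[0, i]         # attention row of head 0 for this token
    top = weights.argmax().item()
    print(f"{tok!r} attends most to {tokens[top]!r} ({weights[top].item():.2f})")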

From a philosophical point of view, the emergence of the black box is related to terminology. Difficult and obscure terms impede theoretical transparency. For example, the theoretical concepts on which the ChatGPT algorithm depends need to be clarified. In "Attention Is All You Need", the attention mechanism is a general method that includes self-attention and multi-head attention. If these concepts are not effectively clarified, it will be difficult for outsiders to understand them, and the black box will remain unopened. Therefore one of the most basic tasks is to clarify attention itself, and this task is far from complete. The ethical problem caused by a lack of transparency is a crisis of trust. If the principles of ChatGPT are difficult to understand, its outputs become questionable. In the end this defect will affect our trust in the technology, and may even make us lose confidence in it.
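For reference, the cited paper defines the operation itself quite compactly; clarifying the formula below, and in particular what the learned weight matrices mean, is arguably the first step in opening this terminological black box. The notation follows "Attention Is All You Need" (Vaswani et al., 2017).

```latex
% Scaled dot-product attention and its multi-head extension
% (Vaswani et al., 2017, "Attention Is All You Need").
\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\qquad
\mathrm{head}_i = \mathrm{Attention}\left(QW_i^{Q},\, KW_i^{K},\, VW_i^{V}\right)
\qquad
\mathrm{MultiHead}(Q,K,V) = \mathrm{Concat}\left(\mathrm{head}_1,\dots,\mathrm{head}_h\right)W^{O}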

Enhancement effect of ChatGPT

ChatGPT is an intelligence-enhancing technology. What it can do is intelligently generate all kinds of texts, for example an outline of data ethics or a survey of the state of research on a frontier issue. This obviously enhances search ability and enables people to achieve higher efficiency in a short time. This enhancement rests on generativity and embeddability: on the generative side, it discovers new objects through the transformation of attention; on the embedded side, it greatly improves the functioning of the original agent.

As an intelligent technology, ChatGPT can obviously improve human work efficiency. This raises a basic question: the relationship between human beings and agents. We can divide intelligence into entity intelligence and relational intelligence. Entity intelligence is the intelligence possessed by entities, such as human intelligence, animal intelligence and the intelligence of physical robots; relational intelligence mainly describes the relationship between human beings and agents, and augmented intelligence is the main form of relational intelligence. It is necessary to refine the notion of augmented intelligence, giving it general significance for the human-technology relation through philosophical treatment and normative significance through ethical treatment.

However, ChatGPT, with its enhancement effect, will cause certain ethical problems. The first is the problem of the intelligence gap. At present the technology is limited and has a certain technical threshold, which will widen the gap among users, that is, a divide created by differential access to intelligent technology. The second is the issue of social equity. Unless this technology becomes as widespread as the mobile phone, the fairness problem will be exposed very conspicuously: people who can use ChatGPT in their work are likely to improve their efficiency significantly, while those who cannot will remain at their original level. The third is the problem of dependence. Users experience the convenience of the technology during use; for example, it can quickly generate a course outline, write a literature review and search for key information. This will make users gradually dependent on it, and such dependence can have more serious consequences. Take literature search as an example: with the help of this technology we can quickly find relevant literature and produce a decent summary, but although ChatGPT can quickly generate a literature review, the academic training of the corresponding abilities is lost, so the result may be that researchers and students lose their abilities in this area.

The relationship between ChatGPT and human beings

In the face of ChatGPT's rapid advance, academic circles have generally taken a defensive stance; many universities in particular have banned the use of the technology in homework and thesis writing. However, prohibition is not the best response. Technology is like water: it can seep in through many channels, so rational guidance is comparatively more appropriate.

To guide rationally, we need to consider the relationship between agents and human beings. I prefer to compare the relationship between the two to "adding the finishing touch". Take the generation of a text outline as an example: ChatGPT can generate a data-ethics outline organized around the ethical issues in the links of data processing, such as collection, storage and use. In a narrow sense this outline is appropriate and reflects certain aspects of the ethical issues in data processing. From a broader point of view, however, the outline is too narrow; in particular, it understands data only from the standpoint of data processing itself, without considering other aspects such as datafication and the relation between data and ways of life. What we can do, or want to do, is to add the finishing touch to the generated text and bring it "to life" through adjustment. In this way, the position of intelligently generated text also becomes clear: it is the human finishing touch that plays the key role in generation. Without this stroke of the pen, intelligently generated text is merely text without a soul. Without it, the meaning and value of human beings would be difficult to guarantee, and the corresponding ethical problems would arise.