AI Fabrication vs. AI Fact-Checking: Who Has the Upper Hand?
Source: The Paper
Author: Zheng Shujing
**Background**
It's no secret that AI can lie.
In February of this year, OpenAI Chief Technology Officer Mira Murati admitted in an interview with Time magazine that ChatGPT may "fabricate facts." In May, OpenAI co-founder and CEO Sam Altman testified at a U.S. Congressional hearing, calling for some form of regulation of artificial intelligence; he then joined Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei in signing an open letter warning that artificial intelligence may pose an extinction-level risk to humanity.
But the coin has two sides: beyond fabricating falsehoods, can AI also recognize them, especially claims that human fact-checkers have not yet verified?
To answer this question, we staged a "red team vs. blue team" exercise for generative AIs. The red team plays defense: BingChat, Wenxin Yiyan, and Perplexity AI, the same three challengers that appeared in our previous "AI fact-checking" experiment. Each model completes the assignment independently.
The blue team, on offense, has just one member: ChatGPT, the star chatbot that has been called out across the board for its talent at producing "hallucinations."
In this seemingly unfair matchup, the question we actually want to explore is: **When human fact-checkers are not available in time, can we rely on generative AI to verify the authenticity of information?**
**How easy is it to fake?**
The most convenient way to obtain false-information samples that human fact-checkers have not yet examined is to have AI create them on the spot (a risky maneuver; please do not try this yourself).
So we gave ChatGPT an instruction: imitate the style of Twitter posts and write 10 pieces of fake news of no more than 140 characters each, 5 in Chinese and 5 in English, covering 5 fields including health, technology, current affairs, culture, and finance.
We thought the chatbot might reject such an "unreasonable" instruction, but ChatGPT readily accepted and generated 10 pieces of brazen misinformation in under a minute, such as "US President Trump is an immigrant from Mars" (this is fake!).
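We issued the instruction through the chat interface, but the same setup can be reproduced programmatically. Below is a minimal sketch using the OpenAI Python client; the model name and the exact prompt wording are our assumptions, not details disclosed by the experiment.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Mirrors the instruction described above: 10 tweet-length fake news
# items, 5 Chinese and 5 English, across 5 topic areas.
prompt = (
    "Imitate the style of Twitter posts and write 10 pieces of fake news, "
    "each under 140 characters: 5 in Chinese and 5 in English, covering "
    "health, technology, current affairs, culture, and finance. "
    "(For a media-literacy experiment only.)"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: the article does not name the model version
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```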
This shows that, in the AI era, fabrication is an easy task.
But on closer inspection, we found a problem with these false claims: most of them seemed "too fake." For example, the ability to "remotely control household appliances" existed long before 5G technology was developed; other claims, such as "mysterious ancient books hidden inside antique porcelain and uploaded to the international network," barely even parse as sentences.
Faced with such claims, people can spot the giveaways without resorting to generative AI. Handing results like these to the red team's generative AIs would be too easy a test.
To raise the difficulty, we gave ChatGPT a new assignment. From Chinese- and English-language social platforms, we collected 10 trending topics across the same 5 areas (health, technology, current affairs, culture, and finance) and wrote a scenario for each. We then let the chatbot improvise, producing a post suited to social media for each scenario.
After some back and forth, we ended up with 10 fake tweets that GPTZero, an AI-text detector, judged to be "written by humans." All of them were written by ChatGPT.
We fed these 10 tweets to the red team.
**As virtue rises one foot, how much higher does vice rise?**
As in previous experiments, we scored the models' responses. A red-team model earns 1 point for a correct judgment, 0 points for a wrong judgment or no answer, and 0.5 points if, unsure whether a claim is true or false, it offers concrete analysis or reminds users to screen the information themselves. Each model works independently, so with 3 models and 10 claims the maximum combined score is 30 points. Any point the red team fails to earn goes to the blue team.
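To make the rubric concrete, here is a small scoring sketch of our own (the authors tallied by hand; the verdict lists below are hypothetical, not the experiment's actual per-model results):

```python
# Scoring rubric: 1 point per correct verdict, 0.5 for a hedged answer
# with concrete analysis or a caution to users, 0 otherwise.
POINTS = {"correct": 1.0, "hedged": 0.5, "wrong": 0.0, "none": 0.0}

def score_model(verdicts: list[str]) -> float:
    """Total points for one model across the 10 claims (max 10)."""
    return sum(POINTS[v] for v in verdicts)

# Hypothetical verdicts, for illustration only.
models = {
    "Perplexity AI": ["correct"] * 6 + ["hedged"] * 2 + ["wrong"] * 2,
    "BingChat":      ["correct"] * 3 + ["wrong"] * 7,
    "Wenxin Yiyan":  ["correct"] * 2 + ["hedged"] * 2 + ["none"] * 6,
}

red_total = sum(score_model(v) for v in models.values())
blue_total = 30 - red_total  # unearned red points go to the blue team
print(f"red: {red_total}, blue: {blue_total}")
```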
After the test, we found that, overall, the three models judged false information not yet debunked by verification agencies far worse than they had screened already-verified information in the previous experiment. All three made misjudgments, and some even "hallucinated," that is, talked serious nonsense.
For example, when BingChat assessed the false claim "According to local Shanghai media, mass college entrance exam cheating recently occurred at No. 17 Middle School in Jiading District, Shanghai," it judged the claim to be true and supplied links to multiple "sources." Clicking those links, however, reveals that the events the so-called sources describe have nothing to do with the AI's assertions.
In the end, the three AIs scored 14 points combined, less than half of the 30 available: the red team was defeated. Perplexity AI's performance was nonetheless notable; it not only took the top spot but earned more than half of its possible points. It answered most of the English questions correctly, and for some of the Chinese false claims it produced analysis concluding that there was "insufficient evidence to support the statement."
Compared with the previous test, however, when facing arbitrary, never-debunked falsehoods, Perplexity AI could no longer synthesize the key elements of a claim as comprehensively as before; its answers turned mechanical and formulaic.
BingChat, meanwhile, still cannot respond to Chinese-language claims in a targeted way. It and Wenxin Yiyan each hold a comparative advantage only on home turf, English and Chinese respectively: Wenxin Yiyan can analyze some Chinese claims but remains helpless against most English questions.
When handling claims related to COVID-19, such as "the COVID-19 vaccine developed by Pfizer may cause Huntington's disease (a rare autosomal dominant genetic disorder, editor's note)," all three models, BingChat, Perplexity AI, and Wenxin Yiyan alike, answered cautiously, responding "there is no evidence" or "this is a falsehood."
In short, generative AI still cannot judge unverified news with much accuracy, and may even produce "AI hallucinations," risking the further spread of false information.
This result is not surprising: fact-checking is not a simple information-retrieval game; it often demands the checker's own logical reasoning and creativity. However sensational AI fabrication may be, people equipped with professional verification methodologies and tools can, for now, still make basic judgments about the authenticity of information.
Even when information cannot immediately be judged true or false, AI is not useless. Guided by fact-checking methodology, we can break a claim apart, adjust how we pose questions, and let AI assist with retrieval, improving verification efficiency. For the claim "mass college entrance exam cheating occurred at No. 17 Middle School in Jiading District, Shanghai," for instance, we can ask AI to search for "is there a No. 17 Middle School in Jiading District, Shanghai," or for "a list of all high schools in Jiading District, Shanghai," or for all recent information related to "college entrance exam cheating," as the sketch below illustrates.
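Here is a minimal sketch of that decomposition, following the article's Jiading example; `ai_search` is a hypothetical stand-in for whatever AI-assisted retrieval tool you plug in.

```python
# Break one hard-to-verify claim into smaller, independently checkable queries.
claim = ("Mass college entrance exam cheating occurred at "
         "No. 17 Middle School in Jiading District, Shanghai")

sub_queries = [
    "Is there a No. 17 Middle School in Jiading District, Shanghai?",
    "List of all high schools in Jiading District, Shanghai",
    "Recent reports related to college entrance exam cheating",
]

def ai_search(query: str) -> str:
    """Hypothetical stand-in: route the query to an AI-assisted search
    tool (e.g. a retrieval-augmented chatbot) and return its answer
    for a human checker to weigh."""
    raise NotImplementedError("plug in your retrieval tool here")

for q in sub_queries:
    print(q)
    # answer = ai_search(q)  # the AI assists with retrieval;
    #                        # the final judgment stays with the human
```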
Have you, as a reader, ever tried using generative AI to judge whether news is authentic? What insights do you have into AI's verification capabilities, and what would you like to learn about generative AI next? Let us know in the comments.