“Just yesterday, Google released its latest artificial intelligence technology, FaceGAN.
Also released at the same time was a website called "These People Don't Exist".
Readers may well suspect this is all an elaborate prank, because these portraits look so real and lifelike.
Cute children, beautiful ladies, handsome gentlemen, weathered old men: none of them seem any different from the people in our own lives.
But it turns out that these images were not taken in real life, but generated using cutting-edge artificial intelligence technology.
We should be deeply concerned about this.
FaceGAN is a combination of two terms: Face and GAN. GAN, the generative adversarial network, was proposed two months ago by Meng Fanqi, a young artificial intelligence scholar from China.
This is an innovative artificial intelligence method. Two months ago, GAN was just a groundbreaking idea in the field.
But two months later, FaceGAN, applied to face tasks, has demonstrated astonishing generative quality and seemingly limitless potential.
At this rate, it's reasonable to believe that anyone with a computer and the Internet can create realistic photos and videos of people saying and doing things they don't actually say or do.
Even absurd claims may come backed by seemingly credible "evidence."
Although impressive, current FaceGAN output is still not comparable to real high-definition photos; on closer inspection, you can usually tell that a photo was generated by artificial intelligence.
But the technology is advancing at an alarming rate. Experts predict that it won’t be long before people will be unable to distinguish AI-generated content from real images. "
This is Forbes’ report, which is generally fair.
Although the report is overly optimistic about the speed of subsequent artificial intelligence development, that is a typical layman's mistake and entirely understandable.
But over at CNN, the reporting style is completely different.
One passage in particular was downright offensive.
“The first use case where generative technology like this is going to be widely adopted — and that’s often the case with new technologies, whether you want it to be or not — is going to be pornographic content.
Generative erotic content is almost always non-consensual. From some dark corners of the Internet, such generative technologies will gradually spread from the erotic field to the political field and cause greater chaos.
It doesn't take much imagination to understand the harm that could be done if everyone could be shown false content that they believed to be [true].
Imagine generative fake footage of politicians engaging in bribery or sexual assault before an election; or U.S. soldiers committing atrocities against civilians overseas; or President Ok-Kwan Hai announcing the launch of nuclear weapons against North Korea.
In such a world, even if there is some uncertainty about whether these segments are real, the consequences could be catastrophic.
Thanks to the ubiquity of the technology, anyone can produce footage like this: state-sponsored actors, political groups, independent individuals.
Such footage could distort democratic discourse, rig elections, erode trust in institutions, undermine journalism, exacerbate social divisions, endanger public safety, and cause irreparable damage to the reputations of high-profile individuals, including elected officials and candidates for public office.
In the past, if you wanted to threaten the United States, you needed 10 aircraft carriers, nuclear weapons, and long-range missiles.
Today...all you need is the ability to create a very realistic fake video that could undermine our election, which could plunge our country into a massive internal crisis and weaken us deeply.
These things are in the near future.
If we can't trust the video, audio, images and information collected from around the world, that's a serious national security risk.
It almost doesn’t matter whether the images and videos are real or not. Powerful generative technologies will make it increasingly difficult for the public to distinguish between what is real and what is fake, and political actors will inevitably exploit the situation—with potentially devastating consequences. "
Meng Fanqi felt almost numb by the time he finished reading. No wonder Trump liked to say CNN is Fake News.
It was beyond outrageous. FaceGAN was, for now, just a technology for generating low-resolution facial images, yet CNN made it sound more sinister than an aircraft carrier.
What used to require “10 aircraft carriers, nuclear weapons and long-range missiles” now only requires the ability to make fake videos?
By that logic, if Meng Fanqi kept developing artificial intelligence for two more years, he could conquer the United States like something out of the Omnic Crisis, right?
The report said nothing serious and never touched on the actual technology; it did nothing but peddle anxiety.
Meng Fanqi could feel his blood pressure rising.
The Wall Street Journal report is the most technical:
“The core technology that makes it possible to generate such realistic images is the generative adversarial network, which was announced by Meng Fanqi in October 2013.
Hinton and Bengio, both godfathers of artificial intelligence, praised the work highly, calling it the most interesting idea in the field in the past decade.
Before the emergence of GANs, neural networks were good at classifying existing content, language, speech, images, etc., but were not good at creating new content at all.
Meng Fanqi not only gave the neural network the ability to perceive, but also gave it the ability to create.
Meng's conceptual breakthrough was to build a GAN using two separate neural networks—one called a "generator" and the other a "discriminator"—and pit them against each other.
Starting from a given dataset (for example, a collection of photos of faces), the generator starts generating new images that are mathematically similar in terms of pixels to existing images. Meanwhile, the discriminator is fed photos without being told whether they come from the original dataset or the output of the generator; its task is to identify which photos are synthetically generated.
As the two networks were pitted against each other again and again, the generator trying to fool the discriminator and the discriminator trying to expose the generator's fakes, they honed each other's abilities. Eventually the discriminator's classification accuracy dropped to 50%, no better than random guessing, meaning the synthetically generated photos had become indistinguishable from the originals.
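The adversarial loop the article describes can be sketched in miniature. The following is a toy illustration only, not the actual FaceGAN implementation: a two-parameter "generator" G(z) = a·z + b learns to imitate samples drawn from a Gaussian centred at 4.0, while a logistic "discriminator" D(x) = sigmoid(w·x + c) tries to tell real samples from generated ones. All gradients are worked out by hand, so only the standard library is needed.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# "Real" data: samples from a Gaussian centred at 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator G(z) = a*z + b, latent z ~ Uniform(-1, 1).
a, b = 0.5, 0.0
# Discriminator D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr_d, lr_g, batch, steps = 0.05, 0.02, 16, 3000

for step in range(steps):
    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    gw = gc = 0.0
    for _ in range(batch):
        xr = real_sample()
        dr = sigmoid(w * xr + c)
        gw += (1.0 - dr) * xr          # d/dw of log D(xr)
        gc += (1.0 - dr)               # d/dc of log D(xr)
        z = random.uniform(-1.0, 1.0)
        xf = a * z + b
        df = sigmoid(w * xf + c)
        gw += -df * xf                 # d/dw of log(1 - D(xf))
        gc += -df                      # d/dc of log(1 - D(xf))
    w += lr_d * gw / batch
    c += lr_d * gc / batch

    # Generator update: ascend log D(fake) (the non-saturating loss).
    ga = gb = 0.0
    for _ in range(batch):
        z = random.uniform(-1.0, 1.0)
        xf = a * z + b
        df = sigmoid(w * xf + c)
        ga += (1.0 - df) * w * z       # chain rule through D and G
        gb += (1.0 - df) * w
    a += lr_g * ga / batch
    b += lr_g * gb / batch

print(f"generator offset b = {b:.2f} (real mean is 4.0)")
```

After training, the generator's offset `b` drifts toward the real data's mean of 4.0: exactly the cat-and-mouse dynamic described above, where the discriminator's feedback is what teaches the generator to forge convincingly. The network names, parameters, and learning rates here are illustrative choices, not values from the story's FaceGAN.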
The same holds in our reality. Whenever we find a way to identify generated fake content, the generating side can quickly adapt. Like a cat-and-mouse game, our future confrontation with generative fakery will, just as in the GAN method itself, keep making the generative models more powerful. "
"I am super, this is a pure philosopher." Meng Fanqi was shocked after reading it. This final sublimation was something he did not expect.
After browsing around on Twitter again, Meng Fanqi realized why this technology suddenly attracted so many people's attention.
It turned out that a performance artist had browsed the "These People Don't Exist" website, picked a few pictures from it to use as avatars, and started live-streaming chats online.
Many of the people he talked to commented on those avatar images, but not one of them doubted their authenticity.
Under the gaze of millions of onlookers, FaceGAN's reputation was hyped to a level its actual strength did not yet deserve.
As the first author of both GAN and FaceGAN, Meng Fanqi was now hugely popular on Twitter.
He was dazzled by countless questions and @mentions. Even the mainstream media outlets that had run the articles invited him for interviews, through Twitter messages or via Google's channels.