How creative AI techniques became a threat to national security

In 2014, a 28-year-old American researcher named Ian Goodfellow published a paper titled Generative Adversarial Networks (GANs), presenting a powerful technique to generate brand-new images (e.g., human faces) with Artificial Intelligence.
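To give a flavour of the idea, the sketch below is a minimal, illustrative GAN written in PyTorch (a toy example of my own, not Goodfellow's original code). A generator network learns to produce convincing samples from random noise while a discriminator network learns to tell them apart from real data; both improve by competing against each other. Face-generating GANs follow the same adversarial recipe, just with much larger convolutional networks and image datasets.

```python
# Toy GAN: the generator learns to mimic a 1-D Gaussian distribution,
# the discriminator learns to distinguish real samples from fakes.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator

generator = nn.Sequential(
    nn.Linear(latent_dim, 16), nn.ReLU(),
    nn.Linear(16, 1),                      # outputs a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),        # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print("mean of generated samples:",
      generator(torch.randn(1000, latent_dim)).mean().item())
```

After training, the generated samples cluster around the mean of the "real" distribution: the generator has learned to fake data the discriminator can no longer reliably flag.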

Goodfellow's idea attracted a lot of attention from the academic and industry communities, and new research papers followed, improving the state of the art and, as can be seen in the images below, the perceived quality of the human faces that GANs generate.

Needless to say, none of these six people actually exists. However, for the last three, only a skilled computer forensics expert would be able to detect the deception.

GANs are making a tremendous positive impact in different domains, such as healthcare and the creative industries. Unfortunately, malicious applications of this apparently harmless technique are emerging and posing a threat to our personal and even national security.

Scams. Deep Learning has become a gold mine for criminal organizations, which combine hyperrealistic GAN-generated faces with voice synthesizers to run online scams. So-called romance scams, for example, cost Americans $143 million in 2018.

Revenge porn. One of the most shameful applications of this technology is ruining people's reputations by creating footage of sexual acts featuring the target person's face. According to this research, 100% of the victims of these videos were women.

Fake news. Deepfake videos are a threat to national security, according to the Pentagon. Defense and intelligence services fear scenarios targeting democratic processes (e.g., manipulating elections) and inciting crowd mobilization under false pretenses. This deepfake video of former US President Barack Obama is an example of the dangers this technique poses.

Improving deception detection techniques, together with stricter laws governing unethical uses of these technologies, is essential to limit the negative impact on our society.

Nevertheless, it is not only up to the research community and the authorities to tackle this issue. As citizens, we must strengthen our awareness of these threats and be more skeptical when consuming digital content from untrusted sources.