They are all fake

“Hi there! I’ll be here to answer any of your questions,” reads a chatbot, with a photograph of a smiling employee beside it. Were you always convinced that these ‘employees’ were real? Yes, so was I. Let me tell you: we were wrong. These ‘people’ are made by artificial intelligence companies that sell images of computer-generated faces. Companies that want to ‘increase’ their diversity, or dating apps that need women, can now create fabricated personas to achieve this. It seems ideal, right? But is it?

The video above is an advertisement for the website Generated.Photos. Here, companies can purchase a fake person for their enterprise for just $2.99. It is possible to adjust the person’s appearance, such as age, hair length, and ethnicity. Rosebud.AI provides a website where clients can create their own virtual characters. If clients prefer, their animated creations can even be given a customized voice and speak.

These fake faces are made possible by a generative adversarial network, a relatively new type of artificial intelligence. After many pictures of real people are fed into the program, one part of the system studies them and tries to replicate their features in new portraits, while another part tries to tell which images are fake and which are real. The two push against each other until the fakes become convincing enough to mislead the human eye.
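For readers curious what that push-and-pull looks like in practice, here is a minimal, hypothetical sketch of a generator and a discriminator training against each other in PyTorch. It is not the system used by Generated.Photos or The New York Times; the network sizes and the random stand-in “real photos” are placeholder assumptions purely for illustration.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # placeholder sizes (assumption)

# Generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: guesses whether an image is real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.rand(32, img_dim) * 2 - 1   # stand-in for a batch of real photos
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: learn to label real images 1 and generated images 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce fakes the discriminator calls real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the loop repeats, the generator only “wins” by producing images the discriminator can no longer distinguish from the real ones, which is exactly why the resulting faces can fool us too.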

The New York Times created its own A.I. system in order to show how easy it is to generate such images. Basically, the A.I. system represents each face as a list of numbers, a point in a mathematical space. To adjust, for instance, the size of a nose, different values can be chosen. When a smile is preferred, the system generates a smiling and a non-smiling picture to use as the start and end points; between these two endpoints, the strength of the smile can be adjusted.
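A rough, hypothetical sketch of that start-and-end-point idea is blending between two latent vectors, one assumed to decode to a neutral face and one to a smiling face. The generator below is a placeholder standing in for a trained model, not the Times’s actual system.

```python
import torch
import torch.nn as nn

latent_dim = 64
# Placeholder generator standing in for a trained face model (assumption);
# a real system would use the trained generator of a GAN like the one above.
generator = nn.Sequential(nn.Linear(latent_dim, 28 * 28), nn.Tanh())

neutral = torch.randn(latent_dim)  # code assumed to decode to a non-smiling face
smiling = torch.randn(latent_dim)  # code assumed to decode to a smiling face

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Linear interpolation: t = 0 gives the non-smiling endpoint, t = 1 the full smile.
    blended = (1 - t) * neutral + t * smiling
    face_image = generator(blended)  # decode the blended code into an image
```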

This A.I. technology is improving at a high rate: in 2014, generated faces still looked like characters from The Sims. The New York Times suggests it is not odd to envision a future with whole collections of fake portraits. A downside of this development is that these faces empower a new generation of people with agendas. Catfishers, scammers, and spies can now use a friendly face as a mask to stalk and troll online. The photos also make it easy to create fake personas, disguise hiring biases, and feign the diversity of a company.

This realization can be frightening, as we often cannot distinguish the fake faces from the real ones. Naturally, we assume these images are real and not created by computers. As Elana Zeide, a fellow in A.I., law, and policy at UCLA’s law school, said: “There is no objective reality to compare these photos against. We’re used to physical worlds with sensory input.” Erosion of trust across the internet is already happening, and with this in mind it will only get worse, as campaigns, media, and news can appear truthful when they could, in fact, be computer-generated.

As The New York Times beautifully stated, “Artificial intelligence can make our lives easier, but ultimately it is as flawed as we are, because we are all behind it.” We often trust these systems, but just like humans they can be imperfect and be used with the wrong intentions.

Sources:

  • generated.photos
  • https://www.washingtonpost.com/technology/2020/01/07/dating-apps-need-women-advertisers-need-diversity-ai-companies-offer-solution-fake-people/
  • https://www.theverge.com/2019/9/20/20875362/100000-fake-ai-photos-stock-photography-royalty-free