
Survey Shows White Faces from AI Convincingly Mistaken for Human More Than Actual Photos

An international study found that AI-generated white faces are judged to be real human faces more often than photographs of actual white people.


A recent international study has reported a striking finding concerning artificial intelligence's ability to replicate human faces. A group of researchers from various distinguished institutions, including the University of Amsterdam and the University of Aberdeen, has concluded that white faces generated by artificial intelligence are being mistaken for real human faces at a higher rate than actual photographs of white individuals.

Biases in AI-Created Images

The study aimed to understand how accurately people can differentiate between AI-generated and real human faces, with a particular focus on racial aspects. A significant discovery was that the algorithm, primarily trained using images of white individuals, created white AI faces that people perceived as more realistic than their genuine counterparts. In the analyses, 66% of these AI-generated white faces were identified as human compared to 51% for photos of real white people.

Diverse Implications of AI in Image Recognition

Beyond the striking finding that white AI faces were more often judged human, the study also revealed no discernible difference in recognition accuracy between AI-generated faces and photographs of people of color. This disparity suggests consequences that reach beyond technology into social issues such as racial bias in image recognition used for important purposes like searching for missing children or providing online therapy.

In the study's methodology, white adult participants were shown 100 AI-generated white faces and 100 photographs of white human faces. Each participant classified every face as either AI or human and rated their confidence in the decision on a scale from 0 to 100. The participants' racial background seemingly did not influence the outcomes. In a second component of the research, participants rated both AI and human faces across 14 attributes, including age and facial symmetry, without being told which faces were AI-generated. From these ratings, the researchers concluded that familiarity, memorability, and facial proportionality primarily drove the erroneous perception of AI faces as human.
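The core analysis described above amounts to tallying, for each face type, the proportion of trials on which participants judged the face to be human. The following is an illustrative sketch only, not the study's actual analysis code; the trial records and field names are hypothetical stand-ins for the real dataset.

```python
# Illustrative sketch (hypothetical data, not the study's code): computing how
# often each face type is judged "human" from per-trial responses.
from collections import defaultdict

# Hypothetical trial records: (face_type, judged_human, confidence 0-100)
responses = [
    ("ai", True, 80), ("ai", True, 65), ("ai", False, 40),
    ("human", True, 90), ("human", False, 55),
]

def judged_human_rate(trials):
    """Return, per face type, the fraction of trials judged to be human."""
    counts = defaultdict(lambda: [0, 0])  # face_type -> [judged_human, total]
    for face_type, judged_human, _confidence in trials:
        counts[face_type][1] += 1
        if judged_human:
            counts[face_type][0] += 1
    return {ft: hits / total for ft, (hits, total) in counts.items()}

rates = judged_human_rate(responses)
```

In the study's reported results, this kind of tally produced 66% for AI-generated white faces versus 51% for photographs of real white people.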

Advancing AI While Preventing Bias

Echoing the study's concerns, Dr. Clare Sutherland, a co-author, emphasized the significance of confronting biases within artificial intelligence systems, underscoring the urgency of ensuring that AI advancements do not inadvertently exclude or disadvantage individuals based on ethnicity, gender, age, or other identifying characteristics. Notably, the research team also developed a machine learning system capable of distinguishing AI-generated faces from real human faces with 94% accuracy, further illustrating the gap between human and machine perception.

AI images are potentially dangerous because they can create fake or distorted content that damages the reputation, privacy, or safety of people or groups. Deepfake videos, for instance, can show people doing or saying things they never did, such as insulting someone, admitting to a crime, or engaging in sexual acts. AI images can likewise be used to impersonate someone for harmful ends such as spying, fraud, or extortion, and they can erode trust in information sources such as news media, social media platforms, and public figures. It is therefore important to be aware of the existence and potential impact of AI images and to verify the truthfulness and accuracy of the content we see online.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
