IAS (Image Authentication Software)

IAS is a speculative piece of software that rates the authenticity of an image by analyzing its metadata.

Abstract

IAS_Titelbild.png

Continually improving image-altering and, more recently, AI-driven image-generating technologies raise the question of the authenticity of images. So far, there is hardly any way to distinguish 'artificial' images from 'real' ones. IAS is a fictive piece of software that aims to help with this problem by analyzing the metadata of an image, including AI data, to rate its authenticity.

The Problem

In spring 2023, anyone active on social media or following the news could witness a wave of confusion and fascination when several images began appearing in our feeds. These images often depicted important political figures engaging in peculiar actions. For instance, around the time Vladimir Putin visited China for a diplomatic meeting with China's President Xi Jinping, images surfaced showing Putin falling to his knees before Xi. Given the context of Russia's conflict with Ukraine, this triggered a significant global reaction filled with fear and strong emotions. It also sparked fascination and concern, because this image wasn't just fake; it was created by an AI image-generation program. Similar scenarios abounded: an image portraying an explosion at the Pentagon, Joe Biden playing in the rain, or Elon Musk holding hands with GM CEO Mary Barra. AI image-generation programs are easy to access and use, and the results are often astonishingly realistic. However, there is currently no regulation or rule that makes it possible to distinguish AI-generated images from 'genuine' ones.

This is problematic because images are naturally perceived as a reliable source of information. Images, especially photos, are regarded as truthful media, hence the popular saying 'pics or it didn't happen.' Research has shown that trust in information increases when it is accompanied by an image. This reliance on images has been exploited previously as a tool to shape 'public truth,' as seen when Stalin altered images of Lenin speaking to a crowd, erasing two individuals on stage next to Lenin who had fallen out of favor with Stalin.

Following image editing programs and filters, AI has become the new, alarmingly effective, and easy way to create false scenarios that are difficult to distinguish from real images. There is, and will continue to be, a pressing need for methods to differentiate truth from falsehood.

The Software

So, how do you rate the authenticity of an image? The fictional software IAS aims to address this challenge. In its future scenario, Exif information is stored in the metadata of AI-generated images as well as 'real' ones. The software visualizes this Exif information and offers AI-supported, data-based research possibilities to fill in data gaps. It can therefore analyze images created with both good and bad intentions, even when information is missing, and it provides an estimated, well-reasoned probability of the extent to which the analyzed image might be altered or fake.
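
As a rough illustration of the starting point, here is a minimal sketch of how such a tool could read the Exif data it would then visualize, assuming Python and the Pillow library (neither is from the actual project; the analysis IAS layers on top of this is speculative):

    from PIL import Image
    from PIL.ExifTags import TAGS

    def read_exif(path: str) -> dict:
        """Return the Exif data of an image as a {tag name: value} dict."""
        with Image.open(path) as img:
            raw = img.getexif()  # an empty container if the file carries no Exif
        return {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in raw.items()}

    # Example: print every Exif field of a (hypothetical) file.
    for tag, value in read_exif("example.jpg").items():
        print(f"{tag}: {value}")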

The software is designed as a native Mac application, intended to feel like a professional, everyday power tool for detailed information filtering. It can serve as a precautionary step before the provided information is used further, especially in public channels, and it underscores the importance of handling potential misinformation in our daily lives.

Exif Information for AI-Generated Images

Up until today*, there have been no regulations on how to label or identify an AI-edited, altered, or artificially created image. We have relied solely on our judgment of what is depicted. Of course, contextual research can sometimes provide clarity, as can searching for signifiers that an image might be AI-generated, such as anomalies in the image (hands with six fingers, overly smooth surfaces on clothes, the uncanny-valley effect, etc.). However, since AI software is continually being optimized and developed, these methods cannot be relied upon now or in the future.

* The EU's newly announced AI Act provides initial insight into how AI software and its use might be regulated. Inspired by these regulations (which were still under discussion when this project began), we have defined some rules ourselves:

  1. In our scenario, the AI Act has already passed and includes a risk-level categorization for each piece of AI software, ranging from high-risk to low-risk, based on the training data used and the potential bias in the generated images.
  2. Exif information (similar to what's used for photos taken with cameras) is now mandatory when an image is created or altered by AI.

The AI-Exif record contains the following fields:

  • Time and date of image creation
  • Time and date of image alteration
  • Software used
  • Owner of the software used
  • Risk label under the AI Act (our version)
  • The prompt that created the AI image

When Exif information is missing, this hints at a process the image went through (screenshots, multiple uploads) or at deletion in bad faith. IAS considers both possibilities.
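
To make this concrete, here is a minimal sketch of how such an AI-Exif record could be modelled, following the field list above; the names and risk categories are our own illustration, since no such standard exists today:

    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum
    from typing import Optional

    class RiskLabel(Enum):
        """Risk tiers loosely modelled on the EU AI Act's categorization (assumed)."""
        LOW = "low"
        LIMITED = "limited"
        HIGH = "high"

    @dataclass
    class AIExifRecord:
        created_at: Optional[datetime]   # time and date of image creation
        altered_at: Optional[datetime]   # time and date of image alteration
        software: Optional[str]          # software used
        software_owner: Optional[str]    # owner of the software used
        risk_label: Optional[RiskLabel]  # risk label under our version of the AI Act
        prompt: Optional[str]            # the prompt that created the AI image

        def missing_fields(self) -> list[str]:
            """Empty fields hint at stripping (screenshots, re-uploads) or at
            deletion in bad faith; IAS flags both for closer inspection."""
            return [name for name, value in vars(self).items() if value is None]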

Visualizing Meta Information

Our software provides an overview of the information within an image on the dashboard. The Score, which estimates the authenticity of the image, is displayed at the top of the sidebar. It ranges from A to E, with E representing the least trustworthy rating. A more detailed version can be found in the top right corner of the dashboard, listing the criteria the software uses to generate the Score. Any form of image alteration (editing, cropping) has a negative impact on the Score, and missing information is also rated negatively. Transparency and information that 'makes sense' have a positive influence, while contradictory information is rated negatively. If the Score falls below D, or if certain criteria require further research or human evaluation, the software triggers an alarm.

dashboard_active_image_preview.png
dashboard_alarms_going_off.png
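
A toy version of this scoring logic might look like the following sketch; the point weights and grade thresholds are assumptions for illustration, since the project does not spell out exact heuristics:

    def score_image(was_altered: bool, missing_fields: int,
                    consistent_fields: int, contradictions: int) -> str:
        """Map the dashboard criteria to a grade from A (best) to E (worst)."""
        points = 100
        if was_altered:
            points -= 25                  # any editing or cropping counts against the image
        points -= 10 * missing_fields     # gaps in the metadata are rated negatively
        points += 5 * consistent_fields   # transparent, plausible information helps
        points -= 20 * contradictions     # contradictory information is rated negatively
        grade = "E"
        for letter, cutoff in zip("ABCD", (90, 70, 50, 30)):
            if points >= cutoff:
                grade = letter
                break
        if grade == "E" or contradictions > 0:
            print("ALARM: further research or human evaluation needed")
        return grade

    # Example: an edited image with three missing fields and one contradiction.
    print(score_image(True, 3, 1, 1))  # raises the alarm and grades the image "D"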

There are different ways to dig deeper into the metadata of the image.

The Meta Data Viewer offers the user a chronological timeline of the Exif information and overall access to the data at hand. It also visualizes AI-Exif information, if available, in a similar timeline structure, with each step representing an AI program that was used.

ai_meta_data_viewer.png
meta_data_viewer.png
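
As an illustration, a timeline like the one in the Meta Data Viewer could be assembled roughly as follows; the event tuples are assumed stand-ins for parsed (AI-)Exif entries:

    from datetime import datetime

    def build_timeline(events: list[tuple[datetime, str]]) -> list[str]:
        """Sort (timestamp, description) pairs chronologically, one line per step."""
        return [f"{ts:%Y-%m-%d %H:%M}  {description}" for ts, description in sorted(events)]

    steps = [
        (datetime(2023, 5, 12, 10, 5), "altered with an AI inpainting tool (high-risk label)"),
        (datetime(2023, 5, 12, 9, 30), "created with a digital camera"),
    ]
    for line in build_timeline(steps):
        print(line)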

Further research can be done by comparing publicly available online data with the existing meta information of the image in question. For example, the time and place of an image can be used to find images online that carry information about the same time and place. The contents can then be compared and filtered.

map_surrounding_images.png
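
A sketch of how such a comparison could be narrowed down: given the time and GPS position from the Exif data, filter candidate images to those taken nearby around the same time. The record format and thresholds here are assumptions:

    from datetime import datetime, timedelta
    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Great-circle distance between two coordinates, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def nearby_images(target: tuple[datetime, float, float],
                      candidates: list[tuple[datetime, float, float]],
                      max_km: float = 1.0,
                      max_dt: timedelta = timedelta(hours=2)) -> list:
        """Keep candidates whose time and place roughly match the target image."""
        t_time, t_lat, t_lon = target
        return [c for c in candidates
                if haversine_km(t_lat, t_lon, c[1], c[2]) <= max_km
                and abs(c[0] - t_time) <= max_dt]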

The software also offers AI-supported data reconstruction. If information is missing or conflicting, data reconstruction helps analyze and interpret the data available: for example, you can re-create possible prompts and/or compare the results of different AI models given the same prompt. You can also have the visual image content analyzed and checked for hints of AI image generation that a human eye might miss. All this information helps you build a well-founded understanding of how trustworthy an image is, and of why and where it comes from.

promt_generator_loading.png
ai_model_analysis.png
content_analysis_loading.png
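
One simple way the visual comparison step could work is a perceptual hash: the sketch below condenses each image to a 64-bit average hash (aHash) whose Hamming distance indicates visual similarity, e.g. between the questioned image and test renders from different AI models given a reconstructed prompt. The file names are hypothetical, and a real forensic pipeline would need far more robust techniques; this is purely illustrative:

    from PIL import Image

    def average_hash(path: str) -> int:
        """64-bit aHash: shrink to 8x8 grayscale, threshold at the mean brightness."""
        img = Image.open(path).convert("L").resize((8, 8))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes (0 = visually near-identical)."""
        return (a ^ b).bit_count()

    # Example: compare the questioned image with a test render (hypothetical files).
    distance = hamming(average_hash("questioned.jpg"), average_hash("model_render.jpg"))
    print(f"Hamming distance: {distance} (lower = more similar)")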

Department

Interfacedesign

Project Type

Student project in the second study period

Supervision

Prof. Boris Müller

Associated Workspace

Wicked Problems and Speculative Software

Period of Creation

Summer semester 2023