Misal Thakre, Satyam Kadu, Ronit Bera, Rohit Atre and Prof. Shreya Bhanse (2024) Detection of AI-Generated Images. International Journal of Trend in Scientific Research and Development, 8 (5). pp. 805-810. ISSN 2456-6470
Full text: ijtsrd69452.pdf (2MB); ijtsrd69453.pdf (1MB)
Abstract
Generative AI has attracted enormous interest due to new applications such as ChatGPT, DALL·E, Stable Diffusion, and Deepfake. In particular, DALL·E, Stable Diffusion, and others (Adobe Firefly, ImagineArt, etc.) can create images from a text prompt and are even able to produce photorealistic results. As a consequence, intense research has gone into new image forensics applications able to distinguish real captured images and videos from artificial ones. Detecting forgeries made with Deepfake is one of the most researched issues. This paper addresses another kind of forgery detection: its purpose is to distinguish photorealistic AI-created images from real photos taken with a physical camera; that is, to make a binary decision over an image, asking whether it was artificially or naturally created. Artificial images need not represent any real object, person, or place. For this purpose, techniques that perform pixel-level feature extraction are used. The first is Photo Response Non-Uniformity (PRNU), a characteristic noise caused by imperfections in the camera sensor that is commonly used for source camera identification; the underlying idea is that AI images will exhibit a different PRNU pattern. The second is Error Level Analysis (ELA), another type of feature extraction traditionally used for detecting image editing, and one that photographers now also apply to the manual detection of AI-created images. Both kinds of features are used to train convolutional neural networks to differentiate between AI images and real photographs. Good results are obtained, with accuracy rates over 95%. Both extraction methods are carefully assessed by computing precision, recall, and F1-score measurements.

The proliferation of AI-generated images, often referred to as deepfakes or synthetic media, has revolutionized how digital content is created, consumed, and shared.
While these advancements offer immense creative potential, they also pose significant challenges, particularly in terms of authenticity and misinformation. This paper explores the growing need for AI-generated image detection, especially in social media applications, and delves into the methods used for distinguishing between human-made and AI-generated content. It outlines the technical challenges, the potential societal impact, and strategies for implementing robust detection mechanisms in social media platforms.
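The abstract describes extracting a PRNU-style noise residual as the first feature fed to the CNN. A minimal NumPy sketch of that extraction follows; the 3x3 mean filter used as the denoiser and all function names are illustrative assumptions (published PRNU work typically uses a wavelet-based denoiser), so this is a sketch of the idea, not the paper's implementation:

```python
import numpy as np

def denoise(img):
    """Simple 3x3 mean filter as a stand-in denoiser (PRNU literature
    typically uses a wavelet-based denoiser instead)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def noise_residual(img):
    """PRNU-style noise residual W = I - F(I), where F is the denoiser."""
    img = img.astype(np.float64)
    return img - denoise(img)

def estimate_fingerprint(images):
    """Aggregate residuals over several images from the same source:
    K = sum(W_i * I_i) / sum(I_i ** 2), a maximum-likelihood-style estimate
    of the sensor fingerprint."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        intensity = img.astype(np.float64)
        num += noise_residual(intensity) * intensity
        den += intensity * intensity
    return num / (den + 1e-8)
```

Per-image residuals (or correlations against a camera fingerprint) can then be stacked as an input channel for the CNN classifier the paper trains.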
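The second feature the abstract names is Error Level Analysis: re-save the image as JPEG at a known quality and amplify the per-pixel difference, so regions with a different compression history stand out. A minimal Pillow sketch, assuming the standard ELA recipe (the function name and the quality/scale parameters are illustrative choices, not values from the paper):

```python
import io
from PIL import Image, ImageChops

def ela_image(path, quality=90, scale=15):
    """Error Level Analysis: re-save at a known JPEG quality, then amplify
    the difference between the original and the re-saved copy."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Scale up the (usually faint) differences for use as a CNN input
    return diff.point(lambda px: min(255, px * scale))
```

The resulting ELA map, like the PRNU residual above, can be fed to the convolutional network in place of (or alongside) the raw pixels.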
| Item Type: | Article |
|---|---|
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Divisions: | Postgraduate > Master's of Islamic Education |
| Depositing User: | Journal Editor |
| Date Deposited: | 23 Oct 2024 09:42 |
| Last Modified: | 23 Oct 2024 09:42 |
| URI: | http://eprints.umsida.ac.id/id/eprint/14310 |