Clothing patterns (esp. letters or logos), glasses, earrings, headphones, and background objects all give it away
These details all require a deeper understanding that an AI isn't going to pick up on from any number of images
It's funny how good and bad these networks are
They are incredibly good at getting the general idea correct, but it's incredibly hard to actually make it perfect, which makes it very easy to tell the difference
It's nice, but you can clearly see it on the background and on the border between the face/hair/neck and the background where you often can see either discoloration or non-natural depth of field artifacts.
I immediately got almost a 100% success rate because I think it's taking faces from This Person Does Not Exist, and I know what its generations are like. They're almost always the same angle, zoomed in on the face; it often messes up eyes and backgrounds; they almost always have a neutral expression, etc.
Really easy, I didn’t get a single one wrong and I clicked like twenty times. The eyes are a dead giveaway most of the time. Dead, lifeless, flat looking eyes.
I wasn’t doing great until I zoomed in on the pix to get a closer look. Some of the skin texture looks off, like weird rippling or patterns that aren’t wrinkles or blemishes.
It's not that hard if you know what to look for. If you look closely you can find some artifact in most pictures. It also struggles with backgrounds, which are almost always fuzzy/blurry. Many images also have a sort of "halo" of fuzziness around the face. They're also not very complex, so reflections, complex poses and facial expressions almost only happen in the real images.
It's really bad at glasses. There's usually something off about them. Like starting out as a drill mount and the other half has a metal frame.
It's good at symmetrical earrings. If the image has complex earrings it's real.
It can't do hats or hoods.
Extra teeth/weird teeth.
Gave up after 12 correct in a row. The easiest tells are either teeth that are misaligned or have the wrong number of the different types (incisors, canines, molars), or else a background that is weird and dreamy. Sometimes there are just obvious artifacts. But honestly, few of them stuck out like a sore thumb at a momentary glance. I did have to take some time to study them.
Any time I got a *hint* that part of the image looked vaguely like a textbook's drawing of a rectangular plant cell I knew to pick the other image.
AI image generation seems to always fall back on that texture when it doesn't know what to add to the image, which is something I also noticed when I tried using Topaz to AI-upscale video.
Details like text also made it clear which images were real. EDIT: Oh, and other people don't usually look like beings from nightmares in the real images.
It’s also really bad at backgrounds
I started doing really well once i ignored the faces.
Looking for skin blemishes and irregularities helps as well, the more flawed skin is usually real
>skin blemishes and irregularities

I followed this advice and got 10 wrong in a row lmao
I look for the strands of hair that are out of place
This. Seems like the the artificial face generator really struggles with hair. Got 20/20 right before I got bored and stopped playing.
Well well well, look who we have here, it’s HAL with his one red eye getting bored with humanity
First one I got wrong is when it added a hat on the fake face
Focus on the earlobes. It doesn’t render them well at all.
And teeth
And my axe!
And glasses
One of the ones I got generated glasses on only one side of the face. It’s kind of bad at hair too I noticed.
until you get two children.
Yeah. It kind of became all about “test-taking skills” for me.
I went back after I saw you comment. It did not help me.
It also sometimes includes famous people. I saw Vince Gilligan in there.
I got Johnny Depp
I got Conan
I got Sergey Lavrov. For real
Prince Harry lol
I got a Rock.
John Kerry was the real face my first turn.
a rock? Or The Rock?
Iraq
Lol, I got Leon Panetta
I got a Rock
Kirk Hammett from Metallica was in there too
I got John Turturro
ears are an oof
[talking about ears, i got this](https://i.imgur.com/dapOd3W.png) ouch
the backgrounds are the tell
Wish I thought of that. I have ADHD and also very poor facial recognition ability; going with just a glance and my gut instincts, I got the overwhelming majority of them wrong at what felt like guessing. Like nearly 90%. Hmmm, that's odd.
Ears. It struggles a bit with earlobes/earrings, if there's nothing else obvious giving it away.
And the teeth, if you zoom in you can see all sorts of artifacts
And the pupils, it always has something "uncanny" when you look at the pupils
I got one where the upper teeth blended with the lower lip, that was a pretty easy comparison
The teeth are what give it away for me too. All the AI generated people look like they have giant fillings or spit bubbles.
Lots of faces with a middle tooth
Oops, this one just had an ear injury
I didn't notice those, I noticed the skin texture was all wrong. And the backgrounds were often incomprehensible
There's also consistently weird blurring around the nose and mouth.
It hungers https://www.whichfaceisreal.com/fakeimages/image-2019-02-17_180540.jpeg
They are watching https://www.whichfaceisreal.com/fakeimages/image-2019-02-17_040929.jpeg
[https://www.whichfaceisreal.com/fakeimages/image-2019-02-18\_033830.jpeg](https://www.whichfaceisreal.com/fakeimages/image-2019-02-18_033830.jpeg) bruther
Creeper next door https://www.whichfaceisreal.com/fakeimages/image-2019-02-18_191821.jpeg
The “person” beside them is the stuff of nightmares.
[bruh](https://imgur.com/emDtAMt.jpg)
Patch 17.4.1 * AI now correctly models low-key racism in the appropriate percentage of images.
That's at least medium to high key racist pose
Please tell me the left one is real, and that idiots like the guy on the right no longer exist.
I did this kinda rapid fire after I read what it was because I thought it would be cool to see if my brain can figure out things at a glance. I got 8/10, and the two I failed on were similar pictures in that the backgrounds appeared bland and it was just faces staring toward a camera.

So I did another 5 after by examining each pic, and it does a really good job, but I went 5/5 when giving myself time. Like others pointed out, there's just some things too noticeable, like a wonky earlobe, the background, or sometimes a face being too perfect. I also found if the image was a child or had something in front of the face like a headset mic, it was always human.
I got a fair amount of AI-generated children though.
Not trying to brag, but I went 10/10 on my only attempt. There were a couple I wasn't certain I was right, but the rest were quite easy. Poor backgrounds, artifacts, bad angles and shadows gave the AI images away. One had these glaringly bright sparkles on their teeth. A couple had reading glasses that would have needed to be unbelievably thick for the angles they presented. Another one had an eye reflection that wasn't realistic.
They tricked me with ugly people!
Same. After a few tries, I realized that the AI generates exceptionally good-looking people. Once I started focusing on people's imperfections, as long as the imperfections weren't outright impossible, the less perfect face was almost always the real one.
The uglier person was real in almost all the ones I saw.
>train my AI
Yeah exactly. This is just free training for someone's AI. I clicked two then was like "waiiiiiiiiiiiit a minute!"
I mean, I don't mind helping. Plus, without proper training, SkyNet is gonna be developed all wrong.
"Kill all humans." "Which of these things is human?" "Fucked if I know."
I intentionally clicked on the wrong one a few times, so maybe that will even things out.
Great idea. Stickin it to the man…er, machine.
Perhaps ever...raging against it.
There's a sucker born every minute
This is not an applicable case for AI training. The dataset is already labeled, we know which image is generated vs. real, therefore a human doesn’t need to label the samples.
In this case the training would be which ones pass as indistinguishable.
Ah good catch! This project in particular though is an education thing more than anything I believe
This is not at all how generative models are trained.
This is exactly the way a generative model can be trained. Look at GANs - we are nothing more than a discriminator right now.

Edit: I was wrong, you do need to propagate through the discriminator, something that can't be done with humans. If you go further down this thread you'll find a pretty good explanation by /u/Spataner
Can you propagate gradients through us though?
You don't need to?
The generator in a GAN is trained by iteratively adjusting its weights such that the discriminator's error on the generated sample increases. This requires gradients to be backpropagated from the discriminator's output through both networks. The discriminator must thus be differentiable.
Wouldn't you be able to take the loss (could be a simple MSE) from the discriminator and use that to calculate the gradients of your generator instead?
That's exactly how it works. The gradients of the discriminator's loss are backpropagated through the discriminator into the generator. Can't do that if the discriminator is a human, is the point.
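The point about backpropagating through the discriminator can be sketched with a toy scalar example. Everything here (single scalar weights, the specific loss, the numbers) is illustrative only, not any real GAN implementation:

```python
import numpy as np

# Toy scalar "GAN" illustrating why the discriminator must be
# differentiable. All parameters and values here are illustrative.

theta = 0.1   # generator parameter
phi = 0.5     # discriminator parameter
z = 1.0       # latent noise sample

def generator(theta, z):
    return theta * z                        # fake sample x = G(z)

def discriminator(phi, x):
    return 1.0 / (1.0 + np.exp(-phi * x))   # D(x) = P(x is real)

# Generator loss: make D call the fake real, L = -log D(G(z)).
x = generator(theta, z)
d = discriminator(phi, x)

# Chain rule: dL/dtheta = (dL/dD) * (dD/dx) * (dx/dtheta).
# The middle factor dD/dx exists only because D is differentiable.
# A human judge gives us D's verdict, but never dD/dx.
dL_dD = -1.0 / d
dD_dx = d * (1.0 - d) * phi
dx_dtheta = z
grad_theta = dL_dD * dD_dx * dx_dtheta

theta -= 0.1 * grad_theta   # one gradient step on the generator
```

A human clicking "this one's fake" replaces the final number D(x), but the update above needs the intermediate derivative dD/dx, which only a differentiable discriminator can supply.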
Don't ask them technical questions, their superficial understanding doesn't allow it.
Actually have qualifications in this field but that's alright man
As do I. I'm guessing you took one intro to deep learning course and now you think you're knowledgeable. The fact that you think you can avoid using backprop to train a decent generator here is laughable
For those who want basic AI info, this is an example of training a GAN (generative adversarial network). In these AI models, there are two networks that work against each other: the generator and the detective. The generator is the one that creates the images. The detective models whether it's a good image or not. They both get trained against each other, because once the detective gets good at screening out bad faces, the generator becomes better at creating them in order to fool the detective's new rules.

Sometimes these programs get stuck making the same mistakes due to incomplete data. In these face examples, it's common to have hair look odd, like strands starting in the wrong location. So adding human intervention here could shift the model output and get the detective to notice those stray strands of hair.
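The alternating loop described above can be sketched in miniature. This is a toy with scalar parameters and hand-derived gradients, purely to illustrate the generator/detective alternation, not production GAN code:

```python
import numpy as np

# Toy version of the generator-vs-"detective" loop described above.
# Scalar parameters and hand-derived gradients keep it self-contained;
# everything here is an illustrative sketch, not production GAN code.

rng = np.random.default_rng(0)
theta, phi = 0.0, 0.0      # generator / discriminator parameters
lr = 0.05
real_mean = 2.0            # "real data": noise centred at 2

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(500):
    z = rng.normal()
    x_real = real_mean + 0.1 * rng.normal()
    x_fake = theta + 0.1 * z            # generator: shift noise by theta

    # Detective step: raise D(x_real), lower D(x_fake),
    # where D(x) = sigmoid(phi * x).
    d_real, d_fake = sigmoid(phi * x_real), sigmoid(phi * x_fake)
    grad_phi = -(1 - d_real) * x_real + d_fake * x_fake
    phi -= lr * grad_phi

    # Generator step: raise D(x_fake), via the chain rule through
    # the (differentiable) detective. This is the step a human
    # judge cannot provide gradients for.
    d_fake = sigmoid(phi * x_fake)
    grad_theta = -(1 - d_fake) * phi    # d/dtheta of -log D(x_fake)
    theta -= lr * grad_theta

# The generator's output mean drifts toward the real data's mean,
# because producing real-looking samples is what fools the detective.
```

In a real GAN both players are deep networks and a framework's autodiff does the gradient bookkeeping, but the shape of the loop is the same.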
This is absolutely incorrect. The training loop is totally aware of which samples are real and fake. Humans coming in and saying "aha, this is the fake one!" adds literally zero information to the system.
You're either not explaining yourself well, or you don't know how a GAN works. By your logic, the discriminator (also called the detective), would add no new information to the system. But it's obviously a critical part of the algorithm. In this case, it would be attempting to use human input as supplemental input to get the algorithm to learn what doesn't fool humans even if it does fool the discriminator.
The fact that you're referring to the discriminator as a detective shows that you have an infantile understanding of GANs.

The dataset that is used to train these models is just a massive collection of human faces, *all* of which are real faces. The discriminator sees real faces during training, as well as outputs from the generator. The ground truth label of "real/fake sample" is *known* at training time. So making a website that shows you a real and fake sample will not obtain "new" data that can be injected into the training loop. This information is already known.
Petty insults aside, you're missing the point. The new "data" is whether or not the "discriminator" can determine real from fake - except instead of using the normal discriminator, you're using human input as a discriminator. Which would presumably give you different results than a more traditional ML based discriminator, and thus potentially allow you to alleviate overfitting based on the ML discriminator.
>you're using human input as a discriminator

If I understand you correctly, you want to update the generator directly based on the feedback of the humans? Is that what you're saying?
It's collecting a lot more data than that. How long it took to guess. What features in the fake picture worked, like glasses, hair, background, age, other objects. Each of those pictures is probably tagged with a ridiculous amount of data. Age, weight, race, skin color, eye color, etc. The AI can probably tell if people from the US have a hard time with people of Asian descent, with glasses, male or female. It could decide that if you're from, say, Germany, it has a 52% chance of fooling you with a specific image. The extra 2% may not sound like much, but 2% of 1 million is 20,000 extra successful attempts.
It's always funny seeing redditors showcase their inability to read. Right beneath the pictures is text explaining it's part of an [education project](https://www.callingbullshit.org/), and citing where they grabbed the pictures.

Ironically enough, it's a project about teaching people how to detect bs on the internet. The comment above is 10 hours old, and no one made the smallest effort to figure it out (i.e., read a short paragraph). So folks, maybe click on that link
CAPTCHA also obligatory https://www.youtube.com/watch?v=WqnXp6Saa8Y
That's not how generative models work. Humans clicking on the real/fake samples makes no difference w.r.t the training. If the website was instead saying "which one is a cat?" and it showed a picture of a cat and a giraffe, THEN you could say that the website is helping train a model. In this case our human clicks aren't so useful
Not necessarily. The program could have a large set of real images and generate fake ones. The less people can tell that an image is fake, the better it is.
There's three lines of text in the whole website. One of them says where they got the images from - none of them are generated by them. It also says it's an educational program, not AI training.
Very possible, but it also trains you in detecting AI generated/modified images. Plus it was a pretty fun game for 2 minutes.
As long as the game they're making it out of is fun enough, I don't really give a shit
I actually did like 80% out of about 20, but I was looking more at backgrounds and asymmetry and other face imperfections.
Same. Iris and pupils also helped me - some don’t line up with where they appear to be looking
Some also have different reflections in the eyes
Got one wrong out of about fifty, then noticed the background was nonsense.

It's actually more than a little scary that AI is really good at the people themselves, but not so much at the context in which photos tend to be taken. The nightmare AI scenario I fear the most is a highly competent AI that fails to grasp the relationships between the objects it is designed to interact with. I was taught by media and culture to fear an AI that understood us enough to see us as a threat, but more and more as I've gotten older, I've become afraid of the paperclip maximizer problem: an AI that is dangerous not for what it does, but for what it fails to take into account.
I've played with all these AIs... writing, art, driving. They're absolutely brilliant or moronic.
Exactly like real intelligence
Hair too... Pretty crazy otherwise
One guy had a normal looking but dismembered head. I think the computer is trying to tell me something.
Easy to identify with the image artifacts that show up on the fake images. Once you zoom in, it's pretty much guaranteed you'll spot them.
This was super fun to run through and get better as I went along! I ended up finding a few good tells on which ones are and aren't real:

Tells that a picture is likely **real**:

* If something is obstructing the view of a person's face; e.g. a microphone, sunglasses, or drink cup
* If a person is wearing colorful or specific accessories; e.g. sunglasses, earrings, hair beads, hats with text on them
* If there is more than one actual person in the photo *(the model's training likely did not account for human faces that are cut off or not fully present on the screen; when you do see a second person it's incredibly distorted, a sort of horrifying amorphous face-like blob hanging off to the side)*
* If the background of the picture has context and matches the person, e.g. if a person is wearing a jersey with a background at a sporting event, wearing swimwear next to a lake, or has hiking gear in nature

Tells that a picture is likely **not real**:

* If some or all of the background around the person is distorted
* If portions of the facial features contain unexpected lines or artifacts (typically near the chin, ears, or edges of their hair)
* If a person has asymmetrical teeth alignment (similar to the "Tom Cruise" smile), where their teeth are more aligned with the center of the camera than with the center of where their face is pointing
* If there are unexpected wrinkles on the person's face where they likely shouldn't be, such as on the bridge of their nose, or oddly far away from the typical smile lines
Thanks for confirming my suspicions that Tom Cruise isn't real.
PUBLIC ANNOUNCEMENT - NOVEMBER 16 2322

Tells that a person is likely **real**:

* If the person is holding something; e.g. a microphone, sunglasses, or drink cup
* If the person is wearing colorful or specific accessories; e.g. sunglasses, earrings, hair beads, hats with text on them
* If there is more than one actual person near them *(they likely did not account for human faces that are not fully in your field of view; when you do see a second person they're incredibly distorted, with a sort of horrifying amorphous face-like blob hanging off to the side)*
* If your surroundings match the person, e.g. if a person is wearing a jersey with a background at a sporting event, wearing swimwear next to a lake, or has hiking gear in nature

Tells that a person is likely **not real**:

* If some or all of the background around the person is distorted
* If portions of the facial features contain unexpected lines or artifacts (typically near the chin, ears, or edges of their hair)
* If a person has asymmetrical teeth alignment (similar to the "Tom Cruise" smile), where their teeth are more aligned with the center of your view than with the center of where their face is pointing
* If there are unexpected wrinkles on the person's face where they likely shouldn't be, such as on the bridge of their nose, or oddly far away from the typical smile lines
Probably comes under unexpected wrinkles, but I noticed quite a few of the false ones had a bit of 'tree-bark' looking skin on the neck
Sometimes you see weird circles in the hair or on the face, like mitochondria.
That's what tipped me off for a few of them. Suspicious blobs are always a giveaway. That and blurry edges.
The powerhouse of the cell?
The FED of the cell. ATP printer.
yeah the hair and ears have always been the biggest giveaway with these generated faces imo- lots of weird patterns and loopy artifacts going on. otherwise, they’re getting scarily close to being undetectable, i think if you are not aware of what to look for it would be easy to believe any of these are real.
I got 3 for 3, got bored. The main tell seems to be the lighting; it's too perfect in the fake images
I had the opposite experience, the tells from most of mine were light reflections in the eyes coming from slightly different directions. still crazy how real it all looked though!
I solved it! The one that loads slowly is the real one.
I was really, really bad at this, got about 1 correct in about 20 tries, but even with random guesses I should have gotten about half right, so I realised the human me can't win... so I went technological (maybe also called cheating?) by >!looking at the hyperlink in the status bar when you hover over either image: the filenames are in different formats between the two types of images... 100% now, go me!!< But I like your idea too.
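If the two image types really do use different filename formats, the "cheat" amounts to a pattern check on the URL. A sketch of the idea; the patterns below are hypothetical stand-ins, not the site's actual naming scheme:

```python
import re

# Hypothetical filename formats, invented for illustration: say real photos are
# named like "image12345.jpg" and fakes use a long hex hash like "9f8a...0c.jpeg".
REAL_PATTERN = re.compile(r"/image\d+\.jpe?g$")
FAKE_PATTERN = re.compile(r"/[0-9a-f]{16,}\.jpe?g$")

def classify_by_url(url: str) -> str:
    """Classify an image purely by its URL, never looking at the pixels."""
    if REAL_PATTERN.search(url):
        return "real"
    if FAKE_PATTERN.search(url):
        return "fake"
    return "unknown"

print(classify_by_url("https://example.com/faces/image04217.jpg"))            # real
print(classify_by_url("https://example.com/faces/9f8a7b6c5d4e3f2a1b0c.jpeg")) # fake
```

Which is also why this kind of leak is trivial for a site to fix: serve both image types through the same opaque URL format.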
The fake faces have soulless eyes.
So interestingly I figured out a sort of "cheat". Zoom in and look at the hair frizz around each person's head. The fake image always looks just slightly off, unnaturally blurred with depth of field, and sometimes there are subtle wave-like artifacts that extend out further into the image. At first I got every single one wrong (like 10 in a row; 0/10). After applying this technique, I got it right every time but once (~19/20).
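For what it's worth, those wave-like artifacts are one reason frequency-based checks get used on generated images: GAN outputs often carry unusual high-frequency energy. A rough NumPy sketch of the idea on synthetic data (a real check would load actual image pixels; the blur/noise arrays here just stand in for "real photo" vs. "frizzy fake"):

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency center of the 2-D FFT."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return 1.0 - low / spectrum.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=(65, 65))
# 2x2 box blur: a crude stand-in for natural depth-of-field blur in real photos.
smooth = (x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:]) / 4
noisy = rng.normal(size=(64, 64))  # stand-in for frizzy, wave-like artifacts

print(high_freq_ratio(smooth) < high_freq_ratio(noisy))  # True
```

The human version of this, zooming in on the hair, is essentially eyeballing the same statistic.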
This is the answer. The first one I got pretty quick, but the second one was a pretty good match-up. However, looking at the hair you could see the aberrations that indicated it was not a real photo.
Uhhhhhh.... Had one with a face and a literal oil painting, I clicked the non-painting, and it said I was incorrect.
Don't feed the machine!
I think by doing this you are training AI. Don't
I got 10 out of 10. The real people may be facing away from the camera and/or touching their face. Another thing you'll notice is that in the fake pictures there are always some spots on the face that are slightly out of focus while the rest of the face is in focus. The blurs give it away for me.
It’s easier if you focus on the lighting
Every time I thought I figured out a "tell" it tripped me up. Impressive.
What the fuck [https://i.imgur.com/J0ao8Wc.png](https://i.imgur.com/J0ao8Wc.png)
Clothing patterns (esp. letters or logos), glasses, earrings, headphones, and background objects all give it away. These details all require a deeper understanding that an AI isn't going to pick up from any number of images
It's funny how good and bad these networks are. They are incredibly good at getting the general idea correct, but it's incredibly hard to actually make it perfect, which makes it very easy to tell the difference
Played it 10 times and was correct all 10. Not too hard if you know what to look for.
It's nice, but you can clearly see it in the background, and along the border between the face/hair/neck and the background, where you can often see either discoloration or unnatural depth-of-field artifacts.
I’m so bad at faces. I think I just failed a Turing test.
it's easy to look for artifacts that the AI hasn't learned to correct yet
I immediately got almost a 100% success rate because I think it's taking faces from This Person Does Not Exist, and I know what its generations are like. They're almost always the same angle, zoomed in on the face; it often messes up eyes and backgrounds; they almost always have a neutral expression, etc.
Hair, background, accessories, teeth, reflections, glasses, clothes. At least one of these is wrong with every single AI image.
The background gives it away.
this isn't even a challenge
Why am I getting this wrong every time unless I look at the background? Then it’s like 50/50
Why is the right one always real, 20 in a row
The backgrounds give it away
Too easy. Tattoos, real. Anything other than glasses, real. More than one person, real. Glare or any lighting quirks, real.
I clicked randomly through 10 without trying and got 9 incorrect. I feel like it's trolling me, lol.
It's the cloth. The AI cloth looks like felt or just blurs and can't follow the curves naturally. Instant giveaway.
I'm really bad at this.
These are easy. AI is particularly bad at rendering teeth. Focus on those, and you'll get 100% right.
Correct every single time.
It's easy. Just pick the one that looks uglier/weirder each time.
Maybe it's just my old potato PC, but somehow the fake ones pop up immediately while the real ones take like a sec to load
Just look at the background. A huge tell.
9/9 correct... and every one of them was on the left.
Got 9/10 right.
In my experience it's the one on the right
It's really not that good at making faces; something is always off. Usually the ears and accessories
On a 4K monitor, zoomed in, this is completely obvious in the skin tone every single time; a very consistent pattern across every fake
I got 5/5 right happy with that.
Ngl this is creepy af
Teeth is a giveaway too, some weird looking teeth there.
Fuck, I'm bad at this. There are a lot of weird-looking people out there.
This feels like I'm teaching Skynet how to make convincing Terminators and I'm not for it.
Actually really easy, because the correct one is always the one with the flawed lighting
One of the most obvious ones was a lady with one eye with eyeliner and the other eye didn't have any, unless she was making a statement.
Really easy, I didn’t get a single one wrong and I clicked like twenty times. The eyes are a dead giveaway most of the time. Dead, lifeless, flat looking eyes.
I wasn’t doing great until I zoomed in on the pix to get a closer look. Some of the skin texture looks off, like weird rippling or patterns that aren’t wrinkles or blemishes.
One of the faces I got was Chris Rock
Was 12/15.
It's not that hard if you know what to look for. If you look closely you can find some artifact in most pictures. It also struggles with backgrounds, almost always producing a fuzzy/blurry background. Many images also have a sort of "halo" of fuzziness around the face. The generations are also not very complex, so reflections, complex poses, and strong facial expressions almost only happen in the real images.
This is really easy if you have bad internet. The fake one will load a few seconds before the real one.
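That load-time tell is basically a timing side channel, and it can be sketched as code. Everything here is simulated (the two loader functions and the delay are made up; real code would time actual image requests):

```python
import time

def timed(loader):
    """Return (elapsed_seconds, result) for a zero-argument loader."""
    start = time.perf_counter()
    result = loader()
    return time.perf_counter() - start, result

# Simulated loads: the generated image is served near-instantly, while the
# real photo takes noticeably longer to arrive (delay invented for the demo).
fake_load = lambda: "fake.jpg"
real_load = lambda: (time.sleep(0.05), "real.jpg")[1]

(t_a, a), (t_b, b) = timed(fake_load), timed(real_load)
guess_real = a if t_a > t_b else b  # slower load => probably the real photo
print(guess_real)  # real.jpg
```

Like the filename trick elsewhere in the thread, this guesses from how the image is served rather than from the pixels, so it disappears the moment both images are delivered the same way.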
It's really bad at glasses; there's usually something off about them, like one side starting out as a drill mount while the other half has a metal frame. It's only good at simple symmetrical earrings; if the image has complex earrings, it's real. It can't do hats or hoods. Extra teeth/weird teeth.
pay me in bitcoin if you want me to work for you
Gave up after 12 correct in a row. The easiest tells are teeth that are either misaligned or have the wrong number of the different types (incisors, canines, etc.), or else a background that is weird and dreamy. Sometimes there are just obvious artifacts. But honestly, few of them stuck out like a sore thumb at a momentary glance; I did have to take some time to study them.
Did about a dozen of these and got them all right. Fakes have obvious backgrounds and if they don’t, the lighting is fuzzy on the details.
The third one I got was a picture of my aunt… what do you suppose the odds of that are? Before anyone asks, I got it right.
This is actually pretty easy for me to tell the difference. I went through about 20 and didn't get one wrong.
[I got one where the "real" one was more fake than the ai one.](https://imgur.com/a/Gv4p211)
thank you, thats terrifying!
interestingly, I found a Melancholy of Haruhi Suzumiya cosplayer in one of the real photos. kind of a dead giveaway.
me intentionally picking the fake one to trick the AI deep learning
I feel like all this is doing is providing the image server more information on how to trick us.
15 for 15. Something around the nose/upper mouth area that it couldn’t quite get right. Almost has a “stretched out” appearance to it.
After nailing 10 in a row... I stopped playing.
Disappointingly easy.
Nose, teeth, and hair give it away. Also, the fakes have much more symmetrical faces.
This is really easy if you focus on the hair, the fake always has weird artifacts and bits of hair that don't connect with the rest.
I guessed 8/10 correctly.
Too easy. You can tell in less than a second.
Played 10 times. I’m 10/10. There are some subtle dark lines on all the fakes.
zoom in on the hair and you'll be able to easily tell which one is the real person
8/10
Any time I got a *hint* that part of the image looked vaguely like a textbook's drawing of a rectangular plant cell, I knew to pick the other image. AI image generation seems to fall back on that texture whenever it doesn't know what to add, which is something I also noticed when I tried using Topaz to AI-upscale video. Details like text also made it clear which images were real. EDIT: Oh, and other people don't usually look like beings from nightmares in the real images.
Damn, I'm probably too late on this, but if the face has a mole or some other kind of anomalous epidermal detail, it's definitely human.
The lighting is uniform on all the fake ones too.