Isn’t there a whole thing about how, if you average out the colors in AI-generated photos, you get a uniform beige?
I don’t get why these detection tools don’t just do that, but I guess you’ve got to keep up the marketing of using AI to find a solution.
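For what it’s worth, the “average the colors” test is trivial to run yourself. Here’s a minimal sketch, assuming Python with Pillow and NumPy installed; “photo.png” is a placeholder path:

```python
# Minimal sketch: compute the mean RGB color of a single image.
# Assumes Pillow and NumPy are installed; "photo.png" is a placeholder path.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.float64)
mean_rgb = img.reshape(-1, 3).mean(axis=0)  # average over every pixel
print("Average color (R, G, B):", tuple(round(c) for c in mean_rgb))
```

Run it on a known-real photo and a known-generated one and compare the results yourself.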
Either that’s not true of AI images or it’s true of all images. There aren’t answers that simple to this. Pixels are pixels.
What? That’s some extreme logic.
First of all, why would it be true of all images? Real photos vary in contrast and color in different ways.
These guys literally point out average colors and contrast in AI images.
Instead of engaging with the conversation, you just say “pixels are pixels”? Like that means something smart?
My point is that AI images don’t differ significantly enough from non-AI images. “AI images” is an extremely broad category.
If you are narrowing that category to, say, “all DALL-E images” or “all Midjourney images” or something, MAYBE. They tend to have a certain “look.” But even that strikes me as unlikely, and those are just a slice of the “AI images” pie.
As someone who has played around with Stable Diffusion and Flux, I can tell you the “average color” of an image varies dramatically depending on what settings and models you’re running. AI can create remarkably real-looking images with proper variance in color and contrast, because it’s trained on real photos. Pixels, as I said, are pixels.
That’s not to mention anime, sketch, stained glass, or any other imitation of a medium. And of course there’s image-to-image with inpainting, where only parts of an image are handled by the AI.
My point is that if there were overly simple answers like “all AI images average their color to a beige,” there wouldn’t be all this worry about AI images; they would be easy to detect. But things aren’t that simple, and if you spent a small amount of time looking into how much depth AI image generation has gained even in the last year, you’d realize how absurd a simple answer like that is.
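To see that variance concretely, here’s a sketch (again assuming Pillow and NumPy) that prints the average color of every image in a folder, e.g. outputs from different models or samplers; the “outputs” directory is a placeholder:

```python
# Sketch: compare average colors across a folder of generated images,
# e.g. outputs from different models, samplers, or guidance settings.
# Assumes Pillow and NumPy; "outputs/" is a placeholder directory.
from pathlib import Path

import numpy as np
from PIL import Image

for path in sorted(Path("outputs").glob("*.png")):
    px = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    print(path.name, px.reshape(-1, 3).mean(axis=0).round(1))
```

If the “beige average” claim held, every line would print roughly the same values.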
So because you “make” AI-generated images, you’re saying they’re magical and don’t follow the rules of their generation?
They are based on noise maps and inferred forward from there. They leave a history in the pixels; that’s how lots of people are detecting them.
Just because it’s trained on real photos doesn’t mean the output is a real photo. Just because it looks fine doesn’t mean there isn’t something going on beneath it.
In the video I linked, they even talk about how the red, green, and blue channels have the same values, because generation starts from a colorless pixel anyway. A real sensor doesn’t do that (see the sketch below).
I’ve worked with photographers and in Photoshop and done what you think you’re doing. Having worked with images, “pixels are just pixels” means nothing. Dogs are just dogs, too, but there are still different breeds and types of dogs.
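If you want to test the video’s claim rather than argue about it, here’s a rough sketch of that kind of channel check, assuming Pillow and NumPy; “suspect.png” is a placeholder path:

```python
# Rough sketch of the channel check described above: what fraction of
# pixels have identical R, G, and B values? Assumes Pillow and NumPy;
# "suspect.png" is a placeholder path.
import numpy as np
from PIL import Image

px = np.asarray(Image.open("suspect.png").convert("RGB"), dtype=np.int16)
r, g, b = px[..., 0], px[..., 1], px[..., 2]
gray_fraction = np.mean((r == g) & (g == b))  # perfectly gray pixels
print(f"Pixels with R == G == B: {gray_fraction:.1%}")
```

Note this only tests the specific R == G == B claim; it isn’t a general AI detector.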
It is absolutely not true of all AI images. I’d be surprised if it’s even true about most AI images.
Are you just saying that because you feel like it’s true, or because you’ve engaged with that line of thought for even five seconds?
AI images come from a noise map; it’s true because they generate from it in a consistent manner (see the sketch below).
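To illustrate the “noise map” point: diffusion models start from seeded Gaussian noise and denoise it into an image. A minimal sketch, assuming PyTorch; the (1, 4, 64, 64) shape is typical for a 512×512 Stable Diffusion latent:

```python
# Illustration of the "noise map" starting point of a diffusion model.
# Assumes PyTorch; (1, 4, 64, 64) matches a typical 512x512 Stable
# Diffusion latent.
import torch

generator = torch.Generator().manual_seed(42)
latent = torch.randn((1, 4, 64, 64), generator=generator)  # the noise map
# A diffusion model iteratively denoises this latent into an image, so the
# final pixels are "inferred forward" from this starting noise.
print(latent.mean().item(), latent.std().item())  # roughly 0 mean, 1 std
```

The same seed always reproduces the same starting noise, which is what “generate from it in a consistent manner” refers to.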