Photography isn’t one thing anymore. It hasn’t been for a while, but we’re still using the same word for fundamentally different activities, which creates confusion about what’s happening to the medium and why it matters.
There are four distinct categories of photography now, defined by who makes the image and who consumes it. Understanding these categories and how they’re bleeding into each other is more important than endless hand-wringing about what AI generation does.

Photography made by humans for humans is what we traditionally think of as photography. Someone sees something, decides it’s worth capturing, makes choices about how to capture it, shares it with other humans who look at it and respond. The entire chain involves human intention, human seeing, human interpretation, human taste. No matter the genre (street photography, portraiture, documentary work, art photography, deliberate family snapshots), the photographer is present, makes choices, and the audience engages with those choices.
This category is shrinking proportionally but not disappearing. It’s becoming more intentional, more self-conscious about being human-made in a world increasingly full of other kinds of images. Look at the number of Substack posts about what photography is and the role of intent. The value moves from the image itself to the fact that a human was present and chose to capture this specific thing at this specific moment.
Photography made by machines for machines has existed longer than most people realise, but it’s expanding rapidly. Images are captured automatically, processed algorithmically, and used to trigger other automated systems or feed databases that machines query. Medical imaging analysed by AI without radiologist involvement (e.g. breast cancer detection, which is remarkably accurate), traffic cameras feeding automated systems (e.g. number plate recognition), satellite imagery processed for patterns humans never see (e.g. deforestation, humidity patterns): all of these have existed in some form for a while.
This is photography as pure data capture. The image isn’t for looking at. It’s for computational processing. Human visual aesthetics are irrelevant. What matters is information density, consistency, detectability, machine-readable formats. These images exist in a world parallel to human photography, one that rarely intersects with it.
Photography made by machines for humans is the newest category and the most obviously disruptive. AI-generated images that look photographic but aren’t photos of anything that existed. Computational photography in phones that composites multiple exposures and adds algorithmic improvements that never existed in the original scene. Deepfakes and synthetic media. Images that appear to be photographs but are partially or entirely machine-generated.
This category is the most hated online because it pretends to be something it’s not. If an image can look photographic without anyone being present to witness what it depicts, what does photography mean? The indexical relationship, the idea that photographs are caused by light from real objects, breaks down entirely. This category brings about a world where images are photographic in appearance but fictional in origin.
Photography made by humans for machines is the category that deserves more attention than it’s getting. Every image uploaded online is potential training data for AI models, consumed as data rather than seen as an image. Humans take photographs not for other humans to look at but to feed computational systems that will learn from them.
This includes the intentional side: photographers selling image libraries to AI companies, people creating datasets deliberately. But it also includes the unintentional side: every photograph shared online potentially being scraped for training data. You take a photo to show your friends. It ends up training a model that generates synthetic images.
This category inverts the traditional purpose of photography. You’re not making images for human viewing. You’re making raw material for machines to process into the capability to generate other images. The photograph becomes input rather than output, means rather than end.
These four categories would be manageable if they stayed distinct. You’d know which kind of photography you were doing and consuming. But they’re bleeding into each other in ways that destroy the ability to know what you’re looking at or what your own work is becoming.
The bleed is more dangerous than AI generation itself because it’s invisible and deep-seated. The most obvious one is that human-for-humans has become human-for-machines without permission.
You photograph something intending to share it with other humans. You post it online, whether on Instagram, Facebook, YouTube, or a portfolio site. Your intention is a human audience. Your understanding of what you’re doing is making work for people to see.
But the moment it’s online, it’s potentially training data. Scraped by AI companies (often illegally), fed into models, used to teach machines how to generate images that look like yours. Your human-for-humans work becomes human-for-machines work without your knowledge or consent, and usually without compensation.
This isn’t just theft of individual images. It’s theft of an entire practice. Your years of developing your eye, your particular way of seeing, your aesthetic choices: all of that gets extracted and becomes capability that machines can deploy. You trained yourself through years of work. Then your work trains the machines. And the machines might eventually replace the kind of photography you do.
Another insidious bleed is that human-for-humans becomes machine-for-humans without acknowledgment.
Your phone camera makes “photographs” that aren’t optically accurate records of what you saw. Computational photography combines multiple exposures, adds sharpening and noise reduction algorithmically, adjusts colours based on what the phone thinks the scene should look like rather than what it actually looked like.
You think you’re taking a photograph. You’re present, you see something, you capture it. Human-for-humans, surely? But what you’re actually doing is providing input to algorithmic systems that output something partially synthetic. The final image is machine-for-humans pretending to be human-for-humans.
This is different from film photography, which, although it transforms reality through the constraints of its technology and processes, doesn’t add anything invented by a machine. A film image won’t contain additional trees that weren’t in front of you when you pressed the shutter. Film won’t rewrite someone’s face with traits the person never had.
The bleed here is that you don’t know where your seeing ends and machine generation begins. You can’t point to a line in the image and say “this is what I captured, this is what the machine added.” It’s seamless, invisible, designed to feel like photography even though it’s partially something else.
This destroys your ability to know what you’re actually doing when you photograph. Are you capturing or are you prompting? Are you making choices or are you providing rough input that machines refine? The distinction disappears and you’re left uncertain about your own agency in the process.
Every smartphone photograph exists in this bleed zone. Partly human seeing, partly machine processing, presented as if it were pure capture. Millions of people are making what they think are photographs while actually collaborating with algorithms they don’t understand and never consented to.
In the near future, another bleed will emerge: machine-for-humans masquerading as human-for-humans.
AI-generated images are approaching the point where distinguishing them from actual photographs is impossible without metadata or provenance tracking. Soon any image could be either human-made or machine-made, and viewers won’t know which.
It means real photographs lose credibility because they might be synthetic. The uncertainty contaminates everything. When you can’t trust that any image is what it appears to be, all images become equally suspect. Some regimes around the world already exploit this actively to discredit witnesses and the press.
Human-for-humans photography depends on an implicit contract: the photographer was there, witnessed something, chose to capture it. That contract breaks down if viewers can’t know whether the image is witness or invention. Your photograph might be perfectly honest documentation, but if it looks like something AI could generate, why should anyone believe it?
The bleed matters more than simple replacement. If AI simply replaced photography entirely, that would at least be clear. Photography is dead, everything is synthetic now, we all know it and adjust accordingly. A terrible outcome for photography, but everyone would know where they stand.
But the bleed means photography continues while being eaten from the inside. It looks like photography is still happening. People are still “taking photos.” Images still exist that look photographic. But the meaning has changed without the terminology changing, and most people don’t notice.
You can be a photographer your entire life without realising that half your images are partially machine-generated and the other half are training AI models to replace you. The category appears stable while the reality has changed entirely.
This creates false consciousness about what photography is and what it’s for. Young people learning photography now might think they’re learning human-for-humans practice when they’re actually learning to provide input to computational systems (think algorithmic pressure on social media platforms). The tradition appears to continue while the substance changes fundamentally.
Photography’s power came partly from its indexical nature. A photograph was caused by light from real objects. That causal chain meant photographs were evidence, not just illustration. Legal systems, journalism, historical documentation, power struggles, all depended on photographs being trustworthy records of what existed.
The bleed destroys that trust without providing a replacement. We can’t trust photographs to be photographs anymore, but we haven’t developed alternative methods for verifying what’s real. There are a couple of initiatives (content-provenance standards such as C2PA, for example), but they’re slow to develop and slower to be adopted. We’re in a transition period where nothing is reliable and we don’t yet have the tools to navigate the uncertainty.
You could refuse to use computational photography, but that means refusing modern cameras entirely. You could refuse to post work online, but that removes you from contemporary image culture and limits your audience to people you can show work to physically. You could watermark images or use tools designed to poison training data, but these are marginal acts of resistance against systems that are much larger and better resourced.
The bleed is structural, built into how images are captured, processed, distributed, and consumed now. Resisting it entirely requires opting out of modern image-making and sharing infrastructure, which isn’t practical for most photographers.
What you can do is be conscious of it. Understand that when you photograph with a phone, you’re not just capturing. You’re collaborating with algorithms whether you want to or not. Understand that when you share work online, you’re potentially contributing to training data that will be used to generate images that compete with yours. Understand that viewers might not know whether your images are photographs or generations, and that uncertainty affects how your work functions in the world.
Make choices with that awareness rather than pretending the bleed isn’t happening. Accept that human-for-humans photography now requires conscious resistance to default infrastructure rather than being the default itself.
You can be explicit about provenance when it matters. If you’re making work where witnessing is the point, document your process, provide metadata, establish a chain of custody for images (a minimal sketch follows below). This won’t prevent the bleed, but it creates islands of verifiable human-for-humans work in a sea of uncertainty.
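To make “chain of custody” concrete, here is one minimal sketch in Python: it fingerprints each original file with a SHA-256 hash and appends the result to a dated log. The folder name, file extension, and log path are illustrative placeholders, and this is a starting point rather than a full provenance system; standards like C2PA’s Content Credentials go much further.

```python
"""Minimal provenance log: record a SHA-256 fingerprint for each
original image file, with a UTC timestamp. A hedged sketch, not a
full provenance system; paths and the log name are illustrative."""

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("provenance_log.json")  # illustrative log location


def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large RAW files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def log_original(path: Path) -> dict:
    """Append one provenance entry: file name, hash, and when it was logged."""
    entry = {
        "file": path.name,
        "sha256": sha256_of(path),
        "logged_at_utc": datetime.now(timezone.utc).isoformat(),
    }
    entries = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    entries.append(entry)
    LOG_FILE.write_text(json.dumps(entries, indent=2))
    return entry


if __name__ == "__main__":
    # Illustrative: log every unedited original in a folder named "originals".
    for image in sorted(Path("originals").glob("*.dng")):
        print(log_original(image))
```

If anyone later questions whether an image was altered, the file can be re-hashed and compared against the entry logged on a given date; publishing the hashes somewhere independently timestamped strengthens the claim further.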
You can choose to work in ways that are difficult for machines to replicate. Extreme subjectivity, physical processes like film and darkroom work, embodied presence that’s visible in the work. This doesn’t prevent your work being used as training data, but it might make the resulting machine capability less threatening to what you’re actually doing.
Long term, will photography as distinct human practice survive the bleed, or will it dissolve entirely into computational imaging where human and machine contributions are inseparable and indistinguishable?
I think it will survive but become much smaller and more consciously defined against machine involvement. Human-for-humans photography will become a niche practice for people who care specifically about human presence and seeing, who value the photographer being there and making choices, who want the indexical connection to reality. This is already a trend among Gen Z, who are more drawn to photographs they know were made by humans, with all their imperfections, than to pretty pictures.
The vast majority of images will be bleed-category: partially machine-made, possibly training data, uncertain provenance, consumed without much thought about what they actually are or where they came from. That’s probably fine for most purposes. Most images don’t need to be trustworthy witnesses. They just need to be visually interesting or socially functional. Entertainment.
But for the subset of photography that matters as documentation, as art, as serious human practice, the bleed is an existential threat. If you can’t distinguish human seeing from machine generation, if your work feeds systems that replace you, if viewers can’t trust what they’re seeing, then photography in the meaningful sense can’t function.
The threat isn’t just that AI will replace photography. The threat is that the four kinds of photography are bleeding into each other so thoroughly that photography as coherent practice stops making sense. We’ll still have images. We’ll still call some of them photographs. But the meaning will have drained away without anyone quite noticing when it happened.
We’re living through the collapse of categories while the vocabulary and practices of photography continue as if nothing has changed. That disconnect is dangerous because it prevents us from understanding what’s actually happening to the medium. We need to invent a new language that describes things as they actually are.
#Photography #Opinion #IMayBeWrong #AI