I’ve been meaning to write down some thoughts on “AI Art” for a while now. Sadly, it’s impossible to do that publicly without adding to The Discourse™, but it’s also impossible to learn from others without participating in it, so hey – hell is other people and all that. My excuse is that this is less an assertion of my opinions as facts, and more a way of organizing my own, personal thoughts, and asking people to point out if they think I’m completely wrong about any of them.
A primer on data
We usually talk about data as if it’s a “thing”. I think it’s more useful to see it as a “lens”. It’s not really that “anything can be made into data”, it’s that anything can be seen as data. Data can be clustered, derived and integrated, reasoned upon. People are more complicated. What they think, how they feel, their individual characteristics. Ugh, imagine having to deal with that. So just boil ’em, mash ’em, stick ’em in a stew, and bam, ready for some cloud server to churn!
Now look, I can definitely throw this snark the other way around as well. Our brains weren’t really made to deal with large swathes of information. Statistics is extremely counter-intuitive, and we’re prone to all sorts of built-in brain heuristics that screw us over and turn into confirmation bias, cognitive dissonance and the like. We still run on prehistoric firmware, really, which was not built for kilotons of CO2 production, trillions of agreed-upon-value tokens, or immediate access to almost every other human being on the planet. This means that there are times when the data lens is extremely beneficial, and churning through it with state-of-the-art calculators can be literally life saving.
So when it comes to AI and data, as with any “super-powerful tool”, The Discourse™ inevitably ends up with a whole spectrum between “we can use it as-is for absolutely anything and it will solve all of our problems with no side effects” and “there is no real value to this, and pursuing it is inherently evil”.
Most groups across that spectrum tend to agree on two things: the others are wrong, and nuance is for idiots!
This is why we can’t have nice things
We’re fortunately not at the point where some AGI poses an existential threat to humans – in fact, we’ve already done way more harm with way simpler systems, like ad micro-targeting and filter bubbles tearing holes in democracies and in the idea that we share a factual reality.
We are, however, reaching the point where AI is posing existential dread to humans: consciousness, creativity and the all-powerful idea that humans are unique in some inherent, unreproducible way are put in a tough spot when your fancy calculator starts presenting you with apparently similar behaviours.
But while it seems that’s part of the reason for people to worry about AI-based art generators, the real issue is the perceived dangers to their livelihood. And I think they’re right.
This is not the first time in history that specific careers are endangered because of technology. Not endangered as in “these will immediately cease to be”, but “there will be less and less demand for these, until most people who are not at a high enough level cannot find work” – and “high level” here can be a completely arbitrary metric, e.g., “being more famous” rather than “being better at their job”. The printing press didn’t kill calligraphy, but it sure as hell reduced the need for scribes. Not every picture needed to be drawn after the advent of photography. There fortunately aren’t many jobs around producing whale oil nowadays, and we’re desperately trying to at least dramatically downsize the entire fossil fuel industry (or rather, should be).
There’s obviously no comparison between the cost-benefit relation of “making art” and “planet-killing industries”, but it’s good to remember that most people in the latter group aren’t necessarily Disney Villains who are in it for the demise of mankind, but just regular people who need a job and that’s what they could find. The Disney Villains tend to be the ones who are really good at rising to the upper echelons and making the calls.

Creative industries are especially fragile to this pattern, because it’s rare to have great artists that are not mostly interested in… art. And the weird dichotomy here is: while we hold creativity as one of our “special human powers”, it’s not really as valued as a profession.
So when people say that they are afraid of AI art because companies will want to cheap out, and churn out “high quality art” using one person that can prompt engineer like crazy instead of 10 master painters, they are right! This will happen. And mind you, this would happen to everyone, including programmers.
But when people say that AI-driven creativity tools (or coding tools) are great because they allow non-experts, people with less access to tools and educational material or people with disabilities to take part in the joy of creating things… they are also right!
At the end of the day, the inherent issue here is that we’re at the cusp of post-scarcity technologies without reaching post-scarcity societies. If no one had to pay rent, no one would have reasons to be worried about computers taking jobs.
“He who controls the spice controls the universe”
There are some things that I consider “catalytic symptoms”: they are, by themselves, artifacts that strongly affect the world around them, but they also only exist because of underlying conditions. I think that the latest developments in image generation are very much in that ballpark.
I was having a chat the other day with a friend who was a librarian and completely pivoted his career into becoming a traditional painter. We touched on the point of artistic sensibilities, and it resonated with what I think is the core “breaking point” of AI art at the moment: just like with ubiquitous internet access people’s ability to access information became far greater than their ability to assess information, with the latest development in AI art generation, the rendering capabilities of automated systems have become greater than the average person’s threshold to call something “art”.
At the risk of sounding like a gatekeeping prick, I believe a lot of this also comes from the lack of incentives and opportunities for people to develop their artistic sensibilities. And I’m not saying that everyone needs to know about color theory, or jazz harmony, or that there are higher forms of art or whatnot, but art-by-the-numbers shovelled by IP holders and driven by analytics has been a reality for long enough to shape people’s average taste in things, and big companies prefer to have production lines that release products at a steady pace rather than be the patron of some muse chaser who can’t be mechanically milked for the next big thing every year. This leaks out to how artists must present their work as well: there’s an interesting comparison between the front pages of ArtStation and MidJourney; the former has shifted towards displays of technique rather than aesthetics, as that’s the metric artists have to optimize for to have visibility.
This is kind of a self-preserving mechanism: the latest blockbuster movie or AAA game needs a lower barrier of entry for consumption to justify the costs, which over time shapes culture, which possibly makes the required barrier of entry lower until, if we’re lucky, there’s some breaking point that pushes the threshold back up. And it might just be that we need to have tools that make production cheaper so that smaller teams with fresh ideas and little money can take that risk.

The very real pain is that visibility is inversely proportional to how accessible tools and learning material are, and “churning out content” is already more valued than “making art”. People already live this daily grind, and there kind of isn’t enough room for everyone, especially because our society is so dependent on scarcity that we’re at the point where people are just inventing random artificially scarce crap so they can make a quick buck at other people’s expense, no matter the cost.
Drawing lines
For quite some time I wondered what Greg Rutkowski thought of being a built-in example prompt pretty much everywhere, and he obviously isn’t happy about it. The LAION dataset is great because it’s open, but it’s also filled with considerable issues: I remember navigating it a while ago and finding some expected bad bias where “doctors” were mostly white dudes, but being surprised that “nurses” were not only mostly women, but also screencaps from porn movies when the safety filter was off. Even medical data seems to have made its way into it unnoticed!

The tech industry is well known for “move fast, break things” and “it’s easier to ask for forgiveness than it is for permission”, and we all know how well that has been going the past few years. Legislation about technology is also notoriously slow because, just like artists tend to not become suits, technologists tend to not become politicians.
When “human things” and the “data lens” collide at a large enough scale, you’re shaping people based on pre-existing biases, and sometimes, on purpose. This means that mitigating bias is not only about improving how datasets are built, but also educating people that dataset bias is inevitable (which is currently compounded by the fact that the amount of data required to train large models cannot be dealt with without some level of unsupervised automation).
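As a toy illustration of why bias audits have to be automated at scale, here is a minimal sketch of a caption co-occurrence check. The captions are entirely made up for the example (no real dataset is being queried), and the word-level matching is deliberately naive; real audits over billions of caption/image pairs need sampling and far more robust text processing, but the idea is the same:

```python
from collections import Counter

# Hypothetical scraped captions; real datasets like LAION hold billions
# of these, which is why bias auditing cannot be done by hand.
captions = [
    "portrait of a doctor in a hospital",
    "male doctor smiling at camera",
    "doctor reviewing an x-ray",
    "nurse in a white uniform",
    "young female nurse stock photo",
]

def cooccurrence(captions, role, attributes):
    """Count how often attribute words co-occur with a role keyword.

    Splits on whitespace so that "female" is not counted as "male"
    (substring matching would silently inflate the numbers).
    """
    counts = Counter()
    for caption in captions:
        words = caption.split()
        if role in words:
            for attr in attributes:
                if attr in words:
                    counts[attr] += 1
    return counts

print(cooccurrence(captions, "doctor", ["male", "female"]))  # skews "male"
print(cooccurrence(captions, "nurse", ["male", "female"]))   # skews "female"
```

Even this crude count surfaces skew that nobody put there on purpose, which is the point: the bias was already in the scraped web, and the dataset merely inherits it.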
The real danger of widespread adoption of ML as an “ultimate solution to everything” is that, when left unchecked, we will further mask our existing biases behind a façade of “hard data” – almost as if the modern version of Plato’s cave is made of mirrors, instead of shadows on the wall.
So what the hell do we do about this?
Just like every other issue, it comes down to structural changes and personal changes. When it comes to large image generation models, there are some really solid, constructive suggestions popping up, like Karla Ortiz’s investigation into whether her art is in Midjourney’s datasets or prompts, and her talking about building a framework based on consent. There isn’t a lot of constructive discussion yet because engagement from the developers is still pretty defensive.
If you are (like I am) super excited about image generation technology, you should start by listening to people. It’s easy to fall into the trap of “there is no other way to do this and critics are luddites”, especially because there’s money in making you fall for that. If you’re a researcher, instead of replicating the latest awesome results, focus on governance, and help empower people who might be unwillingly going into datasets. If you’re a developer, make tools that help people identify issues, like https://haveibeentrained.com/. If you’re an artist, you obviously know that you should buckle up, but I’d recommend pushing portfolio sites like DeviantArt or ArtStation to become active players in the discussion: robots.txt is a dead simple solution to similar issues, and there are stricter anti-crawling measures already in place that sites are happy to use as soon as they start charging for stuff. Also, look into whether and how image generators can become part of your workflow – ignoring them won’t make them go away, so best to add that tool to your tool belt.
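For reference, the robots.txt mechanism mentioned above is just a plain text file served at a site’s root. A portfolio site that wanted to opt its users out of dataset crawlers could serve something like the sketch below – CCBot is Common Crawl’s real crawler (and Common Crawl dumps are what LAION-style datasets are built from), while the `/uploads/` path is a made-up example; note the whole mechanism is advisory, so it only keeps out bots that choose to honor it:

```text
# /robots.txt – advisory crawler opt-out; polite bots honor it

# Common Crawl, the usual source for LAION-style image datasets
User-agent: CCBot
Disallow: /

# Everyone else: index pages, but stay out of the raw image uploads
User-agent: *
Disallow: /uploads/
```

The stricter measures are the ones that don’t rely on politeness: rate limiting, authentication walls, and watermarking, which is exactly the toolbox sites reach for once money is on the line.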
What do I think about all of this?
I’m not really sure yet.
It’s hard for me not to be biased towards the benefits of the tech. As someone who is really into art but had little natural ability for it, I chose to spend my proverbial 10,000 hours in tech. I did pick up some art fundamentals over time, and it’s really exciting being able to combine the exploration of latent spaces with my photobashing skills and make art that is “good enough” for game jams and personal projects.
Solo-developing video games after hours is a hell of a steep climb, and being able to boost visual quality is a blessing – I don’t have a budget to hire artists, and can only rely on what I can do myself. A few months ago, my plan was to make a game with a ton of AI generated art using Disco Diffusion, because it hits that interesting sweet spot between “looks like proper art” and Weird Shit™, enabling me to use the generator as a co-creator.
That said, early on I established a few ground rules, so that I could be somewhat at ease with the idea of charging for something using tech that still exists in a moral grey area:
- No use of artists outside the public domain in my prompts: at the end of the day image generators are this weird mix of concept search and pixel interpolation engine, so mindfully avoiding people in the prompt gives me a little safeguard against ripping people off
- Being open about methodology and techniques: the biggest reason for things to evolve as fast as they did is because people were very open with their work, from artists that added alt-text to their portfolios to people remixing open colabs and exchanging prompt ideas
- Accepting artist takedown requests: I find it unlikely that there would be any actual plagiarism generated, but it’s good to keep in mind that things might need to change unexpectedly, and prepare for that beforehand
- If by some miracle money starts flowing into the project, invest it back in artists: in my case, shipping a game using AI art could mean getting money to fund actual artists. There’s a list of people I’d love to commission, and I’d really like not having to start an e-mail to them with “so, my budget is just some personal savings, what’s your lowest rate?”
Extrapolating
Pop culture is very aware of the “Three Laws of Robotics” by now, and it’s funny how we’re waiting for T-800s to walk around before applying them. Given the breakneck speed with which things have evolved, I imagine that there will be “perfect generators” before we figure out a way to do things fairly. Here are a few general thoughts on where my mind is at, and where I think things are headed:
- Scraping publicly shared data to create closed, commercial datasets/tools is not illegal, but is not really moral. This is similar to my stance on piracy: I’m ok with things being downloaded by broke students from developing countries, but profiting from piracy is thievery. Startups have to pay for compute so charging something is understandable, but as technology evolves and anyone’s computer can download a snapshot and run locally, if you’re charging people to use your models, make sure you’re paying the artists who make up your dataset.
- If the only way to fully abide by an artist’s request to not be included in your training data is to re-train from scratch, maybe that’s the right thing to do. You didn’t ask for permission, so maybe this is what constitutes asking for forgiveness. The environment is ripe for less nuclear solutions, and I hope people start investing heavily in this.
- AI image generation tools will be an integral part of game art pipelines within the next 2 years. Texture synthesis and upscaling are already pretty established, but I think the field of concept art will change considerably with inpainting and lookdev tools.
- In the long run, image generators will be a net-positive. AI generated art will not replace human artists, because it is inherently bound to pre-existing data, but it will enable people (both non-specialists and specialists) to realize their visions more easily.
- We’ll hit the “thousand bowls of oatmeal” point relatively soon. Things are super impressive now because it’s all new and flashy, but the general public will slowly develop an eye for “the generated look” and instead of going “WOW!” will start to go “meh”.
- Gatekeeping is (sadly) alive and well, and great artists who use image generation as a tool might be caught in the crossfire. I’m old enough to have seen Linda Bergkvist go from being one of the first “digital painting grand masters” to being bullied out of the community because she used photobashing as part of her process.
- Traditional, non-digital art will see a rise in commercial popularity. The perceived value of digital art will take a hit, especially as “perfect generators” start popping up in the next few years. This will happen to pretty much every discipline in the coming decades, not just art.
We’re at a point where boundaries are being discovered, and discussion must be fostered to make sure that we’re enabling people without ripping other people off. It’s early days enough that we can shape things to be beneficial for everyone, but that will take dialogue, which I think artists are already engaging in way better than the tech folks involved – and yes, that includes artists who are scared and pissed off about it.
As a game developer, there’s a strong “to make an apple pie from scratch you must first invent the universe” factor. And I can only imagine how many ideas I’ll never finish just because there isn’t enough time for me to build (or budget for me to pay people to do for me). So the older I get, the more I wish for a “make game” button that takes care of all the boring parts. But sometimes, I don’t want to generate a flower, I want to build a thing that generates a flower – there’s so much joy in the process itself. I think that’s a big part of the desire to make art, and that is often diametrically opposed to the idea of making money.
Do I think that AI-driven art generation is inevitable? Yes. Even if, in a never-before-seen plot twist, legislation preventing the current tools/techniques from being used pops up, we’ll inevitably reach the point where an AI is capable enough to create an image the same way artists do: by studying technique books, then getting together a mood board of reference images and “painting” something new from scratch. This means that while we need to mitigate immediate problems, the real solution is completely orthogonal to art: it’s societal.
I think the best way to close any discussion about art is a comic by Lynda Barry, which to me, is one of the most beautiful meta-art pieces ever made. We’re at the point where computers can generate incredibly interesting imagery, but still many years away from them being able to generate something like this, because while we’re on the fast track to artificial intelligence, we’re very far away from artificial sensibility.
Are you an artist, ethicist or AI researcher? Do you think I’m either a partial or complete idiot? Please let me know in the comments below! I’d love to refine my current stance on things.