A reader on BDSMLR asked me for my thoughts on using AI-generation models such as Dall-E and Stable Diffusion to create erotic art assets for All These Roadworks.

This isn’t the kind of topic I usually cover in Reality Check, but I thought my answer probably deserved a better platform, so I’m presenting it here for those who may be interested.

The short answer, in case it’s not clear, is that I’m not planning to change the way I obtain art assets any time soon. If you’re an artist interested in providing erotic art assets of any size or detail at budget rates, you’re always welcome to get in touch and provide links to your portfolio.

(The images in the header art were generated with Dall-E mini using the prompt “hentai art of girl in bikini”. Models such as Stable Diffusion and the main Dall-E model can generate much more realistic depictions of humans.)


Intro Note

I have a legal background, although IP law is not my speciality.  

Commercial use of AI art

(1) You can’t put the genie back in the bottle.  AI image generation exists, and the reality of the future is that people *will* be able to generate professional quality imagery on their home computer using text prompts, and it will be indistinguishable from something they might generate in the traditional way.  Whether or not the tech is 100% there yet, it clearly *will* be.  Attempts to regulate it by either hard-banning it (laws which forbid it) or soft-banning it (laws which disincentivize it, such as expanded intellectual property (IP) laws) are ultimately doomed to failure.  This *is* the future we will have.  

(2) The way AI image generation works is by sampling a wide corpus of existing art and effectively “remixing” it to create new works.  This is really not very different to the way the human brain creates art.  Saying that it *inherently* infringes on IP because it is inspired by existing art betrays a misunderstanding of how art is made by humans.  It’s a foolish argument.

(3) However, the AI systems do very often produce art that is close enough to existing art, or existing styles of art, that it might infringe upon IP rights.  When this happens, people can sue.  This is absolutely no different to when a human generates art, deliberately or otherwise, that infringes upon someone else’s IP, and we don’t need new laws for that.

(4) When a *human* generates art that is very similar to existing art, they are likely to know that, because they will have seen and remembered the similar art.  As a result, they will likely *know* that they can’t use the infringing art commercially.  However, current AI art generation doesn’t provide that assurance.  It might generate something 98% identical to existing IP, and the user would have no way of knowing – but still be liable to lawsuit if they use it commercially.  And the fact that they used AI to make it doesn’t make them less liable – it probably makes them *more* liable, as they should have known infringing art was a possible output of the algorithm, and were reckless.

(5) AI may also generate an output that has previously been provided to another user, or one that is very similar. Given that you’re using the same model as that user, using your output commercially *may* raise legal issues, whether or not the previous user has commercially exploited theirs (or even shared it publicly).

(6) As a result of points 4 and 5 above, I think it would be *very unwise* to use AI-generated art for commercial purposes at this stage, and I think intelligent corporate users are likely to come to that same view, at least until the AI models are able to provide better feedback about how close the end result is to any individual image in their corpus or any previous image they have generated.  (And given the ability to run the models on a private terminal, I don’t see how any instance could certify what art has been created on other private instances.)

(7) Practically speaking, the best results I’ve seen from AI art so far involve giving very specific prompts that copy the style of identifiable artists by drawing directly on their existing work.  I think there’s a strong chance this kind of art could support a lawsuit – or that laws are likely to change to allow such a lawsuit.

(8) Also, from a practical perspective, art that closely copies an existing artist has limited corporate use, because it doesn’t present a unique look or brand that distinguishes me from competitors.  So far, AI seems to have trouble either generating a unique, original, and recognisable “look”, or applying that look to new images consistently enough to establish branding or identity.

SUMMARY:  AI art is interesting, but I think it’s too high risk to use commercially at this stage.  Probably someone is going to use it that way anyway on a large scale in a “move fast and break things” kind of way, but I don’t have the kind of privilege where I get to do that kind of thing, maybe go bankrupt, and then start again, in the manner that some people do.

AI-generated porn

The question of using AI art to generate porn – and specifically porn based on real people – is a much thornier one.  The short answer is that the existence of low-cost, home-computer AI art means we’re going to have to rethink our understanding of intimate images.  We must accept that people have always had the capacity to imagine realistic nude images of other people in their heads, and those with artistic talent have always been able to express those images to other people in a realistic fashion.  The reality is that in future people *will* be able to generate nude images of anyone, doing anything, at any time.

Sensible laws regulating this kind of thing will probably look at:

  • intent (i.e. it’s the intent to use a nude image to cause harm that is criminalised, not the creation or sharing of that image);
  • honesty and defamation (whether a nude image is presented as fiction, or as a truthful depiction of that person); and
  • commerciality (people owning the rights to the commercialisation of their likeness, regardless of who generates it or how it is generated).

What does this mean for artists?

Regardless of all of the above, I think things *are* going to get shit for traditional artists.  There will always be work for them, for a range of reasons, and much of the art generated by AI will be made by users who would not otherwise have paid for art, so it’s not all lost income for artists.  But it’s absolutely the case that some of the work they would traditionally have been paid for will now be AI-generated, and that means less money coming to artists as a whole.  I don’t know what the answer to that is, but as I said at the start, you can’t put the genie back in the bottle.

Sensible laws

Having talked about “sensible laws”, I have to say that the history of regulating technology is that we rarely get sensible laws, and we’re probably going to see a lot of wrong-headed and unworkable laws get passed in a lot of jurisdictions that cause a lot of chaos and third-party harm.  The likelihood of such nonsense is also a relevant risk factor to take into account in using AI art.

– All These Roadworks
