after a short period of neglecting my blog (mostly because of easter, but that is not the entire truth) i feel another blog post coming up.

i attended the expectably interesting talk A Walk in Latent Space: Generative AI in the Creative Process by martin pichlmair (whom i had the privilege of supervising during his phd here at TU Wien, which is also why i expected the talk to be interesting). before his talk (and afterwards), we had the chance to have a little chat in one of the sunny spots at MQ, and the topic of copyright of synthetic media came up.

ah yes, i should mention that i adopted the term »synthetic media« from some article somewhere, because i think it is spot on.

standing there in the slowly fading evening sun, i argued that i like the idea that synthetic media cannot be copyrighted – which incidentally is the current position of the US copyright office, with whom i find myself rarely if ever on the same side of an argument – because these texts, images, sounds etc. should be seen as the result of a rather high-tech process where we collaborate with more or less all of humanity.

this last bit comes from the (imho) brilliant piece There Is No AI by jaron lanier, who could be my buddy (we were born but three years apart) but isn't, probably because we never met.

in this text, he tries to reframe our discussion about »AI« by proposing that we view these systems not as technological entities, but as the result of all the human work that went into creating their training material. that way, we can take our attention off any (for now 100% imaginary) dangers of what the systems could do on their own, which is good because all these »AI«-systems are very far away from doing anything on their own. instead, we should focus on the harm that people can do using these systems. note that this is not a »guns don't kill people«-argument, as it shifts the talk to the affordances of »AI«, instead of to imaginary things that these systems will do on their own.

i think this is a very good idea, and i would like to take it one step further.

in a course i am currently doing with astrid weiss, we talk a lot about chatGPT, and it was interesting to see how students referred to the system. at the start, the occasional »he« was used, but we were quick to point out that »it« would be the more suitable pronoun. but, in the light of the above discussion, i propose that we use »we« – not in the sense of »the machine and i«, but in the sense of »we, all of humanity«.

when i use chatGPT to create text, this text is in fact created by all of us. the materials chatGPT was trained on are the works of billions of people and nothing else. it can be argued that the role of the model and the software is insignificant compared to the work that went into writing all the material the system was trained on. so, the author of this text is all of us: me, because i prompted the text, and everybody whose work was used to train the system that produced it.

this framing makes the no-copyright-for-synthetic-media position instantly comprehensible: you need to substantially transform the result to make it something you can copyright – which is exactly the same as with any other work, just like arnulf rainer's blackenings, overpaintings and maskings can and should be seen as genuine pieces of work.

but the synthetic media artefact itself was created by all of us together, with the help of some admittedly brilliant software, and made possible by obscene amounts of energy (which might be a topic for another time). to reflect this process, we need a form of »we« that expresses the humanity-embracing inclusiveness of the technical process that yielded the artefact.

what do you think? i am open to discussion on hci.social/@peterpur or in the comments below.

image: »a million people working together«, created by everybody via midjourney.
