While we were appropriating

While we were appropriating, the machine was learning. What we were appropriating and what the machine was learning may have run parallel for some time, yet the machine was more studious (trained by studious people) and concentrated on a large volume of structured data. Machine learning is massive; in comparison, humans can access only a small part of the existing and potential learning material. An equally important difference, however, is that humans attribute a concept to the product of their appropriation, while machines generate content on a given concept using pre-existing content (appropriation). At both ends of the line stands ‘the concept’, which for the moment derives from and stays with humans.

What is appropriation: to incorporate elements from existing works, such as texts and images, into one’s own work without much transformation and without the creator’s permission.

In the arts, appropriation has always been a practice, as in Dalí’s Mona Lisa, the African masks in Picasso’s paintings, or the artworks of the Dadaists. Where the limits in art are vague, textual works are subject to more control. The academic world has sorted this out: in your writings you must mark every bit of text, phrase, or idea that is not strictly yours and put the reference in the footnotes or endnotes. In any other case, appropriation is called plagiarism. In the art world, boundaries are loose and the issue is addressed case by case, usually through a legal process. However, efforts are being made to draw up rules for image appropriation, such as the Appropriation Art Guideline, a policy drawn up by Pictoright, the authors’ rights organisation for visual creators in the Netherlands.

The recent release (November 2022) of generative artificial intelligence bots by OpenAI, along with increased media attention, has once more sparked discussion about the relationship between humans and machines, the ownership and copyright of both the source material and the generated output, and the potential job losses resulting from increased automation.

The talk is about the generation of texts and images, including artworks, using algorithms that analyze and recreate content and form/style. The AI uses text to generate text and prompts (commands) to generate images. Image-generating AI can also create image variations based on a generated or an uploaded image. With the text-generating AI bot ChatGPT you can have a smooth conversation with the machine: you ask a question and the machine generates an answer. When asked about the impact of text- and image-generative AI on employment, the machine answers:

“As an AI language model, I do not have personal opinions or feelings. However, I can provide information and context on the topic of the potential impact of AI on employment.”

It also states that its training stopped in 2021, so information after that year is outside its knowledge. The generated texts seem quite general; they can be used as a basis for further editing into a specific text, for example for marketing purposes, (micro)blogging, reports, etc. For shorter advertising texts, the AI-generated text suffices.
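For the technically curious, the question-and-answer exchange described above can also be reproduced programmatically. Below is a minimal sketch, assuming the pre-1.0 OpenAI Python client; the model name and parameters are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch: asking a text-generation model a question via the
# OpenAI Python client (pre-1.0 interface). Model name and parameters
# are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # personal API key

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3 model of that period
    prompt="What is the potential impact of text- and image-generative AI on employment?",
    max_tokens=200,            # cap the length of the generated answer
    temperature=0.7,           # moderate randomness in the output
)

print(response.choices[0].text.strip())
```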

While the generation of text seems to go smoothly, the generation of images is more of a struggle. For example, when experimenting with DALL-E, which is described as “capable of creating images from natural language descriptions” (such as ‘a red kitten with back light on ears’), it soon becomes obvious that one has to learn to ‘talk’ to the machine in order to get something other than a smudge or a caricature out of it. In other words, usable prompts (commands, strings of text) are needed to generate something close to the desired image. Easy ideas come first: numerous examples are images of kittens and puppies, or zombies and cartoon heroes. Moving a bit further, the generated images become less interesting, ranging from illustrative clichés to incoherent smudges, or staying too close to the source image (without its flair) to be considered new creations.
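As a sketch of what ‘talking to the machine’ looks like in practice, here is how such a prompt could be sent to the DALL-E endpoint, again assuming the pre-1.0 OpenAI Python client; the image size is an assumption for illustration.

```python
# Minimal sketch: generating an image from a natural-language prompt
# via the DALL-E endpoint of the OpenAI Python client (pre-1.0 interface).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # personal API key

response = openai.Image.create(
    prompt="a red kitten with back light on ears",  # the wording largely determines the result
    n=1,              # number of images to generate
    size="512x512",   # assumed size; other square sizes are also supported
)

print(response["data"][0]["url"])  # URL of the generated image
```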

The machine still has a lot to learn about art, and words alone will not do the job. That aside, and despite the fact that there has been AI experimentation in the art world for a few years already, visual artists are starting to have dark thoughts about their role in the future, or the near future in the case of illustrators and graphic designers. At present, it is good to note that DALL-E is still in research (beta) mode; that the generated images do not fall under copyright law because they are not human creations; and that uploaded images are considered ‘feed’, are taken into the database, and can be used by anyone.

[Figure: DALL-E generated image, ‘Van Gogh style painting Cat with bandaged ear’ (off topic)]

Thinking backwards, a number of points line up: the question of the quality of the generated images; the question of ownership and copyright of the appropriated material, and of the generated material as well; the question of prompts; the question of the quality of the generated text; the question of the quality and extent of the fed and learned material; and the question of the impact on creative professions.

In experimental and open mode, these text- and image-generating tools are fun and fine. It is their extent and speed, as well as the natural-sounding language of ChatGPT, that make these tools a mega-appropriation project. This will bring changes in laws, jobs, ethics, and aesthetics. It is a game changer, worth checking out. Try it and enjoy it before the serious questions, like ‘why’ and ‘what for’, pop up. There might be a little traffic jam on ChatGPT.

“We’re experiencing exceptionally high demand. Please hang tight as we work on scaling our systems.” [Sincerely yours, ChatGPT]

P.S. 1 Artifacts (and texts) that have not been digitally documented are not part of this game.

P.S. 2 An interesting read, an interview with ChatGPT (read the comments too): Thoughts on AI’s Impact on Scholarly Communications? An Interview with ChatGPT