What We Learned about Generative AI on Day One of Digital Dragons 2023

Gamezebo is at Digital Dragons 2023, Krakow’s annual games conference.

This isn’t our first time at the event, though it has grown markedly since we last attended in 2014.

For a start, it’s graduated from a single hangar-like hall to a sprawling multistorey conference space called ICE Krakow Congress Centre.

We’re told that there are more than 2,300 attendees this year, along with 140 speakers and 100 lectures and workshops.

Naturally, we’re all about the games, but we took some time to sit in a few talks on the most fascinating, exciting, confusing, and vaguely terrifying topic in the world right now: generative AI.

They were:

  • The Elephant in the Room – Generative AI, by Monika Gorska and Lene Marcinoska-Boulange of Wardynski & Partners
  • To AI or Not to AI? – The Ins and Outs of Using Artificial Intelligence for Creating Game Assets, by CD Projekt lawyer Kuba Jankowski
  • Make gAIns not clAIms – how to safely use AI in your videogame, by Michal Pękała, a senior attorney at Rymarz Zdort Maruta

We’ve tried to summarise what they collectively had to say. Any factual errors that you might find below are definitely our fault and not theirs.

Copyrights and Wrongs

Did you know that the output of generative AI isn’t protected by copyright?

Anybody can use that hilarious AI-generated image you made of Homer Simpson with no clothes on, and they don’t have to ask your permission or give you credit.

That’s because a “work” must be the result of human intellectual effort. It’s why photographer David Slater was famously unable to claim ownership of the selfie that a monkey took using his camera in 2011.

You can claim copyright over the bits of an AI-generated image you have personally modified, but everything else is fair game.

Here’s the rub. While AI-generated images are copyright-free, much of the material a generative AI tool trawls through as it scrapes the internet for inspiration is not.

With the possible exception of Adobe Firefly, AI tools are ravenous and indiscriminate, thanks in part to laws that allow the use of copyrighted material for the purposes of training AI.

As a consequence, if you ask a generative AI tool to create images of “yellow plastic toy men” and publish the output, you’d better prepare yourself for a very difficult conversation with LEGO’s legal representatives.

Likenesses and voices are legally protected, too, so you’re exposing yourself to legal peril if you publish your AI-generated image of Keanu Reeves speaking like Donald Trump, however funny it might be.

These are stark examples, but the fact is it’s possible to find copyright breaches in all sorts of AI-generated outputs, however unintentional.

How to Stay Safe

Game developers can protect themselves by, among other things, only ever using AI-generated assets as inspiration, and clearly labelling those assets so that legally vulnerable material doesn’t find its way into their work.

Nobody reads the small print, but it’s worth taking a look at the user agreements attached to AI tools. In many cases they can essentially publish your private information – not because they’ve gone snooping but because you’ve fed it to them. Like a chump.

This happened to three Samsung employees, one of whom entered the transcript of a confidential meeting into ChatGPT and asked it to produce a summary.

Big mistake. But it’s possible to make a much bigger one. If you work for a publicly listed company and you give a generative AI tool information that could potentially allow a user to make advantageous trading decisions, you might just be guilty of insider trading.

And that could mean jail time.

Doors Closing and Opening

We also learned that artists are in trouble. While there are absolutely ethical questions around the capacity of generative AI to make artists redundant and undermine the entire enterprise of human creativity, there doesn’t seem to be an easy fix.

But there is a whole new job. Outputs may not be copyrightable, but it’s feasible that prompts – the instructions that tell generative AI tools what to make – could be.

Enter prompt engineers. It seems there’s a growing number of coders who specialise in creating prompts for generative AI, and they’re commanding eye-watering sums for their skills.

We came out of that talk nervously uncertain as to whether the Picasso of the future is going to be a prompt engineer.

Ultimately, this is the beginning of something, and the beginning of things is often a bit like the Wild West. There’s every chance that we’ll wrestle the flailing squid of AI back into the water.

There are three potentially pivotal court cases going on right now, two of them involving Stability AI.

The first of these has been brought by Getty Images, which argues that Stability AI has been conspicuously violating its copyrights by generating eerily familiar images that even contain versions of the company’s distinctive watermark.

The second has been brought by a group of artists against not only Stability AI but also Midjourney and DeviantArt, arguing that generative AI is creating outputs derived from their original work.

Lastly, a group of programmers is taking on Microsoft, OpenAI, and GitHub over claims that they are scraping licensed code for their Copilot AI initiative.

The outcomes of these legal actions will hopefully begin to create a foundation of case law.

Meanwhile, legislation is being created in various jurisdictions that might just prevent generative AI from taking over the world – or at least plunging the games industry into turmoil.

Rob Hearn has over ten years of experience in mobile journalism, so he is our in-house sage on all things mobile, naturally.