An Apology, A Take Down And More Questions About AI!

Transcript

Well, I had another video already uploaded and set to release today, but there’s been a last-minute change of plans. Most of you know I typically produce videos weeks in advance to maintain my schedule, but there’s a first time for everything, right? So this is my first spur-of-the-moment upload.

And here’s another first for me. It’s the first time I’ve felt driven to make a response video, even though it’s kinda weird because it’s a response to one of my own videos. It’s also inspired by a particular commenter on my earlier video, AND it’s in response to a reply I received to an email I sent to Joe McCullough regarding this previous video. Which is this one, right here. Frostgrave vs ChatGPT.

And in yet another first for me, I’m going to retract a video and pull it down. It’s currently still up, but I’ll be pulling it down 24 hours after the release of this video. I want you guys to be able to see the comment thread I pinned there if you’re curious.

And I don’t consider any of this particular situation “ooh, controversy for the sake of controversy” – I actually consider it a microcosm of the much larger, world-spanning sociological predicament artificial intelligence is putting us in. Us as in, humanity.

All right, let’s pull the ripcord, we’re going into free fall, hope you packed a lunch. Or a beer. Oh, and a parachute.

Greetings good humans, and welcome to Tabletop Alchemy, where sometimes we find ourselves right in the thick of social conundrums and existential crises. And we do what hopefully we do best – we talk things out and discuss and have conversations. And where sometimes, your host has to make an apology.

And we thank our patrons for their continued and much appreciated support.

All right, we’re talking AI again, and this is a can of worms, no doubt about it. I don’t know how many definitive answers we’re gonna get outta this video, but it’s certainly gonna result in a few discussion-worthy questions.

So, the setup. If you don’t know, I recently posted a video in which I used ChatGPT to help me generate new scenarios to use with the miniatures-agnostic skirmish game, Frostgrave, which is written by Joe McCullough. In that video, I discovered that ChatGPT had at least the Frostgrave 2nd Edition rulebook in its dataset. But I also copied and pasted four scenarios from that rulebook into ChatGPT’s interface, ostensibly to give it some basis for learning the scenario format in the hopes that it would format its own output as a playable scenario.

Okay. So ChatGPT did surprisingly well at generating playable scenarios. 

Now, for the hairy stuff and the reason we’re here. Rule of Carnage commented on the video asking if my copy/pasting of the scenarios into ChatGPT was a breach of Frostgrave’s copyright (or perhaps trademark, either/or). 

I’ll be honest and tell you that when they asked me that, I felt a little twinge, you know, just that tiniest little bit of Spidey-sense that something might be awry in my own thought process.

I replied to their comment and we’ve had a fairly long discussion there, and I’ve pinned that comment thread on that video, so you can peruse it, at least for the next 24 hours. 

Now my intention here is not to prove or disprove Rule of Carnage’s assumptions or beliefs, nor my own. I’m just gonna discuss what my own thoughts were and are regarding some of the ideas in that comment thread, and pose some general questions that I don’t have answers for.

I also emailed Joe McCullough with a link to the Frostgrave video and asked him what his thoughts were on it and what I had done. And as Rule of Carnage pointed out, I 100% should have done that before I posted that video. Hey, what can I say? I’m a champ at making mistakes. 

Joe graciously responded with a thoughtful reply and a general wish that I had not posted the video. I have the utmost respect for Joe both as an author and game designer and I have respect for both Joe and Rule of Carnage as people – assuming I haven’t been interacting with AI constructs. 

Yes, that’s just a joke, but one that’s probably going to become a legitimate concern in the very near future.

Anyway, Joe was kind enough to point out that he didn’t feel I was maliciously trying to infringe on his rights, which is absolutely true. But he also pointed out that even though ChatGPT has Frostgrave in its dataset, he has never given permission for his work to be used in any large language model’s dataset, and that what I had done technically amounted to plagiarism or piracy.

Joe also pointed out that what I did is made a bit murkier by his own encouragement, in the Frostgrave rulebook, for players to use his scenarios as inspiration and jumping-off points to create their own scenarios for his game. But he definitely does not like the idea of piracy, which is essentially what both OpenAI and I did, even though I didn’t consider that before I did what I did. And for that, I am sorry.

Now, I’m going to say some things that will sound like I’m defending myself, but I just want to share what my thought process was for discussion’s sake, with the full understanding that my intentions do not absolve me of any wrongdoing.

Rule of Carnage has yet another point of view on what I did. They posed the notion that my video was suggesting to potential customers or players of Frostgrave ways of using ChatGPT to replace, or circumvent the need to purchase, actual Frostgrave books.

Now, of course, that was definitely not my intention. But … as we discussed back and forth in the comment thread, I have to admit there is the possibility that a viewer of that video might come away with that exact idea. But of course then I’m thinking there’s the whole question of – and this is a silly metaphor, but you’ll get it – if a borax miner shows a colleague how to use dynamite to help them dig a mine faster, and that colleague then uses that knowledge of dynamite to rob a bank, is the borax miner responsible for what his colleague did?

Of course, again, that doesn’t absolve the borax miner from mining illegally to begin with.

My intention with my video was to simply explore the possibility of using an AI to help generate new game scenarios to play. That was my entire reason for not just making the video but for doing the actual ChatGPT exercise. I did not – and would never – tell anyone to use ChatGPT to get around purchasing any of the awesome books Joe has written. Or any other author’s books. Having been a filmmaker, I’ve never pirated a copy of a movie. I never used Napster to pirate music back in the day.

I do, however, use Spotify. We’ll come back to Spotify in a bit.

In fact, in making the Frostgrave vs ChatGPT video, I actually thought I might be contributing to the extended value of owning Frostgrave and all its associated companion books. It just never occurred to me to say things like that in the video, which is neither here nor there relative to what we’re actually talking about. I just figured anyone who would be generating Frostgrave scenarios would be doing so because they’d run out of content to play. Yes, naiveté might very well be my middle name. You don’t know.

But I also do now think that pasting Frostgrave scenarios into ChatGPT was wrong of me to do, because of the copyright issue and the infringement it posed upon Joe’s rights as an IP creator, and I’ve got Rule of Carnage to thank for spurring that realization.

But now let’s talk about AI as a tool, because, fortunately or unfortunately, I don’t think it’s going away. Does that make all the ethical questions surrounding it go away? No, not at all. It just means it’s something we have to deal with, somehow.

So, I have a question, and this is for everyone. It’s just a theoretical question, but let’s see how it goes. If I had a copy of ChatGPT or some other discrete writing AI that lived on my personal computer, with no internet access, would it be okay for me to copy and paste in the Frostgrave rules and scenarios I have purchased, to allow this personal AI to add that information to its dataset, and then use the AI to generate or inspire or help me create new scenarios for me to play? Either solo or with friends. And I never publish those scenarios in any way – is that an acceptable use of AI with Frostgrave?

To me, that seems 100% acceptable. And legal. But, you know, I’ve been wrong before, right? Sometimes I need outside opinions and different viewpoints to come to unconsidered or new conclusions for myself. But as of right now, I think that the situation I just described should be free of controversy. But I don’t know. 

Now let’s look at the case of Dungeons and Dragons. So many people are using ChatGPT to create all kinds of D&D-related content or material or whatever. But D&D has the Open Game License, so I suppose that’s what makes D&D an acceptable IP for use with ChatGPT. And this segues into Spotify with the whole license thing.

The Napster-to-Spotify timeline seems to me to be indicative of what’s going to happen with ChatGPT, large language models, and art AIs in general. I think initially artists and writers are going to sue – I mean, artists are already bringing class-action lawsuits against art AI manufacturers – coders? – whatever, the companies that create and commercialize art AI – and writers are probably not far behind.

And this is kinda what happened with Napster. Someone created an app that illegally distributed commercial music, a metric shit-ton of people used it, and then it was sued out of existence. Years later, we have Spotify and YouTube Music and Apple Music. We have a music industry driven entirely by tiny license fees and streaming apps. Movies and shows on streaming services operate in much the same way. And I think large language models are going to end up doing the same thing. And ultimately for creators, that’s not super-great, because the licensing fees are so minuscule. But on the flip side of that, everyone gets to enjoy everyone’s music for affordable prices, even though I know the musicians aren’t making a lot of money. I do definitely feel the artist/distributor profit ratio needs to be adjusted.

Here’s another weird question I have. Let’s say OpenAI, the makers of ChatGPT – which from a quick Google search appears to have scanned every single title available on Amazon – purchased a copy of every book in existence. So they own a copy of every book, and then they feed every book they own into their AI’s dataset. Now what? Is that legal for them to do? If it is, is it legal for them to then give the public access to that AI and its dataset? What if they expressly forbid their AI from simply outputting the entire contents of a book? Is using the AI legal at that point? This is why I feel like a Spotify-style licensing process will be the way things work for these large language models and copyrighted written material in the future. But, you know, who knows.

There are just so many weird questions – like, published books make up only a fraction of all the publicly available content that an AI can consume or incorporate. Do there need to be licensing fees for that other material? Should I have a button on each of my videos here on YouTube to opt in or opt out of being incorporated into an AI’s dataset? I don’t know. I mean, I post the videos knowing they’ll be public.

It’s weird, because in a way, artificial intelligence IS us. It’s the sum total of human output, aggregated and made accessible to all humans. Sort of. There are just so many weird questions we need to deal with.

One overall idea I can’t seem to shake is the notion that AI tools are here, they’re going to keep getting better at what they do, and if someone chooses to NOT use AI in certain capacities, that someone is probably going to be left behind – they’re not going to be able to compete with other folks who do use AI to augment their work. This is a thought independent of the main video topic; I’m just ruminating on AI in general. There are all kinds of metaphors for this of course, but my goofy one is: if I’m a lumberjack and I’ve been using a hand axe to chop down trees for my business and my competitor starts cutting trees down with a chainsaw, well, I’d better get to Home Depot and pick up a chainsaw, right?

Now I know there’s a lotta folks out there who would rather AI didn’t exist at all. I can’t say I blame them. Personally, I really like the idea of AI, insofar as it’s a tool that allows me to work and create not just faster but in more unique ways – and in ways I’m unable to on my own because I don’t have a certain skill set. I want AI to do all the menial tasks for me. I want to direct what AI does for me. It paints a picture and I tell it to tweak this over here, change that over there, make that funnier, make that more dramatic, delete that paragraph. I want to create with just my thoughts alone, as weird as that sounds. That really just amounts to me applying my aesthetic sense of taste to a product or piece of content or work of art that I wanna share with the world. And that’s really where I see all this going.

Something that frightens a lot of people about AI is that it levels the playing field of creativity in a certain way. You don’t have to know how to write like Stephen King or paint like Picasso to generate content of a similar quality. But I think those fears might be a little bit unfounded, because it’s still going to be the human operator’s aesthetic taste and artistic choices in directing the AIs that determine the success or failure of that content – success meaning how many other people enjoy what that human operator has produced.

These are all crazy notions, probably, and almost certainly naive, and things will probably never be that simple. There are lots of ways AI is going to be abused in our society. And maybe the people out there freaking out about how it’s going to end society will end up being right. I personally don’t really buy that; I think humans might use AI to end society, but I’m not convinced AI itself is gonna do anything on its own. But I also have no idea. I don’t think anyone does. These kinds of conversations are the only way for us humans to stumble our way through this. As Rule of Carnage put it, we’re all figuring these things out, and that includes governments, lawmakers and experts as well as you and me. And I think making mistakes is endemic to the process of learning how to deal with something new.

So let’s wrap this up. I think I made a mistake with the Frostgrave video and I apologize for that; I think using AI to generate Frostgrave scenarios could be totally fine if or when Frostgrave is ever licensed to an AI dataset; and I think going forward I’ll try to be much more conscious of decisions I make when using AI as a creative tool.

So … go apologize for making a mistake – sorry, I was trying to do my normal type of sign-off there; it doesn’t feel quite appropriate.

I’ll be looking forward to reading any and all comments. You know I like discussion and conversation; it’s definitely one of the things that helps me grow and be inspired. I hope it does the same for you.

See ya!
