The Progression of Generative AI and How to Protect Yourself as an Editor

This post may contain affiliate links. Please read my disclosure policy.

It’s been three years since I first talked about ChatGPT on the podcast, which is a lifetime in the tech world. How has generative AI progressed in those three years, and what do freelance editors need to do to protect themselves? That’s what we’re diving into on this episode of The Modern Editor.

Before we get started, it’s important to note that I hate generative AI with a fiery passion. If you’re looking for a balanced approach to the topic, you won’t find that here. However, I’m also not going to spend the whole podcast ranting about it. I’m here to provide practical tips you can implement in your editing business in the age of generative AI.

  • It is high time I did an update about generative AI here on the podcast. It's currently April of 2026, and my first episode about ChatGPT happened in February of 2023. So basically another lifetime ago, especially in tech years. And we've learned a lot since then. So let's take a peek today at what's happened, some more predictions about what I think is going to happen in the future, and most importantly, how we as editors can protect ourselves as much as possible. 

    Welcome to The Modern Editor Podcast, where we talk about all things editing and what it's like to run an editorial business in today's world. I'm your host, Tara Whitaker. Let's get to it.

    Hello, everyone. I am going to kick off right off the bat by saying this: I hate generative AI with a fiery passion. So if you are looking for a balanced opinion on the pros and cons of generative AI, this is not the episode for you. However, the last thing that I want to do, or I'm going to do, is a podcast episode where I'm just ranting and raving about generative AI.

    I don't think it adds much to the conversation. I don't think it's productive. And quite frankly, it feels very reminiscent of a bunch of podcasters that come to mind who just so happen to have a mic and like hearing themselves talk. And I'm talking about mediocre white men, so we're not gonna do that here.

    If you do wanna see some mild rants from me, my Instagram Stories is where you'll find those. But for today, we are going to have a real conversation about how generative AI has progressed in the last three years, things you need to be aware of that you might not be, and some ethical and legal consequences, particularly for editors.

    Now, I hope at this point in the game, you've been keeping yourself educated on the topic of generative AI so that you have a stance and an opinion. If you don't, this is your gentle reminder to do so. And whether you've been around for a while or not, let me just be clear: I don't tell you what to do, other than a few things. But one thing I am going to say you need to do is have a stance on whether or not you use generative AI, and on whether or not you will edit content that was created with generative AI.

    Now this is not meant to be a scare tactic or make you not want to edit anymore or to start editing in the first place, depending on where you are in your journey. It is about being smart, keeping up to date on our industry, which is what a modern editor does, and protecting ourselves to the best of our ability.

    All right, so first, let's go back in time. Let's go back three years ago, when I first talked about this. If you want the full scoop, check out episode 21. In that episode, I focused on ChatGPT because that was the big thing. That platform, just as a reminder, was built by OpenAI, and one of the founders of OpenAI was Elon Musk. So let us not forget that nugget, because this was, again, back in 2023, way before DOGE and way before the bullshit with the US federal government.

    So we talked about ChatGPT. I relistened to the episode, which is always a wild ride, but 2023 Tara was slightly naive but also right in some regards. And I will be a hundred percent honest—it gives me no pleasure in saying that at all. I wish I was totally wrong. However, I was not.

    So at this point, we now know exactly what generative AI is. We know where it illegally got its data, which, as I suspected, was from stolen books. And in that episode, I focused on authors and editors not being replaced, and I still stand by that, but with some caveats and admissions.

    We have seen a ton of companies lay off their writers and editors in favor of generative AI. So I do want to acknowledge that people have lost their jobs because of generative AI, and I suspect they will continue to. However, I think there was a huge, you know, shift when ChatGPT launched in 2023. Everybody was like, oh, it's the next greatest thing, and we're just gonna dump all of our writers and editors.

    We have seen that scale back, which is good. It's not great, but it's good, or at least better. You know, we've seen some companies hire back those people. We've seen companies hire editors who know how to edit AI-generated content, which is eh. But there has been some recalibration: companies that fired everybody who's creative and knows how to write and edit are starting to realize maybe that wasn't such a good idea.

    Interestingly enough, in the last three years, more authors and editors have started using generative AI. I've noticed it on both sides, and it's more on the corporate or business side as opposed to the indie author, self-published side. Probably because editors might not want to explain to their indie authors that they used the author's potentially stolen property to edit. That's just a guess.

    But there are a lot of people using it to write books. Go on Amazon. It's flooded. On the flip side, I am thankful to see that there is a lot of pushback on this. If you can't spend the time to write something, I'm not gonna spend the time to read it, and that seems to be a thought that is gaining traction, which I like to see.

    But we've learned a lot about generative AI since 2023, and I'm gonna go over a few more of these things, in no particular order. First, we saw the Anthropic lawsuit happen. In a nutshell, authors had their books, their intellectual property, stolen; they sued; and they won a settlement.

    Each author that was affected is supposed to get a minimum of $3,000. And then depending on how they were published, they might have to split that with their publisher. I'm gonna put a link in the show notes so you can see more details about all of that. But this is not the last we're gonna hear about lawsuits in terms of generative AI. How they'll play out is anyone's guess. But this was the biggie since the last podcast episode I did.

    We have also learned about the environmental impacts. The electricity and energy demand in the areas that have data centers is astronomical. The water usage is disgusting. There are so many other environmental impacts, but I want to make a very clear point: Driving a car is not the same as using generative AI in terms of environmental impact. It is a privileged and classist take to say so.

    People need cars to travel, to get to work, to get to school. And yes, gas cars are not the most amazing thing for the environment, for sure. But it is our current reality that we don't all live in places with public transportation or walkability. We all have different abilities. And we all, well, I'm gonna say we all, need a job or a career or a business to, you know, live. And sometimes that takes a car.

    And using generative AI is 100% a choice. You are not forced to use it. Your livelihood does not depend on generative AI. So please do not compare driving a car to using generative AI as if they have the same impact. Please, please.

    Health impacts. There is something now known as the data center “hum.” I will put another link to an article about it in the show notes. The areas around these data centers are plagued by this terrible noise, and there have been reports of higher rates of asthma, respiratory issues, headaches, migraines, nausea, and hearing loss.

    And guess what? It's not just adults experiencing this. It's kids. Kids. These poor kids who live next to a data center now have to worry about this on top of everything else. And let's be very clear: These data centers are not being built in white affluent areas. They are being built in rural, less wealthy areas with minority populations.

    This feels very Erin Brockovich, very Flint water crisis, and I'm not having it. I just, I don't like any of this, but I really don't like it affecting kids. That pisses me off to no end because it's unnecessary. All of this is unnecessary.

    Okay, moving on. We have also learned about the real effects that it has on our brains and our ability to think critically. Now, in my humble opinion, critical thinking is at an all-time low. Can we please not make it worse, for the love of God? MIT recently did a study that found people who use generative AI have reduced brain activity and memory. I'm sorry, but I find that terrifying.

    I was always taught growing up, for better or worse, that no one can take away your education. It was always ingrained in my head, like, go to college, you know, then you're set for life, right? Ha ha, elder millennial here. That is not the case. But in the same vein kind of, no one can take your education. No one can take your brain unless you give them permission to. 

    No one can take your thoughts, take your, you know, what goes on inside that beautiful brain of yours. But what happens when we willingly allow ourselves to not use our brains like we should? We become easily manipulated. We can't think critically. And I want you to think about who benefits from that. Is it people who have your best interests at heart? No. That whole thing, I just, it freaks me out.

    Like, I will not be manipulated. I will not be controlled. I cannot. So for me, that's such a big issue with generative AI: how it can affect your brain. Social media already terrifies me, and I've been very open about my addiction to it and my steps toward breaking that addiction. This terrifies me even more. It's just a whole other level. So think about that.

    We've also learned how wildly inaccurate it is. We've got hallucinations, we've got straight-up false citations and made-up case law, and the fricking US government produced documents that cited laws and cases that didn't even exist. How embarrassing. How embarrassing. Because that data actually does exist. That's why we have libraries. That's why we have legitimate databases. All it would've taken was a few extra minutes to look that up, but oof. Gotta take the easy route, right? 

    No, it's just inaccurate, plain and simple. So those are just a few of the things that we have learned, and I could go into deep detail about all of them. But in the interest of time, we're gonna leave it at that, and we're gonna talk about some things that I hope and think will happen in the future in terms of generative AI.

    This first one is, I admit, a hundred percent a hope, but I do think that the bubble is gonna burst pretty soon. And what I mean by that is this: Something gets invented or created, it gets big, and a lot of investors pour money into it thinking they're gonna get a big payday. Then when that doesn't happen, a lot of people lose money and it crashes the economy.

    Think of the dot-com bubble. Think of Lehman Brothers. I think that's gonna happen soon. I hope. I really hope, not that I wanna see the economy crash any more than it already has, but I do want the bubble to burst with the quickness.

    I also am not holding out huge hope for there ever being a solid AI checker. There are so many people out there far smarter than me, so it might happen, and that would be great. But I just don't see a solid AI checker being created anytime soon.

    I predict that the law will forever lag behind. We know that because of the internet: the law never caught up to the internet, and it's never gonna catch up to generative AI. I cannot tell you how much I hope I'm wrong about that. If I am wrong, I will shout it from the rooftops. I will tell everybody I know that I was wrong. I just don't see it happening.

    I do think, unfortunately, that more authors and editors and publishing professionals are going to have their ethics and reputations challenged publicly. I'm gonna talk about that in a little bit, but we have seen it happen time and time again where someone suspects generative AI was used, posts on social media, and it spreads like wildfire. And there you go.

    I think publishers are going to be hasty in signing books that are, you know, popular and have a good following without doing their due diligence. And then when the authors get accused of using generative AI, the publishers are gonna dump the author to save their own ass rather than stand up for the author, if they can be stood up for, or admit they're wrong. I mean, we've seen many examples in the past where a publisher is 100% gonna save their own ass over an author they signed.

    I think they're going to keep acquiring books that are questionable in terms of generative AI, as long as they're written by white men. I think women and authors of color are going to be accused more of using AI and it will hurt them more than it will a white man being accused.

    But on the flip side, 'cause it's not all doom and gloom, I do know that there are editors and authors and other publishing professionals out there who are going to continue to join together and stand up for people whose work is getting stolen, leading with integrity, and spending the time and energy and effort that their readers and authors deserve.

    There are ethical, amazing people out there and there will continue to be—that's not going away. Or I really hope it's not. But I do think we're gonna see a lot more community come into play too, which, you know, I'm all about. So we'll make it through. It's just gonna be together. 

    And with all of that being said, let's talk about how to protect yourself as an editor. It's so wild, because while I was writing an outline for this episode, I was just thinking, you know, I've been at this for fourteen years, I think. Fourteen. Yeah. None of this was a thing fourteen years ago.

    We did not have to worry about any of this stuff, and now we do, which is why it's important to keep on top of it. It's also important to realize that there is no 100% guaranteed way of, you know, fully protecting ourselves from everything. There never has been, but it just feels like it's getting more and more sticky.

    In our industry for sure, and in other industries too, reputation matters a lot. Publishing is a small world, but it's also a big world. And we know how a single social media post or a single accusation can derail your business and your career. So it's more important than ever to have as many protections in place as possible, to the best of our ability.

    Now we cannot prevent anyone from accusing us of something, right? I mean, I could go on social media right now and accuse my grandma of being an alien or something. You know, we see that all the time. Nobody can prevent the accusation from happening. What we can do is prove that we didn't use generative AI by not using generative AI.

    I hope that's clear. The only way to prevent yourself from getting in trouble for using generative AI is to just not use it in the first place. Does that make sense? I hope. And I say this because in the year of 2026, it's very hard to get away with stuff, right? Like maybe back before the internet, you could disappear, you could go off the grid, and it was very easy.

    We didn't have social media and online presences and, you know, it was just different. Now everything is digital, everything is traceable. So you can have your computer searched, and they can find out if you've used generative AI. I know this for a fact, and I'm not gonna give away any details, but in a lawsuit—it's not me, by the way; I did not get sued—you can have your computer confiscated and searched. So if you say, I did not use generative AI, and you did, it might be traceable.

    Here's where I wanna talk about data retention terms. Different platforms have different policies, right? ChatGPT will say, we don't save any of your data past so many days, dah, dah, dah. Quite frankly, I don't believe any of them. They literally built their businesses off of people's stolen work, so I take everything they say with a grain of salt. And they defy orders too; they defy the law. So what they say is not necessarily true, and I would not put a lot of stock in it.

    If you do use generative AI outside of your editing work, just know that there is a risk with that. How high that risk is, I don't know. I don't know which data retention policies will be held up. I don't know what the scope of the law is going to entail. I don't know if it varies by judge. I don't know. Just know that there is going to be a risk involved, whether it's high or low; whether that's a risk you're comfortable with is up to you.

    And like I said, the law's not gonna catch up, but it just makes me nervous. It just makes me nervous. So know that even if you don't use generative AI to edit, I don't know how you prove that you didn't use it to edit. Does that make sense? I hope. I like how I'm asking you, like you can tell me. So just be careful.

    Also, we've talked about this: There's no good AI checker. Nothing is 100% accurate. We've seen so many examples of people writing something 100% original, uploading it to an AI checker, and it says it's, like, 85% AI generated when it's totally not. They're not reliable. I wouldn't even use them. Like, why? Why use a tool that has been known to be wildly inaccurate? That's just, oof. That's playing with fire.

    And I just wanna do a shout-out: The em-dash is not indicative of AI, for the love of God. The em-dash is my favorite punctuation mark. It does not mean that AI wrote it. It means that people who know how to write had their content stolen by AI, and so now it uses it, huh? Leave my em-dash alone.

    Okay, how to protect yourself. Next step: Who are you going to work with? Now, editors have taken different stances on this. Whatever you take is up to you, but you do need a stance. Some people, some editors will say, I don't care if you use generative AI, whatever. Fine. Some editors will say, I don't want to work with authors who use generative AI to write their book. Okay. Then other editors will say, I don't wanna work with any authors who use generative AI in any way, shape, or form, whether it's with brainstorming or ideation or writing, or to create their book cover or their social media graphics. Anything. If you touch generative AI for your book, it's a no-go for them.

    And this is the honor system here, right? Because there is no way to check. There's no way to know unless you're, like, looking over your author client's shoulder and watching their every move, which is obviously not possible.

    There's no way to know for sure, which is why I keep saying protect yourself as much as possible. One of the ways you're gonna do this is by having an AI policy on your website. Now, there is no law requiring anybody to disclose that, that I know of. But who knows by the time this goes to air? There could be. I doubt it, though, 'cause the law is so far behind.

    I will say, you need to have an AI policy out there. You need to be clear so that authors know what they're getting into. We have known for many, many years that authors have always been nervous about editors getting their books and stealing them. I personally have never heard of this happening, so it was kind of one of those things where it was a valid concern, but it wasn't something that actually happened. Or if it did happen, it was so infrequent that you really didn't hear about it.

    Now, authors are worried about editors uploading their books to ChatGPT or Claude or whatever, and some editors are doing exactly that. These are extremely legitimate concerns, because let us not forget, these platforms were built from stolen intellectual property. So if an author's book is uploaded into one of these platforms, the editor just fed it the author's intellectual property without permission. The editor did steal their work, which is bullshit.

    And I swear like, oh, it just makes me so infuriated. So editors, we are not doing that. We are not doing that unless you have express written permission from the author that they are okay with you doing that. I don't know why you would, but again, as long as you have permission, that's between you and the author. I would avoid it at all costs. But again, I don't get to tell you what to do.

    Now, if you are an editor and you are uploading someone's book into a generative AI platform, taking what it spits out and returning it to the author and saying you edited it, no. You're not an editor, by the way. That's an unethical grifter who gives editing a bad name. We're not doing that. Okay? So editors need to make sure that we all have clear generative AI policies for our clients to help mitigate some of that worry. They're already worried about us stealing their book, which usually, you know, editors aren't stealing books to publish.

    Now they're worried about editors stealing their content by putting it into a generative AI platform. Valid. And we know that the author-editor relationship has historically been adversarial, right? It hasn't been seen as a collaborative partnership. Modern editors know that's not how we work; we focus on communication and honesty and relationships, you know, and knowing that it's their book. But we already have all these things going against us, right? Let's not make it more difficult. Please.

    And I don't know about you, but I have a lot going on in my life, and I would rather not add a lawsuit to the things I need to worry about. I just don't have the time or the energy. That would terrify the heck out of me, because I follow the law, I follow the rules, you know, to a point, the important rules. And I don't wanna be sued. I don't want my name blasted online for doing something that, in my case, I didn't do, because I didn't use generative AI. I'd wanna avoid that at all costs, if at all possible.

    So have the conversation and have an AI policy now. This can go somewhere on your website. I've seen it in lots of different places. It doesn't really like, there's no one place to put it. Some editors put it right on their homepage, some put it on their services page. As long as it's clear and visible, fine. Say what you do and don't accept and call it a day.

    I've seen it in an email footer. I've seen it in a newsletter. I've seen it on social media feeds, and I've seen it in social media profiles. It can go anywhere. In my Instagram profile, I have two emojis. I have the no sign, you know, like that's kind of popular with no smoking, and then a robot, short and sweet.

    That's not necessarily an AI policy, but it gives you a quick glimpse that I'm not into it. I would suggest going into a little more detail on your website, but you know, that can work. It has to, has to, has to, has to go in your contract. And I know. Tara, you're telling me I have to do something again? I know this episode is full of that, but it's for your protection. I swear it's not because I'm on some power trip. It's because I want you to be making good choices, ethical choices, and protecting yourself. Okay?

    So we are going to have a contract for every single client. Every single one. Even if it's your best friend, even if it's your best friend's girlfriend's daughter. Quite frankly, those are the clients that can end up being the biggest pains in the ass, because it's, oh, well, they're a friend, of course nothing bad is gonna happen. We don't need a contract. And then something does happen. We have seen somebody be accused of using generative AI to edit a book, and they were called an “acquaintance.”

    So just have a contract. Just have a contract, And what it is going to include is it's going to be an extension of your AI policy just with more parameters, consequences, action items. Okay? So for example, what types of generative AI do you use as the editor?

    Note: Generative AI is different from editing tools. Spell-check, at the time of this recording, is not generative AI, although I'm seeing some things bubble up about that. But for now we're gonna say spell-check, macros, Text Expander, PerfectIt, those are not generative AI. Those are editing tools. That's not what we're talking about here. We are talking about putting something into a platform, like a prompt or a book or a section of content, and that platform spits back an answer. That's not what any of those tools do.

    If you use ChatGPT, Claude, Gemini, Perplexity, all of these bajillion other things, say so in the contract. Be honest and up front. Also, if you don't use generative AI, say that. That's probably even more important. They're both important. I don't wanna say one is more important; they're both important. But whether you do or don't needs to go in the contract.

    Do you edit content generated by AI? We talked about this before. That means the writing, but it also means brainstorming, feedback, the book cover, the blurb, or anywhere else the author uses it in their business. Spell out what you will and will not accept, even if you're not the one editing it. Even if you're not editing the author's social media posts, some editors still don't want to work with that author. Spell it out.

    Okay. Here's a doozy. What will you do if you find out later that the author used generative AI when they said they didn't? This is tricky, and this is icky. If the author admits it after originally saying they didn't use it, that's where you need to have procedures in place for what happens next.

    This can be a variety of things. Perhaps you stop work immediately. Perhaps you'll finish the contract or what the contract entails. Will you require certain payments? All of those things need to be spelled out.

    But if they don't admit it and you only suspect it, this is iffy, because again, we don't have a guaranteed AI checker. There is no way to know for sure whether the author used it or not. That still needs a plan. Are you going to finish the contract but never work with the author again? Now, that part wouldn't go in the contract; you're not going to write, if I suspect you used AI, I am never working with you again.

    I wouldn't say that, but maybe it's something like, if you don't already have something in place, requesting that your name not be included in the acknowledgments or the copyright, or requesting that the author not publicize your name in any way, like mentioning that they worked with you on social media or in their emails or anything like that.

    This gets iffy, right? Because you don't want to falsely accuse someone of using generative AI when they didn't. But I know myself, and I know many of you out there, because of what we do, we can sense when generative AI could be used. This especially comes into play if we have worked with an author in the past.

    The more books you've worked on together, the better you know their tone and their style, and the more obvious it is when all of a sudden the writing changes and just doesn't sound like them. That's a good indicator that they've started using generative AI.

    And that's something you can approach them with. You can ask them, you know, and it depends on your relationship, it depends on lots of different things, right? But there is the possibility of reaching out to them and saying, hey, you know, I've worked on so many books with you, I know your tone, I know how you write, and it seems like that's changed in the last book or two books or whatever. I'm just curious if you started implementing a new writing style or whatever, and you can kind of gauge from there.

    And again, honor system. They could lie. We've seen stories of agents and editors and authors lying. People lie; we are human. So I just want you to be aware that this can happen. Be careful not to go accusing people when we don't know for sure, but also protect yourself by knowing ahead of time what you're going to do if that situation presents itself.

    And a lot of this can go in the contract, but if some of it isn't applicable for the contract, it needs to be written down in your own personal records or wherever you keep stuff because as you'll see in a minute, it is so much better to be prepared so you don't have to get prepared if a situation presents itself.

    Okay, and what I mean by that is this: What if an author publicly accuses you, as an editor, of using generative AI? Again, this is not something we had to think about even that long ago, but it is now. I don't think it's a bad idea to know an attorney you can reach out to immediately if this happens, because if it does, it needs to be acted upon swiftly.

    Now, do you need to spend gajillions of dollars on an attorney on retainer? No. Find someone in your area, especially because, you know, it depends on the state, the country, the province, all of that. Usually they will offer a free consultation. Ask them, say, hey, I'm an editor. I gotta be careful about people talking shit about me online. What do I do if that happens? And can I contact you? Then keep their information accessible, because if it does happen, you're gonna wanna contact them right quick.

    I'm not an attorney, thank God. I know one: my brother, who helped me write the editing contract template. But if something happens, if you see a post on social media blasting you, call your attorney with the quickness and have a plan in place beforehand so that you're not scrambling.

    I hope you never get in the situation of being falsely accused. If you are rightfully accused, that sucks, but you knew the risk. For those of you out there not using it and saying you're not using it, make sure you have a plan in place beforehand. Okay? But you're not gonna have that happen, because you're not a poopy editor, right?

    All of this to say, using generative AI is a 100% free will choice. No one, and I repeat no one is forcing you to use it. You will not be left behind if you choose not to use stolen IP for your gain. And if you do decide to use it and/or edit content that includes it, know what you're potentially getting yourself into.

    This is not meant to be a scare tactic. I do not believe in running a business rooted in fear, but I also want you to be well aware of what's going on, what's already happened, what could potentially happen, how that will affect editors and authors. It is unfortunately not something we can ignore or stick our heads in the sand about or be like, well, I don't know, you know, I don't wanna get in the middle of it. Mm-hmm.

    Editors need to decide how we feel about this, choose how we're going to run our businesses based on that knowledge, and communicate that effectively to our potential and current clients. It's a non-negotiable.

    So if you have more questions about generative AI, particularly around the elements to include in your contract, feel free to check out the editing contract template that I mentioned before. My brother, the attorney, specializes in plain language. So we combined forces to create an easy-to-understand, and maybe more importantly, human-written contract that is specifically for freelance editors, and that includes all of this stuff about generative AI. You can go to TaraWhitaker.com/Contract and take a peek.

    And until next time, keep learning, keep growing, and when you think about generative AI, ask yourself this: Who benefits most from me using this tool?

    Thank you so much for tuning in to today's episode. If you enjoy The Modern Editor Podcast, I would be so grateful if you left a review over on Apple Podcasts or wherever you consume podcasts. And don't forget, you can head to TaraWhitaker.com to connect with me and stay in touch. We'll chat again soon.

You Need to Have a Stance on Generative AI

I’m not here to tell you what to do or what to think about generative AI, but I do hope that you’re keeping yourself updated and educated on the topic. As an editor, you need to have a stance on generative AI, whether or not you use it, and if you’ll edit content that uses it. 

Whether or not you agree with my opinion isn’t the point. You need to develop your own opinion and keep yourself educated as generative AI continues to advance. 

A Recap of the Last Three Years of Generative AI

Three years ago, I released my first podcast about generative AI called Will ChatGPT Replace Authors and Editors?

I went back and listened to that episode, which always feels cringey, but it provided some helpful context on where we started and where we are today. I admit that three years ago, my opinions on generative AI were a little naive. However, unfortunately, some of my predictions about its downsides were correct. 

Here are some “then and now” highlights:

  • Elon Musk was one of the founders of OpenAI, the company that created ChatGPT

  • I correctly predicted that generative AI gets data from stolen books

  • In 2023, when a new version of ChatGPT was launched, a lot of creative people lost their jobs

  • Thankfully, we’ve seen some of the job loss scale back, with companies rehiring for the positions they cut

  • Unfortunately, in the last three years, more authors and editors have begun using generative AI (more so on the corporate side than the indie side) 

  • On the flip side, I’m thankful to see there’s been some pushback to this

What We’ve Learned About Generative AI Since 2023

Here are some things we’ve learned about generative AI in the last three years:

My Predictions For the Future of Generative AI

Listen, I don’t want to see the economy crash over AI, but I’m predicting the bubble will burst soon. We’ve seen it before with the dot-com boom: investors pour a ton of money into the hot new thing, and then it crashes, and they lose their money. 

I also predict the law will continue to lag behind generative AI, just as it still does with the Internet. 

In the publishing world, I think more authors and editors will have their ethics challenged publicly. What does that mean? Essentially, someone will be accused of using generative AI online, and the idea will catch on like wildfire. 

I believe publishers will be eager to sign new authors with huge followings without conducting due diligence. As soon as their authors get accused of using AI, the publishers will dump them and save themselves. Then they’ll continue to sign authors without doing due diligence, because they can just repeat the process and look out for themselves.

Unfortunately, I believe women and people of color will be accused of using generative AI more than white men, and they’ll experience a worse fallout. 

On the bright side, I believe that a group of authors, editors, and publishing professionals will join together to continue to fight against the use of generative AI in the industry. They’ll stand up for their work being stolen, lead with integrity, and bring the community together around these ethics. 

How to Protect Yourself Against Generative AI as an Editor

With anything, including generative AI, there’s no way to keep yourself 100% protected, especially with technology that changes all the time. However, we’ve seen how accusations can completely derail careers, so it’s important to have as many protections in place as possible. 

You can’t prevent someone from accusing you of using generative AI, but you can prove that you didn’t use it. The easiest way to prove you didn’t use generative AI is to not use it. 

At this point, we don’t have a reliable AI checker to assess whether content was created with generative AI. In an extreme case, if you’re ever sued and your computer is confiscated, whether or not you used it will quickly come to light. 

Communicating Your AI Stance to Clients

Whether or not you use generative AI outside of editing is up to you, but you need to understand there are still risks involved. 

Another way to protect yourself is to be discerning about who you choose to work with. I’ve seen editors take three main approaches:

  1. They will work with authors who use generative AI

  2. They won’t work with authors who use generative AI to write their books

  3. They won’t work with authors who use generative AI in any part of their process, including brainstorming and ideation

Of course, you’re relying on an honor system here since there’s no reliable way to check whether or not an author used generative AI. Even so, it’s important to take a stance and stick to it. 

Once you decide on your policy, you need to have it plainly listed on your website. You need to be clear about who you’ll work with, whether or not you use generative AI in the editing process, and that you will not upload an author’s work to an AI platform.

Your policy also needs to be plainly stated in your contracts with every author you work with, even your friends. The policy should include more details than the website disclaimer so that it clearly outlines your parameters, consequences, and action items. 

What Do You Do When Your AI Policy is Violated?

What happens if an author tells you they didn’t use AI and then you find out they actually did? You need to have policies and procedures in place to prepare for this situation. For instance, do you keep working with them or stop the work? If you stop working with them, will you still require certain payments? Do you finish the contract and never work with that author again?  

How you approach an author about this depends on your relationship with them. Again, you have to be careful here because there are no AI checkers, and you don’t want to falsely accuse someone. 

It’s important to think through all the possible scenarios and prepare how you’d handle them before it happens. The more you prepare, the more you can protect yourself. 

What to Do If You’re Accused of Using Generative AI

Let’s flip the script. What do you do if one of your authors accuses you of using generative AI? Unfortunately, this is something we have to think about because it could very well happen. 

I recommend finding a local attorney who offers consultations (many will do this for free). Ask them how to approach this situation if it ever happens, and if it does, whether or not you can contact them for help. If they say yes, keep their contact info accessible so you can respond to the situation swiftly. 

You’re Not Behind if You Don’t Use AI

To wrap up, I want to reiterate that using generative AI is 100% a free will choice. Your job is not dependent on it, and you can say no. You’re not going to get left behind if you refuse to use it.

If you do choose to use it, you need to understand the risks involved. Whatever your stance is, generative AI is not something we can choose to ignore. We need to stay informed, clarify our values and boundaries, and stick to them. 

Most importantly, we have to communicate our AI policies with our clients. Need help? I partnered with my brother, an attorney who specializes in plain language, to create a contract template you can use with your clients. 

Important Sections:

  • (2:00) You Need to Have a Stance on Generative AI

  • (3:02) A Recap of the Last Three Years of Generative AI

  • (6:25) What We’ve Learned About AI Since 2023

  • (12:13) My Predictions For the Future of Generative AI

  • (16:18) How to Protect Yourself Against Generative AI as an Editor

  • (21:34) Communicating Your AI Stance With Clients

  • (30:04) What Do You Do When Your AI Policy is Violated?

  • (33:13) What to Do if You’re Accused of Using Generative AI

  • (35:16) You’re Not Behind if You Don’t Use AI


xo, Tara
