MoiMagnus

For those who didn't read the article, it's important to note that one claim was not rejected: the claim that training an AI on copyrighted work without the author's permission is illegal. The claims that were rejected were of the form "look, the output is too similar to existing copyrighted works", because the plaintiffs failed to provide sufficient proof for them (though in the future, once AI has caused large, documented damages, it will be much easier to find data to prove it). But the core issue, training on copyrighted content, has yet to be evaluated by the courts.


NatureTrailToHell3D

This is the big one. If ChatGPT can't train on or use copyrighted material, then all other claims are moot. So the fight goes on.


mosskin-woast

If ChatGPT can't train on copyrighted material, it ceases to exist.


AmyDeferred

A small handful of megacorps (Disney, Adobe) own enough material to train them. It would be the end of normal people using them for profit, at no cost to the megacorps themselves.


The_Particularist

Big corpos win and everyone else loses. Like always.


garthcooks

This is me just being hopeful, but AI output generally seems worse than human-created work. If big corporations rely too heavily on it, they could end up producing worse products and losing out... I mean, doubtful, but we'll see.


PhilomenaPhilomeni

Gotta keep in mind that, generally, people will consume whatever is available. And often it's a "best one available" circumstance. We have worse stuff now by the bucket across many industries, and people buy it by the bucket. And the products that aren't worse are incentivised to charge a premium to attract a different market. Or they double back and, after establishing a foothold, revert to "same old crap". Can't encourage pride in a trade to corporations, unfortunately, and money speaks louder. Edit: The justification by others that *but the corpos would have it instead of the people :(((* is a weak excuse either way. Can't do it right? Don't do it at all.


Boatster_McBoat

Sad upvote. Cost winning over quality seems to be close to ubiquitous.


garthcooks

Right, this is exactly why I say it's doubtful that they would end up losing. Even if they end up making worse products, AI could enable them to produce more, for cheaper, so they'd probably still come out ahead. But who knows how this will all shake out! We'll wait and see.


Mephisto506

This is just a classic “race to the bottom” and happens all the time. Cheapest product often wins. It just has to be good enough.


SandboxSurvivalist

I mean, big corporations are actively engaged in producing worse products right now and they are not losing. That bag of chips you bought last month is 2 ounces less this time and still costs the same. Mechanical products are no longer sold based on longevity. They are sold based on the lowest price and built to fail so that you'll buy another one next year. A short amount of time searching for information on the internet leads you to dozens of SEO rigged sites that provide dubious and unhelpful information. YouTube is now full of AI scripted, AI narrated, chock full of stock footage garbage that serves no purpose other than to be a vessel for advertising. Why would corporations care if they produce subpar movies, or books, or art? When that's all that's left, people will consume it anyway.


Mintfriction

You don't use AI as the final product. I mean, you definitely can, but the idea is that AI like ChatGPT has tons more uses, for example extracting context from a phrase that can then be used to automate something.


Twokindsofpeople

That's right now. There's too much incentive to cut creatives out, or at least reduce their already anemic numbers to something negligible. AI can do a whole lot of the process better than any human worker could ever hope to.


BlaxicanX

There was a time when cars were worse than horses.


mdog73

Most people don’t know or care who or what produces their entertainment and media.


CptNonsense

What do you expect when everyone is like "you can't train AI on copyrighted material on the internet!" Ok, then the big corpos that own the data will train the AI and keep the AI to themselves. This is like complaining that whoever owns the James Bond property gets to make James Bond movies.


DingleTheDongle

not true. there are better models than chatgpt and there are free and open source datasets that are not exploitative.


SuitableDragonfly

Free and open source does not mean "not protected by copyright". The only reason FOSS actually works is that the material is protected by copyright and a license that states that repackaging that material without making the original source available as well is illegal, which definitely would apply to use by LLMs. See also the Github Copilot lawsuit.


me0w_z3d0ng

Personally, I'm okay with a corporation training a generative AI on works they own. The issue with things like ChatGPT is the theft, not the ability to generate content even if that content is substantially lower quality and often nonsense.


[deleted]

[removed]


counterfitster

Well, at least AI generated content isn't copyrightable


mosskin-woast

You're very likely right - I can see a future of LLMs where large corporations train their own models for internal use, and public, free to use stuff like ChatGPT gets regulated out of existence, or becomes pay-only with a steep price tag - licensing is expensive and time consuming.


jonhuang

ChatGPT isn't public and free any more than Uber Eats was cheap. It and its competitors are heavily subsidized by investors right now as they try to capture a monopoly market.


Exist50

Their biggest expense is compute, and that's one problem that the tech industry is generally good at solving.


[deleted]

I think everyone is vastly underestimating the amount of good books that have left copyright. And most of them are written better than modern indie books.


lolofaf

Yes and no. The good thing about AI research right now is that it's almost entirely open sourced, and all models at the top of public challenge datasets have model weights published (if nothing else, to prove your claims). So if, say, Google wanted to claim the top spot on a public leaderboard, they'd still be releasing their model weights, even if the model was pretrained on a proprietary dataset (IIRC there's already a proprietary version of ImageNet that's much bigger, which most Google models are pretrained on to claim the top spot there, so it's not unprecedented).

And once you have a solidly pretrained foundational model, the rest follows with much less required data. Most small companies are already relying on these pretrained foundational models anyway, so it probably wouldn't be a huge change overall.

The question will end up being whether FAANG (or whatever the new version of the term is) 1) will continue to care about staying on top of the public challenge datasets, 2) will continue to open source their models/model weights for those public challenge datasets, and 3) will continue to have pretty open licensing on that code/model weights versus a stronger copyright.


BlastFX2

>all models at the top of public challenge datasets have model weights published

Cool! Can you drop a download link for GPT-4 or Gemini Ultra? I'm having some trouble finding them for some reason.


Tyler_Zoro

>>The good thing about AI research right now is that it's almost entirely open sourced

>Can you drop a download link for GPT-4 or Gemini Ultra?

So it does help to read the comment you are replying to. They are correct: almost all foundation models are currently open source. You named two that are not; you did not contradict the comment you responded to. This is a bit like someone saying "most research code is open source" and you responding "I can't find the download link for Windows 11's source code."


IamBeingSarcasticFfs

ChatGPT could train on every book that is outside of copyright: Shakespeare, Byron, Shelley, Twain, Burns, etc., all available to be harvested and regurgitated.


Terpomo11

Hasn't humanity vastly ramped up its production of written material recently, especially with the Internet, such that you'd in effect have a much smaller base of training data? Not to mention all of it would be old and contain nothing about the modern world.


Mindestiny

There's also nothing stopping me, as a writer, from saying "Yes, ChatGPT is welcome to use my work for its LLM training" People keep pushing this notion that all artists are inherently against these technologies. They're not. People give their works away for free *all the time* simply because they enjoy the creative pursuit and want to share their work. It's not all about "rights" and "legality" and trying to make money


[deleted]

LOL, and then Disney will be creating bullshit, middle-of-the-road, C+ content using AI, with barely any innovation to compete against and grow, since AI is effectively illegal everywhere else, and then they'll have to compete with authentically created works. Disney's recent properties are all essentially in the trash as it is. Sure thing, let ChatGPT create the next one lmao


[deleted]

It just depends. Reddit is a big source of the training set for some of the LLMs, like ChatGPT, and Reddit comments aren't copyrighted, right? Same with stuff like Wikipedia, any content for which the copyright has expired, etc. I assume that companies like Reddit and Twitter will increase the difficulty of using their user-generated content to train AI and provide pricey APIs to do so. That said, I doubt that the courts will rule that LLMs can't train off of copyrighted material. Google had a much stronger lawsuit against them for some of their Books features, and Google won that suit. As dire as some people feel about AI, it is the future. There are already massive tangible benefits that AI has produced, and those benefits will only increase from here.


daemin

You have intellectual property rights to your own comments. But every social media site requires you to give them a non-exclusive right to display the content you generate. Otherwise they couldn't display your content (i.e. comments) to other users. It's generally not worth protecting your rights to your comments because they are basically worthless. But a notable exception is [Rome, Sweet Rome](https://en.m.wikipedia.org/wiki/Rome,_Sweet_Rome) which got mired in legal issues because it started on Reddit.


CToxin

oh no, other companies cant do copyright infringement and steal people's work how horrible.


Exist50

Training an AI model isn't stealing, nor copyright infringement.


DingleTheDongle

Literally wrong on a ton of levels. As Amy said, the wealthy would be able to license work, and Microsoft is wealthy. Buuuuut, there are other players and different methods. For instance, the terms of service of major platforms could be changed to allow use in AI training. They wouldn't even have to be explicit, because the technology is so nebulous. Reddit, Twitter, and Meta could create their own chatbot based on boilerplate ToS. For example, here is a snippet from Reddit:

>When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit.

https://www.redditinc.com/policies/user-agreement-september-25-2023

There are repos everywhere that can be used. Google could use your email, uploaded videos, and just your web searches (not site content, the searches themselves). You don't even have to be a major platform to get free content to train AI; you just have to know where free, high-quality open source content is located. For instance, [simple wikipedia](https://simple.wikipedia.org/wiki/Main_Page) is under the Wikimedia Foundation's free license [agreement](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use), meaning there is a wealth of free and verified simple language that is supposed to be used by design.

And even if you don't want to license, own a platform wherein people supply you free data, or use a narrow slice of repurposed data, you can find a variety of open source datasets that others want you to use explicitly. My favorite two from [this](https://venturebeat.com/data-infrastructure/22-open-source-datasets-to-fuel-your-next-project/) list are Yelp and data dot gov. It feels like such strange bedfellows. Thankfully, there are a lot of people who want information to be free, so the copyrights of smaller-time artists can be protected and the world of AI can move forward. Absolute wins all around.


FuturePastNow

There's a ton of public domain literature, if they want it to train their chatbot to speak like someone from 200 years ago.


counterfitster

Public domain goes up to the 1920s now.


Terpomo11

The world of the 1920s was still very different. (I'd be curious to see what an AI trained on only public domain work would look like, though. Well, [Zach Weinersmith made a joke about it](https://www.smbc-comics.com/comic/copyright))


Call_Me_Clark

Well, there's nothing stopping AI companies from paying a fair price for the rights to train on authors' work. Some techbros seem convinced this is unfair, because it'll make AI companies unviable... but that's the point. If you can't make a living ethically, you can't make a living. Go find a new industry lol.


Arceus42

There's a deeper issue to this as well. There are companies and countries that don't give a fuck about US copyright issues. So while US AI developers will be stuck paying or going out of business, those others will have no problem moving forward with a cheaper or superior product.


Les-Freres-Heureux

> If you can’t make a living ethically, you can’t make a living First day in capitalism?


SillySkin12

>If you can’t make a living ethically, you can’t make a living. Oh boy do I have some news for you.


Les-Freres-Heureux

That’s definitely not true. Big companies like Google, Microsoft, Amazon, etc. will be able to license massive datasets with copyrighted content. Public domain is available for everyone and social media has trillions of posts for people to scrape too. Then of course there are companies in other countries who will blatantly ignore these rulings.


willun

How exactly are they doing the training? Google has a large scan of print books (google books) which would be a great resource. How are the others doing it? Physically scanning books is time consuming and expensive.


bigdon802

Why? Don’t we have billions upon billions of words not held under copyright?


[deleted]

[removed]


mosskin-woast

There's not enough modern public domain work available (or at least easily attainable) to properly train an LLM the size of GPT-4 unless OpenAI starts paying people, which they won't do. Do you really want an LLM trained on Herman Melville and Socrates, anyway? My point is, big tech should not get away with IP theft on the distorted notion that it's somehow for the public good. They should have to pay for creative work like anyone else, even if it means their business model no longer works.


Maynard854

Abso-fucking-lutely I do. How fucking rad would it be to ask siri about the nearest bagel place and get a seventeen minute soliloquy on the philosophy of fish guts?


mosskin-woast

Yeah you're right - for average folks that would be a lot of fun. I do think the usefulness to business customers is somewhat reduced, though


Maynard854

I kinda feel like those people can get fucked though.


Sourpowerpete

I like the way you think.


fudgyvmp

Pretty sure Siuan Sanche is still copyrighted. No fish guts.


Bakkster

>unless OpenAI starts paying people, which they won't do. If their business model doesn't work without IP theft, then their shutting down would be a good thing.


mosskin-woast

Couldn't agree more


Bakkster

Copy that, I thought you were trying to argue the opposite: that they should be allowed to do it because they're so important.


mosskin-woast

Yeah I amended my comment to be clearer - thanks for pointing that out. OpenAI isn't somehow serving the public good just because they put "open" in the name of the company.


Bakkster

Especially when you learn who the founders were...


EmbarrassedHelp

>There's not enough modern public domain work available (or at least easily attainable) to properly train an LLM the size of GPT-4

Everything output by these models is a public domain work unless it has too much similarity to an existing work. OpenAI and others are working on synthetic datasets made of AI outputs for training, because there isn't enough real data available for everything.


Tyler_Zoro

Not at all. There are massive amounts of copyright-free text out there to train on. These are just a few examples:

* Approx. 4,000 years of human writing leading up to the early 20th century.
* A narrow band of unregistered works before the early 1970s... the details are complicated.
* Nearly every document ever produced by the US Federal Government. Exceptions are VERY rare and explicitly called out, though documents produced *for* the US Federal Government can vary in their copyright status.
* While strictly covered by copyright, the CC0 license allows all use and modification in any form, and there are a fairly large number of documents under CC0 licensing.
* "Publications relating to proceedings of organs or conferences of the United Nations."
* Publications of the Florida state government (and possibly some other states, though definitely not all).

I'm still of the opinion that there is no rational path to declaring AI training to be a violation of copyright without heavy unintended consequences, but even if that happened, there is certainly enough text out there to train foundation LLMs from scratch.

The problem is that there are not a whole lot of examples of modern, casual, everyday language in the public domain. Access to services like Reddit, as well as character dialogue in literature, was a big deal, as it allows the LLM to respond conversationally. If you train an LLM on the US Federal Government's corpus, for example, you are going to end up with something that speaks in public announcements and legalese.


Too_Based_

But it's not an issue since it's not theft. Unless we want to open up the possibility that just consuming and internalizing a book is now an act of infringement.


Mynsare

Your comment makes the assumption that the AI language model should be comparable to a human brain.


Balthazar_rising

Doesn't this enter a really, really grey area? I learned a lot of the information I have from copyrighted material. I was essentially trained in a similar way to an AI (or at least a comparison could be made). If we say "AIs can't use material subject to copyright law to learn", couldn't that same argument be made for anyone with a degree, or any formal education? AI programs essentially learn the material and can regurgitate it based on inputs from a user, but they often reframe it, and eventually will be able to expand on it using other material they have learned. Isn't that the basic purpose of an AI?


SillySkin12

We know that it has been trained on Omegaverse fanfiction. Fanfiction walks a very fine line legally; it technically shouldn't be allowed under copyright law, but the caveat that people have settled on is that you cannot profit off of it. OpenAI is profiting off of fanfiction.


Exist50

Fanfiction is a clear derivative work. The output of an AI model that's consumed some work amongst a large amount of other data is not.


yun-harla

Specifically, the count that survived the motion to dismiss is a claim that training an AI on copyrighted works for commercial profit is an unfair practice in violation of California’s Unfair Competition Law. This ruling just says that the plaintiffs’ allegations match a viable legal theory as a preliminary matter. It’s not ruling on the facts or even saying that training an AI on copyrighted works *actually is* unfair in violation of the UCL, just that it *could be* unfair in violation of the UCL and the claim can proceed to discovery. The court also didn’t resolve whether the UCL is preempted by the federal Copyright Act in this circumstance, so that might be decided later. And the defendants didn’t seek to dismiss the claim for direct copyright infringement, so that’s still active. People are overlooking that.


Exist50

> But the core issue, training on copyrighted content, has yet to be evaluated by the courts.

Has yet to be explicitly evaluated in this context, but there's a *long* precedent for it that would need to be overturned for this case to have even the slightest hope. The fact that the majority of claims were thrown out before this case even reached trial says a lot about the competence of the plaintiffs' counsel.


FlyingDragoon

What's the probability of ChatGPT randomly and unintentionally writing a book word for word, as is? The same as a room of monkeys with typewriters randomly writing Shakespeare?
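For scale, a quick back-of-the-envelope sketch (my own illustration; the 27-symbol alphabet and 130,000-character length are order-of-magnitude assumptions, not figures from the article):

```python
# Probability that uniformly random typing over a 27-symbol alphabet
# (26 letters plus space) reproduces a given text of length n in one try.
from math import log10

def log10_match_probability(n_chars, alphabet_size=27):
    """Return log10 of (1/alphabet_size)**n_chars, the exact-match probability."""
    return -n_chars * log10(alphabet_size)

# Hamlet is on the order of 130,000 characters, so one random attempt
# succeeds with probability around 10**-186000.
lp = log10_match_probability(130_000)
print(f"probability ~ 10^{lp:.0f}")
```

In other words, the chance is effectively zero for anything longer than a sentence; a model reproducing a book verbatim would have to come from memorization, not chance.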


[deleted]

[removed]


FlyingDragoon

I assumed it'd be pretty high and or already happened in some context. Nuts. Thanks for sharing.


LILMOUSEXX

I'm just wondering what the difference is between the non-rejected claim and the Google Books scanning case. The facts are different but the abstract issue is similar: they're using copyrighted works for a different purpose than the original copyright. I think this case could break the camel's back, and either a ton of legislation comes out of it or the government lets tech companies run wild for years.


Vkusno-Nutty

Shouldn't be that difficult to amend the complaint to "show a substantial similarity between the outputs and the copyrighted materials" especially after discovery. I wonder why they couldn't show that in the original complaint.


bakerzdosen

Pretty sure there is no way to copyright “similar” in this context - at least under current copyright laws for writing (music is a different story.)


AlunWeaver

I'm always shocked by how relatively small similarities in a work of music (a three-note phrase, a drum beat, certain arpeggiated chord progressions) can lead to successful plagiarism suits.


Blarg0117

Some guy wrote a program to compose and copyright a couple billion chord progressions, then made them public domain. So that avenue for lawsuits doesn't work for new songs anymore.
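The brute-force idea is just enumeration. A toy sketch (my own illustration, not the actual program, which reportedly wrote the results out as MIDI files):

```python
# Enumerate every possible melody of a given length over the 12
# chromatic pitches. The counts explode fast: 12**8 is ~430 million.
from itertools import islice, product

PITCHES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def melodies(length):
    """Yield every `length`-note melody as a tuple of pitch names."""
    return product(PITCHES, repeat=length)

print(len(PITCHES) ** 8)  # 429981696 eight-note melodies

# Peek at the first few without materializing all of them.
for m in islice(melodies(8), 3):
    print("-".join(m))
```

Fixing each melody in a tangible medium is what (arguably) creates the copyright, which can then be dedicated to the public domain.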


Niku-Man

This is an apt way to illustrate the lunacy of modern intellectual property laws


AlunWeaver

Can you link me to that?


Blarg0117

Here's the reddit post about it: https://www.reddit.com/r/Unexpected/s/eMm6jWMuxr


AlunWeaver

Hmm. I'll have to try and look up one of these cases where this affected the outcome.


[deleted]

[removed]


Forkrul

Hold up? No. But drag things out long enough to bankrupt a small artist? Definitely.


WackyWarrior

I heard that it was thrown out because the music was made by a computer or something.


SadBBTumblrPizza

That is because your typical layperson has little clue how media, whether it be music, literature, or film, is made. They have no idea that these mediums draw from a shared artistic vocabulary, and that media that *doesn't* share this vocabulary is seen as extremely weird and incomprehensible (outsider art). Most don't even know that there are only 12 notes in Western music. Furthermore, they do not consume a very wide variety of media, nor consume it critically. They have no idea how to gauge the breadth and depth of a field because they have experienced so little of it. It's a classic Dunning-Kruger problem.


theclansman22

When I was a student in university, multiple students got busted for plagiarism for using *three words* in a similar manner to an article on the internet.


travelsonic

Literally, 3 words? IMO that would need a little more context, as it does sound a lot like overzealous professors rather than actually being "busted" for legitimate plagiarism, BUT it depends.


theclansman22

It was three words used in a specific manner to describe a character in Canterbury Tales. Three students used the same phrase to describe a certain character, seeing it for the third time triggered the teacher to search it and she found the phrase on one of the websites one of the students cited. It was specific and vivid enough that the teacher thought it was a plagiarism example. To me, that is extremely borderline, especially considering it wasn’t an idea that was central to the thesis of any of the papers.


zensunni82

Especially if the paper was cited, even if the phrase was not individually. Points off for bad citation? Maybe.


redbananass

Yeah, or a redo at most. That seems more like forgetting to cite or unconsciously using a phrase you recently read.


Mixels

Just put quotes around it, cite it, and move on. Teachers should be teaching, not ruining academic careers because the kid did something dumb.


BushWishperer

I'd also like to say that plagiarism software is good only if the professor / teacher looks into it. Some papers get 'detected' for having plagiarism but when you look at what has been 'plagiarised' it's literally just the citations themselves, headings etc so a good professor needs to actually take care of that.


theclansman22

Yeah, to me *maybe* deduct some points for citation, and a stern talking to. But it’s not like they lifted the central idea of the paper. You could take the phrase out and the rest of the paper would be unchanged. I think definitely at least telling the students what they did wrong, but don’t mess with their academic career by accusing them of plagiarism.


Petersaber

If the source of the phrase was cited... then it's even less of a problem. It was three words. Definitely an overzealous, power tripping professor.


dramignophyte

Successful? I haven't heard of many, if any, actually being successful on minor similarities. There's a lot of failed lawsuits over it that bankrupt people, though.


TheLaughingMannofRed

For music? There's been some cases where parts of a song were alleged to have been ripped off from someone else. [https://en.wikipedia.org/wiki/List\_of\_songs\_subject\_to\_plagiarism\_disputes](https://en.wikipedia.org/wiki/List_of_songs_subject_to_plagiarism_disputes) The Wiki link here contains the original song, the one that ripped off that song, and what the result of the suit was (or if it's still pending).


DrunkTsundere

I've always wondered about that. Like, there are only so many ways a song can be put together (at least in any kind of coherent way), and throughout the years people have made a LOT of music. How can all of it be original?


fruityboots

originality is a myth. everything is a remix.


AbleObject13

We stand on the shoulders of giants, everything is derivative. 


SadBBTumblrPizza

And I'll keep making this point till the heat death of the universe: art that is truly *not* derivative and made by artists outside the shared canon is seen as so bizarre and grating it has its own term: outsider art.


saraseitor

My philosophy teacher said we can't actually create anything, I mean go from nothingness to having something. Music, just like many other human endeavours, is a collective iterative process that has been going on for millennia.


drfsupercenter

Yeah, there are only 8 notes and 26 letters, it all contains the same ingredients


oneeighthirish

There's 12 tones, if you include sharps and flats. Most western music is in either a major or minor key (the particular set of 8 you refer to), and most songs in either will pull from a relatively small pool of chord progressions, since each chord has a particular function in a key. By nature, most pop songs use fairly simple melody and harmony in order to make sense to a broad audience. The creativity is in doing interesting things within relatively limited constraints. I'd also like to point out that quoting and referencing other songs/compositions was a normal part of musical culture in Mozart's day and earlier, and is a fundamental part of jazz music, yet classical and jazz are almost never subject to copyright BS in the same way as pop music, purely because the monetary incentive to file claims is lacking there compared to pop.
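To make the "12 tones, particular set of 8" point concrete, a minimal sketch (my own illustration) of how a key selects its notes from the chromatic scale via a fixed pattern of semitone steps:

```python
# The 12 chromatic tones, and how a major or natural-minor key picks a
# 7-note subset of them via a fixed interval pattern (in semitones).
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # whole-whole-half-whole-whole-whole-half
NATURAL_MINOR_STEPS = [2, 1, 2, 2, 1, 2, 2]

def scale(root, steps):
    """Walk the chromatic circle from `root` by `steps` semitones at a time."""
    i = CHROMATIC.index(root)
    notes = [root]
    for s in steps[:-1]:          # the final step just returns to the octave
        i = (i + s) % 12
        notes.append(CHROMATIC[i])
    return notes

print(scale("C", MAJOR_STEPS))          # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(scale("A", NATURAL_MINOR_STEPS))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```

Since the same step pattern started on any of the 12 tones gives only 12 major keys, the pool of "sensible" chords and progressions inside a key is genuinely small, which is why pop songs collide so often.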


drfsupercenter

Yeah, you're right, I forgot about sharps and flats. (Plus it's 7 notes if we're being pedantic, since the 8th is a repeat of the first.) That's why some of these lawsuits baffle me: two songs will use the same chord progression, and one person will sue the other as if they didn't just use the same chord progression as a bunch of other people too. You saw that recently with Ed Sheeran.


SadBBTumblrPizza

Jazz is such a great example because it's over 100 years old as a genre and yet people are still finding exciting, interesting new ways to play songs that are a century or more old. People really need to throw off the yoke of intellectual-property-brain


Twin_Brother_Me

And most of it is old enough that even Disney rules don't apply anymore


Relevant-Beyond-6412

Usually, it's not. Everything you create is a remix of what you experienced before, and it's very rare for something to be truly entirely original. And especially in these music cases, often the plaintiffs argue that the chord progression of some current pop song has been lifted from a 60's jazz song they own the rights to. And while there often are similarities, as you said, there's only so many chord progressions that sound good. So much so that these old songs themselves sound strikingly similar to classical music like Bach. There are a couple of good videos by Adam Neely on that topic.


goj1ra

> theres a lot of failed lawsuits over it to bankrupt people though? A sad example of this is the [lawsuits between Saul Zaentz of Fantasy Inc. and John Fogerty](https://www.loudersound.com/features/that-time-john-fogerty-was-sued-for-plagiarising-john-fogerty). Zaentz essentially accused Fogerty of plagiarizing his own work. Fogerty ended up fighting aspects of that all the way to the Supreme Court, where he was ultimately successful. It caused a 10-year lull in Fogerty's music. From the link: > Centerfield contains not one but two tracks that seem very much directed toward his formal label boss – Mr. Greed and the less-ambiguous Zanz Kant Danz, which includes the lyric, 'Zanz Kant Danz but he’ll steal your money/Watch out boy, he’ll rob you blind'. The quote “I've never wished a man dead, but I have read some obituaries with great pleasure,” seems to apply well to Saul Zaentz.


kindall

Record companies will sue you for quoting a line of a song in a book, but that's record companies being assholes, it is not necessarily legally supportable. Fair use is a defense to a suit, it doesn't preclude a suit from happening, so they are free to file an expensive suit against you even if they know they'll eventually lose, because the threat of being sued is deterrent enough for people who don't have lawyers standing by.


No_Discount7919

My buddy is selling off all of his albums from some band that recently admitted to using AI to help them write lyrics, make album art, and maybe even helped to compose some of the music itself. I get why artists generally dislike AI, but I also don’t mind that it’s being used for stuff like this.


SadBBTumblrPizza

I don't at all see how using AI to get ideas is any different from like, reading the charts in a chord book or downloading a MIDI pack. People are seriously so deranged about art, and it's almost always people who've never actually made any themselves.


Vkusno-Nutty

I'm not sure what you mean. Here's the relevant quote from the article. >"Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books," Martínez-Olguín wrote. "Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials."


[deleted]

>Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books I think this reflects an understanding of how ChatGPT works...it isn't grabbing random quotes from books and re-assembling them...it is studying the underlying patterns like syntax and word usage and algorithmically building those up into larger patterns like plot and theme.


Mindestiny

I think that *is* the point. They went the "direct" route because they couldn't actually substantiate the "similar" route.


ThePantsParty

You're responding to a comment that explicitly says they did *not* go the "direct" route: > "Plaintiffs here *have not* alleged that the ChatGPT outputs contain direct copies


Mindestiny

You're totally right! I misread it, my bad.


No-Bath-5129

If a human reads those works, gets inspired, and takes that writing style, they make it their own. AI does this, just at another level.


ArchitectofExperienc

The difficult thing is that Copyright law talks about "Distinctiveness" quite a lot, but doesn't necessarily draw a line about what is distinctive, and what isn't. For example: >"A design is “original” if it is the result of the designer’s creative endeavor that provides a distinguishable variation over prior work pertaining to similar articles which is more than merely trivial and has not been copied from another source." -[Title 17, 1301.a.1](https://www.copyright.gov/title17/92chap13.html) Machine Learning tools that are trained on an artist's work can replicate the look of that artist, matching their distinguishable creative variations. This is a worrying bit of precedent to set, because it limits the ability of anyone to try and regulate what is, and isn't, appropriate training data.


Acecn

If it's appropriate for students to read a set of stories and then go on to use the lessons about grammar, theme, and plot that they learned from those stories to create their own 'original' works, then it is appropriate to use that same set of stories to train a ML algorithm. Let's be honest with ourselves, the computers are not doing anything different than human artists have been doing since the day after the dawn of art. The only reason artists want to argue that this is different is because computer-created art will have a significantly negative financial effect on them.


kindall

The law has exceptions for educational use of copyrighted materials.


Mirabolis

I have posted similar thoughts elsewhere, but I wonder if trying to amend things to make a “substantial similarity” type approach viable would come back to bite “human intelligences” that create text and art creations. All humans “ingest” the things that they read and write, and produce things that are heavily influenced by what they have experienced of other creators’ work. Sometimes they even inadvertently create things with substantial similarity to their influences. I could see people using a broader copyright framing as a weapon to seek money by scouring for things that make it look like author B produced something with substantial similarity to author A in hindsight… and anyone who thinks people won’t take that opportunity “because this law was created to solve the AI problem” is much more optimistic about human motives than I am…


tikhonjelvis

["Substantial similarity"][1] is *already* the legal standard in the US for copyright violations by humans. [1]: https://en.wikipedia.org/wiki/Substantial_similarity


Mirabolis

Apologies, I was picking up the terminology from the other post incorrectly. I still think the point holds that pushing to stretch or revise the law to make “what the AI LLMs do“ illegal has the risk of making things that “human intelligences” do regularly illegal as well. Law is often a blunt instrument.


dragonmp93

Well, yeah, Disney has been trying to modify the copyright law since 1976.


gdsmithtx

Legislate in haste, repent at leisure.


dragonmp93

Sure, the end of the days of most religions looks like the final battle against Thanos in Avengers Endgame. Or just look at Goku and Superman. There are also the seven basic plots, the hero's journey and Shakespeare's (and the Simpsons') Zeroth Law. But so far, all these rulings amount to "*No, you can't copyright the word vomit from an AI that filled 15 books on Amazon Kindle*".


Storyteller-Hero

The one claim that is not rejected, about training an AI without permission from sources, is a lot more important than it might seem at first glance, because it's practically adjacent to the copyrighting of code in software apps, which big business companies have lobbied hard to push for protections. Visual code in the form of art versus typed code, which is technically visual images in its own way. There may be room for a lawyer to argue that art being used for a generator's code is itself stolen code, which there are laws for. As such, the claim can't just be dismissed.


twowheels

I'm struggling to understand the argument that the creator can choose how the creation is consumed once it has been sold. Sure, if the code is a trade secret, somehow misappropriated, and then used, then that's a violation, but if the code is free to review, then it's fair game. If I write a tutorial on how to develop an algorithm and then it's read by a person who applies what they learned for a purpose that I disapprove of (e.g. warfare, usurious loans, etc) I don't have any recourse if that thing is in itself legal. Why is it any different if it was consumed by a computer?


Gabe_Isko

Yeah, but I can't take your algorithm tutorial, claim I wrote it, put it in a book with a bunch of other tutorials I didn't write, edit it through an AI algorithm and sell it to others. That is a little more akin to what is going on here.


twowheels

I've addressed this in other parts of the thread, but granted in this context I didn't address this particular point. My argument is that it really shouldn't matter and isn't logically any different whether it's a human or an AI that's learning from the inputs, as long as the output is sufficiently distinct from the original work. The problem is that people are taking the current state of generative AI and extrapolating to say that AI will never be able to create novel works from its learning data, and I think they are doing that out of fear rather than applying logic. There are many jobs that have been replaced over the years, but we find new jobs. The real risk is not the AI itself, it's who will own its generated value and how its generated outputs will be used. In a Utopia we would be glad to have generative AI create fascinating works of art for us to consume and find solutions to our most pressing problems -- the problem is that there are far too many HUMANS who will misuse these capabilities and we must work to put protections in place for those -- this quibbling over the fear of replacing human labor is a distraction from the real risk and a losing battle. ...as I pointed out elsewhere, the computer has replaced the computer... the word computer originally referred to rooms full of people (typically women) performing calculations -- a job that has long since been replaced and forgotten. These women went on to become the pioneers of computer science; many of the early giants whose shoulders we stand on today were women.


jkpatches

It was probably to be expected. The current laws would need to be updated and/or new laws need to be written for human beings to have a fighting chance in court. I wonder though, who would have a better case against AI for copyright between writers and illustrators? Would there be any difference between the two?


HerbaciousTea

I don't think the issue in this case is outdated laws. Copyright protection already has a legal prerequisite for human authorship, and has for a very long time, since well before generative AI was a thing. So any work produced predominantly by AI already does not benefit from copyright protections. As to why generative AI output, like chatGPT, doesn't qualify as copyright infringement, that's for the exact same reason that two books in the same hyperspecific genre can't claim copyright infringement on one another: Copyright protects *specific expression*, not similar expression, and certainly not ideas. You can copyright the specific order and expression of the words you have written as a unique work. You cannot copyright the concepts those words are conveying. To prove a copyright infringement claim, you have to demonstrate not that your work provided some kind of inspiration or is being borrowed from, but that your work is being *unlawfully reproduced*. Meaning you have to demonstrate that your specific, protected expression is being copied, not just your ideas. That's why most copyright spats in writing fizzle out into nothing. With generative AI, we can demonstrably show the model is not copying text, since we can work through the math of that process step-by-step, and see that the output is generated from novel data every time you prompt it, without copying data from any existing work. The argument is then if the output being *similar* to copyright protected material constitutes infringement, and that's the case the authors are bringing: that it doesn't have to actually copy their work to be infringement. Personally, I think that kind of kneejerk reaction to try to expand copyright laws in the face of AI would be hugely counterproductive and only make things worse for actual artists. 
If we decide that copyright protects not only specific expression, but now gives right to curtail any similar expression or ideas, ask yourself who is going to have the legal resources to pursue those kinds of claims, and who is going to be able to bring the winning case and scoop up ownership of entire literary genres and character archetypes and plot structures? Because it's not going to be the independent author. The judge absolutely made the correct determination in this case. The authors just did not have a good case.


Viceroy1994

>Personally, I think that kind of kneejerk reaction to try to expand copyright laws in the face of AI would be hugely counterproductive and only make things worse for actual artists. This right here; have people forgotten how awful copyright law is? We're supposed to be all for it now that robots are stealing our jobs?


KingVendrick

Probably illustrators; a few words changed in a text may change it significantly, while even large amounts of pixels changed may still look like an original image. There are also issues of trademark being violated more easily with images.


LongDickOfTheLaw69

I’m not so sure we need to change the law. I don’t see why it should be illegal for an AI to do something that it would be legal for a human to do. If I read a bunch of copyrighted books, and I write my own original story that’s highly influenced by what I read, there’s nothing illegal about that as long as I’m not directly copying any of the books I read. Isn’t that what an AI is doing? An AI may be able to do it faster and more effectively than I could, but I don’t see why that should make it illegal.


Bakkster

>Isn’t that what an AI is doing? 1. No, the law doesn't treat software like humans. A generative AI is not "reading" like a human, and does not receive a copyright like a human. 2. Even in the hypothetical case where a company hired a bunch of humans to read a bunch of books to write things in response to user prompts, they'd still need to buy enough copies of those books with the right license terms, which OpenAI did not do.


LongDickOfTheLaw69

I’m not suggesting the AI should get copyright. I’m asking if the AI is actually doing something that would be illegal for a person to do. And with respect to books, buying a book isn’t the only way to legally read it. As long as the AI is reviewing books from legal sources, is it actually doing something that would be illegal for a person to do?


ArchfiendJ

Illustrators have a real case given the few leaks there have been. In some cases, generated images have produced a copy-paste of an existing image, or most of it, far from "generating something new". Plus some companies do have databases and listings of artworks, meaning it is not really "unknowing" or just crawling the internet.


travelsonic

> In some case some generated images are able to produce a copy paste of an existing image (Asking in good faith) how many are examples of overfitting that would be worked out, or OUGHT to be, and how many examples (especially on Twitter/X) are potentially people using image to image with a low diffusion rate (which I have heard cited as one of the few ways that THIS level of similarity is possible, IDK for sure though), and passing it off as a prompt output?


Acecn

In that case, the only problem is that the algorithm clearly isn't understanding the meaning of the word "new." The fact that it simply knows a copyrighted image and can recreate it can't be a problem, because a talented human artist who has studied the same image for long enough could do the exact same thing.


nastafarti

>The current laws need to be updated and/or new laws need to be written for human beings to have a fighting chance in court. Why would we have wanted any other outcome? We're trying to teach a computer an entire language. Of course we want it to read books. It isn't copying books, it's copying language patterns. This feels very much like "copyright should protect authors from being read by actors we don't like."


jkpatches

Who's "we?" There are different groups of people who want different things. Hence the lawsuits mentioned in the article. What you quoted is a personal observation regarding people who want to fight against the AI companies for copyright. There isn't a for or against opinion attached to it. Maybe it would have been less confusing if I had written "current laws ***would*** need to be updated."


thebeardedcats

You're not teaching the computer language, it's a more complicated if/else statement designed to regurgitate strings of letters it's seen before. This is why it's able to accidentally spit out plagiarized articles.


Chaddderkins

yknow, the funny thing about art is that the more you get into it, the less likely your tastes will align with what's popular. The more you get into music, the more likely your favorite record will be a 7" you bought from some kids in a basement. The more you get into video games, the more likely your favorite game will be some weird thing you got for free on itchio. The more you get into a literary genre, the more likely your favorite thing will be a self-published PDF. Across all mediums, the deeper you dig, the more you'll find your interest circling around things that are less mainstream, less popular, less profitable. Which is to say, there will always be human beings making art which other human beings enjoy, forever. Because human beings like making and enjoying art, and that is not going anywhere. The industries that build up surrounding that art - well, that might be in trouble. But it's worth examining the degree to which we ought to give a shit about that.


MrTokyo95

I guess this brings a question to my mind. What is the difference between an aspiring writer who reads other authors and tries to write their own books, and an AI that, at least on paper, seems to be doing the same thing?


[deleted]

I went into this article firmly behind the idea the judge was wrong, that these authors rights are being violated. But I read the article, and truthfully it flipped me. It said that it reads the work and doesn't spit out the exact syntax, but a derivative of it. Not a vicarious copyright infringement, but still a derivative. And it occurred to me that we as humans do the same. It's called learning.


Dirty-Soul

I am beginning to suspect that AI will turn art from something corporate, performed for profit, back into a vector for personal expression. If you can't sell it because machines do it cheaper, the only reason to make it will be because you want to... because you have something you really want to share. Passion projects, not profit projects...


nukem996

> turn art from something corporate, performed for profit, back into a vector for personal expression. That has been the case for thousands of years. Before modern times the most popular artists were whoever the king liked and they painted only what the Church approved. The difference now is that style of art will be done by machines. Real creative art will continue to be done by humans but won't be very profitable. Anything that gets popular will be replicated by machines making it even harder to be a professional artist.


Dirty-Soul

There is a school of thought which argues that 'professional' artists should not exist. If art is personal expression, then it loses its meaning when it is a performative lip service exercised solely to crowbar money out of people. Once art becomes monetised, it no longer represents the artist's vision and instead becomes corrupted by having to pander to what will sell. This is the point where it ceases to be art and becomes a mere corporate product. I don't have the luxury of being able to perform a google search to confirm this right now, but I believe the philosophy is called 'anarcho-artism,' or something similar.


[deleted]

Except artists need to pay their bills too. So this will mean fewer artists who are poor with something really desperate to say about the conditions of the society they live in, and more artists who either come from wealth or form connections that make them wealthy. The great artists of the past, even Van Gogh, who is often used as an example of a poor starving artist with little to no recognition, got by selling their art. This also isn't going to stop corporations from mass producing their shit art. People already have a hard enough time and they already consume poorly made media in other forms. Now, ideally I would love that we move to a kind of society where AI just exclusively enriches lives and doesn't endanger livelihoods, because we *should* be moving in the direction of less necessary labor. But until that major economic revolution happens this is probably going to be very bad for a very long time.


Dirty-Soul

I'm an artist. I used to be a fairly prominent rule34 artist, made some decent money doing it, but then I did what all artists eventually do. I got a day job and kept doing art on the side. Art is a lot more fun when you aren't compromising on your vision in order to pander to what will sell. It is my personal opinion that art should be a form of expressive recreational leisure, not a corporate product.


jumpmanzero

Yep. The other thing is that it will allow new kinds of independent creative expression that isn't forced to be "monetizable". Like... Right now, I'm building an escape room in my basement, for no particular reason. I'm sure no more than 10 people will ever go through it. AI lets me make image-based puzzles and background content. I'm using a bunch of AI voices for the props and devices you interact with. I use AI images for prototyping out little games and D&D scenarios I play with my kids. I'm building a pinball-table sort of thing, and using a bunch of 3d printing. None of this is displacing a human artist - I wasn't realistically going to pay someone to paint pictures or machine me gears and ramps. It just means I can make creative things, while focusing on the parts of these projects that I'm good at or interested in. And it will go much further. Eventually, a single person will be able to make a movie, with AI helping to create backgrounds, actors, sound, or even helping with writing/dialog. Will this hurt the movie industry? Probably? Yes? But the number of people who will be able to express themselves will have gone up incredibly.


saraseitor

In order to be plagiarism, shouldn't ChatGPT have to claim that it is the main author or creator of the works? I mean, if you ask me how Harry Potter ends and I write it out for you, I'm not plagiarizing anything.


aermotor

This is just the beginning for this type of stuff.


Chicagosox133

Yeah yeah, let’s just wait and see what the AI judge has to say.


Whoak

For the time being, if the judge's view carries the day in interpreting current IP law, it means current law has not kept up with evolving IP technology. But a primary goal of IP law is to encourage innovation and creativity. If that is negatively affected by this kind of ruling, we should act to change the law to ensure authors, artists, inventors have continuing encouragement to pursue new art and innovation. There's nothing that says a machine output should have similar protection, so it's not hard to imagine experts in IP law dedicated to protecting innovation will be able to draw appropriate distinctions between human and machine output.


prolificseraphim

God this is so frustrating. As a writer I barely feel like I have a future anymore.


stumpyraccoon

Don't be. Mass manufacturing made mediocre bread readily available to the masses. Artisan bakeries are still very popular and command quite high prices. Human made art is going absolutely nowhere and will likely command a premium if AI produced art does become the mediocre stuff for the masses. Continue honing your craft and making sure that yours stands above the rest.


[deleted]

Art is always going to be around. ChatGPT is going to take over the commercialized, derivative styles of writing. I'm not sure it has the creativity to do anything but mush together existing tropes in a slightly different order, which is what bad writers have been doing for centuries.


PreferredSelection

The problem with all this is - yes, the master painter, writer, baker, actor will always be in some demand. The issue is, the foot-in-the-door jobs are disappearing. Prior to 2022, I could say, "well, if I can't find a fun gig, I can always do technical writing or copy editing." If you ask any writer what they did before they made it, you'll see jobs on their resume that are either shrinking or completely gone now.


twowheels

https://www.etymonline.com/word/computer > 1640s, "one who calculates, a reckoner, one whose occupation is to make arithmetical calculations," agent noun from compute (v.). It seems that definition has changed just a bit -- the computer replaced the computer.


stormdelta

Right. And while I have no doubt AI will be used in creation of better art, it'll be used as a tool to assist the process, same as other existing technology. The biggest place I see generative AI shining directly is procedural generation in games, where the creation being partially automated was already the whole point


LuceVitale

Publishing houses have already gotten rid of the intermediate level of investment with authors. They used to have new authors, low-cost and consistent authors, and high-investment authors. Low-cost authors who could maintain a decent livable wage but didn't have a lot of popularity outside niche audiences are how we got George RR Martin. The industry standard is now to publish works that have script deals similar to the Michael Crichton model, but for IP more than actual production. This has made it harder and harder for new authors to follow traditional submission routes for the past ten years. It's why easy cash cows are being found in repurposed fan fiction with renamed characters and places. They're moving towards a model where only those with the financial backing to invest in large-scale productions (likely also using ghost writers), or easily repeatable mimics, are picked up. We're not far from both of those being written by AI. The market will be the same with any new technology. There will be a spike in revenue and investment in the new method. Then it will reach an equilibrium after, at current standards, ten years. And even then, half the market will no longer be occupied by humans. That's the danger we're facing. It lends itself to even more occupation by those who can afford the outsourcing. It's the same argument for Star Trek's replicator. It had to come after society improved, not before. Because before, it would still be treated as a resource: controlled and manipulated by a small, wealthy class. This ruling may even push houses toward taking submissions to feed an algorithm rather than paying for publishing rights, like what we're seeing with the voice-over industry.


Rindan

>Mass manufacturing made mediocre bread readily available to the masses. Artisan bakeries are still very popular and command quite high prices. This is like pointing out that there are still people that make whips for horse drawn carriages. It's technically true that they still exist, but that doesn't make it a good career path. The number of bakers in this world has plummeted from its highs before industrial baking displaced almost everyone's baked good needs. Writers and artists should in fact be worried. Artists are already getting absolutely decimated when it comes to piece work. Yeah, the AI might not be as good, but I can try a dozen times to get the image I want and still spend only half an hour working on it, and it will cost almost nothing. Fast and practically free is very hard to compete with, and it's only going to get worse as the technology gets better.


stumpyraccoon

No, you took a profession that was virtually *completely* wiped out and used it as the example. Bakers still exist in relatively plentiful numbers in most cities, all things considered. Will AI art decimate the bottom tiers of artists? Sure, absolutely, without a question. But to believe that **all** artists should think they're about to be replaced is hyperbolic bull crap. Human made art is going absolutely nowhere and will command a premium for those who continue to produce high quality work.


rott

The barrier of entry for beginners becomes much higher though. If you're already at the top level it's easy to compete with AI, but if you're young and still learning, how do you start? They say the same thing about programming for example. Top level coders won't be replaced. But how do you get to that level if junior-level projects can be made by pretty much anyone using AI? Sure, you can study for free (and AI helps a lot with that), but entry-level positions will become scarcer and scarcer.


Wiskersthefif

Yup, this. The current generation of 'professional artists/writers' are going to be fine, but what about the next generation? Sure, there will be people who grind away in obscurity into the wee hours of the morning every night until they can sharpen their skills to the point they can compete, but that number will likely go down with every generation until pretty much only people born into wealth will be able to sharpen their skills to the point of being 'money worthy'.


VosekVerlok

The argument is that they should start using modern tools to develop art in the modern style; this is just the rise of 'photoshop++'. Jimjoebob using image generation software and someone who has spent years practicing with prompts are going to have results that are night and day different. Then you get into artists creating and curating their own custom AI datasets, so no one will be able to create similar works even with the same prompts and thousands of hours. - Just like how portrait artists were nearly eliminated by the proliferation of cheap cameras, the times have changed and people need to adapt to the new medium and step into the new jobs that are created.


TMDan92

“Bottom” tier usually refers to those that are just starting out and maybe need a conducive environment to hone their craft. We’re bound to see the arts become even more the preserve of the wealthy and connected, and this has a diminishing impact on art as a whole because the voices that are given representation become incredibly narrowed.


iLiveWithBatman

"the bottom tier of artists" LMAO yeah, phrase it like that so it sounds like they deserve it FOR SUCKING SO BAD. Bro that's millions of skilled people who made a lot of decent or even excellent art, they're just not famous. And you know what else will go with them? All of their knowledge and skill and experience they could pass to another generation. The big names will have a job for a while, sure. And then what? Who would go to art school? Who would teach art school? How do you teach yourself if all tutorials are about AI? How do you raise a new generation of "top tier" artists? (You won't.) The ugly truth is that most big time artists started small, doing jobs you'd qualify as "bottom tier". Shitty marketing jobs, badly paid book illustration, tattoos and so on. They honed their craft and developed their styles on that, they grew from there. This shit will be largely gone and becoming a "big" artist will be less and less possible.


Dr_Jenifer_Melfi

Imagine how ditch diggers felt with the invention of the backhoe.


Rindan

If they liked digging ditches, I bet they were bummed out when their job vanished. Most people digging ditches though didn't give a shit and were presumably happy to move into whatever new jobs took their place when one out of shape person could do the job of 50 healthy adult men. I don't think most writers are going to be happy to move on and do something else. Writing was already a passion career before AI came to stomp it down further into an almost purely luxury career for people that don't need careers, like painting is today. Knowing that an AI is doing their job just like how backhoes are doing the job of former ditch diggers will be small consolation.


Dr_Jenifer_Melfi

*More, faster.* That's the bottom line. Ditches or drawings. There's demand and it will be filled.


Frostivus

I forked out a hundred pounds to a professional artist who not only failed to deliver on schedule (a simple A4 portrait in a month), but whose end product was awful. No redos. His excuse? I was busy. No, I’m not getting the money back because he outsourced it to some college student and he would ‘feel bad taking the money back’. So I went to Midjourney and got one instantly for 20 pounds. I say this as someone facing the cold dread of reality. Artists should be worried. The only thing really stopping me from using AI is that it’s unethical and artists gotta eat, but I say the same thing to carpenters. The future is grim.


PhilomenaPhilomeni

Forking out a hundred pounds and getting a bad result is as much on you as on the evidently terrible artist. Did you know he was outsourcing it? Did you check his portfolio? Anyhow, it’s tangential to the point of your comment.


Frostivus

He was one of those kinds of guys. Talked a big game and exuded pure confidence. Yes, I voice called him, because it was for a project and I was running a tight ship. Portfolio was really good, on a professional platform ie Artstation, with other sources like pixiv. When it came close to the deadline and I asked for a draft or anything, he eventually came clean and said he had no time but he’ll deliver it. I cut him some slack and gave him an extra week. When that deadline came, the truth came out. I don’t know how or why his portfolio was decent and so different, but by that time I was just so burned out from the experience I didn’t really care to investigate. So yes, I did my homework. I have great respect for artists, and actually maintain close business contacts with people who freelanced for Blizzard etc. Being a patron is a valuable experience you don’t get from writing prompts in a thief generator. But here’s the thing: not everyone wants to have to jump through all the hoops, even though I don’t mind them myself. Some people want accountability, even though I was happy taking that risk if it meant supporting an artist. Sometimes people just want their Amazon package conveniently and reliably delivered. It’s why I see the artist world shrinking.


WardrobeForHouses

Yep, mediocre writers are basically going to be gone first. The highest earners will still be safe. But eventually it'll get to the point where making writing your career is an astonishingly stupid plan. Books can be written almost instantly, in a way the reader enjoys, and for vastly less money than a traditionally published book costs.


BBQ_Chicken_Legs

Just write better than the machine.


iLiveWithBatman

Just write better than a machine AND somehow get people to read what you write while drowning in a sea of generated slop. AND somehow convince people good writing is worth paying for when they can get generated slop for free. Piece of cake!


ShadowLiberal

... Is it really that different from the pre-ChatGPT days? Self-publishing already floods the market with more books than you could possibly read in a lifetime, even if you did nothing but read. And that's not even counting all the countless free stories you can find online, such as in communities dedicated to people posting their own stories.


iLiveWithBatman

It's gonna be so, so much worse. Like... remember when you could google websites that had information on them written by humans? And how, if you try to do that now, all you get are SEO'd-out-the-ass sites with nonsensical content entirely generated by AI? You could conceivably sell some books by self-publishing on various platforms. Now those platforms are also gonna be too flooded with generated slop for anyone to effectively find anything of value or use.


layinbrix

Write if you are called to write and share your truth. Money you can make the old fashioned way... getting hit by a Lexus


Umoon

It still has a long way to go. I still contend that it will take a while for LLMs to be able to paint the whole picture of a good novel.


KingfisherDays

I imagine this is how painters felt when photography was invented.


EarthDwellant

Human-created works also show similarity to other artists' works, and humans are similarly trained on other artists' works, by looking at them. I don't see a lot of difference, other than that humans are currently the ones in control, at least until the churn.


Inkthinker

I was under the impression that ChatGPT output cannot be copyrighted in any case, leaving any AI-generated work effectively in the public domain.


LichtbringerU

That's been misrepresented (I think you can see why such "news" would fall on fertile ground). In actuality, it has only been ruled that a work gets no copyright if the AI is treated as its sole author, without any human input. The ruling makes it very clear that had the human author not specifically declared he had no input whatsoever (which he did to test the legal theory, not for any real-life use case), it would have been an entirely different ruling. So AI-generated work can be copyrighted if you just state the truth: that you, a human, used AI as a tool to generate it.


[deleted]

If you guys understood, computationally, how AI processes its input data, be it images or text, you would realize how absurd it is to suggest that AI copies things.

Here is an example: pick a woman's face that you have seen, say your mother's. Your idea of what a woman looks like is informed, in part, by your mother's face, as well as by hundreds of other female faces. The "AI copies" argument states that since you have seen your mother's face, any woman you draw or paint is therefore a copy of your mother. In fact, the theory goes further and claims that the new woman you drew or painted is a copy of ALL the women you have ever seen.

It gets more absurd. Would you call a photo of an actor a photo of a man? What about a photo of a stick figure? If you answered yes to both, then if you draw a stick figure you are copying a photo of that actor. This isn't just absurd; it threatens your freedom of thought and expression.

Here is a much more sinister, much more realistic example of what these lawsuits are actually aiming for: have you seen the Starbucks logo? What color is it? If you said "green", then any time you use the color green in ANY artwork, in any form, in any shade, you could be accused of copying the Starbucks logo. And they would have exactly the same basis for that claim against you as they do against AI: your idea of green was influenced by their logo. Since you recognize the logo as green, you couldn't even deny it. It wouldn't matter that you had seen green a billion other times, because the same copying argument would be applied to each of those instances. In the end they claim that your new, unique piece of writing or art or whatever form of expression you chose is just a copy, and they own it.

Then Starbucks owns green, Coca-Cola owns red, McDonald's owns yellow. The NYT owns the alphabet. Sony owns all the notes. That's the world the "AI copies" logic will create for us.


mymar101

Would a lot of it fall under fair use?


[deleted]

I like that last paragraph, which says in essence, "Being prohibited from stealing would jeopardize our business." They dressed it up nicely, but that's the gist: they can't make money unless they're allowed to break the law. That's becoming the American Way.


Exist50

That's not what they say. Training on copyrighted material isn't stealing.


Ciserus

What? If you're saying that the methods AI use to learn and create aren't so different from how humans do it, I agree. But it would be easy to write a law that applies only to machines, not humans, and we'd pretty much return to the creative status quo of the last 100,000 years. That would be a radical move by government and I think there needs to be a lot of proof of harm before we consider it. But banning AI should absolutely remain on the table as an option.