This is the right way forward for open source: correct attribution, tightening the connection between agents and the humans behind them, and putting the onus on the human to vet the agent's output. Thank you Linus.
I agree this is very sane and boring. What is insane is that they have to state this in the first place.
I am not against AI coding in general. But there are too many people "contributing" AI-generated code to open source projects even when they can't understand what's going on in their own code, just so they can say in their resumes that they contributed to a big open source project once. And when the maintainers call them out, they just blame it on the AI coding tools they are using, as if they are not opening PRs under their own names. I can't blame any open source maintainer for being at least a little sceptical when it comes to AI-generated contributions.
I think them stating this very simple policy should also be read as them explicitly not making a more restrictive policy, as some kernel maintainers were proposing.
From everything I'm seeing in the industry (I'm basically a noncoder who chooses not to use AI in the stuff I make, but I'm privy to the private work experience of coders and creators in that field through social contacts), I feel like I can shed a bit of light.
It looks to me like a more restrictive policy will be flat-out impossible.
Even people I trust are going along with this stuff, akin to CAD replacing drafting. Code is logic as language, and starting with web code and rapidly metastasizing to C++ (due to complexity and the sheer size of the extant codebase, good and bad), AI has turned slop-coding into a 'solved problem'. If you don't mean to do the best possible thing or a new thing, there is no excuse for existing as a coder in the world of AI.
If you do expect to do a new thing or a best thing, in theory you're required to put out the novel information as AI cannot reach it until you've entered it into the corpus of existing code the AI's built on. However, if you're simply recombining existing aspects of the code language in a novel way, that might be more reachable… that's probably where 'AI escape velocity' will come from should it occur.
In practice, everybody I know is delegating the busywork of coding to AI. I don't feel social pressure to do the same, but I'm not a coder. I'm something else that produces MIT-licensed codebases for accomplishing things that aren't represented in code AS code; rather, it's for accomplishing things that are specific and experiential. I write code to make specific noises I'm not hearing elsewhere, and not hearing out of the mainstream of 'sound-making code artifacts'.
Therefore, it's impractical for Linux to take any position forbidding AI-assisted code. People will just lie and claim they did it. Is primitive tab-complete also AI? Where's the line? What about when coding tools uniformly begin to tab-complete with extensive reasoning and code prototyping? I already see this in the JetBrains Rider editor I use for Godot hacking, even though I've turned off everything I can related to AI. It'll still try to tab-complete patterns it thinks it recognizes, rarely with what I intend.
And so the choice is to enforce responsibility. I think this is appropriate because that's where the choices will matter. Additions and alterations will be the responsibility of specific human people, which won't handle everything negative that's happening but will allow for some pressures and expectations that are useful.
I don't think you can be a collaborative software project right now and not deal with this in some way. I get out of it because I'm read-only: I'm writing stuff on a codebase that lives on an antique laptop without internet access that couldn't run AI if it tried. Very likely the only web browsers it can run are similarly unable to handle 2026 web pages, though I've not checked in years. You've only got my word for that, though, and your estimation of my veracity based on how plausible it seems (I code publicly on livestreams, and am not at all an impressive coder when I do that). Linux can't do what I do, so it's going to do what Linux does, and this seems the best option.
You can refuse to use AI personally, but why would you not help yourself when you can?
… my dad is 86, and only after I signed him up to Claude could he write Arduino code without a phone call to me after 5 minutes of trying himself. So now, he's spending 4+ hours at a time focused on writing code and building circuits for things he only dreamt about creating for decades.
Unless you’re doing something for the personal love of the craft and sharpening your tools, use every advantage you can get in order to do the job.
But… as above, if you’re doing it for the love of it, sure - hand crafted code does taste better and you know all the ingredients are organic
It cannot be overstated how religiously opposed many in the Linux community are to even a single AI-assisted commit landing in the kernel, no matter how well reviewed.
Plenty see Torvalds as a traitor for this policy and will never contribute again if any clearly labeled AI generated code is actually allowed to merge.
Are they against change in general, or certain kinds of change? Remember when social media was seen as a near-universally good kind of progress? Not so much now.
Social media has never been seen as a universally positive force. It's the same with AI: it has good and bad aspects, as does any technology with an impact on this scale, and AI will arguably have a much bigger impact imo.
People are generally against change that forces them to change the way they used to do things.
I'm sure most will have their reasons why they are against this particular change, but I don't think it will affect anything. The genie is out of the bottle, AI is here to stay. You either adapt or you will slowly wither away.
It reminds me of something I read on mastodon: "genie doesn't go back in the bottle say AI promoters while the industry spends a trillion dollars a year to try to keep the genie out of the bottle"
That is the bait and switch. The end goal is that you are out of the equation. Your perceived effectiveness at using AI as an exchange of labor diminishes over time to the point that you become irrelevant.
Who has that end goal?? Who is going to direct the AI if only the CEO is left in the organization? The CEO will never actually do it, and will always need someone who can and will do it. I just can't see a grand plan to take humans out of the equation entirely.
If you selectively read one sentence of my comment, you risk missing the forest for the trees. I don't have any particular knowledge of the Arab Spring, so I won't comment on that, but I quite clearly said that technology has good and bad aspects to it.
This is like blaming a knife for being a killer weapon. Social media is inherently good if the owners of the platforms allow for good interactions to take place. But given the misalignment of incentives, we don't have nice things.
For those who might wonder how accurate this is, there is advice from the Federal Register to this effect. [0] It's quite comprehensive, and covers pretty much every question that might be asked about "What about...?"
> In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.
I cannot take seriously any politician or lawyer using the words "artificial intelligence", especially when applied to models from 2023. These people have never used LLMs to write code. They'd know even current models need constant babysitting or they produce an unmaintainable mess; calling anything from 2023 AI is a joke. As the AI proponents keep saying, you have to try the latest model, so anything 2 years old is irrelevant.
There are really two ways to argue this:
- Either AI exists and then it's something new and the laws protecting human creativity and work clearly could not have taken it into account and need to be updated.
- Or AI doesn't exist, LLMs are nothing more than lossily compressed models violating the licenses of the training data, their probabilistically decompressed output is violating the licenses as well and the LLM companies and anyone using them will be punished.
Yeah, an LLM, being a machine obviously shouldn't hold copyright. But that doesn't stop people claiming that running vast amounts of code through an LLM can strip copyright from it.
Ultimately LLMs (the first L stands for large and for a good reason) are only possible to create by taking unimaginable amounts of work performed by humans who have not consented to their work being used that way, most of whom require at least being credited in derivative works and many of whom have further conditions.
Now, consent in law is a fairly new concept and for now only applied to sexual matters but I think it should apply to every human interaction. Consent can only be established when it's informed and between parties with similar bargaining power (that's one reason relationships with large age gaps are looked down upon) and can be revoked at any time. None of the authors knew this kind of mass scraping and compression would be possible, it makes sense they should reevaluate whether they want their work used that way.
There are 3 levels to this argument:
1) The letter of the law - if you understand how LLMs work, it's hard to see them as anything more than mechanical transformers of existing work so the letter should be sufficient.
2) The intent of the law - it's clear it was meant to protect human authors from exploitation by those who are in positions where they can take existing work and benefit from it without compensating the authors.
3) The ethics and morality of the matter - here it's blatantly obvious that using somebody's work against their wishes and without compensating them is wrong.
In an ideal world, these 3 levels would be identical but they're not. That means we should strive to make laws (in both intent and letter) more fair and just by changing them.
If consent to use of your code in AI training can be revoked at any time, that makes training impossible, since if anyone ever withdraws consent, it's not like you can just take out their work from your finished model.
Nice, -4 points. Somebody (many somebodies, in fact) took that personally, yet none were able to express where they disagree in a comment.
Look, if you think I am wrong, you can surely put it into words. OTOH, if you don't think I am wrong but feel that way, then it explains why I see no coherent criticism of my statements.
When your comment is about how you can’t take your counterparty seriously and they’re a joke, you’re incentivizing people who disagree to just downvote and move on.
The signal you’re sending is that you are not open to discussing the issue.
It's weird how people on HN state legal opinion as fact… e.g. if someone in the Philippines vibecodes an app and a person in Ecuador vibecodes a 100% copy of the source, what now?
Meanwhile I expect that intellectual property protections for software are completely unenforceable and effectively useless now. If something does not exist as MIT, an LLM will create it.
The playing field is level now, and corpo moats no longer exist. I happily take that trade.
Because AI is also proving to be very good at reverse engineering proprietary binaries or just straight up cloning software from test suites or user interfaces. Cuts both ways.
Have you ever seen what obfuscation looks like when somebody puts the effort in?
Not to mention companies will try to mandate hardware decryption keys so the binary is encrypted and your AI never even gets to analyze the code which actually runs.
I spent a fun week during Christmas figuring out some really obfuscated binary code with anti-debugging and anti-tampering measures in a cryptographic context. I didn't use Ghidra or IDA or anything beyond gdb with DeepSeek chat in a browser. That low effort got me what I needed to get.
Companies have been encrypting code to HSMs for decades. Never stopped humans from reverse engineering so it certainly will not stop AI aided by humans able to connect a Bus Pirate on the right board traces. Anything that executes on the CPU can be dumped with enough effort, and once dumped it can be decompiled.
You are agreeing with me, you just don't know it yet.
1) The financial aspect: As you say, more and more advanced DRM requires more and more advanced tools. Even assuming advanced AI can guide any human to do the physical part, that still means you have to pay for the hardware. And the hardware has to be available (companies have been known to harass people into giving up perfectly moral and legal projects).
2) The legal aspect: Possession of burglary tools is illegal in some places. How about possession of hacking tools? Right now it's not a priority for company lobbying; what about when that's the only way to decompile? Even today, reverse engineering is a legal minefield. Did you know that in some countries you can technically reverse engineer legally, but only under certain conditions, such as having a disability necessitating it and only using the result for personal use?[0]
3) The TOS aspect: What makes you think AI will help you? If the company owning the AI says so, you're on your own.
---
You need to understand 2 things:
- Just because something is possible doesn't mean somebody is gonna do it. Effort, cost and risk play huge roles. And that assumes no active hostile interference.
- History is a constant struggle between groups with various goals and incentives. Some people just want to live a happy life, have fun and build things in their free time. Other people want to become billionaires, dream about private islands, desire to control other people's lives and so on. People are good at what they focus on. There's perhaps more of the first group but the second group is really good at using their money and connections to create more money and connections which they in turn use to progress towards their primary objectives, usually at the expense of other people. People died[1] over their right to unionize. This can happen again.
Somebody might believe historical people were dumb or uncivilized and it can't happen today because we've advanced so much. That's bullshit. People have had largely the same wetware for hundreds of thousands of years. The tools have evolved but their users have not.
AI proponents completely ignore the disparity of resources available to an individual and a corporation. If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time. Or maybe at least 1000:1 if you're an optimist.
They have access to more money for advertising, they have an already established network of existing customers, they have legal and marketing experts on payroll. Or just look at Microsoft, they don't even need advertising, they just install their product by default and nobody will even hear about mine.
Not to mention, as you said, the training advantage only flows from open source to closed source, not the other way around.
AI proponents who talk about "democratization" are nuts, it would be laughable if it wasn't so sad.
>If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time.
As a person who works for a company with 25k people, I would disagree. You, a single person, will often get to the basic product that a lot of people will want much faster than a company with 1k, 5k or 25k people.
Bigger companies are constrained by internal processes, piles of existing stuff, inability to hire at the scale they need, and the larger required context. Also regulation and all that. Bigger companies are also really slow to adapt, so they would rather let you build the product and then buy out your company with your product and the people who built it. They are at a temporary disadvantage every time the landscape shifts.
The point wasn't about the number of people; the point was that a company which employs that number of people has enough money that can be converted to leverage against you.
Besides that, your whole argument hinges on large companies being inflexible, inefficient and poorly run. Isn't that exactly the kind of problem AI promises to solve? Complete AI surveillance of every employee, tasks and instructions tailored to each individual, and superhuman planning. Of course, at that point, the only employees will be manual workers, because actual AI will be much better and cheaper at everything than every human, except those things where it needs to interact with the physical world. Even contract negotiations with both employees and customers will be done with AI instead of humans; the human will only sign off on it for legal requirements, just like today you technically enter a contract with a representative of the company who is not even there when you talk to a negotiator.
The corporate moat is the army of lawyers they have. It doesn't matter whether they win or not if you can't afford endless litigation. It's the same for patents.
Funny, their army of lawyers seems incapable of stopping me from easily downloading pirated software or coding an open alternative to their closed-source software with AI if I wanted to..
You cannot keep a purely legally-enforced moat in the face of advancing technology.
Music is free, because music piracy is unenforceable so the law is irrelevant. Now, I personally buy most of my music on vinyl because I want to support artists, but absolutely nothing forces me to do that as all the music is available for free.
Uhm... yes? The cost of downloading pirated music is essentially zero. The only reason why people use services like Spotify is because it's extremely cheap while being a bit more convenient. But jack up the price and the masses will move to sail the sea again.
The cost of stealing has always been essentially zero. Same argument can be made for streaming, and yet Netflix is neither cheap nor struggling for subscribers.
In the sense that artists cannot expect to get any money for their work, yeah, music's free. Becoming a meme or a celebrity on the grounds of personality is still fair game, to the extent that AI is not impersonating people effectively at scale yet.
Yet.
A whole bunch of people I watch on youtube (politics, analysts, a weatherman) are already seeing AI impersonation videos, sometimes misrepresenting their positions and identities. This will grow.
So, you can't create art because that's extruded at scale in such a way that it's just turning on the tap to fill a specified need, and you can't be a person because that can also be extruded at scale pretty soon, either to co-opt whatever you do that's distinct, or to contradict whatever you're trying to say, as you.
As far as being a person able to exist and function through exchanging anything you are or anything you do for recompense, to survive, I'm not sure that's in the cards. Which seems weird for a technology in the guise of aiding people.
Generating software still costs tokens; generating something like ms-word will still cost a significant amount, and it takes a lot of human effort to prompt and validate. Having a proven solution still has value.
You can already generate surprisingly complex software on an LLM on a raspberry pi now, including live voice assistance, all offline. People's hardware can self-write software pretty readily now. The cost of tokens is a race to zero.
Ironically, I actually suspect the exact opposite. Linux has no real choice in this matter because most of the code is written by Google, Red Hat, Cisco, and Amazon at this point, and these big cos are all going to mandate their developers have to use AI coding agents. Refuse to accept these contributions and we're just going to end up with 20 Linuxes instead of one, and the original still under the control of Linus will be relegated to desktop usage and wither and die.
I don't think modified by a human is enough. If you take licensed text (code or otherwise) and manually replace every word with a synonym, it does not remove the license. If you manually change every loop into a map/filter, it does not remove the license. I don't think any amount of mechanical transformation, regardless if done by a human or machine erases it.
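A toy C sketch of the kind of mechanical transformation meant here (hypothetical snippets, not from any real codebase):

    /* Original: sum the first n elements of a buffer. */
    int sum_buf(const int *buf, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += buf[i];
        return sum;
    }

    /* "Transformed": loop form and names swapped, logic identical.
     * Still recognizably the same work; the license follows it. */
    int total_elems(const int *data, int count)
    {
        int total = 0;
        int i = 0;
        while (i < count) {
            total += data[i];
            i++;
        }
        return total;
    }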
There's a threshold where you modify it enough, it is no longer recognizable as being a modification of the original and you might get away with it, unless you confess what process you used to create it.
This is different to learning from the original and then building something equivalent from scratch using only your memory without constantly looking back and forth between your copy and the original.
This is how some companies do "clean room reimplementations" - one team looks at the original and writes a spec; another team, which has never seen the original code, implements an entirely standalone version.
And of course there are people who claim this can be automated now[0]. This one is satire (read the blog) but it is possible if the law is interpreted the way LLM companies work and there are reports the website works as advertised by people who were willing to spend money to test it.
If these actually were somehow ruled to be infringements, there would already be millions of separate cases needed, so it is already past the point of enforcement.
These sorts of things are almost never tested legally and it seems even less likely now.
Sounds dramatic, but it entirely depends on what "many" and "plenty" means in your comment, and who exactly is included. So far, what you wrote can be seen as an expectable level of drama surrounding such projects.
True - on Mastodon there is a very vocal crowd that are against AI in general, and are identifying Linux distros that have AI generated code with the view of boycotting it.
Doesn't matter. Linux today is a toy of corporations and stopped being community oriented a long time ago. Community orientation, I think, these days only exists among the BSDs and some fringe Linux distributions.
The linux foundation itself, is just one big, woke, leftist mess, with CV-stuffers from corporations in every significant position.
The idea that something can simultaneously be "woke [and] leftist" and somehow still defined by its attachments to corporations is a baffling expression of how detached from reality the US political discourse is.
The rest of the world looks on in wonder at both sides of this.
AIs are not human and therefore their output is a human authored contribution and only human authored things are covered by copyright. The work might hypothetically infringe on other people's copyright. But such an infringement does not happen until a human decides to create and distribute a work that somehow integrates that generated code or text.
The solution documented here seems very pragmatic. You as a contributor simply state that you are making the contribution and that you are not infringing on other people's work with that contribution under the GPLv2. And you document the fact that you used AI for transparency reasons.
There is a lot of legal murkiness around how training data is handled, and the output of the models. Or even the models themselves. Is something that in no way or shape resembles a copyrighted work (i.e. a model) actually distributing that work? The legal arguments here will probably take a long time to settle but it seems the fair use concept offers a way out here. You might create potentially infringing work with a model that may or may not be covered by fair use. But that would be your decision.
For small contributions to the Linux kernel it would be hard to argue that a passing resemblance of say a for loop in the contribution to some for loop in somebody else's code base would be anything else than coincidence or fair use.
The Copyright Office's interpretation of US copyright laws says that AI is not human, thus not an attributable author for copyright registration, and output based on mere prompting is no one's IP; it can't be copyrighted[1].
When AI output can be copyrighted is when copyrighted elements are expressed in it, like if you put copyrighted content in a prompt and it is expressed in the output, or the output is transformed substantially with human creativity in arrangement, form, composition, etc.
That you can't copyright the AI's output (in the US, at least), doesn't imply it doesn't contain copyrighted material. If you generate an image of a Disney character, Disney still owns the copyright to that character.
> That you can't copyright the AI's output (in the US, at least),
It's also not really clear if you can or cannot copyright AI output. The case that everyone cites didn't even reach the point where courts had to rule on that. The human in that case decided to file the copyright for an AI, and the courts ruled that according to the existing laws copyright must be filed by a person/human/whatever.
So we don't yet have caselaw where someone used AI generation and claimed the output as written by them.
> Is something that in no way or shape resembles a copyrighted work (i.e. a model) actually distributing that work?
Does a digitally encoded version resemble a copyrighted work in some shape or form? </snark>
Where is this hangup on models being something entirely different than an encoding coming from? Given enough prodding they can reproduce training data verbatim or close to that. Okay, given enough prodding notepad can do that too, so uncertainty is understandable.
This is one of the big reasons companies are putting effort into the so called "safety": when the legal battles are eventually fought, they would have an argument that they made their best so that the amount of prodding required to extract any information potentially putting them under liability is too great to matter.
> Does a digitally encoded version resemble a copyrighted work in some shape or form? </snark>
Well that's different because an encoded image or video clearly intends to reproduce the original perfectly and the end result after decoding is (intentionally) very close to form of the original. Which makes it a clear cut case of being a copy of the original.
The reason so many cases don't get very far is that mostly judges and lawyers don't think like engineers. Copyright law predates most modern technology, so everything needs to be rephrased in terms of people copying stuff for commercial gain. The original target of the law was people using printing presses to create copies of books written by others, which was hugely annoying to some publishers who thought they had exclusive deals with authors. But what about academics quoting each other? Or literary reviews? Or summaries? Or people reading from a book on the radio? This stuff gets complicated quickly. Most of those things were settled a long time ago. Fair use is a concept that gets wielded a lot for this: yes, it's a copy, but it's entirely reasonable for the one making the copy to be doing what they are doing, and therefore it's not considered an infringement.
The rest is just centuries of legal interpretation of that and how it applies to modern technology. Whether that's DJs sampling music or artists working in visual imagery into their art works. AI is mostly just more of the same here. Yes there are some legally interesting aspects with AI but not that many new ones. Judges are unlikely to rethink centuries of legal interpretations here and are more likely to try to reconcile AI in with existing decisions. Any changes to the law would have to be driven by politicians; judges tend to be conservative with their interpretations.
IANAL; this is just my limited understanding of the matter. With that caveat: it is easy to forget that copyright applies to output - verbatim or exact reproductions and derivatives of a covered work are already covered under copyright.
So if the AI outputs Starry Night, or Starry Night in a different color theme, that's likely infringement without permission from van Gogh, who would have recourse against someone - either the user or the AI provider.
But a starry-night style picture of an aquarium might not be infringing at all.
>For small contributions to the Linux kernel it would be hard to argue that a passing resemblance of say a for loop in the contribution to some for loop in somebody else's code base would be anything else than coincidence or fair use.
I would argue that if it was a verbatim reproduction of a copyrighted piece of software, that would likely be infringing. But if it was similar only in style, with different function names and structure, probably not infringing.
Folks will argue that some things might be too small to do any different, for example a tiny snippet like python print("hello") or 1+1=2 or a for loop in your example. In that case it's too lacking in original expression to qualify for copyright protection anyway.
>AIs are not human and therefore their output is a human authored contribution and only human authored things are covered by copyright.
That is a non sequitur. Also, I'm not sure if copyright applies to humans, or persons (not that I have encountered particularly creative corporations, but Taranaki Maunga has been known for large scale decorative works)
Didn't a court in the US declare that AI generated content cannot be copyrighted? I think that could be a problem for AI generated code. Fine for projects with an MIT/BSD license I suppose, but GPL relies on copyright.
However, if the code has been slightly changed by a human, it can be copyrighted again. I think.
Thaler v. Perlmutter said that an AI system cannot be listed as the sole author of a work - copyright requires a human author.
US Copyright Office guidance in 2023 said work created with the help of AI can be registered as long as there is "sufficient human creative input". I don't believe that has ever been qualified with respect to code, but my instinct is that the way most people use coding agents (especially for something like kernel development) would qualify.
Interesting. That seems to suggest that one would need to retain the prompts in order to pursue copyright claims if a defendant can cast enough doubt on human authorship.
Though I guess such a suit is unlikely if the defendant could just AI wash the work in the first place.
No, a court did not declare that. The case involved a person trying to register a work with only the AI system listed as author. The courts decided that you can't do that; you need to list a human being as author to register a work with the Copyright Office. This stems from existing precedent where someone tried to register a photograph with the monkey photographer listed as author.
I don't believe the idea that humans can or can't claim copyright over AI-authored works has been tested. The Copyright Office says your prompt doesn't count and you need some human-authored element in the final work. We'll have to see.
It's almost a certainty that you can't copyright code that was generated entirely by an AI.
Copyright requires some amount of human originality. You could copyright the prompt, and if you modify the generated code you can claim copyright on your modifications.
The closest applicable case would be the monkey selfie.
I’m curious to see if subscription vs free ends up mattering here. If it is a work for hire, generally it doesn’t matter how the work was produced, the end result is mine, because I contracted and instructed (prompted?) someone to do it for me. So will the copyright office decide it cares if I paid for the AI tool explicitly?
It's obvious that a computer program cannot have copyright because computer programs are not persons in any currently existing jurisdiction.
Whether a person can claim copyright of the output of a computer program is generally understood as depending on whether there was sufficient creative effort from said person, and it doesn't really matter whether the program is Photoshop or ChatGPT.
Just thinking out loud... why can't an algorithm be an artificial person in the legal sense that a corporation is? Why not legally incorporate the AI as a corporation so it can operate in the real world: have accounts, create and hold copyrights...
Corporations are required to have human directors with full operational authority over the corporation's actions. This allows a court to summon them and compel them to do or not do things in the physical world. There's no reason a corporation can't choose to have an AI operate their accounts, but this won't affect the copyright status, and if the directors try to claim they can't override the AI's control of the accounts they'll find themselves in jail for contempt the first time the corporation faces a lawsuit.
In certain court cases, a finding of plagiarism can be influenced by whether the person was exposed to the copyrighted work. AI models are exposed to a very large corpus of works...
Copyright infringement and plagiarism are not the same or even very closely related. They're different concepts and not interchangeable. Relative to copyright infringement, cases of plagiarism are rarely a matter for courts to decide or care about at all. Plagiarism is primarily an ethical (and not civil or criminal) matter. Rather than be dealt with by the legal system, it is the subject of codes of ethics within e.g. academia, journalism, etc. which have their own extra-judicial standards and methods of enforcement.
I suspect they were instead referring to patents; for example, when I worked at Google, they told the engineers not to read patents because then the engineer might invent something infringing - I think it's called willful infringement. No other employer I've worked for has ever raised this as an issue, while many lawyers at Google would warn against this.
The law is a compromise between what the people in power want and what they can get away with without people revolting. It has nothing to do with morality, fairness or justice. And we should change that. The promise of democracy was (among other things) that everyone would be equal, everybody would get to vote and laws would be decided by the moral system of the majority. And yet, today, most people will tell you they are unhappy about the rising cost of living and rising inequality...
The law should be based on a complete and consistent moral system. And then plagiarism (taking advantage of another person's intellectual work without credit or compensation) would absolutely be a legal matter.
LLMs are not persons, not even legal ones (which itself is a massive hack causing massive issues such as using corporate finances for political gain).
A human has moral value a text model does not. A human has limitations in both time and memory available, a model of text does not. I don't see why comparisons to humans have any relevance. Just because a human can do something does not mean machines run by corporations should be able to do it en-masse.
The rules of copyright allow humans to do certain things because:
- Learning enriches the human.
- Once a human consumes information, he can't willingly forget it.
- It is impossible to prove how much a human-created intellectual work is based on others.
With LLMs:
- Training (let's not anthropomorphize: lossily-compressing input data by detecting and extracting patterns) enriches only the corporation which owns it.
- It's perfectly possible to create a model based only on content with specific licenses or only public domain.
- It's possible to trace every single output byte to quantifiable influences from every single input byte. It's just not an interesting line of inquiry for the corporations benefiting from the legal gray area.
Dude come on, I clearly wasn't saying LLMs are people. My point was it's a tool and it's the responsibility of the person wielding it to check outputs.
If it's too hard to check outputs, don't use the tool.
Your arguments about copyright being different for LLMs: at the moment that's still being defined legally. So for now it's an ethical concern rather than a legal one.
For what it's worth I agree that LLMs being trained on copyright material is an abuse of current human oriented copyright laws. There's no way this will just continue to happen. Megacorps aren't going to lie down if there's a piece of the pie on the table, and then there's precedent for everyone else (class action perhaps)
The practical concern of Linux developers regarding responsibility is not being able to ban the author, it's that the author should take ongoing care for his contribution.
A DCO bearing a claim of original authorship (or assertion of other permitted use) isn't going to shield them entirely, but it can mitigate liability and damages.
In a court case the responsible party could very well be the Linux Foundation, because this is a foreseeable consequence of allowing AI contributions. There's no reasonable way for a human to make such a guarantee while using AI-generated code.
It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.
Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.
Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed, it's because you did something wrong that you could have prevented (and in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).
That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.
Because of that if you tell a user they can use AI generated code, and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.
So it's a bit as if the Linux organization told its contributors: you can bring in infringing code, but you must agree you are liable for any infringement?
But if a lawsuit was later brought, who would be sued? The individual author or the organization? In other words, can an organization reduce its liability if it tells its employees "You can break the law as long as you agree you are solely responsible for such illegal actions"?
It would seem to me that the employer would be liable if they "encourage" this way of working?
A human has to willingly violate the law for that to happen though. There is no way for a human to use AI generation that doesn't have a chance of producing copyrighted code. That's just expected.
If you don't think this is a problem, take a look at the terms of the enterprise agreements from OpenAI and Anthropic. Companies recognize this is an issue, and so they were forced to add an indemnification clause, explicitly saying they'll pay for any damages resulting from infringement lawsuits.
They don't produce enough similar code to infringe frequently. And if they did, independent creation is an affirmative defense to copyright infringement that likely doesn't apply to LLMs, since they have the demonstrated capability to produce code directly from their training set.
You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.
On independent creation: you are conflating the tool with the user. The defense applies to whether the developer had access to the copyrighted work, not whether their tools did. A developer using an LLM did not access the training set directly, they used a synthesis tool. By your logic, any developer who has read GPL code on GitHub should lose independent creation defense because they have "demonstrated capability to produce code directly from" their memory.
LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case). Training set contamination happens, but it is rare and considered a bug. Humans also occasionally reproduce code from memory: we do not deny them independent creation defense wholesale because of that capability!
In any case, the legal question is not settled, but the argument that LLM-assisted code categorically cannot qualify for independent creation defense creates a double standard that human-written code does not face.
And that's not an infringement. Actual copying is the infringement, not having the same code. The most likely way to have the same code is by copying, but it's not the only way.
Imagine you're a factory owner and you need a chemical delivered from across the country, but the chemical is dangerous, and if the tanker truck drives faster than 50 miles per hour it has a 0.001% chance per mile of exploding.
You hire an independent contractor and tell him that he can drive 60 miles per hour if he wants to but if it explodes he accepts responsibility.
He does and it explodes killing 10 people. If the family of those 10 people has evidence you created the conditions to cause the explosion in order to benefit your company, you're probably going to lose in civil court.
Linus benefits from the increased velocity of people using AI. He doesn't get to put all the liability on the people contributing.
Why would I put much effort into responding to a post like yours, which makes no sense and just shows that you don't understand what you're talking about?
Responsibility is an objective fact, not just some arbitrary social convention. What we can agree or disagree about is where it rests, but that's a matter of inference, and an inference can be more or less correct. We might assign certain people certain responsibilities before the fact, but that's to charge them with the care of some good, not to blame them for things before they were charged with their care.
Because contributions to Linux are meticulously attributed to, and remain property of, their authors, those authors bear ultimate responsibility. If Fred Foobar sends patches to the kernel that, as it turns out, contain copyrighted code, then provided upstream maintainers did reasonable due diligence the court will go after Fred Foobar for damages, and quite likely demand that the kernel organization no longer distribute copies of the kernel with Fred's code in it.
Anyone distributing infringing material can be liable, and it's unlikely that this technicality will actually shield anyone.
Anyone who thinks they have a strong infringement case isn't going to stop at the guy who authored the code; they're going to go after anyone with deep pockets where there's a good chance of winning.
This is a nice point that I haven't seen before. It's interesting to regress AI to the simplest form and see how we treat it as a test for the more complex cases.
> Surely the person doing so would be responsible for doing so, but are they doing anything wrong?
You're perfectly at liberty to relicense public domain code if you wish.
The only thing you can't do is enforce the new license against people who obtain the code independently - either from the same source you did, or from a different source that doesn't carry your license.
This is correct, and it's not limited to code. I can take the story of Cinderella, create something new out of it, copyright my new work, but Cinderella remains public domain for someone else to do something with.
If I use public domain code in a project under a license, the whole work remains under the license, but not the public domain code.
If someone else uses your exact same prompt to generate the exact same code, can you claim copyright infringement against them? If the output is possible to copyright, then you could claim their prompt is infringement (just like if it reproduced Harry Potter). If it isn't copyrightable, then the kernel would not have legal standing to enforce the GPL on those lines of code against any future AI reproduction of them.
The developers might need to show that the code is licensed under GPL and only GPL; otherwise there is the possibility the same original contributor (e.g. the AI) did permit the copy. The GPL is an imposed restriction on what the kernel can legally do with any code contributions. That seems legally complicated for some projects - probably not the kernel, with its large amount of pre-AI code, but maybe it spells trouble for smaller, newer projects if they want to sue over infringement. IANAL.
> If someone else uses your exact same prompt to generate the exact same code, can you claim copyright infringement against them?
No, because they've independently obtained it from the same source that you did, so their copy is "upstream" of your imposing of a new license.
Realistically, adding a license to public domain work is only really meaningful when you've used it as a starting point for something else, and want to apply your license to the derivative work.
The core thing about licenses, in general, is that they only grant new usage. If you can already use the code because it's public domain, they don't further restrict it. The license, in that case, is irrelevant.
Remember that licenses are powered by copyright - granting a license to non-copyrighted code doesn't do anything, because there's no enforcement mechanism.
This is also why copyright reform for software engineering is so important, because code entering the public domain cuts the gordian knot of licensing issues.
Linux code doesn't have to strictly be GPL-only, it just has to be GPL-compatible.
If your license allows others to take the code and redistribute it with extra conditions, your code can be imported into the kernel. AFAIK there are parts of the kernel that are BSD-licensed.
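For what it's worth, kernel source files declare their license with an SPDX identifier on the first line, and dual-licensed files express exactly this; an illustrative header (not quoting any specific file) might read:

    /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */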
SQLite's source code is public domain. Surely if you dropped the SQLite source code into Linux, it wouldn't suddenly become GPL code? I'm not sure how it works.
The Linux kernel would become a GPLv2-licensed derivative work of SQLite, but that doesn’t matter, because public domain works, by definition, are not subject to copyright restrictions.
Claiming copyright on an unmodified public domain work is a lie, so in some circumstances could be an element of fraud, but still wouldn’t be a copyright violation.
This ruling is, IMO (IANAL), based on lawyers and judges not understanding how LLMs work internally, falling for the marketing campaign calling them "AI", and not understanding the full implications.
LLM-creation ("training") involves detecting/compressing patterns of the input. Inference generates statistically probable based on similarities of patterns to those found in the "training" input. Computers don't learn or have ideas, they always operate on representations, it's nothing more than any other mechanical transformation. It should not erase copyright any more than synonym substitution.
>LLM-creation ("training") involves detecting/compressing patterns of the input.
There's a pretty compelling argument that this is essentially what we do, and that what we think of as creativity is just copying, transforming, and combining ideas.
LLMs are interesting because that compression forces distilling the world down into its constituent parts and learning about the relationships between ideas. While it's absolutely possible (or even likely for certain prompts) that models can regurgitate text very similar to their inputs, that is not usually what seems to be happening.
They actually appear to be little remix engines that can fit the pieces together to solve the thing you're asking for, and we do have some evidence that the models are able to accomplish things that are not represented in their training sets.
If people find this cool and wanna play with it, they can, just make sure to only mix compatible licenses in the training data and license the output appropriately. Well, the attribution issue is still there, so maybe they can restrict themselves to public domain stuff. If LLMs are so capable, it shouldn't limit the quality of their output too much.
Now for the real issue: what do you think the world will look like in 5 or 10 years if LLMs surpass human abilities in all areas revolving around text input and output?
Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded? Or will the rich reap most of the benefit while also simultaneously turning us into beggars?
Even if you assume 100% of the people doing intellectual work now will convert to manual work (i.e. there's enough work for everyone) and robots don't advance at all, that'll drive the value of manual labor down a lot. Do you have it gamed out in your head and believe somehow life will be better for you, let alone for most people? Or have you not thought about it at all yet?
> Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded?
I think they should be rewarded more than they are currently. But isn't the GNU General Public License basically saying you can use such source code without giving any rewards whatsoever?
But I see your point: the reward for open source developers is the public recognition for their works. LLMs can take that recognition away.
UBI only means you won't starve or die of exposure. It doesn't mean that people who are already rich today won't become so obscenely rich tomorrow they are above the law or can change the law (and decide who gets medical treatment or even take your UBI away).
This is a good point but I'd take it in the opposite direction from the implication, we should document which tools were used in general, it'd be a neat indicator of what people use.
> AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).
They mention an Assisted-by tag, but that also contains stuff like "clang-tidy". Surely you're not interpreting that as people "attributing" the work to the linter?
> Signed-Off ...
> The human submitter is responsible for:
> Reviewing all AI-generated code
> Ensuring compliance with licensing requirements
> Adding their own Signed-off-by tag to certify the DCO
> Taking full responsibility for the contribution
> Attribution: ... Contributions should include an Assisted-by tag in the following format:
Responsibility assigned to where it should lie. Expected no less from Torvalds, the progenitor of Linux and Git. No demagoguery, no b*.
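For illustration, a trailer block under this policy might look something like the following (the tool name and exact Assisted-by format are hypothetical, not quoted from the policy):

    Assisted-by: <AI tool name and version>
    Signed-off-by: Jane Developer <jane@example.org>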
I am sure that this was reviewed by attorneys before being published as policy, because of the copyright implications.
Hopefully this will set the trend and provide definitive guidance for a number of devs who were seeing not only the utility of AI assistance but also the acrimony from some quarters, which has caused some fence-sitting.
Signed-off-by is already a custom/formality that is surely cargo-culted by many first-time/infrequent contributors. It has an air of "the plans were on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.'" There's no way to assert that every contributor has read a random document declaring what that line means in kernel parlance.
I recently made a kernel contribution. Another contributor took issue with my patch and used it as the impetus for a larger refactor. The refactor was primarily done by a third contributor, but the original objector was strangely insistent on getting the "author" credit. They added our names at the bottom in "Co-developed-by" and "Signed-off-by" tags. The final submission included bits I hadn't seen before. I would have polished it more if I had.
I'm not raising a stink about it because I want the feature to land - it's the whole reason I submitted the first patch. And since it's a refactor of a patch I initially submitted (and "Signed-off-by,") you can make the argument that I signed off on the parts of my code that were incorporated.
But so far as I can tell, there's nothing keeping you from adding "Co-developed-by" and "Signed-off-by Jim-Bob Someguy" to the bottom of your submission. Maybe a lawyer would eventually be mad at you if Jim-Bob said he didn't sign off.
There's no magic pixie dust that gives those incantations legal standing, and nothing that keeps LLMs from adding them unless the LLMs internalize the new AI guidance.
The way you describe it, the developers all did the right thing. You contributed something to the patch, and even if it wasn't in your preferred final form (and it's basically never going to be for a kernel contribution of any significance), you were correctly credited.
If you didn't want to be credited you should have said.
Signed-off-by probably has some legal weight. When you add that to code you are making a clear statement about the origins of the code and that you have legal authority to contribute it - for example, that you asked your company for permission if needed. As far as I know none of this has been tested in court, but it seems reasonable to assume it might be one day.
> You contributed something to the patch, and even if it wasn't in your preferred final form (and it's basically never going to be for a kernel contribution of any significance), you were correctly credited.
I don't see how the "signed-off-by" attestation constitutes correct credit here. It's claiming that GP saw the final result and approved of it, which is apparently false.
It's a sane policy - a human is responsible for what they contribute, regardless of what tools they use in the development process.
However, the gotcha here seems to be that the developer has to say that the code is compatible with the GPL, which seems an impossible ask, since the AI models have presumably been trained on all the code they can find on the internet regardless of licensing, and we know they are capable of "regenerating" (regurgitating) stuff they were trained on with high fidelity.
How is one supposed to ensure license compliance while using LLMs which do not (and cannot) attribute sources having contributed to a specific response?
> How is one supposed to ensure license compliance while using LLMs which do not (and cannot) attribute sources having contributed to a specific response?
Additionally, there seems to be a general problem with LLM output and copyright[1], at least in Germany: LLM output cannot be copyrighted, and the whole legal field seems under-explored.
> This immediately raises the question of who is the author of this work and who owns the rights to it. Various solutions are possible here. It could be the user of the AI alone, or it could be a joint work between the user and the AI programmer. This question will certainly keep copyright experts in the various legal systems busy for some time to come.
It seems that in the long run the kernel license might become unenforceable if LLM output is used?!
In most cases I've seen it's because they get overwhelmed by sloppy contributions from developers who do not bother to review their AI's output. Code reviews are a lot of work.
Also “responsibility” and “accountability” mean little for anon contributors from the internet. You can ban them but a thousand more will still be spamming you with slop.
I think AI bans are more common in projects where the maintainers are nice people that thoughtfully want to consider each PR and provide a reasoned response if rejected.
That’s only feasible when the people who open PRs are acting in good faith, and control both the quality and volume of PRs to something that the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.
Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it’s not a high quality and desired contribution.
Also, it’s not just about PR quality, but also volume. It’s possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would benefit themselves to ask “does this PR align with the priorities and time availability of the maintainer?”
For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?
Sure, maintainers can simply close the PR without a reason if they don’t have time.
But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.
You are treating humans as reasonable actors. They very often are not. On easy to access platforms like github you can have humans just working as intermediaries between LLM and the github. Not actually checking or understanding what they put in a pull request. Banning these people outright with clear rules is much faster and easier than trying to argue with them.
Linux is somewhat harder to contribute to and they already have sufficient barriers in place so they can rely on more reasonable human actors.
Because you don't want to deal with people who can't write their own code. If they can, the rule will do nothing to stop them from contributing. It'll only matter if they simply couldn't make their contribution without LLMs.
An LLM finding problems in code is not the same at all as someone using it to contribute code they couldn't write or haven't written themselves to a project. A report stating "There is a bug/security issue here" is not itself something I have to maintain, it's something I can react to and write code to fix, then I have to maintain that code.
Well, until you start getting dozens of generated reports that you take your time to review just to find out that they're all plausible-looking bullshit about non-issues.
We already had that happening with other kinds of automated tooling, but at least it used to be easier to detect by quick skimming.
Because they aren’t accountable - after it is merged only I am. And why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time? Anytime I want to work through a pile of slop I can ask for one, but I don’t work that way. I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
> I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
As someone who has been using AI extensively lately, this is my preferred way of doing serious projects with them:
Let them create the plan, help them refine it, let them rip; then scrutinize their diffs, fight back on the parts I don't like or don't trust; rinse and repeat until commit.
Yet I assume this would still be unacceptable to most anti-AI projects, because 90%+ of the committed code was "written by the AI."
> why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time?
Presumably for the same reason you go back and forth with humans through PR comments even when you could just code it yourself in real time. That reason being, the individual on the other end of the PR should be saving you time. It's still hard work contributing quality MRs, even with AI.
I don’t have a problem working with contributors who use AI like you described. But this thread is about working with people who could not do the work on their own. So they cannot do what you described, and they cannot save me any time, they can only waste it.
If your doctor told you he used an ouija board to find your diagnosis, would you care about the origin of the diagnosis or just trust that he'll be accountable for it?
This does nothing to shield Linux from responsibility for infringing code.
This is essentially like a retail store saying the supplier is responsible for eliminating all traces of THC from their hemp when they know that isn’t a reasonable request to make.
It’s a foreseeable consequence. You don’t get to grant yourself immunity from liability like this.
Shield from what exactly? The Linux kernel is not a legal entity. It's a collection of contributions from various contributors. There is the Linux Foundation but they do not own Linux.
If Linux were to contain 3rd party copyrighted code, the legal entity at risk of being sued would be... Linux users, which, given how widely deployed Linux is, is basically everyone on Earth, and all large companies.
Linux development is funded by large companies with big legal departments. It's safe to say that nobody is going to be picking this legal fight any time soon.
An open-source project receiving open-source contributions from (often anonymous) volunteers is not even close to analogous to a storefront selling products with a consumer guarantee they are backing on the basis of their supply chain.
Yep, and honestly it's going to come up with things other than lawsuits.
I've worked at a company that was asked as part of a merger to scan for code copied from open source. That ended up being a major issue for the merger. People had copied various C headers around in odd places, and indeed stolen an odd bit of telnet code. We had to go clean it up.
I feel like a lot of people will have an ideological opposition to AI, but that will just lead to people sometimes submitting AI generated code with no attribution and lying about it.
At the same time, I feel bad for all the people that have to deal with low quality AI slop submissions, in any project out there.
The rules for projects that allow AI submissions might as well state: "You need to spend at least ~10 iterations of model X review agents and 10 USD of tokens on reviewing AI changes before they are allowed to be considered for inclusion."
(I realize that sounds insane, but in my experience iterated review, even by the same Opus model, can help catch bugs in the code; next-token prediction in and of itself is quite error prone.)
How can you guarantee that will happen when AI has been trained on a world full of code under multiple licenses, and even closed source material, without permission of the copyright owners... I confirmed that with several AIs just now.
You take responsibility. That means if the AI messes up, you get punished. No pushing blame onto the stupid computer. If you're not comfortable with that, don't use the AI.
If you think it's an unacceptable risk to use a tool you can't trust when your own head is on the line, you're right, and you shouldn't use it. You don't have to guarantee anything. You just have to accept punishment.
That’s just it, though: it’s not just your head. The liability could very likely also fall on the Linux Foundation.
You can’t say “you can do this thing that we know will cause problems that you have no way to mitigate, but if it does we’re not liable”. The infringement was a foreseeable consequence of the policy.
This policy effectively punts on the question of what tools were used to create the contribution, and states that regardless of how the code was made, only humans may be considered authors.
From the foundation's point of view, humans are just as capable of submitting infringing code as AI is. If your argument is sound, then how can Linux accept contributors at all?
EDIT: To answer my own question:
Instead of a signed legal contract, a DCO is an affirmation by a specific person that they are the one who holds legal liability for the act of submitting the code. That makes it easier to shift liability to the sender of the code in any litigation, which in turn serves as a deterrent against sending code that could cause legal issues.
This is how the Foundation protects itself, and the policy is that a contribution must have a human as the person who will accept the liability if the foundation comes under fire. The effectiveness of this policy (or not) doesn't depend on how the code was created.
Anyone distributing copyrighted material can be liable; that DCO isn’t going to stop anyone.
If that worked, any corporation that wanted to use code it legally couldn’t could just use a fork from someone who assumed responsibility, and in the worst case it would have to stop using the code if someone found out.
OpenAI and Anthropic added an indemnity clause to their enterprise contracts specifically to cover this scenario because companies wouldn’t adopt otherwise.
Yeah, but that's not a useful thing to do, because not everybody thinks about that or considers it a problem. If somebody's careless and contributes copyrighted code, that's a problem for Linux too, not only for the author.
For comparison, you wouldn't say, "you're free to use a pair of dice to decide what material to build the bridge out of, as long as you take responsibility if it falls down", because then of course somebody would be careless enough to build a bridge that falls down.
Preventing the problem from the beginning is better than ensuring you have somebody to blame for the problem when it happens.
> Preventing the problem from the beginning is better than ensuring you have somebody to blame for the problem when it happens.
that's assuming that the problems and incentives are the same for everyone. Someone whose uncle happens to own a bridge repair company would absolutely be incentivized to say
> "you're free to use a pair of dice to decide what material to build the bridge out of, as long as you take responsibility if it falls down"
It was already necessary to solve the problem of humans contributing infringing code. It was solved by having contributors assume liability with a DCO. The policy being discussed today asserts that, because AI may not be held legally liable for its contributions, AI may not sign a DCO. A human signature is required. This puts the situation back to what it was with human contributors. What you are proposing goes beyond maintaining the status quo.
It’s not solved. It hasn’t been tested in court to my knowledge and in my opinion is unlikely to hold up to serious challenge. You can be held liable for just distributing copyrighted code even if the whole “the Linux foundation doesn’t own anything” holds up.
Their position is probably that LLM technology itself does not require training on code with incompatible licenses, and they probably also tend to avoid engaging in the philosophical debate over whether LLM-generated output is a derivative copy or an original creation (like how humans produce similar code without copying after being exposed to code). I think that even if they view it as derivative, they're being pragmatic - they don't want to block LLM use across the board, since in principle you can train on properly licensed, GPL-compatible data.
If they merge it in despite it having the model version in the commit, then they're arguably taking a position on it too - that it's fine to use code from an AI that was trained like that.
Humans will not regurgitate longer segments of code verbatim. Even if we wanted to, we couldn’t do it because our memory doesn’t work that way. LLM on the other hand can totally do that, and there’s nothing you can do to prevent it.
Wait for court cases I suppose - not really Linus Torvalds' job to guess how they'll rule on the copyright of mere training. Presumably having your AI actually consult codebases with incompatible licenses at runtime is more risky.
Anything generated by an AI is public domain.
You can include public domain code in your GPL code.
I would urge some stronger requirement with the help of a lawyer. You only need a comment like "completely coded by AI, but 100% reviewed by me" to make that code's license worthless.
The only AI-generated parts that are copyrightable are the ones modified by a human.
I am afraid that this "waters down" the actual licensed code.
...We should start opening issues on "100% vibecoded" projects for relicensing to public domain to raise some awareness to the issue.
The policy makes sense as a liability shield, but it doesn't address the actual problem, which is review bandwidth. A human signs off on AI-generated code they don't fully understand, the patch looks fine, it gets merged. Six months later someone finds a subtle bug in an edge case no reviewer would've caught because the code was "too clean."
> they don't fully understand, the patch looks fine
I don't get this part. Why is the reviewer signing off on it? AI code should be fully documented (probably more so than a human could manage) and require new tests. Code review gates should not change.
How can we automate the disclosure of what AI agent was used in a PR and the extent of code? Would be nice to also have an audit of prompts used, as that could also be considered “code”.
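A minimal sketch of what that automation could look like - a CI step that inspects commit trailers. The trailer names follow the kernel policy discussed here; the enforcement rules and everything else are my assumptions:

    #!/usr/bin/env python3
    # Hypothetical CI check: verify a commit carries a human Signed-off-by
    # and report whether an Assisted-by trailer discloses AI tooling.
    import subprocess
    import sys

    def trailers(commit):
        # git's %(trailers) format placeholder prints the commit's trailer block
        out = subprocess.run(
            ["git", "log", "-1", "--format=%(trailers)", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.strip() for line in out.splitlines() if line.strip()]

    def check(commit):
        t = trailers(commit)
        if not any(l.startswith("Signed-off-by:") for l in t):
            return f"{commit}: FAIL - missing Signed-off-by (only a human may certify the DCO)"
        assisted = [l for l in t if l.startswith("Assisted-by:")]
        if assisted:
            return f"{commit}: ok - discloses {', '.join(assisted)}"
        return f"{commit}: ok - no AI assistance disclosed"

    if __name__ == "__main__":
        print(check(sys.argv[1] if len(sys.argv) > 1 else "HEAD"))

Auditing prompts would be harder: git records nothing about them today, so they'd have to be carried in the commit message or a notes ref by convention.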
Am I being too pedantic if I point out that it is quite possible for code to be compatible with GPL-2.0 and other licenses at the same time? Or is this a term that is well understood?
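(For what it's worth, the kernel tree already expresses exactly this with SPDX tags; a dual-licensed file header looks something like the following, if I have the syntax right.)

    // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause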
He's been vibecoding some stuff himself personally, on one of his scuba projects. You could take people as actually believing in the things they do and say.
> All contributions must comply with the kernel's licensing requirements:
I just don't think that's realistically achievable. Unless the models themselves can introspect on the code and detect any potential license violations.
If you get hit with a copyright violation in this scheme I'd be afraid that they're going to hammer you for negligence of this obvious issue.
US legal consensus has set the precedent that "AI" output can't be copyrighted. Thus, technically no one can really own or re-license prompt output.
Re-licensing public domain uncopyrightable work as GPL/LGPL is almost certainly a copyright violation, and no different than people violating GPL/LGPL in commercial works.
Linus is 100% wrong on this choice, and has introduced a serious liability into the foundation upstream code. =3
> Being in the public domain is not a license; rather, it means the material is not copyrighted and no license is needed. Practically speaking, though, if a work is in the public domain, it might as well have an all-permissive non-copyleft free software license. Public domain material is compatible with the GNU GPL.
Yes, if it is clearly labeled as such, then it may be included in GPL/LGPL-licensed works. However, this relationship cannot make such works GPL without violating copyright, and isomorphic plagiarized code from an LLM doesn't magically become yours to re-license.
For example, one may use NASA public domain photos as you wish, but cannot register copyright under another license you find convenient to sue people. Also, if that public domain photo includes the Nutella trademark, it doesn't protect you from getting sued for violating Ferrero trademarks/patents/copyrights in your own use-case.
Very different than slapping a new label on something you never owned. =3
I hire specialized IP lawyers to advise me on how to mitigate risk: one can't assign a license to something no one can legally claim rights to. You should do the same unless you live in India or China.
Don't become the cautionary tale kid, as crawlers like sriplaw.com will be DMCA striking your public repos eventually. =3
We've seen in the past, for instance in the world of compliance, that if companies/governments want something done or make a mistake, they just have a designated person act as scapegoat.
So what's preventing lawyers/companies having a batch of people they use as scapegoats, should something go wrong?
I like this. It's just saying you have responsibility for the tools you wield. It's concise.
Side note, I'm not sure why I feel weird about having the string "Assisted-by: AGENT_NAME:MODEL_VERSION" [TOOL1] [TOOL2] in the kernel docs source :D. Mostly joking. But if the Linux kernel has it now, I guess it's the inflection point for...something.
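If you're curious what that looks like in a real commit footer, presumably something like this (agent and model names invented for illustration):

    Assisted-by: ExampleAgent:model-4.2
    Signed-off-by: Jane Developer <jane@example.org>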
Honestly kind of surprised they went this route -- just 'you own it, you're responsible for it' is such a clean answer to what feels like an endlessly complicated debate.
LLMs are lossily-compressed models of code and other text (often mass-scraped despite explicit non-consent) which has licenses almost always requiring attribution and very often other conditions. Just a few weeks ago a SOTA model was shown to reproduce non-trivial amounts of licensed code[0].
The idea of intelligence being emergent from compression is nothing new[1]. The trick here is giving up on completeness and accuracy in favor of a more probabilistic output which
1) reproduces patterns and interpolates between patterns of training data while not always being verbatim copies
2) serves as a heuristic when searching the solution-space which is further guided by deterministic tools such as compilers, linters, etc. - the models themselves quite often generate complete nonsense, including making up non-existent syntax in well-known mainstream languages such as C#.
I strongly object to anthropomorphising text transformers (e.g. "Assisted-by"). It encourages magical thinking even among people who understand how the models operate, let alone the general public.
Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.
> Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.
I think you'll find that this is not settled in the courts, depending on how the data was obtained. If the data was obtained legally, say a purchased book, courts have been finding that using it for training is fair use (Bartz v. Anthropic, Kadrey v. Meta).
Morally the case gets interesting.
Historically, there was no such thing as copyright. The English 1710 Statute of Anne establishing copyright as a public law was titled 'for the Encouragement of Learning' and the US Constitution said 'Congress may secure exclusive rights to promote the progress of science and useful arts'; so essentially public benefits driven by the grant of private benefits.
The Moral Bottomline: if you didn't have to eat, would you care about who copies your work as long as you get credited?
The more people that copy your work with attribution, the more famous you'll be. Now that's the currency of the future. [1]
> The Moral Bottomline: if you didn't have to eat, would you care about who copies your work as long as you get credited?
Yes.
I have 2 issues with "post-scarcity":
- It often implicitly assumes humanity is one homogeneous group where this state applies to everyone. In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still be dying of hunger, exposure and preventable diseases. All else being equal, I'd prefer being in the first group, and my chance for that is being economically relevant.
- It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have. The second group is the largest cause of exploitation and suffering in the world. And the second group will continue existing in a post-scarcity world and will work hard to make scarcity a real thing again.
---
Back to your question:
I made the mistake of publishing most of my public code under GPL or AGPL. I regret it because even though my work has brought many people some joy and a bit of my work was perhaps even useful, it has also been used by people who actively enjoy hurting others, who have caused measurable harm and who will continue causing harm as long as they're able to - in a small part enabled by my code.
Permissive licenses are socially agnostic - you can use the work and build on top of it no matter who you are and for what purpose.
(A)GPL is weakly pro-social - you can use the work no matter what but you can only build on top of it if you give back - this produces some small but non-zero social pressure (enforced by violence through governments) which benefits those who prefer cooperation instead of competition.
What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good, not having committed any serious offenses, not taking actions to restrict other people's rights without a valid reason, etc.
There have been attempts in this direction[0] but not very successful.
In a world without LLMs, I'd be writing code using such a license but more clearly specified, even if I had to write my own. Yes, a lawyer would do a better job; that does not mean anything written by a non-lawyer is completely unenforceable.
With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself. It just makes inequality worse. And with inequality, exploitation and oppression tend to soon follow.
> In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still by dying of hunger, exposure and preventable diseases.
By definition, that's not a post-scarcity world; and that's already today's world.
> It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have.
Do you think that's genetic, or environmental? Either way, maybe it will have been trained out of the kids.
> it has also been used by people who actively enjoy hurting others, who have caused measurable harm
Taxes work the same way too. "The Good Place" explores these second-order and higher-order effects in a surprisingly nuanced fashion.
Control over the actions of others, you have not. Keep you from your work, let them not.
> What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good
These are all things necessary in a society with scarcity. Will they be needed in a post-scarcity society that has presumably solved all disorder that has its roots in scarcity?
> With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself.
Yes, the futility of our actions can be infuriating, disheartening, and debilitating. It brings to mind the story of the chap who was tossing washed-ashore starfish back one by one. There were thousands. When asked why he'd do this futile task - he couldn't throw them all back - he answered as he threw the next ones: it matters to this one, it matters to this one, ...
Hopefully, your code helped someone. That's a good enough reason to do it.
> I strongly object to anthropomorphising text transformers (e.g. "Assisted-by").
I don't think this is anthropomorphising, especially considering they also include non-LLM tools in that "Assisted-by" section.
We're well past the Turing test now; whether these things are actually sentient or not is of no pragmatic importance if we can't distinguish their output from a sentient creature's, especially when it comes to programming.
Nope, there is no “The” Turing Test. Go read his original paper before parroting pop sci nonsense.
The Turing test paper proposes an adversarial game to deduce if the interviewee is human. It’s extremely well thought out. Seriously, read it. Turing mentions that he’d wager something like 70% of unprepared humans wouldn’t be able to correctly discern in the near future. He never claims there to be a definitive test that establishes sentience.
Turing may have won that wager (impressive), but there are clear tells, similar to the “how many r's are in strawberry?” question, that an informed interrogator could reliably exploit.
Would you say "assisted by vim" or "assisted by gcc"?
It should be either something like "(partially/completely) generated by" or if you want to include deterministic tools, then "Tools-used:".
The Turing test is an interesting thought experiment but we've seen it's easy for LLMs to sound human-like or make authoritative and convincing statements despite being completely wrong or full of nonsense. The Turing test is not a measure of intelligence, at least not an artificial one. (Though I find it quite amusing to think that the point at which a person chooses to refer to LLMs as intelligence is somewhat indicative of his own intelligence level.)
> whether these things are actually sentient or not is of no pragmatic importance if we can't distinguish their output from a sentient creature, especially when it comes to programming
It absolutely makes a difference: you can't own a human but you can own an LLM (or a corporation, which IMO is just as wrong as owning a human).
Humans have needs which must be continually satisfied to remain alive. Humans also have a moral value (a positive one - at least for most of us) which dictates that being rendered unable to remain alive is wrong.
Now, what happens if LLMs have the same legal standing as humans and are thus able to participate in the economy in the same manner?
I can't point out where I draw the line clearly, but here's one difference I notice:
A recommendation can be both a thing and an action. A piece of text is a recommendation and it does not matter how it was created.
Assistance implies some parity in capabilities and cooperative work. Also it can pretty much only be an action, you cannot say "here is some assistance" and point to a thing.
On https://news.ycombinator.com/item?id=47356000, it looks like the user there was intentionally asking about the implementation of the Python chardet library before asking it to write code, right? Not surprising the AI would download the library to investigate it by default, or look for any installed copies of `chardet` on the local machine.
For [0], it was supposedly shown to do it when specifically prompted to do so.
Despite agentic tools being used by millions of developers now, I am not aware of a single real case where accidental reproduction of copyrightable code has been an issue.
Further, some model providers offer indemnity clauses.
This format really took off in the Python community in the 2000's for documentation. The Linux kernel has used it for documentation as well for a while now.
Linus is the original vibe coder. He barks orders at a cadre of human contributor agents and subsystem maintainer agents until the code looks the way he likes.
Jesting aside, OpenHub lists Linus Torvalds as having made 46,338 commits. 45,178 for Linux, 1,118 for Git. His most recent commit was 17 days ago. [1]
That is a far cry from a vibe-coder, no? :-)
Bit unfair to call his leadership vibe-coding, methinks.
Interesting that coccinelle, sparse, smatch & clang-tidy are included, at least as examples. Those aren't AI coding tools in the normal sense, just regular, deterministic static analysis / code generation tools. But fine, I guess.
We've been using Co-Developed-By: <email> for our AI annotations.
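For comparison, the kernel's existing convention pairs each Co-developed-by with that co-author's own sign-off, with the submitter's sign-off last (names here are hypothetical):

    Co-developed-by: Alex Helper <alex@example.org>
    Signed-off-by: Alex Helper <alex@example.org>
    Signed-off-by: Jane Submitter <jane@example.org>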
That's... refreshingly normal? Surely something most people acting in good faith can get behind.
… my dad is 86 and only after I signed him up to Claude could he write Arduino code without a phone call to me after 5 minutes of trying himself. So now, he’s spending 4+ hours at a time focused writing code and building circuits of things he only dreamt about creating for decades.
Unless you’re doing something for the personal love of the craft and sharpening your tools, use every advantage you can get in order to do the job.
But… as above, if you’re doing it for the love of it, sure - hand-crafted code does taste better, and you know all the ingredients are organic.
Plenty see Torvalds as a traitor for this policy and will never contribute again if any clearly labeled AI generated code is actually allowed to merge.
People are generally against change that forces them to change the way they used to do things. I'm sure most will have their reasons why they are against this particular change, but I don't think it will affect anything. The genie is out of the bottle, AI is here to stay. You either adapt or you will slowly wither away.
You missed the whole Arab Spring thing?
It needs to be modified by a human. No amount of prompting counts, and you can only copyright the modified parts.
Any license on "100% vibecoded" projects can be safely ignored.
I expect litigation in a few years where people argue about how much they can steal and relicense "since it was vibecoded anyway".
> In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.
[0] https://www.federalregister.gov/documents/2023/03/16/2023-05...
There's really 2 ways to argue this:
- Either AI exists and then it's something new and the laws protecting human creativity and work clearly could not have taken it into account and need to be updated.
- Or AI doesn't exist, LLMs are nothing more than lossily compressed models violating the licenses of the training data, their probabilistically decompressed output is violating the licenses as well and the LLM companies and anyone using them will be punished.
Ultimately LLMs (the first L stands for large and for a good reason) are only possible to create by taking unimaginable amounts of work performed by humans who have not consented to their work being used that way, most of whom require at least being credited in derivative works and many of whom have further conditions.
Now, consent in law is a fairly new concept and for now only applied to sexual matters but I think it should apply to every human interaction. Consent can only be established when it's informed and between parties with similar bargaining power (that's one reason relationships with large age gaps are looked down upon) and can be revoked at any time. None of the authors knew this kind of mass scraping and compression would be possible, it makes sense they should reevaluate whether they want their work used that way.
There are 3 levels to this argument:
1) The letter of the law - if you understand how LLMs work, it's hard to see them as anything more than mechanical transformers of existing work so the letter should be sufficient.
2) The intent of the law - it's clear it was meant to protect human authors from exploitation by those who are in positions where they can take existing work and benefit from it without compensating the authors.
3) The ethics and morality of the matter - here it's blatantly obvious that using somebody's work against their wishes and without compensating them is wrong.
In an ideal world, these 3 levels would be identical but they're not. That means we should strive to make laws (in both intent and letter) more fair and just by changing them.
Look, if you think I am wrong, you can surely put it into words. OTOH, if you don't think I am wrong but feel that way, then it explains why I see no coherent criticism of my statements.
The signal you’re sending is that you are not open to discussing the issue.
It’s weird how people on HN state legal opinion as fact… e.g. if someone in the Philippines vibecodes an app and a person in Ecuador vibecodes a 100% copy of the source, what now?
The playing field is level now, and corpo moats no longer exist. I happily take that trade.
They can wash the copyright by AI training, but the AIs don't get trained on closed source.
"corpo" also has a ton of patents, which still can't be AI-washed.
What will become unenforceable are Open Source Licenses exclusively, how does that make it a "level field"?
Not to mention companies will try to mandate hardware decryption keys so the binary is encrypted and your AI never even gets to analyze the code which actually runs.
It's not sci-fi, it's a natural extension of DRM.
1) The financial aspect: As you say, more and more advanced DRM requires more and more advanced tools. Even assuming advanced AI can guide any human to do the physical part, that still means you have to pay for the hardware. And the hardware has to be available (companies have been known to harass people into giving up perfectly moral and legal projects).
2) The legal aspect: Possession of burglary tools is illegal in some places. How about possession of hacking tools? Right now it's not a priority for company lobbying, what about when that's the only way to decompile? Even today, reverse engineering is a legal minefield. Did you know in some countries you can technically legally reverse engineer but under some conditions such as having disabilities necessitating it and only using the result for personal use?[0]
3) The TOS aspect: What makes you think AI will help you? If the company owning the AI says so, you're on your own.
---
You need to understand 2 things:
- Just because something is possible doesn't mean somebody is gonna do it. Effort, cost and risk play huge roles. And that assumes no active hostile interference.
- History is a constant struggle between groups with various goals and incentives. Some people just want to live a happy life, have fun and build things in their free time. Other people want to become billionaires, dream about private islands, desire to control other people's lives and so on. People are good at what they focus on. There's perhaps more of the first group but the second group is really good at using their money and connections to create more money and connections which they in turn use to progress towards their primary objectives, usually at the expense of other people. People died[1] over their right to unionize. This can happen again.
Somebody might believe historical people were dumb or uncivilized and it can't happen today because we've advanced so much. That's bullshit. People have had largely the same wetware for hundreds of thousands of years. The tools have evolved but their users have not.
[0]: https://pluralistic.net/2026/03/16/whittle-a-webserver/ - "... aren't tools exemptions, they're use exemptions ... You have that right. Your mechanic does not have that right."
[1]: https://en.wikipedia.org/wiki/Pinkerton_(detective_agency)
AI proponents completely ignore the disparity of resources available to an individual and a corporation. If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time. Or maybe at least 1000:1 if you're an optimist.
They have access to more money for advertising, they have an already established network of existing customers, they have legal and marketing experts on payroll. Or just look at Microsoft, they don't even need advertising, they just install their product by default and nobody will even hear about mine.
Not to mention as you said, the training advances only goes from open source to closed source, not the other way around.
AI proponents who talk about "democratization" are nuts, it would be laughable if it wasn't so sad.
As a person who works for a company with 25k people, I would disagree. You, a single person, will often get to the basic product that a lot of people will want much faster than a company with 1k, 5k or 25k people.
Bigger companies are constrained by internal processes, piles of existing stuff, an inability to hire at the scale they need, and the larger context required. Also regulation and all that. Bigger companies are also really slow to adapt, so they would rather let you build the product and then buy out your company along with your product and the people who built it. They are at a temporary disadvantage every time the landscape shifts.
Besides that, your whole argument hinges on large companies being inflexible, inefficient and poorly run. Isn't that exactly the kind of problem AI promises to solve? Complete AI surveillance of every employee, tasks and instructions tailored to each individual, and superhuman planning. Of course, at that point the only employees will be manual workers, because actual AI will be much better and cheaper than every human at everything except the things where it needs to interact with the physical world. Even contract negotiations with both employees and customers will be done with AI instead of humans; the human will only sign off on it to meet legal requirements, just like today you technically enter a contract with a representative of the company who is not even there when you talk to a negotiator.
You cannot keep a purely legally-enforced moat in the face of advancing technology.
IP law means nothing once tens of millions of people are openly violating it.
The software industry is about to learn this lesson too.
Uhm... yes? The cost of downloading pirated music is essentially zero. The only reason people use services like Spotify is that they're extremely cheap while being a bit more convenient. But jack up the price and the masses will sail the seas again.
That is not necessarily true, depending on the level of enforcement and the availability of opportunities to steal.
> Same argument can be made for streaming, and yet Netflix is neither cheap nor struggling for subscribers.
Netflix is still pretty cheap for the convenience it provides. Again, jack up the price and see the masses move to torrent movies/shows again.
Yet.
A whole bunch of people I watch on youtube (politics, analysts, a weatherman) are already seeing AI impersonation videos, sometimes misrepresenting their positions and identities. This will grow.
So, you can't create art because that's extruded at scale in such a way that it's just turning on the tap to fill a specified need, and you can't be a person because that can also be extruded at scale pretty soon, either to co-opt whatever you do that's distinct, or to contradict whatever you're trying to say, as you.
As far as being a person able to exist and function through exchanging anything you are or anything you do for recompense, to survive, I'm not sure that's in the cards. Which seems weird for a technology in the guise of aiding people.
As far as I know that has only been decided in US so far, which is far from the whole world.
How am I gonna prove I did?
There's a threshold where, if you modify it enough, it is no longer recognizable as a modification of the original, and you might get away with it, unless you confess what process you used to create it.
This is different to learning from the original and then building something equivalent from scratch using only your memory without constantly looking back and forth between your copy and the original.
This is how some companies do "clean room reimplementations" - one team looks at the original and writes a spec, another team which has never seen the original code implements an entirely standalone version.
And of course there are people who claim this can be automated now[0]. This one is satire (read the blog) but it is possible if the law is interpreted the way LLM companies work and there are reports the website works as advertised by people who were willing to spend money to test it.
[0]: https://malus.sh/
These sorts of things are almost never tested legally and it seems even less likely now.
The Linux Foundation itself is just one big, woke, leftist mess, with CV-stuffers from corporations in every significant position.
The rest of the world looks on in wonder at both sides of this.
The solution documented here seems very pragmatic. You as a contributor simply state that you are making the contribution and that you are not infringing on other people's work with that contribution under the GPLv2. And you document the fact that you used AI for transparency reasons.
There is a lot of legal murkiness around how training data is handled, and the output of the models. Or even the models themselves. Is something that in no way or shape resembles a copyrighted work (i.e. a model) actually distributing that work? The legal arguments here will probably take a long time to settle but it seems the fair use concept offers a way out here. You might create potentially infringing work with a model that may or may not be covered by fair use. But that would be your decision.
For small contributions to the Linux kernel it would be hard to argue that a passing resemblance of say a for loop in the contribution to some for loop in somebody else's code base would be anything else than coincidence or fair use.
AI output can be copyrighted when copyrightable elements are expressed in it - for example, if you put copyrighted content in a prompt and it is expressed in the output, or if the output is transformed substantially with human creativity in arrangement, form, composition, etc.
[1] https://newsroom.loc.gov/news/copyright-office-releases-part...
It's also not really clear if you can or cannot copyright AI output. The case that everyone cites didn't even reach the point where courts had to rule on that. The human in that case decided to file the copyright for an AI, and the courts ruled that according to the existing laws copyright must be filed by a person/human/whatever.
So we don't yet have caselaw where someone used AI generation and claimed the output as their own writing.
Does a digitally encoded version resemble a copyrighted work in some shape or form? </snark>
Where is this hangup on models being something entirely different than an encoding coming from? Given enough prodding they can reproduce training data verbatim or close to that. Okay, given enough prodding notepad can do that too, so uncertainty is understandable.
This is one of the big reasons companies are putting effort into so-called "safety": when the legal battles are eventually fought, they will have an argument that they did their best to ensure the amount of prodding required to extract any information potentially putting them under liability is too great to matter.
Well, that's different, because an encoded image or video clearly intends to reproduce the original perfectly, and the end result after decoding is (intentionally) very close to the form of the original. That makes it a clear-cut case of being a copy of the original.
The reason so many cases don't get very far is that judges and lawyers mostly don't think like engineers. Copyright law predates most modern technology, so everything needs to be rephrased in terms of people copying stuff for commercial gain. The original target of the law was people using printing presses to create copies of books written by others, which was hugely annoying to some publishers who thought they had exclusive deals with authors. But what about academics quoting each other? Or literary reviews? Or summaries? Or people reading from a book on the radio? This stuff gets complicated quickly. Most of those things were settled a long time ago. Fair use is a concept that gets wielded a lot for this: yes, it's a copy, but it's entirely reasonable for the one making the copy to be doing what they're doing, and therefore it's not considered an infringement.
The rest is just centuries of legal interpretation of that and how it applies to modern technology. Whether that's DJs sampling music or artists working in visual imagery into their art works. AI is mostly just more of the same here. Yes there are some legally interesting aspects with AI but not that many new ones. Judges are unlikely to rethink centuries of legal interpretations here and are more likely to try to reconcile AI in with existing decisions. Any changes to the law would have to be driven by politicians; judges tend to be conservative with their interpretations.
So if the AI outputs Starry Night or Starry Night in different color theme, that's likely infringement without permission from van Gogh, who would have recourse against someone, either the user or the AI provider.
But a starry-night style picture of an aquarium might not be infringing at all.
> For small contributions to the Linux kernel it would be hard to argue that a passing resemblance of, say, a for loop in the contribution to some for loop in somebody else's code base would be anything else than coincidence or fair use.
I would argue that if it was a verbatim reproduction of a copyrighted piece of software, that would likely be infringing. But if it was similar only in style, with different function names and structure, probably not infringing.
Folks will argue that some things might be too small to do any differently - for example, a tiny snippet like Python's print("hello"), or 1+1=2, or a for loop in your example. In that case it's too lacking in original expression to qualify for copyright protection anyway.
That is a non sequitur. Also, I'm not sure if copyright applies to humans, or persons (not that I have encountered particularly creative corporations, but Taranaki Maunga has been known for large scale decorative works)
However, if the code has been slightly changed by a human, it can be copyrighted again. I think.
US Copyright Office guidance in 2023 said work created with the help of AI can be registered as long as there is "sufficient human creative input". I don't believe that has ever been qualified with respect to code, but my instinct is that the way most people use coding agents (especially for something like kernel development) would qualify.
Though I guess such a suit is unlikely if the defendant could just AI wash the work in the first place.
I don't believe the idea that humans can or can't claim copyright over AI-authored works has been tested. The Copyright Office says your prompt doesn't count and you need some human-authored element in the final work. We'll have to see.
Copyright requires some amount of human originality. You could copyright the prompt, and if you modify the generated code you can claim copyright on your modifications.
The closest applicable case would be the monkey selfie.
https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
No, my understanding is that AI generated content can't be copyrighted by the AI. A human can still copyright it, however.
Whether a person can claim copyright of the output of a computer program is generally understood as depending on whether there was sufficient creative effort from said person, and it doesn't really matter whether the program is Photoshop or ChatGPT.
But you shouldn't be right. I mean, morally.
The law is a compromise between what the people in power want and what they can get away with without people revolting. It has nothing to do with morality, fairness or justice. And we should change that. The promise of democracy was (among other things) that everyone would be equal, everybody would get to vote and laws would be decided by the moral system of the majority. And yet, today, most people will tell you they are unhappy about the rising cost of living and rising inequality...
The law should be based on complete and consistent moral system. And then plagiarism (taking advantage of another person's intellectual work without credit or compensation) would absolutely be a legal matter.
LLMs are not persons, not even legal ones (which itself is a massive hack causing massive issues such as using corporate finances for political gain).
A human has moral value a text model does not. A human has limitations in both time and memory available, a model of text does not. I don't see why comparisons to humans have any relevance. Just because a human can do something does not mean machines run by corporations should be able to do it en-masse.
The rules of copyright allow humans to do certain things because:
- Learning enriches the human.
- Once a human consumes information, he can't willingly forget it.
- It is impossible to prove how much a human-created intellectual work is based on others.
With LLMs:
- Training (let's not anthropomorphize: lossily-compressing input data by detecting and extracting patterns) enriches only the corporation which owns it.
- It's perfectly possible to create a model based only on content with specific licenses or only public domain.
- It's possible to trace every single output byte to quantifiable influences from every single input byte. It's just not an interesting line of inquiry for the corporations benefiting from the legal gray area.
If it's too hard to check outputs, don't use the tool.
Your arguments about copyright being different for LLMs: at the moment that's still being defined legally. So for now it's an ethical concern rather than a legal one.
For what it's worth I agree that LLMs being trained on copyright material is an abuse of current human oriented copyright laws. There's no way this will just continue to happen. Megacorps aren't going to lie down if there's a piece of the pie on the table, and then there's precedent for everyone else (class action perhaps)
That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.
Because of that if you tell a user they can use AI generated code, and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.
But if a lawsuit was later brought, who would be sued? The individual author or the organization? In other words, can an organization reduce its liability if it tells its employees "You can break the law as long as you agree you are solely responsible for such illegal actions"?
It would seem to me that the employer would be liable if they "encourage" this way of working?
I think you’re looking for problems that don’t really exist here, you seem committed to an anti AI stance where none is justified.
If you don't think this is a problem, take a look at the terms of the enterprise agreements from OpenAI and Anthropic. Companies recognize this is an issue, and so they were forced to add an indemnification clause, explicitly saying they'll pay for any damages resulting from infringement lawsuits.
Humans routinely produce code similar to or identical to existing copyrighted code without direct copying.
On independent creation: you are conflating the tool with the user. The defense applies to whether the developer had access to the copyrighted work, not whether their tools did. A developer using an LLM did not access the training set directly, they used a synthesis tool. By your logic, any developer who has read GPL code on GitHub should lose independent creation defense because they have "demonstrated capability to produce code directly from" their memory.
LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case). Training set contamination happens, but it is rare and considered a bug. Humans also occasionally reproduce code from memory: we do not deny them independent creation defense wholesale because of that capability!
In any case, the legal question is not settled, but the argument that LLM-assisted code categorically cannot qualify for independent creation defense creates a double standard that human-written code does not face.
They wouldn't be some patsy that is around just to take blame, but the actual responsible party for the issue.
You hire an independent contractor and tell him that he can drive 60 miles per hour if he wants to, but that if it explodes, he accepts responsibility.
He does and it explodes killing 10 people. If the family of those 10 people has evidence you created the conditions to cause the explosion in order to benefit your company, you're probably going to lose in civil court.
Linus benefits from the increased velocity of people using AI. He doesn't get to put all the liability on the people contributing.
Anyone who thinks they have a strong infringement case isn’t going to stop at the guy who authored the code, they’re going to go after anyone with deep pockets with a good chance of winning.
There is still the "mens rea" principle. If you distribute infringing material unknowingly, it would very likely not result in any penalties.
As long as everything is GPLv2-compatible, it's okay.
Surely the person doing so would be responsible for doing so, but are they doing anything wrong?
You're perfectly at liberty to relicense public domain code if you wish.
The only thing you can't do is enforce the new license against people who obtain the code independently - either from the same source you did, or from a different source that doesn't carry your license.
If I use public domain code in a project under a license, the whole work remains under the license, but not the public domain code.
I'm not sure what the hullabaloo is about.
No, because they've independently obtained it from the same source that you did, so their copy is "upstream" of your imposing of a new license.
Realistically, adding a license to public domain work is only really meaningful when you've used it as a starting point for something else, and want to apply your license to the derivative work.
Remember that licenses are powered by copyright - granting a license to non-copyrighted code doesn't do anything, because there's no enforcement mechanism.
This is also why copyright reform for software engineering is so important, because code entering the public domain cuts the gordian knot of licensing issues.
If your license allows others to take the code and redistribute it with extra conditions, your code can be imported into the kernel. AFAIK there are parts of the kernel that are BSD-licensed.
Claiming copyright on an unmodified public domain work is a lie, so in some circumstances could be an element of fraud, but still wouldn’t be a copyright violation.
LLM-creation ("training") involves detecting/compressing patterns of the input. Inference generates statistically probable based on similarities of patterns to those found in the "training" input. Computers don't learn or have ideas, they always operate on representations, it's nothing more than any other mechanical transformation. It should not erase copyright any more than synonym substitution.
There's a pretty compelling argument that this is essentially what we do, and that what we think of as creativity is just copying, transforming, and combining ideas.
LLMs are interesting because that compression forces distilling the world down into its constituent parts and learning about the relationships between ideas. While it's absolutely possible (or even likely for certain prompts) that models can regurgitate text very similar to their inputs, that is not usually what seems to be happening.
They actually appear to be little remix engines that can fit the pieces together to solve the thing you're asking for, and we do have some evidence that the models are able to accomplish things that are not represented in their training sets.
Kirby Ferguson's video on this is pretty great: https://www.youtube.com/watch?v=X9RYuvPCQUA
If people find this cool and wanna play with it, they can, just make sure to only mix compatible licenses in the training data and license the output appropriately. Well, the attribution issue is still there, so maybe they can restrict themselves to public domain stuff. If LLMs are so capable, it shouldn't limit the quality of their output too much.
Now for the real issue: what do you think the world will look like in 5 or 10 years if LLMs surpass human abilities in all areas revolving around text input and output?
Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded? Or will the rich reap most of the benefit while also simultaneously turning us into beggars?
Even if you assume 100% of the people doing intellectual work now will convert to manual work (i.e. there's enough work for everyone) and robots don't advance at all, that will drive the value of manual labor down a lot. Have you gamed it out in your head and concluded that somehow life will be better for you, let alone for most people? Or have you not thought about it at all yet?
I think they should be rewarded more than they currently are. But isn't the GNU General Public License basically saying you can use such source code without giving any reward whatsoever?
But I see your point: the reward for open source developers is the public recognition for their work, and LLMs can take that recognition away.
That is, at the moment:
- Nobody knows for sure what agents might add, or their long-term effects on codebases.
- It's at best unclear that AI content in a codebase can be reliably determined automatically.
- Even if it's not malicious, at least some of its contributions are likely to be deleterious and pass undetected by human review.
It's different from the regular single purpose static tools.
> AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).
They mention an Assisted-by tag, but that also contains stuff like "clang-tidy". Surely you're not interpreting that as people "attributing" the work to the linter?
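For readers unfamiliar with the mechanics: these tags are just plain-text trailer lines at the bottom of a commit message, and nothing enforces them mechanically. A hypothetical sketch of the shape being described (subsystem, names, agent, and model all invented for illustration):

    foo: fix off-by-one in example_range_check()

    [patch description goes here]

    Assisted-by: SomeAgent:some-model-v2
    Assisted-by: clang-tidy
    Signed-off-by: Jane Developer <jane@example.org>

The trailers carry weight only because humans attach meaning to them - and, in the case of Signed-off-by, the legal assertions of the DCO.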
I am sure that this was reviewed by attorneys before being published as policy, because of the copyright implications.
Hopefully this will set a trend and provide definitive guidance for the many devs who saw not only the utility of AI assistance but also the acrimony from some quarters, and were left fence-sitting.
This was written by Sasha Levin referencing a Linux maintainers’ discussion.
I recently made a kernel contribution. Another contributor took issue with my patch and used it as the impetus for a larger refactor. The refactor was primarily done by a third contributor, but the original objector was strangely insistent on getting the "author" credit. They added our names at the bottom in "Co-developed-by" and "Signed-off-by" tags. The final submission included bits I hadn't seen before. I would have polished it more if I had.
I'm not raising a stink about it because I want the feature to land - it's the whole reason I submitted the first patch. And since it's a refactor of a patch I initially submitted (and signed off on), you can make the argument that I signed off on the parts of my code that were incorporated.
But so far as I can tell, there's nothing keeping you from adding "Co-developed-by" and "Signed-off-by Jim-Bob Someguy" to the bottom of your submission. Maybe a lawyer would eventually be mad at you if Jim-Bob said he didn't sign off.
There's no magic pixie dust that gives those incantations legal standing, and nothing that keeps LLMs from adding them unless the LLMs internalize the new AI guidance.
If you didn't want to be credited you should have said.
Signed-off-by probably has some legal weight. When you add that to code you are making a clear statement about the origins of the code and that you have legal authority to contribute it - for example, that you asked your company for permission if needed. As far as I know none of this has been tested in court, but it seems reasonable to assume it might be one day.
I don't see how the "signed-off-by" attestation constitutes correct credit here. It's claiming that GP saw the final result and approved of it, which is apparently false.
However, the gotcha here seems to be that the developer has to say that the code is compatible with the GPL, which seems an impossible ask, since the AI models have presumably been trained on all the code they can find on the internet regardless of licensing, and we know they are capable of "regenerating" (regurgitating) stuff they were trained on with high fidelity.
Additionally, there seems to be a general problem with LLM output and copyright[1], at least in Germany: LLM output cannot be copyrighted, and the whole legal field seems under-explored.
> This immediately raises the question of who is the author of this work and who owns the rights to it. Various solutions are possible here. It could be the user of the AI alone, or it could be a joint work between the user and the AI programmer. This question will certainly keep copyright experts in the various legal systems busy for some time to come.
It seems that in the long run the kernel license might become unenforceable if LLM output is used?!
[1] https://kpmg-law.de/en/ai-and-copyright-what-is-permitted-wh...
That’s only feasible when the people who open PRs are acting in good faith, and control both the quality and volume of PRs to something that the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.
Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it’s not a high quality and desired contribution.
Also, it’s not just about PR quality, but also volume. It’s possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would do well to ask: “does this PR align with the priorities and time availability of the maintainer?”
For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?
Sure, maintainers can simply close the PR without a reason if they don’t have time.
But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.
Is it? Remember when that agent wrote a hit piece about a maintainer because he wouldn't merge its PR?
Linux is somewhat harder to contribute to and they already have sufficient barriers in place so they can rely on more reasonable human actors.
We already had that happening with other kinds of automated tooling, but at least it used to be easier to detect by quick skimming.
As someone who has been using AI extensively lately, this is my preferred way of doing serious projects with them:
Let them create the plan, help them refine it, let them rip; then scrutinize their diffs, fight back on the parts I don't like or don't trust; rinse and repeat until commit.
Yet I assume this would still be unacceptable to most anti-AI projects, because 90%+ of the committed code was "written by the AI."
> why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time?
Presumably for the same reason you go back and forth with humans through PR comments even when you could just code it yourself in real time. That reason being, the individual on the other end of the PR should be saving you time. It's still hard work contributing quality MRs, even with AI.
This is essentially like a retail store saying the supplier is responsible for eliminating all traces of THC from their hemp when they know that isn’t a reasonable request to make.
It’s a foreseeable consequence. You don’t get to grant yourself immunity from liability like this.
If Linux were to contain third-party copyrighted code, the legal entities at risk of being sued would be... Linux users, which, given how widely deployed Linux is, means basically everyone on Earth, including all large companies.
Linux development is funded by large companies with big legal departments. It's safe to say that nobody is going to be picking this legal fight any time soon.
2. Infringement in closed source code isn’t as likely to be discovered
3. OpenAI and Anthropic enterprise agreements agree to indemnify (pay for damages essentially) companies for copyright issues.
I've worked at a company that was asked as part of a merger to scan for code copied from open source. That ended up being a major issue for the merger. People had copied various C headers around in odd places, and indeed stolen an odd bit of telnet code. We had to go clean it up.
It’s no worse than non-AI assisted code.
I could easily copy-paste proprietary code, sign my name that it’s not and that it complies with the GPL and submit it.
At the end of the day, it just comes down to a lying human.
At the same time, I feel bad for all the people that have to deal with low quality AI slop submissions, in any project out there.
The rules for projects that allow AI submissions might as well state: "You need to spend at least ~10 iterations of model X review agents and 10 USD of tokens on reviewing AI changes before they are allowed to be considered for inclusion."
(I realize that sounds insane, but in my experience iterated review, even by the same Opus model, can help catch bugs in the code. Next-token prediction in and of itself is quite error-prone.)
How can you guarantee that will happen when AI has been trained on a world full of code under multiple licenses, and even closed-source material used without permission of the copyright owners? I confirmed that with several AIs just now.
The whole “use it, but if it doesn’t behave as expected it’s your fault” position is a ridiculous stance.
You can’t say “you can do this thing that we know will cause problems that you have no way to mitigate, but if it does we’re not liable”. The infringement was a foreseeable consequence of the policy.
From the foundation's point of view, humans are just as capable of submitting infringing code as AI is. If your argument is sound, then how can Linux accept contributors at all?
EDIT: To answer my own question:
This is how the Foundation protects itself, and the policy is that a contribution must have a human as the person who will accept the liability if the Foundation comes under fire. The effectiveness of this policy (or not) doesn't depend on how the code was created.
If that worked, any corporation that wanted to use code it legally couldn’t could just use a fork from someone who assumed responsibility; worst case, they’d have to stop using it if someone found out.
It’s just the same as if I copy-paste proprietary code into the kernel and lie about it being GPL.
Is the Linux foundation liable there?
For comparison, you wouldn't say, "you're free to use a pair of dice to decide what material to build the bridge out of, as long as you take responsibility if it falls down", because then of course somebody would be careless enough to build a bridge that falls down.
Preventing the problem from the beginning is better than ensuring you have somebody to blame for the problem when it happens.
That's assuming the problems and incentives are the same for everyone. Someone whose uncle happens to own a bridge repair company would absolutely be incentivized to say:
> "you're free to use a pair of dice to decide what material to build the bridge out of, as long as you take responsibility if it falls down"
I'm not talking about maintainability or reliability. I'm talking about legal culpability.
Anything generated by an AI is public domain, and you can include public domain code in your GPL code.
I would urge some stronger requirement with the help of a lawyer. You only need a comment like "completely coded by AI, but 100% reviewed by me" to make that code's license worthless.
The only AI-generated parts that are copyrightable are the ones modified by a human.
I am afraid that this "waters down" the actual licensed code.
...We should start opening issues on "100% vibecoded" projects for relicensing to public domain to raise some awareness to the issue.
I don't get this part. Why is the reviewer signing off on it? AI code should be fully documented (probably more so than a human's would be) and require new tests. Code review gates should not change.
Or you mean the velocity of commits will be so much that reviewers will start making more mistakes?
Am I being too pedantic if I point out that it is quite possible for code to be compatible with GPL-2.0 and other licenses at the same time? Or is this a term that is well understood?
https://spdx.org/licenses/GPL-2.0-only.html It's a specific GPL license (as opposed to GPL-2.0-or-later), written as "GPL-2.0-only" or "GPL-2.0 only".
I just don't think that's realistically achievable. Unless the models themselves can introspect on the code and detect any potential license violations.
If you get hit with a copyright violation in this scheme I'd be afraid that they're going to hammer you for negligence of this obvious issue.
Re-licensing public domain uncopyrightable work as GPL/LGPL is almost certainly a copyright violation, and no different than people violating GPL/LGPL in commercial works.
Linus is 100% wrong on this choice, and has introduced a serious liability into the foundation upstream code. =3
https://en.wikipedia.org/wiki/Founder%27s_syndrome
https://www.youtube.com/watch?v=X6WHBO_Qc-Q
https://www.gnu.org/licenses/license-list.html#PublicDomain
For example, one may use NASA public domain photos as one wishes, but cannot register copyright under another license one finds convenient in order to sue people. Also, if that public domain photo includes the Nutella trademark, it doesn't protect you from getting sued for violating Ferrero trademarks/patents/copyrights in your own use case.
Very different than slapping a new label on something you never owned. =3
Remember, kids: never get your legal advice from HN comments.
Don't become the cautionary tale, kids, as crawlers like sriplaw.com will be DMCA-striking your public repos eventually. =3
https://www.youtube.com/watch?v=xkzy_420hts
So what's preventing lawyers/companies having a batch of people they use as scapegoats, should something go wrong?
Side note, I'm not sure why I feel weird about having the string "Assisted-by: AGENT_NAME:MODEL_VERSION" [TOOL1] [TOOL2] in the kernel docs source :D. Mostly joking. But if the Linux kernel has it now, I guess it's the inflection point for...something.
LLMs are lossily-compressed models of code and other text (often mass-scraped despite explicit non-consent) which has licenses almost always requiring attribution and very often other conditions. Just a few weeks ago a SOTA model was shown to reproduce non-trivial amounts of licensed code[0].
The idea of intelligence being emergent from compression is nothing new[1]. The trick here is giving up on completeness and accuracy in favor of a more probabilistic output which:
1) reproduces patterns and interpolates between patterns of training data while not always being verbatim copies
2) serves as a heuristic when searching the solution-space which is further guided by deterministic tools such as compilers, linters, etc. - the models themselves quite often generate complete nonsense, including making up non-existent syntax in well-known mainstream languages such as C#.
I strongly object to anthropomorphising text transformers (e.g. "Assisted-by"). It encourages magical thinking[2] even among people who understand how the models operate, let alone the general public.
Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.
[0]: https://news.ycombinator.com/item?id=47356000
[1]: http://prize.hutter1.net/
[2]: https://en.wikipedia.org/wiki/ELIZA_effect
[3]: https://skeptics.stackexchange.com/questions/14925/has-a-pro...
I think you'll find that this is not settled in the courts, depending on how the data was obtained. If the data was obtained legally, say a purchased book, courts have been finding that using it for training is fair use (Bartz v. Anthropic, Kadrey v. Meta).
Morally the case gets interesting.
Historically, there was no such thing as copyright. The English 1710 Statute of Anne establishing copyright as a public law was titled 'for the Encouragement of Learning' and the US Constitution said 'Congress may secure exclusive rights to promote the progress of science and useful arts'; so essentially public benefits driven by the grant of private benefits.
The moral bottom line: if you didn't have to eat, would you care who copies your work, as long as you get credited?
The more people copy your work with attribution, the more famous you'll be. Now that's the currency of the future. [1]
You'll do it for the kudos. [2][3]
Yes.
I have 2 issues with "post-scarcity":
- It often implicitly assumes humanity is one homogeneous group where this state applies to everyone. In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still be dying of hunger, exposure and preventable diseases. All else being equal, I'd prefer being in the first group, and my chance of that is being economically relevant.
- It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have. The second group is the largest cause of exploitation and suffering in the world. And the second group will continue existing in a post-scarcity world and will work hard to make scarcity a real thing again.
---
Back to your question:
I made the mistake of publishing most of my public code under GPL or AGPL. I regret it because, even though my work has brought many people some joy and a bit of it was perhaps even useful, it has also been used by people who actively enjoy hurting others, who have caused measurable harm, and who will continue causing harm as long as they're able to - in small part enabled by my code.
Permissive licenses are socially agnostic - you can use the work and build on top of it no matter who you are and for what purpose.
(A)GPL is weakly pro-social - you can use the work no matter what, but you can only build on top of it if you give back. This produces some small but non-zero social pressure (enforced by violence, through governments) which benefits those who prefer cooperation over competition.
What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good, not having committed any serious offenses, not taking actions to restrict other people's rights without a valid reason, etc.
There have been attempts in this direction[0] but not very successful.
In a world without LLMs, I'd be writing code under such a license, but more clearly specified, even if I had to write my own. Yes, a lawyer would do a better job; that does not mean anything written by a non-lawyer is completely unenforceable.
With LLMs, I have stopped writing public code at all because, the way I see it, it just makes people much richer than me even richer, at a much faster rate than I can ever achieve myself. It just makes inequality worse. And with inequality, exploitation and oppression tend to follow soon after.
[0]: https://json.org/license.html
By definition, that's not a post-scarcity world; and that's already today's world.
> It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have.
Do you think that's genetic, or environmental? Either way, maybe it will have been trained out of the kids.
> it has also been used by people who actively enjoy hurting others, who have caused measurable harm
Taxes work the same way too. "The Good Place" explores these second-order and higher-order effects in a surprisingly nuanced fashion.
Control over the actions of others, you have not. Keep you from your work, let them not.
> What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good
These are all things necessary in a society with scarcity. Will they be needed in a post-scarcity society that has presumably solved all disorder that has its roots in scarcity?
> With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself.
Yes, the futility of our actions can be infuriating, disheartening, and debilitating. The story comes to mind of the chap tossing washed-ashore starfish back one by one. There were thousands. When asked why he'd do such a futile task - he can't throw them all back - he answered as he threw the next ones: it matters to this one, it matters to this one, ...
Hopefully, your code helped someone. That's a good enough reason to do it.
That LLM response is describing a specific project with full attribution.
I don't think this is anthropomorphising, especially considering they also include non-LLM tools in that "Assisted-by" section.
We're well past the Turing test now. Whether these things are actually sentient is of no pragmatic importance if we can't distinguish their output from a sentient creature's, especially when it comes to programming.
Nope, there is no “The” Turing Test. Go read his original paper before parroting pop sci nonsense.
The Turing test paper proposes an adversarial game to deduce if the interviewee is human. It’s extremely well thought out. Seriously, read it. Turing mentions that he’d wager something like 70% of unprepared humans wouldn’t be able to correctly discern in the near future. He never claims there to be a definitive test that establishes sentience.
Turing may have won that wager (impressive), but there are clear tells, similar to “how many r’s are in ‘strawberry’?”, that an informed interrogator could reliably exploit.
It should be either something like "(partially/completely) generated by" or if you want to include deterministic tools, then "Tools-used:".
The Turing test is an interesting thought experiment but we've seen it's easy for LLMs to sound human-like or make authoritative and convincing statements despite being completely wrong or full of nonsense. The Turing test is not a measure of intelligence, at least not an artificial one. (Though I find it quite amusing to think that the point at which a person chooses to refer to LLMs as intelligence is somewhat indicative of his own intelligence level.)
> whether these things are actually sentient or not is of no pragmatic importance if we can't distinguish their output from a sentient creature, especially when it comes to programming
It absolutely makes a difference: you can't own a human but you can own an LLM (or a corporation which is IMO equally wrong as owning a human).
Humans have needs which must be continually satisfied to remain alive. Humans also have a moral value (a positive one - at least for most of us) which dictates that being rendered unable to remain alive is wrong.
Now, what happens if LLMs have the same legal standing as humans and are thus able to participate in the economy in the same manner?
I can't point out clearly where I draw the line, but here's one difference I notice:
A recommendation can be both a thing and an action. A piece of text is a recommendation and it does not matter how it was created.
Assistance implies some parity in capabilities and cooperative work. Also it can pretty much only be an action, you cannot say "here is some assistance" and point to a thing.
Despite agentic tools being used by millions of developers now, I am not aware of a single real case where accidental reproduction of copyrightable code has been an issue.
Further, some model providers offer indemnity clauses.
It seems like a non-issue to me, practically.
Humans for humans!
Don't let Skynet win!!!
pre "clanker-linux".
I am more intrigued by the inevitable Linux distro that will refuse any code that has AI contributions in it.
This format really took off in the Python community in the 2000s for documentation. The Linux kernel has used it for its documentation for a while now as well.
That's called being a manager, not a vibe coder.
LoL.
Jesting aside, OpenHub lists Linus Torvalds as having made 46,338 commits. 45,178 for Linux, 1,118 for Git. His most recent commit was 17 days ago. [1]
That is a far cry from a vibe-coder, no? :-)
Bit unfair to call his leadership vibe-coding, methinks.
[1] https://openhub.net/accounts/9897
We've been using Co-Developed-By: <email> for our AI annotations.
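A sketch of how such a trailer might look in practice (agent name and address invented for illustration):

    Co-Developed-By: some-agent <agent-runs@example.com>

The same caveat as with the kernel's tags applies: the line documents provenance; it doesn't certify anything by itself.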