AI assistance when contributing to the Linux kernel

(github.com)

384 points | by hmokiguess 19 hours ago

37 comments

  • sheepscreek 2 minutes ago
    This is the right way forward for open source: correct attribution, tightening the connection between agents and the humans behind them, and putting the onus on the human to vet the agent's output. Thank you Linus.
  • qsort 18 hours ago
    Basically the rules are that you can use AI, but you take full responsibility for your commits, and the code must satisfy the license.

    That's... refreshingly normal? Surely something most people acting in good faith can get behind.
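
    For reference, the mechanism the new kernel documentation describes for disclosing assistant use is the ordinary commit-trailer machinery. A hedged sketch of what such a commit message might look like (the subject, author, and exact model string here are invented for illustration; the documentation's example uses a Co-developed-by trailer naming the tool, and the tool itself gets no Signed-off-by, since only a human can agree to the Developer Certificate of Origin):

    ```
    foo: fix counter underflow in cleanup path

    The cleanup path decremented the counter twice when the
    allocation failed. Drop the duplicate decrement.

    Signed-off-by: Jane Developer <jane@example.com>
    Co-developed-by: Claude claude-opus-4-1
    ```

    The human's Signed-off-by still carries the full DCO responsibility for the change, AI-assisted or not.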

    • pibaker 11 hours ago
      I agree this is very sane and boring. What is insane is that they have to state this in the first place.

      I am not against AI coding in general. But there are too many people "contributing" AI-generated code to open source projects, even when they can't understand what's going on in their own code, just so they can say on their resumes that they contributed to a big open source project once. And when the maintainers call them out, they just blame it on the AI coding tools they are using, as if they are not opening PRs under their own names. I can't blame any open source maintainer for being at least a little sceptical when it comes to AI-generated contributions.

      • theptip 8 hours ago
        I think them stating this very simple policy should also be read as them explicitly not making a more restrictive policy, as some kernel maintainers were proposing.
        • Applejinx 2 hours ago
          From everything I'm seeing in the industry (I'm basically a noncoder choosing to not use AI in the stuff that I make, and privy to the private work experience of coders and creators also in that field because of human social contacts), I feel like I can shed a bit of light.

          It looks to me like a more restrictive policy will be flat-out impossible.

          Even people I trust are going along with this stuff, akin to CAD replacing drafting. Code is logic as language, and starting with web code and rapidly metastasizing to C++ (due to complexity and the sheer size of the extant codebase, good and bad), the AI has turned slop-coding into a 'solved problem'. If you don't mean to do the best possible thing or a new thing, there is no excuse for existing as a coder in the world of AI.

          If you do expect to do a new thing or a best thing, in theory you're required to put out the novel information as AI cannot reach it until you've entered it into the corpus of existing code the AI's built on. However, if you're simply recombining existing aspects of the code language in a novel way, that might be more reachable… that's probably where 'AI escape velocity' will come from should it occur.

          In practice, everybody I know is delegating the busywork of coding to AI. I don't feel social pressure to do the same, but I'm not a coder. I'm something else that produces MIT-licensed codebases for accomplishing things that aren't represented in code AS code; rather, it's for accomplishing things that are specific and experiential. I write code to make specific noises I'm not hearing elsewhere, and not hearing out of the mainstream of 'sound-making code artifacts'.

          Therefore, it's impractical for Linux to take any position forbidding AI-assisted code. People will just lie and claim they did it. Is primitive tab-complete also AI? Where's the line? What about when coding tools uniformly begin to tab-complete with extensive reasoning and code prototyping? I already see this in the JetBrains Rider editor I use for Godot hacking, even though I've turned off everything I can related to AI. It'll still try to tab-complete patterns it thinks it recognizes, rarely with what I intend.

          And so the choice is to enforce responsibility. I think this is appropriate because that's where the choices will matter. Additions and alterations will be the responsibility of specific human people, which won't handle everything negative that's happening but will allow for some pressures and expectations that are useful.

          I don't think you can be a collaborative software project right now and not deal with this in some way. I get out of it because I'm read-only: I'm writing stuff on a codebase that lives on an antique laptop without internet access that couldn't run AI if it tried. Very likely the only web browsers it can run are similarly unable to handle 2026 web pages, though I've not checked in years. You've only got my word for that, though, and your estimation of my veracity based on how plausible it seems (I code publicly on livestreams, and am not at all an impressive coder when I do that). Linux can't do what I do, so it's going to do what Linux does, and this seems the best option.

          • alfiedotwtf 12 minutes ago
            You can refuse to use AI personally, but why would you not help yourself when you can?

            … my dad is 86, and only after I signed him up for Claude could he write Arduino code without a phone call to me after 5 minutes of trying himself. So now, he’s spending 4+ hours at a time focused on writing code and building circuits for things he only dreamt about creating for decades.

            Unless you’re doing something for the personal love of the craft and sharpening your tools, use every advantage you can get in order to do the job.

            But… as above, if you’re doing it for the love of it, sure - hand crafted code does taste better and you know all the ingredients are organic

    • lrvick 7 hours ago
      It cannot be overstated how religiously opposed many in the Linux community are to even a single AI-assisted commit landing in the kernel, no matter how well reviewed.

      Plenty see Torvalds as a traitor for this policy and will never contribute again if any clearly labeled AI generated code is actually allowed to merge.

      • cinntaile 6 hours ago
        Some people are just against change; that's nothing new. If Linus were like them, he would never have started Linux in the first place.
        • sdevonoes 5 hours ago
          Not every change is good, and sometimes we realise too late
          • cinntaile 5 hours ago
            What is it that worries you about the change that is happening?
        • goatlover 6 hours ago
          Are they against change in general, or certain kinds of change? Remember when social media was seen as a near-universally good kind of progress? Not so much now.
          • cinntaile 5 hours ago
            Social media has never been seen as a universal positive force? It's the same with AI. It has good and bad aspects, as does any technology with an impact at this scale; AI will arguably have a much bigger impact, imo.

            People are generally against change that forces them to change the way they used to do things. I'm sure most will have their reasons why they are against this particular change, but I don't think it will affect anything. The genie is out of the bottle, AI is here to stay. You either adapt or you will slowly wither away.

            • dwedge 3 hours ago
              It reminds me of something I read on mastodon: "genie doesn't go back in the bottle say AI promoters while the industry spends a trillion dollars a year to try to keep the genie out of the bottle"
              • cinntaile 3 hours ago
                Do you think the genie will go back in the bottle and why?
            • gnz11 3 hours ago
              Adapting implies you are still a part of the environment though. AI is on a trajectory to replace you and take you out of the environment.
              • bdangubic 3 hours ago
                AI is on a trajectory to replace people who do not effectively use AI with people that do
                • gnz11 1 hour ago
                  That is the bait and switch. The end goal is that you are out of the equation. Your perceived effectiveness at using AI as an exchange of labor diminishes over time to the point that you become irrelevant.
                  • brabel 5 minutes ago
                    Who has that end goal?? Who is going to direct the AI if only the CEO is left in the organization? The CEO will never actually do it, and will always need someone who can and will do it. I just can’t see a grand plan to take humans out of the equation entirely.
                  • bdangubic 8 minutes ago
                    this is certainly a possibility but human beings and societies as a whole adapt
            • LtWorf 3 hours ago
              > Social media has never been seen as a universal positive force?

              You missed the whole arab spring thing?

              • cinntaile 2 hours ago
                If you selectively read one sentence of my comment, you risk missing the forest for the trees. I don't have any particular knowledge on the arab spring so I won't comment on that but I quite clearly said that technology has good and bad aspects to it.
              • wafflemaker 3 hours ago
                Is it meant as sarcasm?
          • contraposit 5 hours ago
            This is like blaming a knife for being a murder weapon. Social media is inherently good if the owners of the platforms allow for good interactions to take place. But given the misalignment of incentives, we don't have nice things.
            • dwedge 3 hours ago
              "Social media is good if owners allow for good" is an example of the logical fallacy of begging the question.
      • Luker88 5 hours ago
        Just remember that "reviewed" is not enough to keep the output from being considered public domain.

        It needs to be modified by a human. No amount of prompting counts, and you can only copyright the modified parts.

        Any license on "100% vibecoded" projects can be safely ignored.

        I expect litigation in a few years where people argue about how much they can steal and relicense "since it was vibecoded anyway".

        • shakna 5 hours ago
          For those who might wonder how accurate this is, there is advice from the Federal Register to this effect. [0] It's quite comprehensive, and covers pretty much every "What about...?" question that might be asked.

          > In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.

          [0] https://www.federalregister.gov/documents/2023/03/16/2023-05...

          • martin-t 4 hours ago
            I cannot take seriously any politician or lawyer using the words "artificial intelligence", especially when applied to models from 2023. These people have never used LLMs to write code. They'd know even current models need constant babysitting or they produce an unmaintainable mess; calling anything from 2023 AI is a joke. As the AI proponents keep saying, you have to try the latest model, so anything 2 years old is irrelevant.

            There's really 2 ways to argue this:

            - Either AI exists and then it's something new and the laws protecting human creativity and work clearly could not have taken it into account and need to be updated.

            - Or AI doesn't exist, LLMs are nothing more than lossily compressed models violating the licenses of the training data, their probabilistically decompressed output is violating the licenses as well and the LLM companies and anyone using them will be punished.

            • shakna 3 hours ago
              If monkeys can't hold copyright, which is an actual case discussed above, then no, an LLM probably can't either. "Human" is required.
              • martin-t 1 hour ago
                Yeah, an LLM, being a machine obviously shouldn't hold copyright. But that doesn't stop people claiming that running vast amounts of code through an LLM can strip copyright from it.

                Ultimately LLMs (the first L stands for large and for a good reason) are only possible to create by taking unimaginable amounts of work performed by humans who have not consented to their work being used that way, most of whom require at least being credited in derivative works and many of whom have further conditions.

                Now, consent in law is a fairly new concept and for now only applied to sexual matters but I think it should apply to every human interaction. Consent can only be established when it's informed and between parties with similar bargaining power (that's one reason relationships with large age gaps are looked down upon) and can be revoked at any time. None of the authors knew this kind of mass scraping and compression would be possible, it makes sense they should reevaluate whether they want their work used that way.

                There are 3 levels to this argument:

                1) The letter of the law - if you understand how LLMs work, it's hard to see them as anything more than mechanical transformers of existing work so the letter should be sufficient.

                2) The intent of the law - it's clear it was meant to protect human authors from exploitation by those who are in positions where they can take existing work and benefit from it without compensating the authors.

                3) The ethics and morality of the matter - here it's blatantly obvious that using somebody's work against their wishes and without compensating them is wrong.

                In an ideal world, these 3 levels would be identical but they're not. That means we should strive to make laws (in both intent and letter) more fair and just by changing them.

                • MarsIronPI 57 minutes ago
                  If consent to use of your code in AI training can be revoked at any time, that makes training impossible, since if anyone ever withdraws consent, it's not like you can just take out their work from your finished model.
            • martin-t 1 hour ago
              Nice, -4 points, somebody, many somebodies in fact, took that personally and yet were unable to express where they disagree in a comment.

              Look, if you think I am wrong, you can surely put it into words. OTOH, if you don't think I am wrong but feel that way, then it explains why I see no coherent criticism of my statements.

              • akerl_ 34 minutes ago
                When your comment is about how you can’t take your counterparty seriously and they’re a joke, you’re incentivizing people who disagree to just downvote and move on.

                The signal you’re sending is that you are not open to discussing the issue.

        • alfiedotwtf 8 minutes ago
          In what jurisdiction?!

          It’s weird how people on HN state legal opinion as fact… e.g. if someone in the Philippines vibecodes an app and a person in Ecuador vibecodes a 100% copy of the source, what now?

        • lrvick 5 hours ago
          Meanwhile I expect that intellectual property protections for software are completely unenforceable and effectively useless now. If something does not exist as MIT, an LLM will create it.

          The playing field is level now, and corpo moats no longer exist. I happily take that trade.

          • Luker88 5 hours ago
            Isn't the "corpo moat" bigger now?

            They can wash the copyright by AI training, but the AIs don't get trained on closed source.

            "corpo" also has a ton of patents, which still can't be AI-washed.

            What will become unenforceable are open source licenses exclusively; how does that make it a "level playing field"?

            • lrvick 4 hours ago
              Because AI is also proving to be very good at reverse engineering proprietary binaries or just straight up cloning software from test suites or user interfaces. Cuts both ways.
              • martin-t 4 hours ago
                Have you ever seen what obfuscation looks like when somebody puts the effort in?

                Not to mention companies will try to mandate hardware decryption keys so the binary is encrypted and your AI never even gets to analyze the code which actually runs.

                It's not sci-fi, it's a natural extension of DRM.

                • Muromec 3 hours ago
                  I spent a fun week during Christmas figuring out some really obfuscated binary code with anti-debugging and anti-tampering measures in a cryptographic context. I didn’t use Ghidra or IDA or anything beyond gdb with DeepSeek chat in a browser. That low effort got me what I needed.
                • lrvick 3 hours ago
                  Companies have been encrypting code to HSMs for decades. Never stopped humans from reverse engineering so it certainly will not stop AI aided by humans able to connect a Bus Pirate on the right board traces. Anything that executes on the CPU can be dumped with enough effort, and once dumped it can be decompiled.
                  • martin-t 1 hour ago
                    You are agreeing with me, you just don't know it yet.

                    1) The financial aspect: As you say, more and more advanced DRM requires more and more advanced tools. Even assuming advanced AI can guide any human to do the physical part, that still means you have to pay for the hardware. And the hardware has to be available (companies have been known to harass people into giving up perfectly moral and legal projects).

                    2) The legal aspect: Possession of burglary tools is illegal in some places. How about possession of hacking tools? Right now it's not a priority for company lobbying; what about when that's the only way to decompile? Even today, reverse engineering is a legal minefield. Did you know that in some countries you can technically legally reverse engineer, but only under certain conditions, such as having a disability necessitating it and only using the result for personal use?[0]

                    3) The TOS aspect: What makes you think AI will help you? If the company owning the AI says so, you're on your own.

                    ---

                    You need to understand 2 things:

                    - Just because something is possible doesn't mean somebody is gonna do it. Effort, cost and risk play huge roles. And that assumes no active hostile interference.

                    - History is a constant struggle between groups with various goals and incentives. Some people just want to live a happy life, have fun and build things in their free time. Other people want to become billionaires, dream about private islands, desire to control other people's lives and so on. People are good at what they focus on. There's perhaps more of the first group but the second group is really good at using their money and connections to create more money and connections which they in turn use to progress towards their primary objectives, usually at the expense of other people. People died[1] over their right to unionize. This can happen again.

                    Somebody might believe historical people were dumb or uncivilized and it can't happen today because we've advanced so much. That's bullshit. People have had largely the same wetware for hundreds of thousands of years. The tools have evolved but their users have not.

                    [0]: https://pluralistic.net/2026/03/16/whittle-a-webserver/ - "... aren't tools exemptions, they're use exemptions ... You have that right. Your mechanic does not have that right."

                    [1]: https://en.wikipedia.org/wiki/Pinkerton_(detective_agency)

                • greton7 3 hours ago
                  [dead]
            • martin-t 4 hours ago
              Exactly.

              AI proponents completely ignore the disparity of resources available to an individual and a corporation. If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time. Or maybe at least 1000:1 if you're an optimist.

              They have access to more money for advertising, they have an already established network of existing customers, they have legal and marketing experts on payroll. Or just look at Microsoft, they don't even need advertising, they just install their product by default and nobody will even hear about mine.

              Not to mention, as you said, the training advantage only goes from open source to closed source, not the other way around.

              AI proponents who talk about "democratization" are nuts, it would be laughable if it wasn't so sad.

              • Muromec 2 hours ago
                >If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time.

                As a person who works for a company of 25k people, I would disagree. You, a single person, will often get to the basic product that a lot of people want much faster than a company with 1k, 5k or 25k people.

                Bigger companies are constrained by internal processes, piles of existing stuff, an inability to hire at the scale they need, and larger required context. Also regulation and all that. Bigger companies are also really slow to adapt, so they would rather let you build the product and then buy out your company, with your product and the people who built it. They are at a temporary disadvantage every time the landscape shifts.

                • martin-t 2 hours ago
                  The point wasn't about the number of people; the point was that a company which employs that many people has enough money, and money can be converted into leverage against you.

                  Besides that, your whole argument hinges on large companies being inflexible, inefficient and poorly run. Isn't that exactly the kind of problem AI promises to solve? Complete AI surveillance of every employee, tasks and instructions tailored to each individual, and superhuman planning. Of course, at that point the only employees will be manual workers, because actual AI will be much better and cheaper than every human at everything except those things where it needs to interact with the physical world. Even contract negotiations with both employees and customers will be done by AI instead of humans; the human will only sign off on it for legal requirements, just like today you technically enter a contract with a representative of the company who is not even there when you talk to a negotiator.

          • adrianN 5 hours ago
            The corporate moat is the army of lawyers they have. It doesn’t matter whether they win or not if you can’t afford endless litigation. It’s the same for patents.
            • Marha01 3 hours ago
              Funny, their army of lawyers seems incapable of stopping me from easily downloading pirated software, or from coding an open alternative to their closed-source software with AI if I wanted to...

              You cannot keep a purely legally-enforced moat in the face of advancing technology.

            • lrvick 4 hours ago
              The music industry has an army of lawyers too, and it did not make a damn bit of difference once bittorrent was popularized.

              IP law means nothing once tens of millions of people are openly violating it.

              The software industry is about to learn this lesson too.

              • dwedge 3 hours ago
                So is music free now? Does the record industry not exist anymore? Is it not ridiculously profitable? Are artists finally earning a fair share?
                • lrvick 3 hours ago
                  Music is free, because music piracy is unenforceable so the law is irrelevant. Now, I personally buy most of my music on vinyl because I want to support artists, but absolutely nothing forces me to do that as all the music is available for free.
                • Marha01 3 hours ago
                  > So is music free now?

                  Uhm... yes? The cost of downloading pirated music is essentially zero. The only reason why people use services like Spotify is because it's extremely cheap while being a bit more convenient. But jack up the price and the masses will move to sail the sea again.

                  • dwedge 3 hours ago
                    The cost of stealing has always been essentially zero. The same argument can be made for streaming, and yet Netflix is neither cheap nor struggling for subscribers.
                    • Marha01 3 hours ago
                      > The cost of stealing has always been essentially zero.

                      That is not necessarily true, depending on the level of enforcement and the availability of opportunities to steal.

                      > Same argument can be made for streaming, and yet Netflix is neither cheap nor struggling for subscribers.

                      Netflix is still pretty cheap for the convenience it provides. Again, jack up the price and see the masses move to torrent movies/shows again.

                • Applejinx 3 hours ago
                  In the sense that artists cannot expect to get any money for their work, yeah, music's free. Becoming a meme or a celebrity on the grounds of personality is still fair game, to the extent that AI is not impersonating people effectively at scale yet.

                  Yet.

                  A whole bunch of people I watch on youtube (politics, analysts, a weatherman) are already seeing AI impersonation videos, sometimes misrepresenting their positions and identities. This will grow.

                  So, you can't create art because that's extruded at scale in such a way that it's just turning on the tap to fill a specified need, and you can't be a person because that can also be extruded at scale pretty soon, either to co-opt whatever you do that's distinct, or to contradict whatever you're trying to say, as you.

                  As far as being a person able to exist and function through exchanging anything you are or anything you do for recompense, to survive, I'm not sure that's in the cards. Which seems weird for a technology in the guise of aiding people.

          • ako 2 hours ago
            Generating software still costs tokens; generating something like MS Word will still cost a significant amount and take a lot of human effort to prompt and validate. Having a proven solution still has value.
            • lrvick 1 hour ago
              You can already generate surprisingly complex software with an LLM on a Raspberry Pi, including live voice assistance, all offline. People's hardware can self-write software pretty readily now. The cost of tokens is a race to zero.
          • nonameiguess 2 hours ago
            Ironically, I actually suspect the exact opposite. Linux has no real choice in this matter because most of the code is written by Google, Red Hat, Cisco, and Amazon at this point, and these big cos are all going to mandate their developers have to use AI coding agents. Refuse to accept these contributions and we're just going to end up with 20 Linuxes instead of one, and the original still under the control of Linus will be relegated to desktop usage and wither and die.
        • VorpalWay 4 hours ago
          > Any license on "100% vibecoded" projects can be safely ignored.

          As far as I know that has only been decided in the US so far, which is far from the whole world.

          • IsTom 4 hours ago
            In Poland law is similar in this regard, so I'd assume at least some other countries do this as well.
        • OtomotO 3 hours ago
          So, how are you gonna prove I didn't write some code?

          How am I gonna prove I did?

        • martin-t 4 hours ago
          I don't think "modified by a human" is enough. If you take licensed text (code or otherwise) and manually replace every word with a synonym, it does not remove the license. If you manually change every loop into a map/filter, it does not remove the license. I don't think any amount of mechanical transformation, regardless of whether it is done by a human or a machine, erases it.

          There's a threshold where, once you modify it enough, it is no longer recognizable as a modification of the original and you might get away with it, unless you confess what process you used to create it.

          This is different to learning from the original and then building something equivalent from scratch using only your memory without constantly looking back and forth between your copy and the original.

          This is how some companies do "clean room reimplementations" - one team looks at the original and writes a spec, and another team, which has never seen the original code, implements an entirely standalone version.

          And of course there are people who claim this can be automated now[0]. This one is satire (read the blog) but it is possible if the law is interpreted the way LLM companies work and there are reports the website works as advertised by people who were willing to spend money to test it.

          [0]: https://malus.sh/

          • lrvick 3 hours ago
            You only need to feed the docs and tests to an LLM to get a "clean room" re-implementation that can then be relicensed.
            • Muromec 2 hours ago
              That wasn't tested legally.
              • lrvick 1 hour ago
                If these actually were ruled to be infringements somehow, millions of separate cases would already be needed, so it is already past the point of enforcement.

                These sorts of things are almost never tested legally and it seems even less likely now.

        • williamcotton 2 hours ago
          [dead]
      • dxdm 6 hours ago
        Sounds dramatic, but it entirely depends on what "many" and "plenty" mean in your comment, and who exactly is included. So far, what you wrote reads like the expected level of drama surrounding such projects.
      • ebbi 6 hours ago
        True - on Mastodon there is a very vocal crowd that is against AI in general and is identifying Linux distros that contain AI-generated code, with a view to boycotting them.
        • lrvick 5 hours ago
          Soon they will have to boycott all of them. Then what I wonder?
      • abc123abc123 4 hours ago
        Doesn't matter. Linux today is a toy of corporations and stopped being community oriented a long time ago. Community orientation, I think, these days exists only among the BSDs and some fringe Linux distributions.

        The Linux Foundation itself is just one big, woke, leftist mess, with CV-stuffers from corporations in every significant position.

        • simonask 3 hours ago
          The idea that something can simultaneously be "woke [and] leftist" and somehow still defined by its attachments to corporations is a baffling expression of how detached from reality the US political discourse is.

          The rest of the world looks on in wonder at both sides of this.

    • galaxyLogic 17 hours ago
      But then if AI output is not under GNU General Public License, how can it become so just because a Linux-developer adds it to the code-base?
      • jillesvangurp 16 hours ago
        AIs are not human, and therefore their output is not a human-authored contribution; only human-authored things are covered by copyright. The work might hypothetically infringe on other people's copyright. But such an infringement does not happen until a human decides to create and distribute a work that somehow integrates that generated code or text.

        The solution documented here seems very pragmatic. You as a contributor simply state that you are making the contribution and that you are not infringing on other people's work with that contribution under the GPLv2. And you document the fact that you used AI for transparency reasons.

        There is a lot of legal murkiness around how training data is handled, and the output of the models. Or even the models themselves. Is something that in no way or shape resembles a copyrighted work (i.e. a model) actually distributing that work? The legal arguments here will probably take a long time to settle but it seems the fair use concept offers a way out here. You might create potentially infringing work with a model that may or may not be covered by fair use. But that would be your decision.

        For small contributions to the Linux kernel it would be hard to argue that a passing resemblance of say a for loop in the contribution to some for loop in somebody else's code base would be anything else than coincidence or fair use.

        • heavyset_go 6 hours ago
          The Copyright Office's interpretation of US copyright law is that AI is not human, and thus not an attributable author for copyright registration, and that output based on mere prompting is no one's IP: it can't be copyrighted[1].

          AI output can be copyrighted when copyrighted elements are expressed in it: for example, if you put copyrighted content in a prompt and it is expressed in the output, or if the output is transformed substantially by human creativity in arrangement, form, composition, etc.

          [1] https://newsroom.loc.gov/news/copyright-office-releases-part...

        • nitwit005 15 hours ago
          That you can't copyright the AI's output (in the US, at least) doesn't imply it doesn't contain copyrighted material. If you generate an image of a Disney character, Disney still owns the copyright to that character.
          • NitpickLawyer 9 hours ago
            > That you can't copyright the AI's output (in the US, at least),

            It's also not really clear if you can or cannot copyright AI output. The case that everyone cites didn't even reach the point where courts had to rule on that. The human in that case decided to file the copyright for an AI, and the courts ruled that according to the existing laws copyright must be filed by a person/human/whatever.

            So we don't yet have caselaw where someone used AIgen and claimed the output as written by them.

          • metalcrow 7 hours ago
            You can copyright AI output assuming there is a "reasonable" degree of human involvement. https://www.cnet.com/tech/services-and-software/this-company...
          • fxtentacle 10 hours ago
            Yes. And that’s why the rules say that the human submitting the code is responsible for preventing this case.
        • friendzis 8 hours ago
          > Is something that in no way or shape resembles a copyrighted work (i.e. a model) actually distributing that work?

          Does a digitally encoded version resemble a copyrighted work in some shape or form? </snark>

          Where is this hangup on models being something entirely different from an encoding coming from? Given enough prodding they can reproduce training data verbatim or close to it. Okay, given enough prodding notepad can do that too, so the uncertainty is understandable.

          This is one of the big reasons companies are putting effort into the so-called "safety": when the legal battles are eventually fought, they would have an argument that they did their best to ensure that the amount of prodding required to extract any information potentially putting them under liability is too great to matter.

          • jillesvangurp 2 hours ago
            > Does a digitally encoded version resemble a copyrighted work in some shape or form? </snark>

            Well that's different, because an encoded image or video clearly intends to reproduce the original perfectly, and the end result after decoding is (intentionally) very close to the form of the original. Which makes it a clear-cut case of being a copy of the original.

            The reason so many cases don't get very far is that judges and lawyers mostly don't think like engineers. Copyright law predates most modern technology, so everything needs to be rephrased in terms of people copying stuff for commercial gain. The original target of the law was people using printing presses to create copies of books written by others, which was hugely annoying to some publishers who thought they had exclusive deals with authors. But what about academics quoting each other? Or literary reviews? Or summaries? Or people reading from a book on the radio? This stuff gets complicated quickly. Most of those things were settled a long time ago. Fair use is a concept that gets wielded a lot here: yes, it's a copy, but it's entirely reasonable for the person making the copy to be doing what they are doing, and therefore it's not considered an infringement.

            The rest is just centuries of legal interpretation of that and how it applies to modern technology, whether that's DJs sampling music or artists working visual imagery into their artworks. AI is mostly just more of the same here. Yes, there are some legally interesting aspects with AI, but not that many new ones. Judges are unlikely to rethink centuries of legal interpretation here and are more likely to try to reconcile AI with existing decisions. Any changes to the law would have to be driven by politicians; judges tend to be conservative with their interpretations.

        • ninjagoo 15 hours ago
          IANAL; this is my limited understanding of the matter. With that caveat: it is easy to forget that copyright attaches to the output; verbatim or exact reproductions and derivatives of a covered work are already covered under copyright.

          So if the AI outputs Starry Night, or Starry Night in a different color theme, that's likely infringement without permission from the rights holder (imagine van Gogh's work were still under copyright), who would have recourse against someone, either the user or the AI provider.

          But a starry-night style picture of an aquarium might not be infringing at all.

          >For small contributions to the Linux kernel it would be hard to argue that a passing resemblance of say a for loop in the contribution to some for loop in somebody else's code base would be anything else than coincidence or fair use.

          I would argue that if it was a verbatim reproduction of a copyrighted piece of software, that would likely be infringing. But if it was similar only in style, with different function names and structure, probably not infringing.

          Folks will argue that some things might be too small to do any different, for example a tiny snippet like python print("hello") or 1+1=2 or a for loop in your example. In that case it's too lacking in original expression to qualify for copyright protection anyway.

        • Lerc 15 hours ago
          >AIs are not human and therefore their output is a human authored contribution and only human authored things are covered by copyright.

          That is a non sequitur. Also, I'm not sure if copyright applies to humans, or persons (not that I have encountered particularly creative corporations, but Taranaki Maunga has been known for large scale decorative works)

          • Sharlin 11 hours ago
            Copyright applies to legal persons, that's why corporations can have copyright at all.
          • direwolf20 8 hours ago
            A "large scale decorative work" is the strangest euphemism for a dormant volcano I've ever heard.
            • Lerc 5 hours ago
              Well obviously it's not doing any decorating right at the moment.
        • mcv 14 hours ago
          Didn't a court in the US declare that AI generated content cannot be copyrighted? I think that could be a problem for AI generated code. Fine for projects with an MIT/BSD license I suppose, but GPL relies on copyright.

          However, if the code has been slightly changed by a human, it can be copyrighted again. I think.

          • simonw 14 hours ago
            Thaler v. Perlmutter said that an AI system cannot be listed as the sole author of a work - copyright requires a human author.

            US Copyright Office guidance in 2023 said work created with the help of AI can be registered as long as there is "sufficient human creative input". I don't believe that has ever been qualified with respect to code, but my instinct is that the way most people use coding agents (especially for something like kernel development) would qualify.

            • davemp 11 hours ago
              Interesting. That seems to suggest that one would need to retain the prompts in order to pursue copyright claims if a defendant can cast enough doubt on human authorship.

              Though I guess such a suit is unlikely if the defendant could just AI wash the work in the first place.

          • tadfisher 14 hours ago
            No, a court did not declare that. The case involved a person trying to register a work with only the AI system listed as author. The court decided that you can't do that: you need to list a human being as author to register a work with the Copyright Office. This stems from existing precedent where someone tried to register a photograph with the monkey photographer listed as author.

            I don't believe the idea that humans can or can't claim copyright over AI-authored works has been tested. The Copyright Office says your prompt doesn't count and you need some human-authored element in the final work. We'll have to see.

            • papercrane 12 hours ago
              It's almost a certainty that you can't copyright code that was generated entirely by an AI.

              Copyright requires some amount of human originality. You could copyright the prompt, and if you modify the generated code you can claim copyright on your modifications.

              The closest applicable case would be the monkey selfie.

              https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

              • abletonlive 10 hours ago
                It's almost certain that you're wrong. It's like saying I can't copyright a song if my modular synthesizer generated it. Why would you think this?
            • manwe150 12 hours ago
              I’m curious to see if subscription vs free ends up mattering here. If it is a work for hire, generally it doesn’t matter how the work was produced, the end result is mine, because I contracted and instructed (prompted?) someone to do it for me. So will the copyright office decide it cares if I paid for the AI tool explicitly?
              • galaxyLogic 6 hours ago
                That would depend on whether those who sold you the software output had copyright to it.
          • RussianCow 14 hours ago
            > Didn't a court in the US declare that AI generated content cannot be copyrighted?

            No, my understanding is that AI generated content can't be copyrighted by the AI. A human can still copyright it, however.

            • Sharlin 11 hours ago
              It's obvious that a computer program cannot have copyright because computer programs are not persons in any currently existing jurisdiction.

              Whether a person can claim copyright of the output of a computer program is generally understood as depending on whether there was sufficient creative effort from said person, and it doesn't really matter whether the program is Photoshop or ChatGPT.

              • paradoxyl 10 hours ago
                Just thinking out loud... why can't an algorithm be an artificial person in the legal sense that a corporation is? Why not legally incorporate the AI as a corporation so it can operate in the real world: have accounts, create and hold copyrights...
                • direwolf20 8 hours ago
                  Because the law doesn't say it can. It's that simple.
                • SpicyLemonZest 10 hours ago
                  Corporations are required to have human directors with full operational authority over the corporation's actions. This allows a court to summon them and compel them to do or not do things in the physical world. There's no reason a corporation can't choose to have an AI operate their accounts, but this won't affect the copyright status, and if the directors try to claim they can't override the AI's control of the accounts they'll find themselves in jail for contempt the first time the corporation faces a lawsuit.
              • galaxyLogic 6 hours ago
                So if creative effort was put into writing the prompt, then whoever wrote the prompt should have the copyright to the output produced by ChatGPT?
                • LtWorf 3 hours ago
                  Sure, but the prompt wasn't the only input… there was considerable effort put into the training data as well :)
          • singpolyma3 12 hours ago
            Public domain code is GPL compatible
      • afro88 17 hours ago
        Same as if a regular person did the same. They are responsible for it. If you're using AI, check the code doesn't violate licenses
        • rzmmm 16 hours ago
          In certain legal cases, a finding of plagiarism can be influenced by whether the person was exposed to the copyrighted work. AI models are exposed to a very large corpus of works.
          • cxr 15 hours ago
            Copyright infringement and plagiarism are not the same or even very closely related. They're different concepts and not interchangeable. Relative to copyright infringement, cases of plagiarism are rarely a matter for courts to decide or care about at all. Plagiarism is primarily an ethical (and not civil or criminal) matter. Rather than be dealt with by the legal system, it is the subject of codes of ethics within e.g. academia, journalism, etc. which have their own extra-judicial standards and methods of enforcement.
            • dekhn 14 hours ago
              I suspect they were instead referring to patents; for example, when I worked at Google, they told the engineers not to read patents because then the engineer might invent something infringing, I think it's called willful infringement. No other employer I've worked for has ever raised this as an issue, while many lawyers at Google would warn against this.
            • martin-t 51 minutes ago
              You're right, legally speaking.

              But you shouldn't be right. I mean, morally.

              The law is a compromise between what the people in power want and what they can get away with without people revolting. It has nothing to do with morality, fairness or justice. And we should change that. The promise of democracy was (among other things) that everyone would be equal, everybody would get to vote and laws would be decided by the moral system of the majority. And yet, today, most people will tell you they are unhappy about the rising cost of living and rising inequality...

              The law should be based on a complete and consistent moral system. And then plagiarism (taking advantage of another person's intellectual work without credit or compensation) would absolutely be a legal matter.

        • martin-t 17 hours ago
          As opposed to an irregular person?

          LLMs are not persons, not even legal ones (which itself is a massive hack causing massive issues such as using corporate finances for political gain).

          A human has moral value a text model does not. A human has limitations in both time and memory available, a model of text does not. I don't see why comparisons to humans have any relevance. Just because a human can do something does not mean machines run by corporations should be able to do it en-masse.

          The rules of copyright allow humans to do certain things because:

          - Learning enriches the human.

          - Once a human consumes information, he can't willingly forget it.

          - It is impossible to prove how much a human-created intellectual work is based on others.

          With LLMs:

          - Training (let's not anthropomorphize: lossily-compressing input data by detecting and extracting patterns) enriches only the corporation which owns it.

          - It's perfectly possible to create a model based only on content with specific licenses or only public domain.

          - It's possible to trace every single output byte to quantifiable influences from every single input byte. It's just not an interesting line of inquiry for the corporations benefiting from the legal gray area.

          • afro88 9 hours ago
            Dude come on, I clearly wasn't saying LLMs are people. My point was it's a tool and it's the responsibility of the person wielding it to check outputs.

            If it's too hard to check outputs, don't use the tool.

            Your arguments about copyright being different for LLMs: at the moment that's still being defined legally. So for now it's an ethical concern rather than a legal one.

            For what it's worth I agree that LLMs being trained on copyright material is an abuse of current human oriented copyright laws. There's no way this will just continue to happen. Megacorps aren't going to lie down if there's a piece of the pie on the table, and then there's precedent for everyone else (class action perhaps)

        • sarchertech 17 hours ago
          How could you do that, though? You can't guarantee that there aren't chunks of copied code that infringe.
          • Andrex 17 hours ago
            Let me introduce you to the concept of submarine patents...
          • shevy-java 17 hours ago
            But the responsible party is still the human who added the code. Not the tool that helped do so.
            • aargh_aargh 17 hours ago
              The practical concern of Linux developers regarding responsibility isn't the ability to ban the author; it's that the author should take ongoing care of their contribution.
            • Cytobit 17 hours ago
              That's not going to shield the Linux organization.
              • cxr 15 hours ago
                A DCO bearing a claim of original authorship (or assertion of other permitted use) isn't going to shield them entirely, but it can mitigate liability and damages.
                • sarchertech 13 hours ago
                  Can it though? As far as I know this hasn’t been tested.
            • sarchertech 17 hours ago
              In a court case the responsible party very well could be the Linux Foundation, because this is a foreseeable consequence of allowing AI contributions. There's no reasonable way for a human to make such a guarantee while using AI generated code.
              • Chance-Device 17 hours ago
                It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.
                • sarchertech 17 hours ago
                  Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.
                  • Chance-Device 16 hours ago
                    What law exactly are you suggesting needs to be changed? How is this any different from what already happens right now, today?
                    • sarchertech 16 hours ago
                      Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed, it's because you did something wrong that you could have prevented (and in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).

                      That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.

                      Because of that if you tell a user they can use AI generated code, and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.

                      • galaxyLogic 6 hours ago
                        So it's a bit as if the Linux organization told its contributors: you can bring in infringing code, but you must agree you are liable for any infringement?

                        But if a lawsuit were later brought, who would be sued? The individual author or the organization? In other words, can an organization reduce its liability by telling its employees "You can break the law as long as you agree you are solely responsible for such illegal actions"?

                        It would seem to me that the employer would be liable if they "encourage" this way of working?

                      • Chance-Device 16 hours ago
                        It’s a foreseeable outcome that humans might introduce copyrighted code into the kernel.

                        I think you’re looking for problems that don’t really exist here, you seem committed to an anti AI stance where none is justified.

                        • sarchertech 16 hours ago
                          A human has to willingly violate the law for that to happen, though. There is no way for a human to use AI-generated code that doesn't have a chance of producing copyrighted code. That's just expected.

                          If you don't think this is a problem, take a look at the terms of the enterprise agreements from OpenAI and Anthropic. Companies recognize this is an issue, and so they were forced to add an indemnification clause explicitly saying they'll pay for any damages resulting from infringement lawsuits.

                      • johnisgood 13 hours ago
                        > Right now it's very easy not to infringe on copyrighted code if you write the code yourself.

                        Humans routinely produce code similar to or identical to existing copyrighted code without direct copying.

                        • sarchertech 13 hours ago
                          They don’t produce enough similar code to infringe frequently. And if they did independent creation is an affirmative defense to copyright infringement that likely doesn’t apply to LLMs since they have the demonstrated capability to produce code directly from their training set.
                          • johnisgood 13 hours ago
                            You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.

                            On independent creation: you are conflating the tool with the user. The defense applies to whether the developer had access to the copyrighted work, not whether their tools did. A developer using an LLM did not access the training set directly, they used a synthesis tool. By your logic, any developer who has read GPL code on GitHub should lose independent creation defense because they have "demonstrated capability to produce code directly from" their memory.

                            LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case). Training set contamination happens, but it is rare and considered a bug. Humans also occasionally reproduce code from memory: we do not deny them independent creation defense wholesale because of that capability!

                            In any case, the legal question is not settled, but the argument that LLM-assisted code categorically cannot qualify for independent creation defense creates a double standard that human-written code does not face.

                        • direwolf20 8 hours ago
                          And that's not an infringement. Actual copying is the infringement, not having the same code. The most likely way to have the same code is by copying, but it's not the only way.
                  • bpt3 16 hours ago
                    In this case, the "fall guy" is the person who actually introduced the code in question into the codebase.

                    They wouldn't be some patsy that is around just to take blame, but the actual responsible party for the issue.

                    • sarchertech 16 hours ago
                      Imagine you're a factory owner and you need a chemical delivered from across the country, but the chemical is dangerous: if the tanker truck drives faster than 50 miles per hour, it has a 0.001% chance per mile of exploding.

                      You hire an independent contractor and tell him that he can drive 60 miles per hour if he wants to but if it explodes he accepts responsibility.

                      He does and it explodes killing 10 people. If the family of those 10 people has evidence you created the conditions to cause the explosion in order to benefit your company, you're probably going to lose in civil court.

                      Linus benefits from the increased velocity of people using AI. He doesn't get to put all the liability on the people contributing.

                      • raincole 7 hours ago
                        Cool analogy! Which has nothing to do with the topic at hand.
                      • bpt3 13 hours ago
                        That is a nonsensical analogy on multiple levels, and doesn't even support your own argument.
                        • sarchertech 13 hours ago
                          Nice rebuttal.
                          • bpt3 13 hours ago
                            Why would I put much effort into responding to a post like yours, which makes no sense and just shows that you don't understand what you're talking about?
                • lo_zamoyski 15 hours ago
                  Responsibility is an objective fact, not just some arbitrary social convention. What we can agree or disagree about is where it rests, but that's a matter of inference, and an inference can be more or less correct. We might assign certain people certain responsibilities before the fact, but that's to charge them with the care of some good, not to blame them for things before they were charged with that care.
              • bitwize 15 hours ago
                Because contributions to Linux are meticulously attributed to, and remain property of, their authors, those authors bear ultimate responsibility. If Fred Foobar sends patches to the kernel that, as it turns out, contain copyrighted code, then provided upstream maintainers did reasonable due diligence the court will go after Fred Foobar for damages, and quite likely demand that the kernel organization no longer distribute copies of the kernel with Fred's code in it.
                • sarchertech 13 hours ago
                  Anyone distributing infringing material can be liable, and it's unlikely that this technicality would actually shield anyone.

                  Anyone who thinks they have a strong infringement case isn’t going to stop at the guy who authored the code, they’re going to go after anyone with deep pockets with a good chance of winning.

                  • Marha01 3 hours ago
                    > Anyone distributing infringing material can be liable

                    There is still the "mens rea" principle. If you distribute infringing material unknowingly, it would very likely not result in any penalties.

      • Tomte 6 hours ago
        There is already lots and lots of non-GPL code in the kernel, under dozens of licenses, see https://raw.githubusercontent.com/Open-Source-Compliance/pac...

        As long as everything is GPLv2-compatible it's okay.
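        For illustration (a sketch of the convention; any given file's actual tag may differ), each file states its terms in an SPDX identifier on its first line, and dual-licensed files spell out a GPLv2-compatible combination:

        ```c
        // SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
        ```

        Code under such a tag can be used under either license, which is how it can live in the GPLv2 kernel tree while remaining reusable in BSD-licensed projects.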

      • noosphr 16 hours ago
        Tab complete does not produce copyrightable material either. Yet we don't require software to be written in nano.
        • rpdillon 10 hours ago
          This is a nice point that I haven't seen before. It's interesting to regress AI to the simplest form and see how we treat it as a test for the more complex cases.
      • panzi 17 hours ago
        If the output is public domain it's fine as I understand it.
        • galaxyLogic 17 hours ago
          Makes sense to me. But so anybody can take Public Domain code and place it under GNU Public License (by dropping it into a Linux source-code file) ?

          Surely the person doing so would be responsible for doing so, but are they doing anything wrong?

          • robinsonb5 17 hours ago
            > Surely the person doing so would be responsible for doing so, but are they doing anything wrong?

            You're perfectly at liberty to relicense public domain code if you wish.

            The only thing you can't do is enforce the new license against people who obtain the code independently - either from the same source you did, or from a different source that doesn't carry your license.

            • cwnyth 17 hours ago
              This is correct, and it's not limited to code. I can take the story of Cinderella, create something new out of it, copyright my new work, but Cinderella remains public domain for someone else to do something with.

              If I use public domain code in a project under a license, the whole work remains under the license, but not the public domain code.

              I'm not sure what the hullabaloo is about.

              • manwe150 12 hours ago
                  If someone else uses your exact same prompt to generate the exact same code, can you claim copyright infringement against them? If the output is possible to copyright, then you could claim their prompt is infringement (just like if it reproduced Harry Potter). If it isn't copyrightable, then the kernel would not have legal standing to enforce the GPL on those lines of code against any future AI reproduction of them.

                  The developers might need to show that the code is licensed under GPL and only GPL, otherwise there is the possibility the same original contributor (eg the AI) did permit the copy. The GPL is an imposed restriction on what the kernel can legally do with any code contributions.

                  That seems legally complicated for some projects—probably not the kernel with the large amount of pre-AI code, but maybe it spells trouble for smaller newer projects if they want to sue over infringement. IANAL.
                • robinsonb5 6 hours ago
                  > If someone else uses your exact same prompt to generate the exact same code, can you claim copyright infringement against them?

                  No, because they've independently obtained it from the same source that you did, so their copy is "upstream" of your imposing of a new license.

                  Realistically, adding a license to public domain work is only really meaningful when you've used it as a starting point for something else, and want to apply your license to the derivative work.

                • direwolf20 8 hours ago
                  Copyright infringement is triggered by the act of copying, not by having the same bytes.
              • tomjen3 2 hours ago
                Be careful here - you cannot copyright a story, only the specific tangible form of the story.
          • jaggederest 16 hours ago
            The core thing about licenses, in general, is that they only grant new usage. If you can already use the code because it's public domain, they don't further restrict it. The license, in that case, is irrelevant.

            Remember that licenses are powered by copyright - granting a license to non-copyrighted code doesn't do anything, because there's no enforcement mechanism.

            This is also why copyright reform for software engineering is so important, because code entering the public domain cuts the gordian knot of licensing issues.

          • miki123211 17 hours ago
            Linux code doesn't have to strictly be GPL-only, it just has to be GPL-compatible.

            If your license allows others to take the code and redistribute it with extra conditions, your code can be imported into the kernel. AFAIK there are parts of the kernel that are BSD-licensed.

          • sambaumann 17 hours ago
            Sqlite’s source code is public domain. Surely if you dropped the sqlite source code into Linux, it wouldn’t suddenly become GPL code? I’m not sure how it works
            • jasomill 8 hours ago
              The Linux kernel would become a GPLv2-licensed derivative work of SQLite, but that doesn’t matter, because public domain works, by definition, are not subject to copyright restrictions.

              Claiming copyright on an unmodified public domain work is a lie, so in some circumstances could be an element of fraud, but still wouldn’t be a copyright violation.

        • martin-t 17 hours ago
          This ruling is IMO/IANAL based on lawyers and judges not understanding how LLMs work internally, falling for the marketing campaign calling them "AI" and not understanding the full implications.

LLM-creation ("training") involves detecting/compressing patterns in the input. Inference generates statistically probable output based on similarities to patterns found in the "training" input. Computers don't learn or have ideas; they always operate on representations. It's nothing more than any other mechanical transformation, and it should not erase copyright any more than synonym substitution does.

          • supern0va 16 hours ago
            >LLM-creation ("training") involves detecting/compressing patterns of the input.

            There's a pretty compelling argument that this is essentially what we do, and that what we think of as creativity is just copying, transforming, and combining ideas.

            LLMs are interesting because that compression forces distilling the world down into its constituent parts and learning about the relationships between ideas. While it's absolutely possible (or even likely for certain prompts) that models can regurgitate text very similar to their inputs, that is not usually what seems to be happening.

            They actually appear to be little remix engines that can fit the pieces together to solve the thing you're asking for, and we do have some evidence that the models are able to accomplish things that are not represented in their training sets.

            Kirby Ferguson's video on this is pretty great: https://www.youtube.com/watch?v=X9RYuvPCQUA

            • martin-t 15 hours ago
              So? Why should it be legal?

              If people find this cool and wanna play with it, they can, just make sure to only mix compatible licenses in the training data and license the output appropriately. Well, the attribution issue is still there, so maybe they can restrict themselves to public domain stuff. If LLMs are so capable, it shouldn't limit the quality of their output too much.

              Now for the real issue: what do you think the world will look like in 5 or 10 years if LLMs surpass human abilities in all areas revolving around text input and output?

              Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded? Or will the rich reap most of the benefit while also simultaneously turning us into beggars?

Even if you assume 100% of the people doing intellectual work now will convert to manual work (i.e. there's enough work for everyone) and robots don't advance at all, that'll drive the value of manual labor down a lot. Have you gamed it out in your head and concluded that life will somehow be better for you, let alone for most people? Or have you not thought about it at all yet?

              • galaxyLogic 6 hours ago
                > Do you think the people who made it possible, who spent years of their life building and maintaining open source code, will be rewarded?

I think they should be rewarded more than they are currently. But isn't the GNU Public License basically saying you can use such source code without giving any reward whatsoever?

But I see your point: the reward for open source developers is the public recognition for their work. LLMs can take that recognition away.

              • Marha01 3 hours ago
                The best answer to those issues is still Basic Income.
                • martin-t 1 hour ago
                  UBI only means you won't starve or die of exposure. It doesn't mean that people who are already rich today won't become so obscenely rich tomorrow they are above the law or can change the law (and decide who gets medical treatment or even take your UBI away).
          • timmmmmmay 16 hours ago
            fortunately, you aren't only operating on representations, right? lemme check my Schopenhauer right quick...
    • shevy-java 17 hours ago
      But why should AI then be attributed if it is merely a tool that is used?
      • lonelyasacloud 15 hours ago
Having an honesty-based tag could be the only way to monitor impact or go after a fix in codebases if things go south.

That is, at the moment:

- Nobody knows for sure what agents might add, or their long-term effects on codebases.

- It's at best unclear that AI content in a codebase can be reliably determined automatically.

- Even if it's not malicious, at least some of its contributions are likely to be deleterious and pass undetected by human review.

      • yrds96 7 hours ago
AI tools can do the entire job, from finding the problem to implementing and testing the fix.

        It's different from the regular single purpose static tools.

      • plmpsu 17 hours ago
It makes sense to keep track of which model wrote which code, to look for patterns, behaviors, etc.
      • hgoel 12 hours ago
        This is a good point but I'd take it in the opposite direction from the implication, we should document which tools were used in general, it'd be a neat indicator of what people use.
      • streetfighter64 17 hours ago
        It isn't?

        > AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).

        They mention an Assisted-by tag, but that also contains stuff like "clang-tidy". Surely you're not interpreting that as people "attributing" the work to the linter?

  • ninjagoo 16 hours ago

      > Signed-Off ...
      > The human submitter is responsible for:
        > Reviewing all AI-generated code
        > Ensuring compliance with licensing requirements
        > Adding their own Signed-off-by tag to certify the DCO
        > Taking full responsibility for the contribution
    
      > Attribution: ... Contributions should include an Assisted-by tag in the following format:
    
    Responsibility assigned to where it should lie. Expected no less from Torvalds, the progenitor of Linux and Git. No demagoguery, no b*.

    I am sure that this was reviewed by attorneys before being published as policy, because of the copyright implications.

    Hopefully this will set the trend and provide definitive guidance for a number of Devs that were not only seeing the utility behind ai assistance but also the acrimony from some quarters, causing some fence-sitting.

    • senko 6 hours ago
      > Expected no less from Torvalds

      This was written by Sasha Levin referencing a Linux maintainers’ discussion.

    • bsimpson 9 hours ago
      Signed-off-by is already a custom/formality that is surely cargo-culted by many first-time/infrequent contributors. It has an air of "the plans were on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.'" There's no way to assert that every contributor has read a random document declaring what that line means in kernel parlance.

      I recently made a kernel contribution. Another contributor took issue with my patch and used it as the impetus for a larger refactor. The refactor was primarily done by a third contributor, but the original objector was strangely insistent on getting the "author" credit. They added our names at the bottom in "Co-developed-by" and "Signed-off-by" tags. The final submission included bits I hadn't seen before. I would have polished it more if I had.

      I'm not raising a stink about it because I want the feature to land - it's the whole reason I submitted the first patch. And since it's a refactor of a patch I initially submitted (and "Signed-off-by,") you can make the argument that I signed off on the parts of my code that were incorporated.

      But so far as I can tell, there's nothing keeping you from adding "Co-developed-by" and "Signed-off-by Jim-Bob Someguy" to the bottom of your submission. Maybe a lawyer would eventually be mad at you if Jim-Bob said he didn't sign off.

      There's no magic pixie dust that gives those incantations legal standing, and nothing that keeps LLMs from adding them unless the LLMs internalize the new AI guidance.

      • rwmj 6 hours ago
        The way you describe it, the developers all did the right thing. You contributed something to the patch, and even if it wasn't in your preferred final form (and it's basically never going to be for a kernel contribution of any significance), you were correctly credited.

        If you didn't want to be credited you should have said.

        Signed-off-by probably has some legal weight. When you add that to code you are making a clear statement about the origins of the code and that you have legal authority to contribute it - for example, that you asked your company for permission if needed. As far as I know none of this has been tested in court, but it seems reasonable to assume it might be one day.

        • zahlman 2 hours ago
          > You contributed something to the patch, and even if it wasn't in your preferred final form (and it's basically never going to be for a kernel contribution of any significance), you were correctly credited.

          I don't see how the "signed-off-by" attestation constitutes correct credit here. It's claiming that GP saw the final result and approved of it, which is apparently false.

  • HarHarVeryFunny 16 minutes ago
    It's a sane policy - human is responsible for what they contribute, regardless of what tools they use in the development process.

    However, the gotcha here seems to be that the developer has to say that the code is compatible with the GPL, which seems an impossible ask, since the AI models have presumably been trained on all the code they can find on the internet regardless of licensing, and we know they are capable of "regenerating" (regurgitating) stuff they were trained on with high fidelity.

  • oytis 4 hours ago
    How is one supposed to ensure license compliance while using LLMs which do not (and cannot) attribute sources having contributed to a specific response?
    • Lapel2742 3 hours ago
> How is one supposed to ensure license compliance while using LLMs which do not (and cannot) attribute sources having contributed to a specific response?

      Additionally there seems to be a general problem with LLM output and copyright[1]. At least in Germany. LLM output cannot be copyrighted and the whole legal field seems under-explored.

      > This immediately raises the question of who is the author of this work and who owns the rights to it. Various solutions are possible here. It could be the user of the AI alone, or it could be a joint work between the user and the AI programmer. This question will certainly keep copyright experts in the various legal systems busy for some time to come.

      It seems that in the long run the kernel license might become unenforceable if LLM output is used?!

      [1] https://kpmg-law.de/en/ai-and-copyright-what-is-permitted-wh...

  • ipython 18 hours ago
    Glad to see the common-sense rule that only humans can be held accountable for code generated by AI agents.
    • pixel_popping 17 hours ago
      [flagged]
      • CivBase 9 hours ago
        In most cases I've seen it's because they get overwhelmed by sloppy contributions from developers who do not bother to review their AI's output. Code reviews are a lot of work.
        • Gigachad 9 hours ago
          Also “responsibility” and “accountability” mean little for anon contributors from the internet. You can ban them but a thousand more will still be spamming you with slop.
      • tom_ 15 hours ago
        It is no more insane than doing the opposite. This whole business has yet to play itself out.
      • pydry 17 hours ago
        And yet it puts a stop to the tsunami of slop and it's pretty much impossible to prove anything of value was lost.
        • pixel_popping 17 hours ago
          but why? it's a human making the PR and you can shame/ban that human anyway.
          • materielle 13 hours ago
            I think AI bans are more common in projects where the maintainers are nice people that thoughtfully want to consider each PR and provide a reasoned response if rejected.

            That’s only feasible when the people who open PRs are acting in good faith, and control both the quality and volume of PRs to something that the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.

            Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it’s not a high quality and desired contribution.

            Also, it’s not just about PR quality, but also volume. It’s possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would benefit themselves to ask “does this PR align with the priorities and time availability of the maintainer?”

            For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?

            Sure, maintainers can simply close the PR without a reason if they don’t have time.

            But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.

          • podgietaru 16 hours ago
            Volume - things take time to review. If you’re inundated with so many PRs then it’s harder to curate in general
          • yoyohello13 17 hours ago
            > it's a human making the PR

Is it? Remember when that agent wrote a hit piece about the maintainer because he wouldn't merge its PR?

          • Ekaros 14 hours ago
You are treating humans as reasonable actors. They very often are not. On easy-to-access platforms like GitHub, you can have humans just working as intermediaries between an LLM and GitHub, not actually checking or understanding what they put in a pull request. Banning these people outright with clear rules is much faster and easier than trying to argue with them.

            Linux is somewhat harder to contribute to and they already have sufficient barriers in place so they can rely on more reasonable human actors.

      • daveguy 17 hours ago
        Not insane at all. Just a very useful shortcut. Not everyone wants to move fast and break shit.
        • pixel_popping 17 hours ago
          I still think it's insane, why would you care about the "origin" of the code as long as there is a human accountable (that you can ban anyway)?
          • 59nadir 17 hours ago
            Because you don't want to deal with people who can't write their own code. If they can, the rule will do nothing to stop them from contributing. It'll only matter if they simply couldn't make their contribution without LLMs.
            • pixel_popping 17 hours ago
So tomorrow, if a model genuinely finds a bunch of real vulnerabilities, you would just ignore them? That makes no sense.
              • 59nadir 17 hours ago
                An LLM finding problems in code is not the same at all as someone using it to contribute code they couldn't write or haven't written themselves to a project. A report stating "There is a bug/security issue here" is not itself something I have to maintain, it's something I can react to and write code to fix, then I have to maintain that code.
                • seba_dos1 54 minutes ago
                  Well, until you start getting dozens of generated reports that you take your time to review just to find out that they're all plausible-looking bullshit about non-issues.

                  We already had that happening with other kinds of automated tooling, but at least it used to be easier to detect by quick skimming.

          • jeremyjh 12 hours ago
            Because they aren’t accountable - after it is merged only I am. And why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time? Anytime I want to work through a pile of slop I can ask for one, but I don’t work that way. I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
            • Paracompact 9 hours ago
              > I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.

              As someone who has been using AI extensively lately, this is my preferred way of doing serious projects with them:

              Let them create the plan, help them refine it, let them rip; then scrutinize their diffs, fight back on the parts I don't like or don't trust; rinse and repeat until commit.

              Yet I assume this would still be unacceptable to most anti-AI projects, because 90%+ of the committed code was "written by the AI."

              > why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time?

              Presumably for the same reason you go back and forth with humans through PR comments even when you could just code it yourself in real time. That reason being, the individual on the other end of the PR should be saving you time. It's still hard work contributing quality MRs, even with AI.

              • jeremyjh 2 hours ago
                I don’t have a problem working with contributors who use AI like you described. But this thread is about working with people who could not do the work on their own. So they cannot do what you described, and they cannot save me any time, they can only waste it.
          • streetfighter64 16 hours ago
            If your doctor told you he used an ouija board to find your diagnosis, would you care about the origin of the diagnosis or just trust that he'll be accountable for it?
            • pixel_popping 16 hours ago
              If the Ouija board was powered by Opus, who knows :D
      • KoftaBob 14 hours ago
        It's just a form of sanctimonious virtue-signaling that's trendy right now.
  • sarchertech 17 hours ago
    This does nothing to shield Linux from responsibility for infringing code.

    This is essentially like a retail store saying the supplier is responsible for eliminating all traces of THC from their hemp when they know that isn’t a reasonable request to make.

    It’s a foreseeable consequence. You don’t get to grant yourself immunity from liability like this.

    • zarzavat 5 hours ago
      Shield from what exactly? The Linux kernel is not a legal entity. It's a collection of contributions from various contributors. There is the Linux Foundation but they do not own Linux.

If Linux were to contain 3rd party copyrighted code, the legal entity at risk of being sued would be... Linux users, which, given how widely deployed Linux is, is basically everyone on Earth, and all large companies.

      Linux development is funded by large companies with big legal departments. It's safe to say that nobody is going to be picking this legal fight any time soon.

    • lukeify 4 hours ago
      An open-source project receiving open-source contributions from (often anonymous) volunteers is not even close to analogous to a storefront selling products with a consumer guarantee they are backing on the basis of their supply chain.
    • SirHumphrey 17 hours ago
      Quite a lot of companies use and release AI written code, are they all liable?
      • sarchertech 17 hours ago
        1. Almost definitely if discovered

        2. Infringement in closed source code isn’t as likely to be discovered

3. OpenAI's and Anthropic's enterprise agreements indemnify companies (essentially, agree to pay damages) for copyright issues.

      • nitwit005 15 hours ago
        Yep, and honestly it's going to come up with things other than lawsuits.

        I've worked at a company that was asked as part of a merger to scan for code copied from open source. That ended up being a major issue for the merger. People had copied various C headers around in odd places, and indeed stolen an odd bit of telnet code. We had to go clean it up.

        • LtWorf 3 hours ago
          Headers are normally fine. GPL license recognises that you might need them to read binary files.
    • testing22321 10 hours ago
      > This does nothing to shield Linux from responsibility for infringing code.

      It’s no worse than non-AI assisted code.

      I could easily copy-paste proprietary code, sign my name that it’s not and that it complies with the GPL and submit it.

      At the end of the day, it just comes down to a lying human.

  • KronisLV 44 minutes ago
    This is actually a pretty nice idea:

      Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
    
    I feel like a lot of people will have an ideological opposition to AI, but that would lead to people sometimes submitting AI generated code with no attribution and just lying about it.

    At the same time, I feel bad for all the people that have to deal with low quality AI slop submissions, in any project out there.

    The rules for projects that allow AI submissions might as well state: "You need to spend at least ~10 iterations of model X review agents and 10 USD of tokens on reviewing AI changes before they are allowed to be considered for inclusion."

(I realize that sounds insane, but in my experience iterated review, even by the same Opus model, can help catch bugs in the code. Next-token prediction in and of itself is quite error-prone on its own.)

  • newsoftheday 17 hours ago
    > All code must be compatible with GPL-2.0-only

How can you guarantee that will happen when AI has been trained on a world full of multiple licenses, and even on closed-source material used without the permission of the copyright owners? I confirmed that with several AIs just now.

    • philipov 17 hours ago
      You take responsibility. That means if the AI messes up, you get punished. No pushing blame onto the stupid computer. If you're not comfortable with that, don't use the AI.
      • sarchertech 17 hours ago
        There’s no reasonable way for you to use AI generated code and guarantee it doesn’t infringe.

The whole "use it, but if it doesn't behave as expected, it's your fault" stance is ridiculous.

        • philipov 17 hours ago
          If you think it's an unacceptable risk to use a tool you can't trust when your own head is on the line, you're right, and you shouldn't use it. You don't have to guarantee anything. You just have to accept punishment.
          • sarchertech 17 hours ago
            That’s just it though it’s not just your head. The liability could very likely also fall on the Linux foundation.

            You can’t say “you can do this thing that we know will cause problems that you have no way to mitigate, but if it does we’re not liable”. The infringement was a foreseeable consequence of the policy.

            • philipov 16 hours ago
              This policy effectively punts on the question of what tools were used to create the contribution, and states that regardless of how the code was made, only humans may be considered authors.

              From the foundation's point of view, humans are just as capable of submitting infringing code as AI is. If your argument is sound, then how can Linux accept contributors at all?

              EDIT: To answer my own question:

                  Instead of a signed legal contract, a DCO is an affirmation that a certain person confirms that it is (s)he who holds legal liability for the act of sending of the code, that makes it easier to shift liability to the sender of the code in the case of any legal litigation, which serves as a deterrent of sending any code that can cause legal issues.
              
              This is how the Foundation protects itself, and the policy is that a contribution must have a human as the person who will accept the liability if the foundation comes under fire. The effectiveness of this policy (or not) doesn't depend on how the code was created.
              • sarchertech 13 hours ago
Anyone distributing copyrighted material can be liable; that DCO isn't going to stop anyone.

If that worked, any corporation that wanted to use code it legally couldn't would just use a fork from someone who had assumed responsibility, and in the worst case it would have to stop using the code if someone found out.

            • testing22321 10 hours ago
              > liability could very likely also fall on the Linux foundation.

              It’s just the same as if I copy-paste proprietary code into the kernel and lie about it being GPL.

              Is the Linux foundation liable there?

            • empath75 15 hours ago
              The only lawsuits so far have been over training on open source software. You're inventing a liability problem that essentially does not exist.
              • sarchertech 13 hours ago
                OpenAI and Anthropic added an indemnity clause to their enterprise contracts specifically to cover this scenario because companies wouldn’t adopt otherwise.
          • streetfighter64 16 hours ago
Yeah, but that's not a useful thing to do, because not everybody thinks about that or considers it a problem. If somebody's careless and contributes copyrighted code, that's a problem for Linux too, not only the author.

            For comparison, you wouldn't say, "you're free to use a pair of dice to decide what material to build the bridge out of, as long as you take responsibility if it falls down", because then of course somebody would be careless enough to build a bridge that falls down.

            Preventing the problem from the beginning is better than ensuring you have somebody to blame for the problem when it happens.

            • jcelerier 4 hours ago
              > Preventing the problem from the beginning is better than ensuring you have somebody to blame for the problem when it happens.

              that's assuming that the problems and incentives are the same for everyone. Someone whose uncle happens to own a bridge repair company would absolutely be incentivized to say

              > "you're free to use a pair of dice to decide what material to build the bridge out of, as long as you take responsibility if it falls down"

            • philipov 15 hours ago
              It was already necessary to solve the problem of humans contributing infringing code. It was solved by having contributors assume liability with a DCO. The policy being discussed today asserts that, because AI may not be held legally liable for its contributions, AI may not sign a DCO. A human signature is required. This puts the situation back to what it was with human contributors. What you are proposing goes beyond maintaining the status quo.
              • sarchertech 13 hours ago
It's not solved. It hasn't been tested in court to my knowledge, and in my opinion it is unlikely to hold up to serious challenge. You can be held liable just for distributing copyrighted code, even if the whole "the Linux Foundation doesn't own anything" argument holds up.
        • adikso 16 hours ago
          Their position is probably that LLM technology itself does not require training on code with incompatible licenses, and they probably also tend to avoid engaging in the philosophical debate over whether LLM-generated output is a derivative copy or an original creation (like how humans produce similar code without copying after being exposed to code). I think that even if they view it as derivative, they're being pragmatic - they don't want to block LLM use across the board, since in principle you can train on properly licensed, GPL-compatible data.
      • newsoftheday 17 hours ago
        > That means if the AI messes up

        I'm not talking about maintainability or reliability. I'm talking about legal culpability.

      • benatkin 7 hours ago
        If they merge it in despite it having the model version in the commit, then they're arguably taking a position on it too - that it's fine to use code from an AI that was trained like that.
    • XYen0n 5 hours ago
      Even human developers are unlikely to have only ever seen GPL-2.0-only code.
      • tmalsburg2 3 hours ago
Humans will not regurgitate longer segments of code verbatim. Even if we wanted to, we couldn't, because our memory doesn't work that way. LLMs, on the other hand, can totally do that, and there's nothing you can do to prevent it.
    • tmp10423288442 17 hours ago
      Wait for court cases I suppose - not really Linus Torvalds' job to guess how they'll rule on the copyright of mere training. Presumably having your AI actually consult codebases with incompatible licenses at runtime is more risky.
    • Luker88 5 hours ago
      NIT: All AI code satisfies the GPL license.

Anything generated by an AI is public domain. You can include public domain code in your GPL code.

      I would urge some stronger requirement with the help of a lawyer. You only need a comment like "completely coded by AI, but 100% reviewed by me" to make that code's license worthless.

The only copyrightable parts of AI-generated code are the ones modified by a human.

      I am afraid that this "waters down" the actual licensed code.

...We should start opening issues on "100% vibecoded" projects asking for relicensing to the public domain, to raise some awareness of the issue.

  • KaiLetov 8 hours ago
    The policy makes sense as a liability shield, but it doesn't address the actual problem, which is review bandwidth. A human signs off on AI-generated code they don't fully understand, the patch looks fine, it gets merged. Six months later someone finds a subtle bug in an edge case no reviewer would've caught because the code was "too clean."
    • ugh123 7 hours ago
      > they don't fully understand, the patch looks fine

      I don't get this part. Why is the reviewer signing off on it? AI code should be fully documented (probably more so than a human could) and require new tests. Code review gates should not change

    • altmanaltman 5 hours ago
      I mean the same can happen with human-written code no? Reviewer signs off on it and subtle bug in edge case no one saw?

      Or you mean the velocity of commits will be so much that reviewers will start making more mistakes?

  • dataviz1000 17 hours ago
    This is discussed in the Linus vs Linus interview, "Building the PERFECT Linux PC with Linus Torvalds". [0]

    [0] https://youtu.be/mfv0V1SxbNA?si=CBnnesr4nCJLuB9D&t=2003

    • globular-toast 4 hours ago
      Hardly "discussed", perhaps "mentioned". Sebastian is basically an entertainer who can plug things in to sockets.
  • deadbabe 55 minutes ago
    How can we automate the disclosure of what AI agent was used in a PR and the extent of code? Would be nice to also have an audit of prompts used, as that could also be considered “code”.
  • dec0dedab0de 17 hours ago
    All code must be compatible with GPL-2.0-only

    Am I being too pedantic if I point out that it is quite possible for code to be compatible with GPL-2.0 and other licenses at the same time? Or is this a term that is well understood?

    • compyman 17 hours ago
      You might be being too pedantic :)

      https://spdx.org/licenses/GPL-2.0-only.html It's a specific GPL license (as opposed to GPL 2.0-later)

    • philipov 17 hours ago
      GPL-2.0-only is the name of a license. One word. It is an alternative to GPL-2.0-or-later.
      • kbelder 14 hours ago
        Right, the final hyphen changes the meaning of the sentence:

        "GPL-2.0-only" vs. "GPL-2.0 only"

  • feverzsj 9 hours ago
    Linux is funded by all these big companies. Linus couldn't block AI pushes from them forever.
    • becquerel 2 hours ago
      He's been vibecoding some stuff himself personally, on one of his scuba projects. You could take people as actually believing in the things they do and say.
    • paganel 2 hours ago
      Correct, in the end big money talks.
  • themafia 16 hours ago
    > All contributions must comply with the kernel's licensing requirements:

    I just don't think that's realistically achievable. Unless the models themselves can introspect on the code and detect any potential license violations.

    If you get hit with a copyright violation in this scheme I'd be afraid that they're going to hammer you for negligence of this obvious issue.

    • Joel_Mckay 9 hours ago
      US legal consensus has set the precedent that "AI" output can't be copyrighted. Thus, technically no one can really own or re-license prompt output.

      Re-licensing public domain uncopyrightable work as GPL/LGPL is almost certainly a copyright violation, and no different than people violating GPL/LGPL in commercial works.

      Linus is 100% wrong on this choice, and has introduced a serious liability into the foundation upstream code. =3

      https://en.wikipedia.org/wiki/Founder%27s_syndrome

      https://www.youtube.com/watch?v=X6WHBO_Qc-Q

      • kam 9 hours ago
        > Being in the public domain is not a license; rather, it means the material is not copyrighted and no license is needed. Practically speaking, though, if a work is in the public domain, it might as well have an all-permissive non-copyleft free software license. Public domain material is compatible with the GNU GPL.

        https://www.gnu.org/licenses/license-list.html#PublicDomain

        • Joel_Mckay 9 hours ago
          Yes, if it is clearly labeled as such, then GPL/LGPL-licensed works may be included in such products. However, this relationship cannot make such works GPL without violating copyright, and isomorphic plagiarized code from an LLM doesn't magically become yours to re-license.

          For example, one may use NASA public domain photos as you wish, but cannot register copyright under another license you find convenient to sue people. Also, if that public domain photo includes the Nutella trademark, it doesn't protect you from getting sued for violating Ferrero trademarks/patents/copyrights in your own use-case.

          Very different than slapping a new label on something you never owned. =3

      • noosphr 9 hours ago
        >Re-licensing public domain work as GPL/LGPL is almost certainly a copyright violation

        Remember kids never get your legal advice from hn comments.

        • Joel_Mckay 9 hours ago
          I hire specialized IP lawyers to advise me how to mitigate risk: One can't assign licenses on something no one can legally claim right to. You should do the same unless you live in India or China.

          Don't become the cautionary tale kid, as crawlers like sriplaw.com will be DMCA striking your public repos eventually. =3

          https://www.youtube.com/watch?v=xkzy_420hts

  • KhayaliY 15 hours ago
    We've seen in the past, for instance in the world of compliance, that if companies/governments want something done or make a mistake, they just have a designated person act as scapegoat.

    So what's preventing lawyers/companies having a batch of people they use as scapegoats, should something go wrong?

  • gnarlouse 8 hours ago
    I wonder if this is happening because Mythos
  • zxexz 8 hours ago
    I like this. It's just saying you have responsibility for the tools you wield. It's concise.

    Side note, I'm not sure why I feel weird about having the string "Assisted-by: AGENT_NAME:MODEL_VERSION" [TOOL1] [TOOL2] in the kernel docs source :D. Mostly joking. But if the Linux kernel has it now, I guess it's the inflection point for...something.

  • lowsong 16 hours ago
    At least it'll make it easy to audit and replace it all in a few years.
  • bharat1010 9 hours ago
    Honestly kind of surprised they went this route -- just 'you own it, you're responsible for it' is such a clean answer to what feels like an endlessly complicated debate.
  • martin-t 17 hours ago
    This feels like the OSS community is giving up.

    LLMs are lossily-compressed models of code and other text (often mass-scraped despite explicit non-consent) which has licenses almost always requiring attribution and very often other conditions. Just a few weeks ago a SOTA model was shown to reproduce non-trivial amounts of licensed code[0].

    The idea of intelligence being emergent from compression is nothing new[1]. The trick here is giving up on completeness and accuracy in favor of a more probabilistic output which

    1) reproduces patterns and interpolates between patterns of training data while not always being verbatim copies

    2) serves as a heuristic when searching the solution-space which is further guided by deterministic tools such as compilers, linters, etc. - the models themselves quite often generate complete nonsense, including making up non-existent syntax in well-known mainstream languages such as C#.

    I strongly object to anthropomorphising text transformers (e.g. "Assisted-by"). It encourages magical thinking[2] even among people who understand how the models operate, let alone the general public.

    Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.

    [0]: https://news.ycombinator.com/item?id=47356000

    [1]: http://prize.hutter1.net/

    [2]: https://en.wikipedia.org/wiki/ELIZA_effect

    [3]: https://skeptics.stackexchange.com/questions/14925/has-a-pro...

    • ninjagoo 15 hours ago
      > Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.

      I think you'll find that this is not settled in the courts, depending on how the data was obtained. If the data was obtained legally, say a purchased book, courts have been finding that using it for training is fair use (Bartz v. Anthropic, Kadrey v. Meta).

      Morally the case gets interesting.

      Historically, there was no such thing as copyright. The English 1710 Statute of Anne establishing copyright as a public law was titled 'for the Encouragement of Learning' and the US Constitution said 'Congress may secure exclusive rights to promote the progress of science and useful arts'; so essentially public benefits driven by the grant of private benefits.

      The Moral Bottomline: if you didn't have to eat, would you care about who copies your work as long as you get credited?

      The more people that copy your work with attribution, the more famous you'll be. Now that's the currency of the future*. [1]

      You'll do it for the kudos. [2][3]

        *Post-Scarcity Future. 
        [1] https://en.wikipedia.org/wiki/Post-scarcity
        [2] https://en.wikipedia.org/wiki/The_Quiet_War, et al.
        [3] https://en.wikipedia.org/wiki/Accelerando
      • martin-t 14 hours ago
        > The Moral Bottomline: if you didn't have to eat, would you care about who copies your work as long as you get credited?

        Yes.

        I have 2 issues with "post-scarcity":

        - It often implicitly assumes humanity is one homogeneous group where this state applies to everyone. In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still be dying of hunger, exposure and preventable diseases. All else being equal, I'd prefer being in the first group and my chance for that is being economically relevant.

        - It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have. The second group is the largest cause of exploitation and suffering in the world. And the second group will continue existing in a post-scarcity world and will work hard to make scarcity a real thing again.

        ---

        Back to your question:

        I made the mistake of publishing most of my public code under GPL or AGPL. I regret it because even though my work has brought many people some joy and a bit of my work was perhaps even useful, it has also been used by people who actively enjoy hurting others, who have caused measurable harm and who will continue causing harm as long as they're able to - in a small part enabled by my code.

        Permissive licenses are socially agnostic - you can use the work and build on top of it no matter who you are and for what purpose.

        (A)GPL is weakly pro-social - you can use the work no matter what but you can only build on top of it if you give back - this produces some small but non-zero social pressure (enforced by violence through governments) which benefits those who prefer cooperation instead of competition.

        What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good, not having committed any serious offenses, not taking actions to restrict other people's rights without a valid reason, etc.

        There have been attempts in this direction[0] but not very successful.

        In a world without LLMs, I'd be writing code using such a license but more clearly specified, even if I had to write my own. Yes, a lawyer would do a better job; that does not mean anything written by a non-lawyer is completely unenforceable.

        With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself. It just makes inequality worse. And with inequality, exploitation and oppression tend to soon follow.

        [0]: https://json.org/license.html

        • ninjagoo 11 hours ago
          > In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still by dying of hunger, exposure and preventable diseases.

          By definition, that's not a post-scarcity world; and that's already today's world.

          > It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have.

          Do you think that's genetic, or environmental? Either way, maybe it will have been trained out of the kids.

          > it has also been used by people who actively enjoy hurting others, who have caused measurable harm

          Taxes work the same way too. "The Good Place" explores these second-order and higher-order effects in a surprisingly nuanced fashion.

          Control over the actions of others, you have not. Keep you from your work, let them not.

          > What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good

          These are all things necessary in a society with scarcity. Will they be needed in a post-scarcity society that has presumably solved all disorder that has its roots in scarcity?

          > With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself.

          Yes, the futility of our actions can be infuriating, disheartening, and debilitating. It brings to mind the story about the chap tossing washed-ashore starfish back one by one. There were thousands. When asked why he bothered with such a futile task (he couldn't possibly throw them all back), he answered as he threw the next ones: it matters to this one, it matters to this one, ...

          Hopefully, your code helped someone. That's a good enough reason to do it.

        • ninjagoo 11 hours ago
          [dead]
    • williamcotton 2 hours ago
      "Just a few weeks ago a SOTA model was shown to reproduce non-trivial amounts of licensed code[0]."

      That LLM response is describing a specific project with full attribution.

      • martin-t 34 minutes ago
        And it proves the code is stored (in a compressed form) in the model.
    • KK7NIL 17 hours ago
      > I strongly object to anthropomorphising text transformers (e.g. "Assisted-by").

      I don't think this is anthropomorphising, especially considering they also include non-LLM tools in that "Assisted-by" section.

      We're well past the Turing test now, whether these things are actually sentient or not is of no pragmatic importance if we can't distinguish their output from a sentient creature, especially when it comes to programming.

      • davemp 10 hours ago
        > We're well past the Turing test now

        Nope, there is no “The” Turing Test. Go read his original paper before parroting pop sci nonsense.

        The Turing test paper proposes an adversarial game to deduce if the interviewee is human. It’s extremely well thought out. Seriously, read it. Turing mentions that he’d wager something like 70% of unprepared humans wouldn’t be able to correctly discern in the near future. He never claims there to be a definitive test that establishes sentience.

        Turing may have won that wager (impressive), but there are clear tells, similar to the “how many r’s are in strawberry?” question, that an informed interrogator could reliably exploit.

      • martin-t 17 hours ago
        Would you say "assisted by vim" or "assisted by gcc"?

        It should be either something like "(partially/completely) generated by" or if you want to include deterministic tools, then "Tools-used:".

        The Turing test is an interesting thought experiment but we've seen it's easy for LLMs to sound human-like or make authoritative and convincing statements despite being completely wrong or full of nonsense. The Turing test is not a measure of intelligence, at least not an artificial one. (Though I find it quite amusing to think that the point at which a person chooses to refer to LLMs as intelligence is somewhat indicative of his own intelligence level.)

        > whether these things are actually sentient or not is of no pragmatic importance if we can't distinguish their output from a sentient creature, especially when it comes to programming

        It absolutely makes a difference: you can't own a human but you can own an LLM (or a corporation which is IMO equally wrong as owning a human).

        Humans have needs which must be continually satisfied to remain alive. Humans also have a moral value (a positive one - at least for most of us) which dictates that being rendered unable to remain alive is wrong.

        Now, what happens if LLMs have the same legal standing as humans and are thus able to participate in the economy in the same manner?

        • zbentley 16 hours ago
          If a linter insists on a weird line of code, I’m probably commenting that line as “recommended by whatever-linter”, yes.
          • martin-t 14 hours ago
            I wouldn't but I can see why some people would.

            I can't point out where I draw the line clearly but here's one difference I notice:

            A recommendation can be both a thing and an action. A piece of text is a recommendation and it does not matter how it was created.

            Assistance implies some parity in capabilities and cooperative work. Also, it can pretty much only be an action; you cannot say "here is some assistance" and point to a thing.

    • tmp10423288442 17 hours ago
      On https://news.ycombinator.com/item?id=47356000, it looks like the user there was intentionally asking about the implementation of the Python chardet library before asking it to write code, right? Not surprising the AI would download the library to investigate it by default, or look for any installed copies of `chardet` on the local machine.
      • martin-t 17 hours ago
        The comment says "Opus 4.6 without tool use or web access"
    • user34283 5 hours ago
      For [0], it was supposedly shown to do it when specifically prompted to do so.

      Despite agentic tools being used by millions of developers now, I am not aware of a single real case where accidental reproduction of copyrightable code has been an issue.

      Further, some model providers offer indemnity clauses.

      It seems like a non-issue to me, practically.

  • shevy-java 17 hours ago
    Fork the kernel!

    Humans for humans!

    Don't let skynet win!!!

    • aruametello 16 hours ago
      > Fork the kernel!

      pre "clanker-linux".

      I am more intrigued by the inevitable Linux distro that will refuse any code that has AI contributions in it.

  • baggy_trough 18 hours ago
    Sounds sensible.
  • spwa4 16 hours ago
    Why does this file have an extension of .rst? What does that even mean for the file format?
    • jdreaver 16 hours ago
      https://en.wikipedia.org/wiki/ReStructuredText

      This format really took off in the Python community in the 2000s for documentation. The Linux kernel has used it for documentation as well for a while now.

    • adikso 16 hours ago
      reStructuredText. Just like you have .md files everywhere.
    • SV_BubbleTime 10 hours ago
      Everyone missed a great opportunity to lie to you and tell you that the Linux kernel now requires you to program in rust.
  • the_biot 16 hours ago
    [flagged]
    • _blaise_ 15 hours ago
      Linus is the original vibe coder. He barks orders at a cadre of human contributor agents and subsystem maintainer agents until the code looks the way he likes.
      • sph 5 hours ago
        > He barks orders at cadre of human contributor agents and subsystem maintainer agents until the code looks the way he likes

        That's called being a manager, not a vibe coder.

      • ninjagoo 14 hours ago
        > Linus is the original vibe coder.

        LoL.

        Jesting aside, OpenHub lists Linus Torvalds as having made 46,338 commits. 45,178 for Linux, 1,118 for Git. His most recent commit was 17 days ago. [1]

        That is a far cry from a vibe-coder, no? :-)

        Bit unfair to call his leadership vibe-coding, methinks.

        [1] https://openhub.net/accounts/9897

  • bitwize 18 hours ago
    Good. The BSDs should follow suit. It is unreasonable to expect any developer not to use AI in 2026.
  • NetOpWibby 15 hours ago
    inb4 people rage against Linux
  • rwmj 6 hours ago
    Interesting that coccinelle, sparse, smatch & clang-tidy are included, at least as examples. Those aren't AI coding tools in the normal sense, just regular, deterministic static analysis / code generation tools. But fine, I guess.

    We've been using Co-Developed-By: <email> for our AI annotations.