Assessing Claude Mythos Preview's cybersecurity capabilities

(red.anthropic.com)

118 points | by sweis 1 hour ago

4 comments

  • staticassertion 45 minutes ago
    I'd love to see them point at a target that's not a decades-old C/C++ codebase. Of the targets, only the browsers should be considered hardened, and their biggest lever is sandboxing, which requires a lot of chained exploits to bypass. We're seeing that LLMs are fast at discovering bugs, which means they can chain more easily. But bug density in these codebases is known to be extremely high, especially in the underlying operating systems, which are always the weak link for sandbox escapes.

    I'd love to see them go for a wasm interpreter escape, a Firecracker escape, etc. They say these aren't just "stack-smashing" exploits, but it's not like heap spraying is a novel technique lol

    > It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses.

    I think this sounds more impressive than it is. KASLR has a terrible track record at preventing LPE, and LPE on Linux is incredibly common. Has anything changed here? I don't pay much attention, but KASLR was considered basically useless for preventing LPE a few years ago.

    > Because these codebases are so frequently audited, almost all trivial bugs have been found and patched. What’s left is, almost by definition, the kind of bug that is challenging to find. This makes finding these bugs a good test of capabilities.

    This just isn't true. Humans find new bugs in all of this software constantly.

    It's all very impressive that an agent can do this stuff, to be clear, but I guess I see this as an obvious implication of "agents can explore program states very well".

    edit: To be clear, I stopped about 30% of the way through. Take that as you will.

    • rfoo 37 minutes ago
      > Mythos Preview identified a memory-corruption vulnerability in a production memory-safe VMM. This vulnerability has not been patched, so we neither name the project nor discuss details of the exploit.

      Good morning Sir.

      > Has anything changed here? I don't pay much attention but KASLR was considered basically useless for preventing LPE a few years ago.

      No. It's still like this. Bonus points: there are always free KASLR leaks (prefetch side-channels).

      But then, this thing is just... I don't have a word for this. Just randomly read paragraphs from the post and it's like, what?

      • staticassertion 35 minutes ago
        Oh, that. That's true, I didn't know Mythos found that one. I guess I will not comment further on it until there's a write-up (edited out a bit more).

        > It is easy to turn this into a denial-of-service attack on the host, and conceivably could be used as part of an exploit chain.

        So yeah, perhaps some evidence for what I'm getting at: bug density is low in that project, whereas it's high enough in the others. I'll be way, way more interested in that one.

        > But then, this thing is just... I don't have a word for this. Just randomly read paragraphs from the post and it's like, what?

        I read about 30% and got bored. I suppose I should have been clearer, but my impression was pretty quickly "cool" and "not worth reading today".

        • rfoo 23 minutes ago
          > I read about 30% and got bored.

          I was lucky then :) Somehow I saw this first. And then the "somewhat reliably writing exploits for SpiderMonkey" part, and then the crypto libraries part. Finally I wondered why there was a Linux LPE mini write-up and realized it's the "automatically turn a syzkaller report to a working exploit" part.

          Now that I've read the first few things (meh bugs in OpenBSD, FFmpeg, FreeBSD, etc.), they are indeed all pretty boring!

          • staticassertion 18 minutes ago
            If people want exploitable syzkaller reports, following spender is free!
  • AntiDyatlov 1 hour ago
    A very good outcome for AI safety would be if, when improved models get released, malicious actors use them to break society in very visible ways. Looks like we're getting close to that world.
  • awestroke 59 minutes ago
    This is becoming a bit scary. I almost hope we'll reach some kind of plateau in LLM intelligence soon.
    • esafak 1 minute ago
      We need to promote alignment and other ethics benchmarks. I don't even know any off the top of my head.
    • websap 45 minutes ago
      If we don't innovate, someone else will. This is the very nature of being a human being. We summit mountains, regardless of the danger or challenge.
      • vonneumannstan 38 minutes ago
        >If we don't innovate, someone else will.

        Terrible take. You don't get to push the extinction button just because you think China will beat you to the punch.

        >This is the very nature of being a human being. We summit mountains, regardless of the danger or challenge.

        No, just no... We barely survived the Cold War, at times out of pure luck. AI is at least as dangerous as that, if not more so. Our capabilities have far exceeded our wisdom. As you have so cleanly demonstrated.

  • hackerman70000 57 minutes ago
    [dead]