• Terrasque@infosec.pub · 3 days ago

    This is a very simple one, but someone lower down apparently had an issue with a script like this:

    https://i.imgur.com/wD9XXYt.png

    I tested the code, and it works. If I were going to change anything, I'd probably move the matplotlib import to after the else, so it's only imported when needed to display the image.
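    That deferred-import change might look like the following minimal sketch. The function name and data are hypothetical stand-ins for the script in the screenshot; only the import placement is the point:

    ```python
    def process(values, display=False):
        """Hypothetical stand-in for the generated script's main logic."""
        total = sum(values)
        if display:
            # Deferred import: matplotlib is only loaded on the display path,
            # so runs that skip plotting never pay the import cost (or require
            # matplotlib to be installed at all).
            import matplotlib.pyplot as plt
            plt.bar(range(len(values)), values)
            plt.show()
        return total

    print(process([1, 2, 3]))  # non-display path: matplotlib is never imported
    ```

    Python caches imports in sys.modules, so even if the display branch runs repeatedly, the import cost is paid once.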

    I have much more complex generations in my history, but all of them contain personal or business details and involve much more back and forth. Try it yourself, though; Claude has a free tier. Just be clear in the prompt about what you want. It might surprise you.

    • Telorand@reddthat.com · 3 days ago

      I appreciate the effort you put into the comment and your kind tone, but I’m not really interested in increasing LLM presence in my life.

      I said what I said, and I experienced what I experienced. Providing an example where it works in no way falsifies the core of my original comment: LLMs have no place generating code for secure applications without human review, because they have no mechanism to comprehend or proofread their own work.

      • FlorianSimon@sh.itjust.works · 2 days ago

        I’d also add that, depending on the language, the ways you can shoot yourself in the foot can be very subtle (cf. C and C++, which are popular languages for “secure” software).

        It’s already hard not to write buggy code, and I don’t think you’ll catch the bugs just by reviewing LLM output, because spotting issues during code review is much harder than spotting them while writing the code yourself.

        Oh, and I assume it’ll be tough to get an LLM to follow MISRA conventions.
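        To illustrate how subtle those footguns can be, here's a sketch in C of a classic overflow-prone bounds check. The check is hypothetical (not from any real codebase), but the pattern is exactly the kind of thing that reads as correct in review:

        ```c
        #include <stdio.h>
        #include <stddef.h>

        /* Looks fine in review: "is the copy within the buffer?"  But if
         * `len` is attacker-controlled and huge, `offset + len` wraps around
         * (unsigned arithmetic is modular), the sum comes out small, the
         * check passes, and the subsequent copy overflows the buffer. */
        int safe_to_copy(size_t offset, size_t len, size_t bufsize) {
            return offset + len <= bufsize;        /* wraps on overflow */
        }

        /* Rearranged so no addition can wrap. */
        int safe_to_copy_fixed(size_t offset, size_t len, size_t bufsize) {
            return len <= bufsize && offset <= bufsize - len;
        }

        int main(void) {
            size_t bufsize = 64;
            size_t offset = 16;
            size_t len = (size_t)-1 - 8;           /* offset + len wraps to 7 */
            printf("buggy check allows copy: %d\n", safe_to_copy(offset, len, bufsize));
            printf("fixed check allows copy: %d\n", safe_to_copy_fixed(offset, len, bufsize));
            return 0;
        }
        ```

        The two versions differ only in how the comparison is arranged, which is precisely why this class of bug survives casual review.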

        • Telorand@reddthat.com · 2 days ago

          It’s already hard not to write buggy code, and I don’t think you’ll catch the bugs just by reviewing LLM output, because spotting issues during code review is much harder than spotting them while writing the code yourself.

          Definitely. That’s what I was trying to drive at, but you said it well.