• jjjalljs@ttrpg.network · 1 day ago

    But don’t LLMs not actually do math, and just look at how often tokens show up next to each other? It’s not doing any real prime-number math over there, I don’t think.

    • Agent641@lemmy.world · 1 day ago

      If I fed it a big enough number, it would report back to me that a particular Python math library failed to complete the task, so it must be neural-netting its answer AND crunching the numbers with sympy on its big supercomputer.
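
      (If it really is handing the number off to sympy, the call is presumably something as simple as the sketch below. Purely a guess at the shape of it, not the actual tool code.)

      ```python
      # Hypothetical sketch of the kind of call a code-execution tool might make
      # if it really does pass the number to sympy; not the actual tool code.
      from sympy import isprime, factorint

      n = 2**61 - 1  # stand-in for whatever big number you fed it

      print(isprime(n))    # primality check -> True
      print(factorint(n))  # factorization -> {2305843009213693951: 1}
      ```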

      • jjjalljs@ttrpg.network · 21 hours ago

        Is it running arbitrary Python code server-side? That sounds like a vector for doing bad things. Maybe they constrained it to only run some trusted libraries in specific ways, or something like the sketch below.
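
        (Total guess at how that constraint might work, just to illustrate the idea: check the submitted code against an allowlist of imports, then run it in a separate process with a timeout.)

        ```python
        # Toy illustration of "only run trusted libraries in specific ways".
        # Not how any real provider sandboxes code, just the general shape:
        # parse the submitted code, reject disallowed imports, run with a timeout.
        import ast
        import subprocess
        import sys

        ALLOWED_IMPORTS = {"math", "statistics", "sympy"}  # hypothetical allowlist

        def imports_are_allowed(source: str) -> bool:
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.Import):
                    names = [alias.name.split(".")[0] for alias in node.names]
                elif isinstance(node, ast.ImportFrom):
                    names = [(node.module or "").split(".")[0]]
                else:
                    continue
                if any(name not in ALLOWED_IMPORTS for name in names):
                    return False
            return True

        user_code = "import sympy; print(sympy.isprime(2**61 - 1))"

        if imports_are_allowed(user_code):
            # A real sandbox would also drop privileges, cap memory/CPU,
            # and cut off network access, not just set a timeout.
            result = subprocess.run([sys.executable, "-c", user_code],
                                    capture_output=True, text=True, timeout=10)
            print(result.stdout, end="")
        else:
            print("rejected: untrusted import")
        ```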

    • Zos_Kia@lemmynsfw.com · 1 day ago

      They do math, just in a very weird (and obviously not super reliable) way. There’s a recent paper by Anthropic that explains it; I can track it down if you’re interested.

      Broadly speaking, the weights in a model will form sorts of “circuits” which can perform certain tasks. On something hard like factoring numbers the performance is probably abysmal, but I’d guess the model is still trying to approximate the task somehow, rather than running an exact procedure like the one sketched below.
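
      (For contrast, here’s the kind of exact, step-by-step arithmetic a real factoring routine does, which the learned “circuits” can only approximate statistically. Just a minimal illustration, nothing to do with the Anthropic paper’s examples.)

      ```python
      # Plain trial division: exact factorization, one divisor at a time,
      # with no approximation anywhere.
      def trial_division(n: int) -> list[int]:
          factors = []
          d = 2
          while d * d <= n:
              while n % d == 0:
                  factors.append(d)
                  n //= d
              d += 1
          if n > 1:
              factors.append(n)  # whatever remains is prime
          return factors

      print(trial_division(2 * 3 * 7919))  # [2, 3, 7919]
      ```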