• Rooskie91@discuss.online · 2 days ago

    Seems like a flaccid attempt to shift the blame for the immense amounts of resources ChatGPT consumes from the company onto the end user.

    • Echo Dot@feddit.uk · 24 hours ago

      They’re just making excuses for the fact that no one can work out how to make money with AI, except by selling access to it in the vague hope that somebody else can figure out something useful to do with it and will therefore pay for access.

      I can run an AI locally on expensive but still consumer-level hardware. Electricity isn’t very expensive, so I think their biggest problem is simply their insistence on keeping everything centralised. If they simply sold the models, people could run them locally, and the companies could push the burden of processing costs onto their customers. But they’re still obsessed with this attitude that they need to gather all the data in order to be profitable.

      Personally I hope we either run into AGI pretty soon or give up on this AI thing. In either situation we will finally stop talking about it all the time.

    • vivendi@programming.dev · 2 days ago (edited)

      Inference costs are very, very low. You can run Mistral Small 24B finetunes that are better than GPT-4o and actually quite usable on your own local machine.

      As for training costs, Meta’s Llama team offsets their emissions with environmental programs, which is greener than 99.9% of the companies making any product you use

      TLDR: don’t use ClosedAI, use Mistral or other FOSS projects

      EDIT: I recommend cognitivecomputations’ Dolphin 3.0 Mistral Small R1 fine-tune in particular. I’ve only used it for mathematical workloads in truth, but it has been exceedingly good at my tasks thus far. The training set and the model are both FOSS and uncensored. You’ll need a custom system prompt to activate the Chain of Thought reasoning, and you’ll need a comparatively low temperature to keep the model from creating logic loops for itself (the 0.1–0.4 range should be OK)
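
      To make the last point concrete, here is a minimal sketch of the settings described above, shaped as an OpenAI-style chat request of the kind local servers such as llama.cpp’s llama-server or Ollama accept. The model name and the system-prompt wording are placeholders, not the fine-tune’s official ones; adjust both to whatever your local setup actually uses.

      ```python
      import json

      def build_chat_request(user_message: str) -> dict:
          """Build an OpenAI-style chat payload with a custom system prompt
          to trigger chain-of-thought reasoning and a low temperature
          (0.1-0.4) to keep the model from looping on its own logic."""
          return {
              # Placeholder name; use the model identifier your server exposes.
              "model": "dolphin-3.0-mistral-small-r1",
              "messages": [
                  # Hypothetical CoT-activating prompt; check the model card
                  # for the wording the fine-tune was actually trained with.
                  {"role": "system",
                   "content": "Think step by step inside <think></think> tags "
                              "before giving your final answer."},
                  {"role": "user", "content": user_message},
              ],
              # Low temperature per the advice above.
              "temperature": 0.2,
          }

      payload = build_chat_request("What is 17 * 23?")
      print(json.dumps(payload, indent=2))
      ```

      If your local server exposes an OpenAI-compatible endpoint, you would POST this payload to something like `http://localhost:8080/v1/chat/completions`; the exact port and path depend on how you launched the server.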