• 0 Posts
  • 5 Comments
Joined 3 months ago
Cake day: July 2nd, 2025

  • They do handle simultaneous workloads, but each workload is essentially performing the same function. A person, on the other hand, is a huge variety of differing functions all working in tandem; a set of vastly different operations that are all inextricably linked together to form who we are. No single form of generative algorithm is anything more than a tiny fraction of what we think of as a conscious being. It might perform the one operation it's meant for on a much greater scale than we do, but we are nowhere near linking all those pieces together into a meaningful, cohesive whole.

    Edit: think of the algorithms like adding more cores to a CPU. Sure, you can process workloads simultaneously, but each workload is interchangeable and can be arbitrarily assigned to any core or thread. Not so with people. Every single operation is assigned to a specialized part of the brain that only does that one specific type of operation. And you can't just swap out the RAM or GPU; every piece is wired to interact only with the other pieces it was grown together with.
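    To make the contrast concrete, here's a toy sketch (the function names and numbers are just made-up illustrations, not anything from a real system): a thread pool where identical tasks can go to any worker, versus "specialized" functions whose jobs can't be reassigned to each other.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # CPU-style parallelism: identical, interchangeable tasks that the
    # scheduler can hand to whichever core/thread is free.
    def crunch(n):
        return sum(i * i for i in range(n))

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(crunch, [1000, 2000, 3000, 4000]))

    # Brain-style specialization (toy contrast): each "region" only does
    # its one job, and the wiring between them is fixed.
    def detect_edges(pixels):      # a vision-only operation
        return [abs(a - b) for a, b in zip(pixels, pixels[1:])]

    def pick_loudest(levels):      # a hearing-only operation
        return max(levels)

    # There's no scheduler that can reassign detect_edges's work to
    # pick_loudest the way pool.map reassigns crunch() between threads;
    # the pieces aren't interchangeable.
    edges = detect_edges([3, 7, 2, 9])
    loudest = pick_loudest([0.2, 0.8, 0.5])
    ```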


  • I think it’s a bit overzealous to say LLMs are the wrong approach. It is possible the math behind them would be useless to a true AI, but as far as I can tell, the only definitive statement we can make right now is that they can’t be the whole approach. Still, you’re absolutely right that there is a huge set of operations we haven’t figured out yet if we want a genuine AI.

    My understanding of consciousness is that it isn’t one single sequence of operations, but a set of simultaneous ones. There’s a ton of stuff all happening at once for each of our senses. Take sight, for example. Just to see things, we need to measure light intensity, color definition, and spatial relationships, and then mix that all together in a meaningful way. Then we have to balance each of our senses, decide which one to focus on in the moment, or even focus on several at once. And that hasn’t even touched on thoughts, emotions, social relationships, unconscious bodily functions, or the systems that let us switch things back and forth between conscious and unconscious control, like breathing, or blinking, or walking, and so on. There are hundreds, maybe thousands of operations happening in our brains simultaneously at any given moment.

    So, without a doubt, LLMs aren’t the most energy efficient way to do pattern recognition. But I find it hard to believe that a strong system for pattern recognition would be fully unusable in a greater system. If/when we figure the rest out, I’m sure an LLM could be used as a piece of a much greater puzzle… if we wanted to burn all that energy.



  • On the other hand, Jellyfin’s identify feature works better than Plex’s did for me, and it lets you rename stuff very easily, whereas Plex needed you to find the exact piece of media in a database.

    My mom asked me to rip a set of weirdo bootleg tai chi DVDs years ago, back when I used Plex, but I couldn’t figure out how to get them to show up in the library because, again, it was weirdo bootleg media and I have no idea where she got it. But I switched to Jellyfin last year and on a whim decided to mess with them, and getting them to show up in my Jellyfin library was basically automatic.

    Edit: another fun example of fighting with Plex’s identify feature just came to mind. For some reason it kept deciding that random movies were actually some movie named “A Fish Called Wanda.” I’d never heard of it before, the movies it misidentified were entirely random as far as I could tell, and no amount of fuckery would get it to identify them correctly. It would decide that, say, The Matrix was actually AFCW; I’d remove the files for The Matrix, and it would decide something else was AFCW. Eventually I got fed up and downloaded an actual copy of AFCW, but it still refused to play the correct files when I navigated to AFCW in my library. Never did figure that one out.