MrLLM@ani.social to Anime@ani.social • "Vampire-chan Can't Suck Properly" Anime Adaptation Announced with New Teaser Visual
FBI here, as long as OP is a teenager, I see no problem.
What would happen if two ninjas came across each other?
Uhh, oh, fair enough (゚∀゚)
Yeah, I’ve successfully run the cut-down version of deepseek-r1 through Ollama. The model itself is the 7b one (I’m VRAM-limited to 8GB). I ran it on an M1 Mac Mini; performance-wise it’s fast, and the quality of the generated content is okay.
Depending on your hardware and OS, you may or may not be able to run an LLM locally at reasonable speed. You might want to check the GPU support for Ollama. You don’t need a GPU, since it can run on the CPU, but it’ll certainly be slower.
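For anyone wanting to try the setup described above, here is a minimal sketch of the Ollama commands involved. The model tag `deepseek-r1:7b` is an assumption based on Ollama's usual naming for the 7B distill; check `ollama list`/the model library for the exact tag on your install. The script guards on Ollama being installed so it fails gracefully.

```shell
# Sketch: pull and chat with the 7B DeepSeek-R1 distill via Ollama.
# Assumes Ollama is installed and the tag deepseek-r1:7b exists upstream.
if command -v ollama >/dev/null 2>&1; then
  # Download the model weights (several GB; needs ~8GB RAM/VRAM to run)
  ollama pull deepseek-r1:7b
  # One-shot prompt; drop the quoted prompt for an interactive session
  ollama run deepseek-r1:7b "Say hello in one short sentence."
else
  echo "Ollama not installed - see ollama.com for install instructions"
fi
```

Without a supported GPU, Ollama falls back to CPU inference automatically; it works, just noticeably slower on larger models.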