• @[email protected]
        link
        fedilink
        English
        153 months ago

        I’ve got it running with a 3090 and 32GB of RAM.

        There are some models that let you run with hybrid system RAM and VRAM (it will just be slower than running it exclusively with VRAM).
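
        For anyone curious what that looks like in practice, here’s a minimal sketch using llama-cpp-python’s layer offloading (the model file and layer count are placeholders; adjust them to whatever fits your VRAM):

        ```python
        # Minimal sketch of hybrid RAM/VRAM inference with llama-cpp-python.
        # Assumes a GPU-enabled build (CUDA or ROCm); the model path and the
        # number of offloaded layers below are placeholders, not recommendations.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # quantized GGUF model on disk
            n_gpu_layers=24,  # layers pushed into VRAM; the rest stay in system RAM
            n_ctx=4096,       # context window
        )

        out = llm("Explain hybrid CPU/GPU offloading in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])
        ```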

        • Deceptichum · 16 points · 3 months ago

          Yeah but damn does it get slow.

          I always find it interesting how text is so much slower than image generation. I can do a 1024x1024 in probably 20s, but I get like 1 word a second with text.

            • ferret · 5 points · 3 months ago

            Languages are complex and, more importantly, much less forgiving of errors.

      • DarkThoughts · 1 point · 3 months ago

        Hopefully we’ll see more purpose-built hardware for this, like expansion cards with pretty much just tensor cores and their own RAM.

        • Deceptichum · 1 point · 3 months ago

          I’d love to see some consumer-level AI hardware. Sadly, it all seems to be designed for server farms, and by the time it ages out into consumer prices it’s so obsolete there’s no point in getting it.

    • mesamune · 10 points · 3 months ago

      Nice! That’s a cool project; I’ll have to give it a try. I love the idea of self-hosting local LLMs. I’ve been playing around with https://lmstudio.ai/, which downloads models directly from Hugging Face.
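
      Once a model is downloaded, LM Studio can also expose a local OpenAI-compatible server, so you can script against it. A rough sketch (the port, API key, and model name below are placeholders that depend on your local setup):

      ```python
      # Rough sketch of querying LM Studio's local OpenAI-compatible server.
      # Assumes the "Local Server" feature is running; the port, api_key, and
      # model name are placeholders for whatever your local instance uses.
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

      resp = client.chat.completions.create(
          model="local-model",  # LM Studio serves whichever model you have loaded
          messages=[{"role": "user", "content": "Why are locally hosted LLMs appealing?"}],
      )
      print(resp.choices[0].message.content)
      ```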

    • DarkThoughts · 1 point · 3 months ago

      I tried llamafile for text gen too, but I couldn’t get ROCm to work properly with it to run it through my GPU without building it myself, which I’m really not into. And CPU text gen is waaaaaay too slow for anything: a Mixtral response was something like ~250 seconds for ~1k context tokens, and I think Mistral was about 52 seconds or somewhere around that.

      https://github.com/Mozilla-Ocho/llamafile

      Mixtral is definitely beefy; Mistral is quite a bit faster, and there are a few even smaller prebuilt ones. But the smaller you go, the less complex the responses will be. I think llamafile is a step in the right direction, but it’s still not a good out-of-the-box experience yet. At least I got farther with it than with oobabooga (the recommendation for SillyTavern), which would just crash whenever it generated anything without even giving me an error.
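
      For anyone who wants to try anyway, the GPU invocation looks roughly like this (a sketch only: the file name is a placeholder, the flags are the llama.cpp-style options llamafile exposes, and an AMD card still needs a working ROCm setup, which is exactly the part that failed for me):

      ```python
      # Sketch: launching a prebuilt llamafile with GPU offload from Python.
      # The .llamafile path is a placeholder; --gpu/-ngl are llama.cpp-style
      # flags, and AMD offload still depends on a working ROCm install.
      import subprocess

      subprocess.run([
          "./mistral-7b-instruct.llamafile",  # placeholder path to a prebuilt llamafile
          "--gpu", "amd",        # request ROCm/HIP offload
          "-ngl", "9999",        # offload as many layers as possible to the GPU
          "-p", "Hello from the GPU, hopefully.",
          "-n", "64",            # number of tokens to generate
      ])
      ```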

        • DarkThoughts · 0 points · 3 months ago

          Did you miss the first part, where I explained that I couldn’t get it to run through my GPU? I’d only have a 6650 XT anyway, but even that would be significantly faster than my CPU. How much faster, I can’t say without trying it, but I suspect that with longer chats, and consequently larger context sizes, it would still be too slow to be really usable. Unless you’re okay with waiting ages for a response.

          • @[email protected]
            link
            fedilink
            English
            13 months ago

            Sorry, I’m just curious in general how fast these local LLMs are. Maybe someone else can give some rough info.

  • @[email protected]
    link
    fedilink
    English
    283 months ago

    Can we have smaller, more domain-specific models that don’t require more than casual hardware? Like a small model for coding, one for medicine, one for history, and so on.

    • @[email protected]
      link
      fedilink
      English
      143 months ago

      Check out Hugging Face! Honestly, fine-tuned models for specific domains seem very popular (if for nothing else, because training smaller models is just easier!).
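
      As a minimal sketch of what that looks like, here’s how you’d pull a small domain-specific model from the Hub with transformers (the model name is just one example of the kind of thing that’s available; check the Hub for current options and sizes):

      ```python
      # Minimal sketch: loading a small, domain-specific model from the
      # Hugging Face Hub. The model name is only an example; browse the Hub
      # for current fine-tuned options and their sizes.
      from transformers import pipeline

      # ~350M-parameter code model, small enough for fairly casual hardware.
      codegen = pipeline("text-generation", model="Salesforce/codegen-350M-mono")
      print(codegen("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
      ```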

      • DarkThoughts · 1 point · 3 months ago

        Unfortunately the roleplaying-chatbot-type models are typically fairly sizeable and demanding. I’m curious how this will develop with more specific AI hardware though, like expansion cards with mostly just tensor cores plus their own RAM, so that you don’t have to use your GPU for it. If we can drive down the price of such hardware, locally run models could become much more viable and mainstream.

            • @[email protected]
              link
              fedilink
              English
              73 months ago

              But you’d have a use for those: for the very software you’re using daily, or for developments in medicine.

              I play D&D from time to time, but saying that roleplaying is more important than medicine is just nuts.

              • @[email protected]
                link
                fedilink
                English
                33 months ago

                I’m not trying to be mean; I just find the thought of people talking to robots a bit strange, and I only use them as tools. I’m not sure what “roleplay” means here; if it’s some “fantasy D&D generator”, you could still argue that’s better done by humans, to keep that grey matter running.

              • DarkThoughts · 2 points · 3 months ago

                Not so much the latter; I’m pretty specifically talking about my personal use case here, lol. “Roleplaying” in this scenario isn’t really referring to actual tabletop-style RPGs, btw. It’s the LLM playing specific characters or personas that you then chat with in specific (or not so specific) scenarios. The same tech is also being experimented with for video game NPCs. But who knows, a specifically trained model could potentially make a half-decent dungeon master too.

                • @[email protected]
                  link
                  fedilink
                  English
                  03 months ago

                  There’s also a huge amount of training, medical and otherwise, that’s done through role-playing. I could definitely see medical students getting use out of learning telemedicine with LLMs that were ultimately adapted from TTRPG character-generator schemas.

    • melroy · 3 points · 3 months ago

      I cannot function with T-Mobile internet, that’s for sure. I’m moving to another ISP.

  • @[email protected]
    link
    fedilink
    English
    13 months ago

    This is a big part of why I’m not worried about this wave of AI.

    It was all trained on consumer hardware. Lots of it, yes, at great expense… but brute force keeps ceding ground to smaller models built on that experience. Google went from a monolithic Go bot trained on historical games, to a much smaller Go bot trained by playing that bot and itself, to an even smaller bot that plays a wide variety of games. It’s just matrix math and we know we’re doing it badly. The endgame is running Not Hotdog on a Game Boy Camera.

    On the other side, the fact you can run these on anything means we’re never going to stop it. This fight is over. Fantasies about Bing and OpenAI preventing anyone from rendering Bad Things™ only push people toward local models. Higher adoption creates a virtuous circle of streamlining and empowerment for anyone getting into the technology. And since porn was the first thing all these billion-dollar companies tried stopping, well, guess what any rando with a high-end GPU can crank out.

    … phrasing.