• DarkThoughts
    3 months ago

    Did you miss the first part, where I explained that I couldn’t get it to run on my GPU? I’d only have a 6650 XT anyway, but even that would be significantly faster than my CPU. How much faster, I can’t say exactly without trying it, but I suspect that with longer chats, and consequently larger context sizes, it would still be too slow to be really usable. Unless you’re okay with waiting ages for a response.
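To illustrate why context size matters so much here: CPU inference has to re-process the whole chat history (the prompt) before generating a reply, and both phases run at a fixed tokens-per-second rate. A rough sketch with hypothetical throughput numbers (the 10 and 4 tokens/s figures below are illustrative, not measured on any particular hardware):

```python
# Hypothetical CPU-only inference speeds (tokens per second).
# Prompt (context) processing and token generation are separate phases
# with different throughputs; both scale linearly with token count.
def response_latency(context_tokens, reply_tokens,
                     prompt_tps=10.0, gen_tps=4.0):
    """Seconds until a full reply arrives."""
    return context_tokens / prompt_tps + reply_tokens / gen_tps

# Short chat: 500-token context, 200-token reply.
short_chat = response_latency(500, 200)    # 50 s + 50 s = 100 s
# Long chat: 4000-token context, same reply length.
long_chat = response_latency(4000, 200)    # 400 s + 50 s = 450 s (~7.5 min)
```

The generation speed stays the same, but the prompt-processing cost grows with every exchange, which is why a chat that starts out tolerable can become unusable as the history fills the context window.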

    • @[email protected]
      3 months ago

      Sorry, I’m just curious how fast these local LLMs are in general. Maybe someone else can give some rough numbers.