Ask HN: Setup for Local LLM Backups?
3 by andy99 | 1 comment on Hacker News.
As a backup in case future access changes, I'd like to have SOTA LLM weights, plus a machine that can run queries against them, held in reserve. OpenAI releasing open weights is as good a time as any to actually do it. My question is what hardware setup you would buy that is reasonably accessible (say under $5k, ideally well under) and can do a good job running these models for local queries. And, if it matters, it should be suitable for archiving, say until either there is a substantial advance that renders today's LLMs obsolete, or until it's needed because good open weights aren't available anymore.
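For the archiving half, here's a minimal sketch of what I have in mind, assuming huggingface_hub is installed; the repo id and the archive path are just placeholder examples, not a recommendation:

    # Rough sketch: pull open weights for offline archiving and record checksums.
    # Assumes `pip install huggingface_hub`; repo id and archive path are examples only.
    import hashlib
    from pathlib import Path

    from huggingface_hub import snapshot_download

    ARCHIVE_DIR = Path("/mnt/archive/llm-weights")  # hypothetical cold-storage mount
    REPO_ID = "openai/gpt-oss-20b"                  # example open-weights release

    def sha256(path: Path, chunk: int = 1 << 20) -> str:
        """Stream a file and return its SHA-256 hex digest."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def archive(repo_id: str) -> None:
        # Download the full repo snapshot (weights, tokenizer, config) to local disk.
        target = ARCHIVE_DIR / repo_id.replace("/", "__")
        snapshot_download(repo_id=repo_id, local_dir=target)

        # Write a checksum manifest so the copy can be verified years from now.
        manifest = target / "SHA256SUMS"
        lines = [
            f"{sha256(p)}  {p.relative_to(target)}"
            for p in sorted(target.rglob("*"))
            if p.is_file() and p.name != "SHA256SUMS"
        ]
        manifest.write_text("\n".join(lines) + "\n")

    if __name__ == "__main__":
        archive(REPO_ID)

Verifying the archive later is then just a matter of re-computing the hashes against SHA256SUMS on whatever machine ends up running the models.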