About
Aritra Mondal
One-man bug factory
WhichLLM was built because picking the right local model shouldn't require reading dozens of benchmark tables and cross-referencing file sizes against your GPU specs. Tell us your hardware and use case; we do the rest.
Found a bug or have a suggestion? Open an issue on GitHub.