From f54b1ce44da8ee34ea5eecf1b5cf67f28bc62968 Mon Sep 17 00:00:00 2001
From: Nathan Sarrazin
Date: Fri, 31 Mar 2023 08:51:54 +0200
Subject: [PATCH] Update README.md

---
 README.md | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 2068f89..7bb38cc 100644
--- a/README.md
+++ b/README.md
@@ -39,6 +39,9 @@ llama will just crash if you don't have enough available memory for your model.
 - 13B requires about 12GB free
 - 30B requires about 20GB free
 
+### Compatible CPUs
+Currently, Serge requires a CPU that supports AVX2 instructions. Try `lscpu | grep avx2` in a shell; if this returns nothing, your CPU is incompatible for now.
+
 ## Support
 
 Feel free to join the discord if you need help with the setup: https://discord.gg/62Hc6FEYQH
@@ -65,9 +68,9 @@ DOCKER_BUILDKIT=1 docker compose up -d --build
 
 - [x] Front-end to interface with the API
 - [x] Pass model parameters when creating a chat
+- [x] Manager for model files
+- [ ] Support for other models
+- [ ] LangChain integration
 - [ ] User profiles & authentication
-- [ ] Different prompt options
-- [ ] LangChain integration with a custom LLM
-- [ ] Support for other llama models, quantization, etc.
 
 And a lot more!
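The AVX2 check the patch adds to the README can be sketched as a small shell snippet. This is only an illustration: it runs the same `grep avx2` test against a hypothetical sample flags string; on a real machine you would pipe `lscpu` (or read `/proc/cpuinfo`) instead.

```shell
# Hypothetical sample of a CPU flags line; replace with `lscpu` output on a real host.
flags="fpu vme sse4_2 avx avx2 fma"

# Mirror the README's `lscpu | grep avx2` check: grep -q exits 0 if the flag is present.
if printf '%s\n' "$flags" | grep -q avx2; then
  echo "AVX2 supported"
else
  echo "AVX2 missing: this CPU cannot run Serge for now"
fi
```

On a machine with an AVX2-capable CPU, the real check (`lscpu | grep avx2`) prints the matching flags line; an empty result means the instruction set is absent.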