* integrate langchain
get rid of mongodb
use llama-cpp-python bindings
* fixed most chat endpoints except posting questions
* Working post endpoint!
* everything works except streaming
* current state
* streaming as is
* got rid of the langchain wrapper for calling the LLM, went back to using the bindings directly
* working streaming
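The streaming fix boils down to forwarding llama-cpp-python's `stream=True` chunks to the client one token at a time. A minimal sketch of that idea, with a `generate()` stub standing in for the real `Llama(...)` call so it runs without a model file (the stub and event format are assumptions, not the project's actual code):

```python
# Sketch: turning a token generator into a server-sent-events stream,
# roughly how an API can forward llama-cpp-python's stream=True chunks.
from typing import Iterator

def generate() -> Iterator[dict]:
    # Stand-in for: Llama(model_path=...)(prompt, stream=True),
    # which yields one completion chunk per generated token.
    for tok in ["Hello", ",", " world"]:
        yield {"choices": [{"text": tok}]}

def sse_events(chunks: Iterator[dict]) -> Iterator[str]:
    """Format each streamed token as a server-sent event."""
    for chunk in chunks:
        yield f"data: {chunk['choices'][0]['text']}\n\n"
    yield "data: [DONE]\n\n"

events = list(sse_events(generate()))
```

In a FastAPI-style app the `sse_events(...)` generator would be handed to a streaming response instead of being collected into a list.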
* sort chats by time
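The chat-sorting change amounts to ordering sessions newest-first by their creation time; a minimal sketch (field names are illustrative, not taken from the codebase):

```python
# Sketch: ordering chat sessions most-recent-first by timestamp.
from datetime import datetime

chats = [
    {"id": "a", "created": datetime(2023, 3, 1)},
    {"id": "b", "created": datetime(2023, 3, 5)},
    {"id": "c", "created": datetime(2023, 3, 3)},
]

# Newest chat first, so the sidebar shows the latest conversation on top.
chats.sort(key=lambda c: c["created"], reverse=True)
```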
* cleaned up styling and added back loading indicator
* Add persistence support to redis
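Persisting chats in Redis can be as simple as serializing each session to JSON under a per-chat key. A sketch assuming a redis-py-style client (`set`/`get`); the key layout and fields are guesses, and a tiny in-memory stand-in replaces the real client so the example runs without a server:

```python
# Sketch: storing chat sessions in Redis as JSON blobs.
import json
from typing import Optional

def save_chat(client, chat_id: str, chat: dict) -> None:
    # One key per chat; redis-py's set() accepts a string value.
    client.set(f"chat:{chat_id}", json.dumps(chat))

def load_chat(client, chat_id: str) -> Optional[dict]:
    raw = client.get(f"chat:{chat_id}")
    return json.loads(raw) if raw is not None else None

class FakeRedis:
    """In-memory stand-in for redis.Redis, just for this sketch."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

client = FakeRedis()
save_chat(client, "abc", {"prompt": "hi", "answers": []})
restored = load_chat(client, "abc")
```

With a real deployment, `FakeRedis()` would be `redis.Redis(host=..., port=...)` and durability would come from Redis's own RDB/AOF persistence settings.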
* fixed tooltips
* fixed default prompts
* added link to API docs (closes "How to use the api" #155)
* begin work on dev environment
* more work on dev image
* working dev + prod images with SPA front-end
* reworked dockerfile
* make CI point to the right action
* Improvements to github actions (#79)
* Improvements to github actions
* Change username to repo owner username
* Add fix for login into ghcr (#81)
* Update bug_report.yml
* added dev instructions to readme
* reduced number of steps in dockerfile
---------
Co-authored-by: Juan Calderon-Perez <835733+gaby@users.noreply.github.com>
* initial work on linting & templates
* moved everything into a nice dockerfile
* move everything into a single dockerfile
* update sample .env file
* got rid of .env file
* rename db volume to avoid confusion and conflicts with previous version
* added bug report template
- Added nginx; the api & web app are served on the same port now.
- Allowed CSR through SvelteKit, with a hook for redirecting server-side api requests.
- Implemented menu to pass model parameters on start page.
- Added a loading indicator while the model is computing
- Added a convert script (thanks to @eiz) that will catch unconverted .bin files and convert them on startup.
- Switched back to main branch of llama.cpp
- Got rid of code to handle magic.dat
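The startup check behind the convert script can be sketched as peeking at a model file's 4-byte magic to decide whether it still needs converting. The magic values below are the historical GGML format magics from llama.cpp; the self-check files are throwaway fixtures, and the surrounding script (not shown) would call the actual conversion on any file that fails the check:

```python
# Sketch: detect .bin model files that still need converting by
# checking for a known llama.cpp GGML magic in the header.
import struct
import tempfile
from pathlib import Path

# Known llama.cpp magics as little-endian uint32: 'ggml', 'ggmf', 'ggjt'.
GGML_MAGICS = {0x67676D6C, 0x67676D66, 0x67676A74}

def needs_conversion(path: Path) -> bool:
    with path.open("rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return True  # too short to even hold a magic
    (magic,) = struct.unpack("<I", header)
    return magic not in GGML_MAGICS

# Quick self-check with throwaway files.
with tempfile.TemporaryDirectory() as d:
    good = Path(d) / "good.bin"
    good.write_bytes(struct.pack("<I", 0x67676A74) + b"\x00" * 8)
    raw = Path(d) / "raw.bin"
    raw.write_bytes(b"\x00" * 12)
    results = (needs_conversion(good), needs_conversion(raw))
```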