Mirror of https://gitlab.alpinelinux.org/alpine/aports.git, synced 2026-03-29 18:32:43 +02:00
The standalone ggml library does not have a matching API and cannot be used to build llama.cpp. It is pointless to package the vendored version separately, since no other project can rely on it. convert_hf_to_gguf requires several missing dependencies, so it is omitted for now.
8 lines · 284 B · Bash
# Examples:
#
# Single model with local path:
#command_args="-m /var/lib/llama-server/models/some_model.gguf --host 127.0.0.1 --port 8080"
#
# Router mode with multiple models:
#command_args="--models-dir /var/lib/llama-server/models --host 127.0.0.1 --port 8080 -np 4"
#
command_args=""
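As a sketch of how an OpenRC init script typically consumes this conf.d value, the snippet below composes the final command line from `command_args`. The `/usr/bin/llama-server` path and the variable names are assumptions for illustration, not taken from the actual init script:

```shell
#!/bin/sh
# Hypothetical sketch: build the service command line the way an OpenRC
# init script would, from the conf.d variable set above.

# Router-mode value from the commented example in the conf.d file:
command_args="--models-dir /var/lib/llama-server/models --host 127.0.0.1 --port 8080 -np 4"

# Assumed install path of the daemon (set in the init script, not conf.d):
command="/usr/bin/llama-server"

# OpenRC's start-stop-daemon effectively runs "$command $command_args":
full_cmd="$command $command_args"
echo "$full_cmd"
```

With `command_args=""` (the shipped default), `full_cmd` collapses to the bare daemon path, so the service starts with no model configured until the administrator edits /etc/conf.d/llama-server.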