aports/testing/llama.cpp/llama-server.confd
Hugo Osvaldo Barrera beb54cadeb testing/llama.cpp: new aport
The standalone ggml library does not have a matching API and cannot be
used to build llama.cpp. It's pointless to package the vendored version
separately, since there's no other project which can rely on it.

convert_hf_to_gguf requires several missing depends, so it is omitted
for now.
2026-03-20 15:02:47 +00:00


# Examples:
#
# Single model with local path:
#command_args="-m /var/lib/llama-server/models/some_model.gguf --host 127.0.0.1 --port 8080"
# Router mode with multiple models:
#command_args="--models-dir /var/lib/llama-server/models --host 127.0.0.1 --port 8080 -np 4"
command_args=""
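OpenRC sources this conf.d file as plain shell before launching the daemon, so `command_args` must be a valid shell assignment; the init script then expands it onto the `llama-server` command line. A minimal sketch of that mechanism (the temporary path and model filename here are illustrative, not part of the package):

```shell
#!/bin/sh
# Write a conf.d fragment like the one above (illustrative path).
cat > /tmp/llama-server.confd <<'EOF'
command_args="-m /var/lib/llama-server/models/some_model.gguf --host 127.0.0.1 --port 8080"
EOF

# OpenRC sources the fragment, making command_args available to the
# init script's start() function.
. /tmp/llama-server.confd

# The init script would effectively run: llama-server $command_args
echo "llama-server $command_args"
```

Because the value is unquoted when expanded, each whitespace-separated token becomes a separate argument to `llama-server`, which is why paths containing spaces should be avoided in `command_args`.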