aports/testing/llama.cpp/llama-server.initd
Hugo Osvaldo Barrera e1346e394a testing/llama.cpp: upgrade to 0.0.8697
Also fix a segfault when fetching models. The bug in httplib was fixed
upstream; switch to using the system httplib instead of the vendored
one.

Fix OpenRC discarding logs for llama-server.
2026-04-08 02:09:57 +00:00


#!/sbin/openrc-run
description="HTTP Server for LLM inference"
command=/usr/bin/llama-server
: ${command_user:=llama-server:llama-server}
output_logger="logger -t llama-server -p daemon.info"
error_logger="logger -t llama-server -p daemon.info"
start_pre() {
	if [ -z "${command_args}" ]; then
		eerror "command_args not specified in /etc/conf.d/llama-server"
		return 1
	fi
}
no_new_privs="yes"
supervisor="supervise-daemon"
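Since start_pre() refuses to start the service while command_args is empty, the matching conf.d file must define it. A minimal sketch of /etc/conf.d/llama-server, assuming a model stored under /var/lib/llama-server (the path and flag values are illustrative, not part of the package):

```shell
# /etc/conf.d/llama-server
# command_args is required: the init script's start_pre() aborts if it is unset.
# --model points at a hypothetical GGUF file; adjust host/port to taste.
command_args="--host 127.0.0.1 --port 8080 --model /var/lib/llama-server/model.gguf"
```

Because the script sets output_logger and error_logger, supervise-daemon pipes the server's stdout and stderr through logger(1), so its output ends up in syslog under the `llama-server` tag instead of being discarded.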