Your infra's Ark — back up everything, keep it yours.
Arkeep is an open-source backup management tool with a server/agent architecture. Deploy the server once, install lightweight agents on every machine you want to back up, and manage everything from a single web interface — built on top of Restic and Rclone.
🚧 Arkeep is in early access — core features are working and ready for testing. Not yet recommended for production use. Star the repository to follow progress.
- Why Arkeep
- Architecture
- Features
- Supported Destinations
- Deployment
- Configuration
- Observability
- Notifications
- Development
- FAQ
- Upgrading
- Roadmap
- Telemetry
- Contributing
- License
Managing backups across multiple machines means juggling separate Restic configs, cron jobs, and shell scripts on every host. There is no central view, no unified alerting, and no easy way to verify everything ran successfully. Arkeep fixes this.
- Centralized management — one dashboard for all your servers, no more managing backup configs machine by machine
- Docker-aware — automatically discovers containers and volumes, adapts when you add or remove services without restarts
- OIDC ready — integrates with Zitadel, Keycloak, Authentik, or any standard identity provider
- Multi-destination — apply the 3-2-1 rule with multiple backup destinations per policy
- End-to-end encryption — all backups are encrypted client-side; credentials are never stored in plain text
- Real-time — live logs and status updates while backups run, accessible from any device
- No vendor lock-in — built on Restic and Rclone; your data is always accessible even without Arkeep
```
┌─────────────────────────────────────────┐
│              Arkeep Server              │
│  ┌──────────┐  ┌──────────────────────┐ │
│  │ REST API │  │     gRPC Server      │ │
│  │  :8080   │  │        :9090         │ │
│  └──────────┘  └──────────────────────┘ │
│  ┌────────────────────────────────────┐ │
│  │ Scheduler│Auth│DB│Notif│WebSocket  │ │
│  └────────────────────────────────────┘ │
└─────────────────────────────────────────┘
        ▲                    ▲
        │ REST/WS            │ gRPC (persistent, pull)
        │                    │
   ┌────┴────┐        ┌──────┴──────┐
   │   GUI   │        │  Agent(s)   │
   │  (PWA)  │        │  (one per   │
   └─────────┘        │   machine)  │
                      └─────────────┘
```
Server exposes a REST API for the GUI (port 8080) and a gRPC server for agents (port 9090). It handles scheduling, notifications, and stores all state in SQLite (default) or PostgreSQL.
Agent runs on each machine to be backed up. It initiates a persistent outbound gRPC connection to the server — it never listens on any port. This makes deployment behind NAT and corporate firewalls effortless.
gRPC transport limits: the server and agent both enforce a maximum message size of 16 MB per RPC call. The agent also sends keepalive pings every 30 seconds (10-second timeout) to detect stale connections early. These values are compiled into the binaries and are not configurable.
GUI is a Vue 3 PWA served directly by the server as embedded static files. No separate web server required.
| Feature | Status |
|---|---|
| Server/agent architecture | ✓ |
| Web GUI (PWA, mobile-first) | ✓ |
| Local auth + OIDC | ✓ |
| Multi-destination (3-2-1) | ✓ |
| Docker volume discovery | ✓ |
| Pre/post hooks (pg_dump, etc.) | ✓ |
| Integrity verification | ✓ |
| Retention policies | ✓ |
| Email + webhook notifications | ✓ |
| Restore & restore test | ✓ |
| Helm chart | ✓ |
| Proxmox / VMware integration | 🗓 planned |
| Bandwidth throttling | 🗓 planned |
| BYOK encryption key management | 🗓 planned |
| Type | Notes |
|---|---|
| Local filesystem | Direct path on the agent's host |
| S3-compatible | AWS S3, MinIO, Backblaze B2, Cloudflare R2, and more |
| SFTP | Any SSH server |
| Restic REST Server | Self-hosted rest-server |
| Rclone | 40+ backends including Google Drive, OneDrive, Azure Blob, and more |
The simplest way to get started. All images are published to both GitHub Packages and Docker Hub.
Server only (GUI included):
```shell
curl -O https://raw.githubusercontent.com/arkeep-io/arkeep/main/deploy/docker/docker-compose.yml
curl -O https://raw.githubusercontent.com/arkeep-io/arkeep/main/deploy/docker/.env.example
cp .env.example .env
# Edit .env — at minimum set ARKEEP_SECRET_KEY and ARKEEP_AGENT_SECRET
docker compose up -d
```

The GUI is available at http://localhost:8080.
gRPC TLS with Docker:
The server generates a private CA and server certificate on first startup (auto-PKI). Agents auto-enroll via HTTP on first run and use mTLS from that point on — no manual cert configuration required.
For the server you can alternatively use an external certificate:
- Reverse proxy (recommended): put Caddy or Nginx in front and let it handle TLS termination for both ports. No cert config inside the containers.
- Direct TLS: mount a certificate and set `ARKEEP_GRPC_TLS_CERT`/`ARKEEP_GRPC_TLS_KEY` on the server container (see the commented lines in `docker-compose.yml` and `.env.example`).
The all-in-one compose file sets ARKEEP_GRPC_INSECURE=true on both server and agent automatically, since they share the same private Docker network and TLS adds no security benefit there.
Agent only (on the machines you want to back up):
```shell
curl -O https://raw.githubusercontent.com/arkeep-io/arkeep/main/deploy/docker/docker-compose.agent.yml
# Set ARKEEP_SERVER_ADDR and ARKEEP_AGENT_SECRET in your environment or .env
docker compose -f docker-compose.agent.yml up -d
```

All-in-one (server + agent on the same host):

```shell
curl -O https://raw.githubusercontent.com/arkeep-io/arkeep/main/deploy/docker/docker-compose.all.yml
docker compose -f docker-compose.all.yml up -d
```

For local filesystem destinations and restore behaviour with Docker, see Local destinations in Docker and Restore in Docker below.
Pre-built binaries for Linux, macOS, and Windows are available on the Releases page.
Server:
```shell
# Download and extract the server binary for your platform
curl -L https://github.com/arkeep-io/arkeep/releases/latest/download/arkeep-server_linux_amd64.tar.gz | tar xz

# Generate secrets
export ARKEEP_SECRET_KEY=$(openssl rand -hex 32)
export ARKEEP_AGENT_SECRET=$(openssl rand -hex 32)

./arkeep-server \
  --db-dsn /var/lib/arkeep/arkeep.db \
  --data-dir /var/lib/arkeep/data \
  --http-addr :8080 \
  --grpc-addr :9090
```

TLS (auto-PKI): by default the server generates a private CA and server certificate under `--data-dir/grpc/` on first startup. Agents auto-enroll via `POST /api/v1/agents/enroll` on first run and use mTLS from then on — no manual cert management required.

To use an external certificate instead (e.g. Let's Encrypt via Caddy), pass `--grpc-tls-cert` and `--grpc-tls-key`. Auto-PKI is then skipped entirely.
Agent:
```shell
curl -L https://github.com/arkeep-io/arkeep/releases/latest/download/arkeep-agent_linux_amd64.tar.gz | tar xz

./arkeep-agent \
  --server-addr your-server:9090 \
  --agent-secret your-agent-secret \
  --state-dir /var/lib/arkeep-agent
```

Auto-enrollment: on first run the agent calls the server's HTTP API (derived from `--server-addr` with port 8080 by default) to obtain its client certificate. The CA cert and client cert are stored in `--state-dir` and reused on every subsequent startup — no re-enrollment unless you delete them.

Reverse proxy (Traefik, Nginx, Caddy): if the server's HTTP API is only reachable via HTTPS (port 8080 is not exposed directly), set `--server-http-addr` to the public HTTPS URL — e.g. `--server-http-addr https://arkeep.example.com`. The agent uses this address only for the one-time enrollment request; afterwards the mTLS certificates in `--state-dir` are reused on every restart.

Use `--grpc-tls-ca` only when connecting to a server that uses an external cert (not auto-PKI) signed by a non-system CA.
Server and agent on the same machine (no reverse proxy, no TLS):
If you are running both binaries on the same host and do not want TLS on the loopback interface, add --grpc-insecure to both. Communication stays on loopback and is never exposed to the network.
```shell
./arkeep-server \
  --db-dsn /var/lib/arkeep/arkeep.db \
  --data-dir /var/lib/arkeep/data \
  --grpc-insecure

./arkeep-agent \
  --server-addr localhost:9090 \
  --agent-secret your-agent-secret \
  --state-dir /var/lib/arkeep-agent \
  --grpc-insecure
```

For any setup where the gRPC port is reachable from other machines, always use TLS (the default).
A systemd unit file is provided at `deploy/systemd/arkeep-agent.service`.
```shell
# Copy the binary
sudo cp arkeep-agent /usr/local/bin/arkeep-agent
sudo chmod +x /usr/local/bin/arkeep-agent

# Copy and edit the unit file
sudo cp deploy/systemd/arkeep-agent.service /etc/systemd/system/
sudo systemctl daemon-reload

# Create an environment file with your credentials
sudo mkdir -p /etc/arkeep
sudo tee /etc/arkeep/agent.env > /dev/null <<EOF
ARKEEP_SERVER_ADDR=your-server:9090
ARKEEP_AGENT_SECRET=your-agent-secret
# For self-signed server certs only — leave empty for Let's Encrypt/trusted CAs:
# ARKEEP_GRPC_TLS_CA=/etc/arkeep/ca.crt
EOF
sudo chmod 600 /etc/arkeep/agent.env

sudo systemctl enable --now arkeep-agent
sudo journalctl -u arkeep-agent -f
```

All options can be set via CLI flags or environment variables. CLI flags take precedence over environment variables when both are provided.
| Flag | Env | Default | Description |
|---|---|---|---|
| `--http-addr` | `ARKEEP_HTTP_ADDR` | `:8080` | HTTP API and GUI listen address |
| `--grpc-addr` | `ARKEEP_GRPC_ADDR` | `:9090` | gRPC listen address for agents |
| `--grpc-tls-cert` | `ARKEEP_GRPC_TLS_CERT` | — | Path to PEM certificate for gRPC TLS (requires `--grpc-tls-key`) |
| `--grpc-tls-key` | `ARKEEP_GRPC_TLS_KEY` | — | Path to PEM private key for gRPC TLS (requires `--grpc-tls-cert`) |
| `--db-driver` | `ARKEEP_DB_DRIVER` | `sqlite` | Database driver (`sqlite` or `postgres`) |
| `--db-dsn` | `ARKEEP_DB_DSN` | `./arkeep.db` | SQLite file path or PostgreSQL DSN |
| `--secret-key` | `ARKEEP_SECRET_KEY` | — | Required. Master key for AES-256-GCM credential encryption |
| `--agent-secret` | `ARKEEP_AGENT_SECRET` | — | Shared secret for gRPC agent authentication |
| `--data-dir` | `ARKEEP_DATA_DIR` | `./data` | Directory for RSA JWT keys and server state |
| `--log-level` | `ARKEEP_LOG_LEVEL` | `info` | Log level (`debug`, `info`, `warn`, `error`) |
| `--secure-cookies` | `ARKEEP_SECURE_COOKIES` | `false` | Set `Secure` flag on auth cookies (enable in production over HTTPS) |
| `--telemetry` | `ARKEEP_TELEMETRY` | `true` | Send anonymous usage stats (opt-out) |
| `--grpc-insecure` | `ARKEEP_GRPC_INSECURE` | `false` | Disable TLS for gRPC transport — development and same-machine deployments only |
Generating secrets:
```shell
# Secret key (AES-256, must be kept stable — changing it invalidates all stored credentials)
openssl rand -hex 32

# Agent secret (any random string)
openssl rand -hex 24
```

PostgreSQL DSN example:

```
postgres://arkeep:password@localhost:5432/arkeep?sslmode=require
```
| Flag | Env | Default | Description |
|---|---|---|---|
| `--server-addr` | `ARKEEP_SERVER_ADDR` | `localhost:9090` | Server gRPC address (host:port) |
| `--agent-secret` | `ARKEEP_AGENT_SECRET` | — | Shared secret (must match server) |
| `--state-dir` | `ARKEEP_STATE_DIR` | `~/.arkeep` | Directory for agent state and extracted binaries |
| `--docker-socket` | `ARKEEP_DOCKER_SOCKET` | (platform default) | Docker socket path |
| `--log-level` | `ARKEEP_LOG_LEVEL` | `info` | Log level |
| `--server-http-addr` | `ARKEEP_SERVER_HTTP_ADDR` | (derived from `--server-addr`) | Base URL of the server HTTP API used for enrollment. Required when the server is behind a TLS-terminating reverse proxy (e.g. `https://arkeep.example.com`). Default: `--server-addr` host with port 8080 over plain HTTP. |
| `--grpc-tls-ca` | `ARKEEP_GRPC_TLS_CA` | — | Path to CA certificate for gRPC TLS (only needed when the server uses an external, non-system-trusted cert) |
| `--grpc-insecure` | `ARKEEP_GRPC_INSECURE` | `false` | Disable TLS for gRPC transport — development and same-machine deployments only |
| `--docker-host-root` | `ARKEEP_DOCKER_HOST_ROOT` | `/hostfs` (auto-detected inside Docker) | Container path where the host filesystem is mounted. Auto-defaults to `/hostfs` inside Docker — no configuration required. Set only when using a custom mount point. See Local destinations in Docker. |
When the agent runs in Docker, it can only write to paths that are mounted into the container. Mount the entire host filesystem once at /hostfs — the agent detects it is inside Docker and enables path translation automatically. No environment variable required.
Linux — add to the agent service volumes:

```yaml
volumes:
  - /:/hostfs:ro   # :ro for backup-only; change to :rw to also restore to local paths
```

Windows (Docker Desktop) — one entry per drive letter:

```yaml
volumes:
  - C:/:/hostfs/c:ro
  - D:/:/hostfs/d:ro   # add more drives as needed
```

With this in place you can type any native host path directly in the Arkeep UI as a backup destination (e.g. `C:\Users\Filippo\Downloads` or `/home/user/backups`) and the agent will translate it automatically. No per-directory bind-mounts required.
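The path translation described above can be pictured with a small sketch. This is illustrative only — the function name and exact rules are ours, not Arkeep's implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// toHostfs maps a native host path to its location under the /hostfs mount.
// Hypothetical helper mirroring the behaviour described above.
func toHostfs(p string) string {
	// Windows drive-letter path: C:\Users\X -> /hostfs/c/Users/X
	if m := regexp.MustCompile(`^([A-Za-z]):[\\/]`).FindStringSubmatch(p); m != nil {
		rest := strings.ReplaceAll(p[3:], `\`, "/")
		return "/hostfs/" + strings.ToLower(m[1]) + "/" + rest
	}
	// Linux absolute path: /home/user -> /hostfs/home/user
	return "/hostfs" + p
}

func main() {
	fmt.Println(toHostfs(`C:\Users\Filippo\Downloads`)) // /hostfs/c/Users/Filippo/Downloads
	fmt.Println(toHostfs("/home/user/backups"))         // /hostfs/home/user/backups
}
```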
Advanced: if you mount the host filesystem at a path other than `/hostfs`, set `ARKEEP_DOCKER_HOST_ROOT` to your custom mount point. For binary, systemd, and Helm deployments leave it unset — paths are used as-is.
Restore to a custom path works out of the box — enter any host path in the UI (e.g. C:\Users\Filippo\Downloads\restore) and the agent writes the files there via the hostfs mount.
In-place restore (original location) behaviour depends on the /var/lib/docker/volumes mount mode:
| Mount mode | Local filesystem paths | Docker volume paths |
|---|---|---|
| `:ro` (default) | Restored normally | Skipped — agent logs a warning. Change to `:rw` to enable. |
| `:rw` | Restored normally | Restored if the container is stopped; skipped with a warning if running. |
To restore Docker volumes in-place:
- Change `:ro` to `:rw` for the `/var/lib/docker/volumes` mount in `docker-compose.yml`
- Stop the containers whose volumes you want to restore
- Run the in-place restore from the UI
- Restart the containers
Volumes of containers that are still running when the restore starts are automatically skipped with a log entry that names the container. The rest of the restore (local paths + stopped-container volumes) completes normally.
The server exposes two health endpoints, both unauthenticated on the HTTP port (default :8080):
| Endpoint | Purpose | Response |
|---|---|---|
| `GET /health/live` | Liveness — is the process alive? | `200 ok` (plain text, always) |
| `GET /health/ready` | Readiness — can the server serve traffic? | `200` / `503` + JSON |
The /health/ready response checks the database and scheduler:
```json
{
  "status": "healthy",
  "checks": {
    "database": { "status": "ok", "latency_ms": 2 },
    "scheduler": { "status": "ok" }
  }
}
```

Returns `503` with `"status": "unhealthy"` if any check fails (e.g. database unreachable). Docker Compose and Kubernetes probes use `/health/ready` so containers are automatically restarted or removed from rotation when the database is down.
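If you probe `/health/ready` from your own tooling, only the top-level `status` field matters. A minimal parsing sketch (the helper name is ours; fetch the body with `net/http` in real use):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isReady reports whether a /health/ready response body indicates a healthy server.
func isReady(body []byte) bool {
	var r struct {
		Status string `json:"status"`
	}
	if err := json.Unmarshal(body, &r); err != nil {
		return false
	}
	return r.Status == "healthy"
}

func main() {
	// Sample body matching the documented shape above
	body := []byte(`{"status":"healthy","checks":{"database":{"status":"ok","latency_ms":2},"scheduler":{"status":"ok"}}}`)
	fmt.Println(isReady(body)) // true
}
```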
The server exposes a Prometheus-compatible metrics endpoint at GET /metrics (same port as the HTTP API, default :8080).
| Metric | Type | Labels | Description |
|---|---|---|---|
| `arkeep_jobs_total` | Counter | `status`, `job_type` | Jobs that reached a terminal state (succeeded, failed, cancelled) |
| `arkeep_job_duration_seconds` | Histogram | `job_type` | Job wall-clock duration in seconds |
| `arkeep_agents_connected` | Gauge | — | Agents currently holding an active gRPC connection |
| `arkeep_http_requests_total` | Counter | `method`, `route`, `status_code` | HTTP requests handled by the server |
| `arkeep_http_request_duration_seconds` | Histogram | `method`, `route` | HTTP request latency in seconds |
| `go_*`, `process_*` | various | — | Go runtime and process metrics (goroutines, GC, file descriptors) |
Prometheus scrape config:
```yaml
scrape_configs:
  - job_name: arkeep
    static_configs:
      - targets: ['arkeep-server:8080']
    scrape_interval: 15s
```

Security: `/metrics` is unauthenticated (like `/health/live` and `/health/ready`). In production, restrict access at the reverse-proxy level so only your Prometheus scraper can reach it:
```nginx
location /metrics {
    allow 10.0.0.0/8;  # your internal network / Prometheus scraper
    deny all;
}
```

See SECURITY.md for details.
Navigate to Settings → Notifications in the Arkeep web UI to configure:
- SMTP — email notifications sent to admin addresses (or a custom recipient list)
- Webhook — HTTP POST to any URL on every backup event
Both channels support automatic retry with exponential backoff (up to 3 attempts: immediately, +5 min, +30 min). Failed and exhausted deliveries are visible under GET /api/v1/admin/notifications/queue.
Every event sends a POST request with a JSON body and `Content-Type: application/json`. The `text` field is named for Slack/Discord compatibility — it carries the same human-readable message as `title`, expanded with full context.
job_success example:
```json
{
  "type": "job_success",
  "title": "Backup completed: my-server",
  "text": "Policy \"my-server\" completed successfully at 2026-01-15T10:30:00Z.",
  "payload": {
    "job_id": "550e8400-e29b-41d4-a716-446655440000",
    "policy_id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
    "policy_name": "my-server"
  },
  "timestamp": "2026-01-15T10:30:00Z"
}
```

job_failure example:
```json
{
  "type": "job_failure",
  "title": "Backup failed: my-server",
  "text": "Policy \"my-server\" failed at 2026-01-15T10:30:00Z: exit status 1",
  "payload": {
    "job_id": "550e8400-e29b-41d4-a716-446655440000",
    "policy_id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
    "policy_name": "my-server",
    "error": "exit status 1"
  },
  "timestamp": "2026-01-15T10:30:00Z"
}
```

agent_offline example:
```json
{
  "type": "agent_offline",
  "title": "Agent offline: prod-node-1",
  "text": "Agent \"prod-node-1\" stopped responding at 2026-01-15T10:30:00Z.",
  "payload": {
    "agent_id": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
    "agent_name": "prod-node-1"
  },
  "timestamp": "2026-01-15T10:30:00Z"
}
```

| `type` | Trigger | Extra payload fields |
|---|---|---|
| `job_success` | Backup completed successfully | `job_id`, `policy_id`, `policy_name` |
| `job_failure` | Backup failed with an error | `job_id`, `policy_id`, `policy_name`, `error` |
| `agent_offline` | Agent stopped sending heartbeats | `agent_id`, `agent_name` |
When a Webhook secret is set in the Arkeep UI, every request includes an X-Arkeep-Signature header carrying an HMAC-SHA256 signature of the raw request body:
X-Arkeep-Signature: sha256=<lowercase hex>
This follows the same convention as GitHub and Stripe webhooks. Always verify the signature before processing the payload.
Go:
```go
import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"strings"
)

func verifyArkeepSignature(body []byte, secret, header string) bool {
	expected := strings.TrimPrefix(header, "sha256=")
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write(body)
	actual := hex.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(expected), []byte(actual))
}
```

Python:
```python
import hmac, hashlib

def verify_arkeep_signature(body: bytes, secret: str, header: str) -> bool:
    expected = header.removeprefix("sha256=")
    actual = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, actual)
```

Node.js:
```javascript
const crypto = require('crypto')

function verifyArkeepSignature(body, secret, header) {
  const expected = header.replace(/^sha256=/, '')
  const actual = crypto.createHmac('sha256', secret).update(body).digest('hex')
  // timingSafeEqual throws if the buffers differ in length — reject early instead
  if (expected.length !== actual.length) return false
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(actual))
}
```

- Create an incoming webhook in your Slack workspace.
- Paste the URL into Settings → Notifications → Webhook URL.
- Done — Arkeep's `text` field maps directly to Slack's message body. No additional configuration required.
- In your Discord server, open Channel Settings → Integrations → Webhooks and create a new webhook.
- Copy the webhook URL and paste it into Arkeep's Webhook URL setting.
- Discord accepts Slack-compatible payloads at the `/slack` URL suffix — append it if you want richer formatting:

  ```
  https://discord.com/api/webhooks/<id>/<token>/slack
  ```

  Without the suffix, Discord renders the `text` field as the message content, which works for plain-text alerts.
- Add a Webhook trigger node (Method: `POST`, Response mode: `Immediately`).
- Paste the generated URL into Arkeep's Webhook URL setting.
- (Optional) Add a Code node after the trigger to verify `X-Arkeep-Signature` (see Signature verification).
- Add a Switch node that routes on `{{ $json.type }}` — `job_failure`, `job_success`, `agent_offline`.
- Connect downstream nodes (Slack, PagerDuty, Telegram, email, etc.) to each branch.
All event fields are available as {{ $json.payload.policy_name }}, {{ $json.payload.error }}, and so on.
- Create a new Zap and add a Webhooks by Zapier → Catch Hook trigger.
- Paste the generated hook URL into Arkeep's Webhook URL setting.
- Send a test backup event from the Arkeep UI to populate Zapier's sample data.
- Use `type`, `title`, `text`, `payload__policy_name`, `payload__error`, and `timestamp` fields in your Zap actions.
- Add a Filter step to route on `type` (e.g. only trigger downstream actions for `job_failure`).
| Tool | Version | Install |
|---|---|---|
| Go | 1.26+ | go.dev |
| Node.js | 22+ | nodejs.org |
| pnpm | 9+ | corepack enable |
| Docker | any | docker.com |
| Task | latest | go install github.com/go-task/task/v3/cmd/task@latest |
| protoc | latest | apt install protobuf-compiler / brew install protobuf |
| protoc-gen-go | latest | go install google.golang.org/protobuf/cmd/protoc-gen-go@latest |
| protoc-gen-go-grpc | latest | go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest |
```shell
git clone https://github.com/arkeep-io/arkeep
cd arkeep

# Download restic and rclone binaries (embedded into the agent at build time)
task deps:download

# Generate gRPC code from .proto definitions
task proto

# Start the server (http://localhost:8080, gRPC :9090)
task run:server

# In a separate terminal — start the GUI dev server with HMR (http://localhost:5173)
task run:gui

# In a separate terminal — start an agent pointing to the local server
task run:agent
```

The GUI dev server proxies API requests to the server, so you can work on frontend and backend simultaneously with hot reload on both sides.
Note: In development, the GUI runs as a separate Vite dev server on port 5173. In production (binary or Docker), the GUI is compiled and embedded directly inside the server binary — no separate process or web server is needed.
First login:
Open http://localhost:8080 in your browser. On first access you will be
redirected to the setup page where you can create the initial admin account.
```
arkeep/
├── agent/                  # Agent binary
│   ├── cmd/agent/          # Entry point
│   └── internal/
│       ├── connection/     # gRPC client, job stream, state persistence
│       ├── executor/       # Job queue and execution orchestration
│       ├── restic/         # Restic/rclone wrapper (binary extraction, backup, forget, check)
│       ├── docker/         # Docker volume discovery
│       ├── hooks/          # Pre/post backup hook runner
│       └── metrics/        # Host metrics (CPU, RAM, disk) via gopsutil
├── server/                 # Server binary
│   ├── cmd/server/         # Entry point
│   └── internal/
│       ├── api/            # Chi router, HTTP handlers, middleware
│       ├── auth/           # JWT (RS256), local auth, OIDC
│       ├── db/             # GORM setup, migrations, EncryptedString type
│       ├── repositories/   # Data access layer (explicit queries, no GORM Preload)
│       ├── grpc/           # gRPC server — receives agent streams, dispatches jobs
│       ├── agentmanager/   # In-memory registry of connected agents
│       ├── scheduler/      # gocron-based backup scheduler
│       ├── notification/   # Email (SMTP) and webhook notification senders
│       ├── metrics/        # Prometheus metric collectors and HTTP middleware
│       └── websocket/      # WebSocket hub for real-time GUI updates
├── shared/                 # Code shared between server and agent
│   ├── proto/              # Protobuf definitions and generated Go code
│   └── types/              # Shared type definitions
├── gui/                    # Vue 3 PWA frontend
│   └── src/
│       ├── components/     # Reusable UI components (shadcn-vue based)
│       ├── pages/          # Route-level page components
│       ├── stores/         # Pinia state stores
│       ├── services/       # API client, WebSocket client
│       ├── composables/    # Vue composables
│       ├── router/         # Vue Router configuration
│       └── types/          # TypeScript interfaces
├── deploy/
│   ├── docker/             # Docker Compose files
│   ├── systemd/            # systemd unit file for the agent
│   └── helm/               # Helm chart
├── go.work                 # Go workspace (agent + server + shared)
└── Taskfile.yml            # Task runner
```
```shell
task build           # Build all binaries (server + agent, GUI included)
task build:server    # Build server binary (builds GUI first, then embeds it)
task build:agent     # Build agent binary (downloads restic + rclone first)
task build:gui       # Build the Vue GUI only (output to gui/dist/)
task test            # Run all tests (Go + GUI)
task lint            # Run linters (golangci-lint + vue-tsc)
task proto           # Regenerate gRPC code from .proto definitions
task tidy            # Tidy all Go modules
task clean           # Remove build artifacts
task run:server      # Run the server in development mode (GUI via task run:gui)
task run:agent       # Run the agent in development mode
task run:gui         # Run the GUI dev server with HMR (proxies API to :8080)
task deps:download   # Download restic and rclone binaries for the current platform
```

Why Restic under the hood?
Restic is battle-tested, content-addressable, and has excellent deduplication. It handles encryption, chunking, and repository integrity natively. Arkeep adds the management layer on top — scheduling, multi-machine coordination, a GUI, notifications — without reinventing the storage engine.
Can I access my backups without Arkeep?
Yes. Since the underlying engine is Restic, you can always use the restic CLI
directly against any repository that Arkeep has created. Your data is never locked in.
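For example, using standard restic commands against a repository Arkeep created (the repository path is illustrative; restic will prompt for the repository password):

```
# Inspect snapshots in a repository Arkeep manages
restic -r /mnt/backups/my-repo snapshots

# Restore the latest snapshot to a scratch directory
restic -r /mnt/backups/my-repo restore latest --target /tmp/restore-check
```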
Does the agent need root privileges?
No. The agent runs as an unprivileged user. The only exception is Docker volume backup:
to access volume mountpoints at /var/lib/docker/volumes/ on Linux, the agent
needs to be in the docker group (or run as root). The Docker Compose deployment
handles this automatically via the socket mount.
Are pre/post backup hooks safe?
Hooks are shell commands executed by the agent process on the backup target machine. This means hooks run with the same privileges as the agent — typically an unprivileged user, but root if you configured the agent that way, or with Docker socket access if you enabled Docker volume backup.
Treat hook configuration with the same level of trust as SSH access to the machine. For this reason, only admin users can set or modify hook commands. Regular users can view policies but cannot change hook fields.
Supported hook patterns:
```shell
# Database dump before backup
pg_dump mydb > /var/backups/mydb.sql

# Stop a container before backup, restart it after
docker stop my-container
docker start my-container

# Custom script
/opt/scripts/pre-backup.sh
```

The following patterns are rejected by the server to prevent credential exfiltration:
- Command substitution: `$(...)` and backticks
- Path traversal: `..`
- Internal environment variable references: `$RESTIC_*`, `$RCLONE_*`, `$ARKEEP_*`
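The checks above can be pictured roughly like this — an illustrative sketch, not Arkeep's actual validator (the function name and exact matching rules are ours):

```go
package main

import (
	"fmt"
	"strings"
)

// rejectHook returns a non-empty reason when a hook command matches one of the
// blocked patterns listed above, or "" when the command is allowed.
func rejectHook(cmd string) string {
	switch {
	case strings.Contains(cmd, "$(") || strings.Contains(cmd, "`"):
		return "command substitution"
	case strings.Contains(cmd, ".."):
		return "path traversal"
	case strings.Contains(cmd, "$RESTIC_"), strings.Contains(cmd, "$RCLONE_"), strings.Contains(cmd, "$ARKEEP_"):
		return "internal environment variable reference"
	}
	return ""
}

func main() {
	fmt.Println(rejectHook(`sh $(cat /etc/passwd)`))                // command substitution
	fmt.Println(rejectHook("pg_dump mydb > /var/backups/mydb.sql")) // "" (allowed)
}
```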
Why does the agent connect to the server, not the other way around?
Pull architecture means agents work behind NAT, firewalls, and dynamic IPs without any port-forwarding or VPN. The server never needs to reach out to agents — agents maintain a persistent gRPC stream and receive jobs through it.
SQLite or PostgreSQL?
SQLite is the default and works well for most deployments. Switch to PostgreSQL if you need high concurrency (many agents running jobs simultaneously) or if you want to run multiple server replicas behind a load balancer.
Is there a Kubernetes deployment?
Yes. A Helm chart is available in deploy/helm/. Set grpc.tls.existingSecret to the name of a TLS Secret (type kubernetes.io/tls) to enable TLS on the gRPC port — cert-manager with Let's Encrypt is the recommended approach. For simpler setups, Docker Compose on a single node is also supported.
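For example, with a cert-manager-issued Secret the relevant values fragment would look like this (the Secret name is illustrative):

```yaml
grpc:
  tls:
    existingSecret: arkeep-grpc-tls   # a kubernetes.io/tls Secret in the release namespace
```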
Always back up the database before upgrading — see docs/operations/backup-recovery.md for the full procedure and disaster-recovery runbook.
- Back up the database before every upgrade (see Backing up Arkeep itself).
- Pull the new image / download the new binary.
- Stop the server.
- Start the new server — schema migrations run automatically on startup.
- Verify `GET /health/ready` returns `"status": "healthy"` before sending traffic.
The agent is independently versioned. Upgrade agents after the server is confirmed healthy. Agents running an older version continue to work during a rolling upgrade — the gRPC protocol is backwards-compatible within a major version.
Docker:

```shell
docker compose pull
docker compose up -d
docker compose logs -f arkeep-server   # watch for "migrations applied" + "server started"
```

Binary:

```shell
# Replace the binary, then restart the service
sudo systemctl stop arkeep-server
sudo cp arkeep-server-new /usr/local/bin/arkeep-server
sudo systemctl start arkeep-server
sudo journalctl -u arkeep-server -f
```

Helm:

```shell
helm repo update
helm upgrade arkeep arkeep/arkeep --reuse-values
kubectl rollout status deployment/arkeep-server
```

After upgrading, verify `GET /health/ready` returns `"status": "healthy"` before sending traffic. Schema migrations run automatically on startup.
| From → To | Breaking change | Action required |
|---|---|---|
| any → 1.0.0 | None — all migrations are additive and run automatically | None |
Arkeep backs up your machines — but who backs up Arkeep? Back up the database before every upgrade and on a regular schedule.
SQLite:
```shell
# One-off snapshot (safe to run while the server is running — SQLite WAL mode)
sqlite3 /var/lib/arkeep/arkeep.db ".backup '/var/lib/arkeep/arkeep.db.bak'"

# Or with a timestamp
sqlite3 /var/lib/arkeep/arkeep.db \
  ".backup '/var/lib/arkeep/arkeep-$(date +%Y%m%d-%H%M%S).db'"
```

PostgreSQL:

```shell
pg_dump -Fc arkeep > arkeep-$(date +%Y%m%d-%H%M%S).dump
```

What to back up: the database file (SQLite) or database dump (PostgreSQL), plus the `--data-dir` directory (contains the CA private key and gRPC certificates). If you lose `--data-dir` you will need to re-enroll all agents.
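To put the SQLite snapshot on a regular schedule, a cron entry works (paths and time are illustrative; note that `%` must be escaped as `\%` inside crontab entries):

```
# /etc/cron.d/arkeep-db-backup — nightly snapshot at 03:00
0 3 * * * root sqlite3 /var/lib/arkeep/arkeep.db ".backup '/var/lib/arkeep/arkeep-$(date +\%Y\%m\%d).db'"
```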
The underlying Restic repositories (your actual backup data) are stored on the destinations you configured — they are not affected by an Arkeep server failure.
- Restore & restore test
- Helm chart
- Comprehensive test coverage (server + agent + GUI)
- Full documentation site
- Proxmox backup (VM and LXC)
- VMware vSphere integration
- Bandwidth throttling
- BYOK encryption key management
Arkeep sends anonymous usage statistics once per day to help prioritize development. No personal data, backup contents, credentials, or hostnames are ever transmitted.
What is sent: a stable random instance ID, Arkeep version, OS, number of connected agents, and number of active policies.
Aggregate stats are public at: https://telemetry.arkeep.io/stats
To opt out: set ARKEEP_TELEMETRY=false or pass --telemetry=false.
Contributions are welcome. Please read CONTRIBUTING.md first.
Arkeep is licensed under the Apache License 2.0.
Copyright 2026 Filippo Crotti / Arkeep Contributors