TSI-Telemetry Backend Setup on Proxmox LXC
This document outlines the setup for the TSI-Telemetry project's backend components within a Proxmox LXC container, leveraging Docker for TimescaleDB and a Python script for MQTT data ingestion. This setup was implemented to run the telemetry pipeline on a dedicated server environment, decoupling it from the Mac Mini control plane.
1. lxc-docker Container Setup
Because KVM virtualization could not be enabled on the host, a dedicated LXC container (lxc-docker) was created and specially configured to host Docker and its containers.
Container Details:
- Hostname: lxc-docker
- IP Address: 192.168.68.30/24
- Resources: 2 cores, 2 GB RAM, 32 GB disk
- Key Feature: Created with the nesting and keyctl features enabled in Proxmox to allow Docker to run inside.
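For reference, a container with these settings can be created from the Proxmox host shell roughly as follows. The VMID, template name, storage IDs, bridge, and gateway are placeholders, not values from this setup:

```shell
# Run on the Proxmox host. VMID 110, the Debian template, storage names,
# bridge, and gateway are assumptions -- substitute local values.
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname lxc-docker \
  --cores 2 --memory 2048 \
  --rootfs local-lvm:32 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.68.30/24,gw=192.168.68.1 \
  --features nesting=1,keyctl=1 \
  --unprivileged 1
pct start 110
```

The `--features nesting=1,keyctl=1` flag is the piece that lets Docker run inside the container; the same options can also be set in the Proxmox web UI when creating the container.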
Docker Installation:
Inside the lxc-docker console, Docker was installed using the official convenience script:
apt update && apt upgrade -y
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
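Before proceeding, it is worth confirming that Docker actually works under LXC nesting; a quick check from the same console:

```shell
docker --version                 # prints the installed Docker version
docker run --rm hello-world      # should pull and run the test image,
                                 # confirming nested containers work
```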
2. TimescaleDB Deployment
The TimescaleDB instance, crucial for time-series data storage, was deployed as a Docker container within lxc-docker.
Deployment Command:
docker run -d --name timescaledb \
-p 5433:5432 \
-e POSTGRES_PASSWORD="YOUR_STRONG_PASSWORD" \
-v timescaledb_data:/var/lib/postgresql/data \
--restart unless-stopped \
timescale/timescaledb:latest-pg15
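Note that the host port is 5433, mapped to PostgreSQL's default 5432 inside the container. Once the container is up, the instance can be sanity-checked (these checks are not part of the original procedure):

```shell
docker ps --filter name=timescaledb              # container should be "Up"
docker exec timescaledb pg_isready -U postgres   # should report "accepting connections"
docker logs --tail 20 timescaledb                # watch for startup errors
```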
Database Initialization:
After deployment, the telemetry database, the timescaledb extension, and the car_metrics hypertable were created:
docker exec -it timescaledb psql -U postgres -d postgres
# In psql:
CREATE DATABASE telemetry;
\q
docker exec -it timescaledb psql -U postgres -d telemetry
# In psql:
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE TABLE car_metrics (
time TIMESTAMPTZ NOT NULL,
rpm INTEGER,
speed INTEGER,
coolant INTEGER,
intake_temp INTEGER,
throttle INTEGER,
engine_load INTEGER,
map_kpa INTEGER,
fuel_level INTEGER,
timing_adv INTEGER,
battery_voltage DECIMAL(4,2)
);
SELECT create_hypertable('car_metrics', 'time');
\q
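With the hypertable in place, ingestion can be verified with a quick round-trip (sample values only, run inside psql against the telemetry database):

```sql
-- Insert one synthetic reading, then read it back bucketed by minute.
INSERT INTO car_metrics (time, rpm, speed, coolant, battery_voltage)
VALUES (NOW(), 2500, 88, 92, 13.80);

SELECT time_bucket('1 minute', time) AS minute,
       avg(rpm)   AS avg_rpm,
       max(speed) AS max_speed
FROM car_metrics
WHERE time > NOW() - INTERVAL '1 hour'
GROUP BY minute
ORDER BY minute;
```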
3. Python MQTT Bridge Deployment
The Python script (bridge.py) acts as the intermediary, subscribing to HiveMQ Cloud and inserting data into TimescaleDB.
Installation & Configuration (lxc-docker console):
# Install Python and dependencies
apt update
apt install python3 python3-pip -y
pip3 install paho-mqtt psycopg2-binary
# Create script directory
mkdir -p /opt/telemetry-bridge
# Create bridge.py (content copied from bridge-script.md)
# Ensure HiveMQ and TimescaleDB credentials are updated within the script.
nano /opt/telemetry-bridge/bridge.py
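The full script lives in bridge-script.md. As a rough sketch of the mapping it performs (column names are taken from the car_metrics schema above; the assumption that payload field names match the column names is illustrative), the core payload-to-row logic looks something like:

```python
import json

# Columns of the car_metrics hypertable, in insert order (time is
# stamped server-side by NOW() and therefore excluded here).
COLUMNS = ("rpm", "speed", "coolant", "intake_temp", "throttle",
           "engine_load", "map_kpa", "fuel_level", "timing_adv",
           "battery_voltage")

# Parameterized INSERT suitable for a psycopg2 cursor.execute() call.
INSERT_SQL = "INSERT INTO car_metrics (time, {}) VALUES (NOW(), {})".format(
    ", ".join(COLUMNS), ", ".join(["%s"] * len(COLUMNS)))

def payload_to_row(payload):
    """Map a JSON MQTT payload onto a tuple matching INSERT_SQL's
    placeholders. Fields missing from the payload become NULL (None)."""
    data = json.loads(payload)
    return tuple(data.get(col) for col in COLUMNS)
```

In the real script this feeds a paho-mqtt on_message callback, which executes INSERT_SQL with the resulting tuple via a psycopg2 cursor.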
systemd Service Setup (lxc-docker console):
The script was configured to run as a robust systemd service:
cat << EOF > /etc/systemd/system/telemetry-bridge.service
[Unit]
Description=TSI Telemetry Bridge
After=docker.service
Requires=docker.service
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/python3 /opt/telemetry-bridge/bridge.py
Restart=always
User=root
StandardOutput=journal
StandardError=journal
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now telemetry-bridge
systemctl status telemetry-bridge
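If the service fails to start (typically wrong credentials in the script, or TimescaleDB not yet accepting connections), its output is in the journal:

```shell
journalctl -u telemetry-bridge -f                 # follow live output
journalctl -u telemetry-bridge --since "10 min ago"
```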
4. Grafana (lxc-monitoring) Integration
A separate container (lxc-monitoring) was set up to host Grafana for data visualization.
Container Details:
- Hostname: lxc-monitoring
- IP Address: 192.168.68.22/24
Grafana Installation: Installed using the official Grafana APT repository.
Data Source Configuration:
A PostgreSQL data source was added in Grafana, connecting to 192.168.68.30:5433 (the lxc-docker container), database telemetry, user postgres, and the TimescaleDB password. The "TimescaleDB" toggle was enabled.
Dashboard Import: A pre-generated JSON model was imported to instantly create the full "TSI Telemetry" dashboard, featuring gauges for real-time metrics and a time-series graph.
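For reference, a typical time-series panel against this data source queries the hypertable with Grafana's $__timeFilter macro, roughly like the following (illustrative only, not copied from the imported JSON model):

```sql
SELECT time_bucket('10s', time) AS "time",
       avg(rpm) AS "RPM"
FROM car_metrics
WHERE $__timeFilter(time)
GROUP BY 1
ORDER BY 1;
```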