Bosch heat pump, InfluxDB, Grafana and Home Assistant: getting the water temperature displayed

Hardware/software used:
Home Assistant Frontend 20240806.1
Core 2024.8.0
Supervisor 2024.08.0
Operating System 12.4

HA-Integration: Bosch custom component v0.26.1

It took me quite a while to find the right Grafana settings to get the water temperature displayed.
Bosch/Home Assistant transports this value as a state series:

Hence Grafana needs a statement as shown:
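As a rough sketch of what such a statement can look like: the measurement name "°C", the field "value" and the entity name below are assumptions based on how the Home Assistant InfluxDB integration usually stores sensor readings, so adjust them to your own entity.

SELECT mean("value") FROM "°C" WHERE ("entity_id" = 'water_temperature') AND $timeFilter GROUP BY time($__interval) fill(null)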

Example output:

Done 🙂

Pi-hole: send pihole.log and pihole-FTL.log to an external syslog server

Pi-hole has no built-in option to send its logs to a remote syslog server.
The good thing is that Pi-hole writes log files.
So how about feeding these log files into the local (r)syslog daemon and forwarding them to the external syslog server via port 514 TCP/UDP?

I use DietPi as the OS on my Raspberry Pi 3+ and installed Pi-hole as one of its supported apps.

DietPi has no syslog daemon enabled by default.
The first step is to install one, in my case rsyslog.
Install it with root privileges (sudo su):

apt install rsyslog

cd to /etc/rsyslog.d and create two files (the imfile module they rely on gets loaded in rsyslog.conf in the next step):

nano pihole.conf

# follow /var/log/pihole.log via the imfile module and tag the messages as "pihole"
$InputFileName /var/log/pihole.log
$InputFileTag pihole
$InputRunFileMonitor
$InputFilePersistStateInterval 1000

nano piholeftl.conf

# same for the FTL log, tagged as "pihole-ftl"
$InputFileName /var/log/pihole-FTL.log
$InputFileTag pihole-ftl
$InputRunFileMonitor
$InputFilePersistStateInterval 1000
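For reference: on newer rsyslog versions the same thing can be written in RainerScript syntax. This is just an equivalent sketch, not needed if the legacy directives above work for you; if you use it, load imfile only here and drop the $ModLoad imfile line from rsyslog.conf below.

module(load="imfile")
input(type="imfile" File="/var/log/pihole.log" Tag="pihole" PersistStateInterval="1000")
input(type="imfile" File="/var/log/pihole-FTL.log" Tag="pihole-ftl" PersistStateInterval="1000")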

Next, edit rsyslog.conf and add the following lines:

nano /etc/rsyslog.conf

# load the file-input module
$ModLoad imfile
$InputFileName /var/log/pihole.log
$InputFileName /var/log/pihole-FTL.log
# forward everything to the external syslog server (a single @ = UDP, @@ = TCP)
*.* @10.50.100.5:514

10.50.100.5 is the external syslog server. DietPi now sends the syslog information to the syslog server 10.50.100.5 via UDP port 514 (a single @ means UDP in rsyslog).
Change the IP 10.50.100.5 to the syslog server IP you want to use.
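If you prefer TCP, use two @ signs:

*.* @@10.50.100.5:514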

Restart the rsyslog daemon:

systemctl restart rsyslog
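To check that the forwarding works, you can send a test message through the local daemon and watch the outgoing traffic (10.50.100.5 again stands for your syslog server):

# send a test message via the local syslog daemon
logger -t pihole-test "hello syslog"

# watch the packets leaving towards the syslog server (UDP in this setup)
tcpdump -ni any udp port 514 and host 10.50.100.5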

The syslog server now receives the Pi-hole logs:

Addition: if you also want to see the DNS queries, go to Settings and enable query logging:
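The same can presumably be done from the shell as well; treat the following as an assumption (Pi-hole v5-style CLI) and prefer the web UI if unsure:

# enable query logging from the CLI (assumption: "pihole logging" subcommand)
pihole logging on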

This also brings the DNS query content to the external syslog server:

May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: query[A] bier.de from 10.50.100.13
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: forwarded bier.de to 1.0.0.1
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: dnssec-query[DS] bier.de to 1.0.0.1
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: reply bier.de is no DS
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: reply bier.de is 212.53.128.75
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: query[AAAA] bier.de from 10.50.100.13
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: forwarded bier.de to 1.0.0.1
May 14 09:58:27 DietPi local0 pihole May 14 09:58:26 dnsmasq[15529]: reply bier.de is NODATA-IPv6

Done!

InfluxDB: show the real database size in Grafana

InfluxDB has no useful built-in function to look at the real on-disk database size.

Since I have created a retention policy combined with continuous queries, I want to see the exact size per database.

I use the following approach:

  • a bash script executed from crontab collects the real on-disk size of each InfluxDB database
  • the values are pushed into a dedicated database
  • a Grafana dashboard shows the values

Crontab

vi /etc/crontab and add the following line:

*/30  * * * *	root	sh /home/pi/test.sh >/dev/null 2>&1

The script /home/pi/test.sh is now executed every 30 minutes.

Bash-script

vi /home/pi/test.sh and paste this code in:

#!/usr/bin/env bash

# get the real on-disk size per database (du -s reports 1 KiB blocks by default)
telegraf_data=$(du -s /var/lib/influxdb/data/telegraf | awk '{print $1}')
_internal_data=$(du -s /var/lib/influxdb/data/_internal | awk '{print $1}')
openhab_db_data=$(du -s /var/lib/influxdb/data/openhab_db | awk '{print $1}')
collectd_data=$(du -s /var/lib/influxdb/data/collectd | awk '{print $1}')
mydbsize_data=$(du -s /var/lib/influxdb/data/mydbsize | awk '{print $1}')

# print the sizes to the screen
echo "******"
echo "$telegraf_data"
echo "$_internal_data"
echo "$openhab_db_data"
echo "$collectd_data"
echo "$mydbsize_data"
echo "******"

# push the sizes into the database mydbsize (measurement "database", tag "dbsize", field "Size")
curl -i -XPOST 'http://localhost:8086/write?db=mydbsize' --data-binary "database,dbsize=telegraf Size=$telegraf_data"
curl -i -XPOST 'http://localhost:8086/write?db=mydbsize' --data-binary "database,dbsize=_internal Size=$_internal_data"
curl -i -XPOST 'http://localhost:8086/write?db=mydbsize' --data-binary "database,dbsize=openhab_db Size=$openhab_db_data"
curl -i -XPOST 'http://localhost:8086/write?db=mydbsize' --data-binary "database,dbsize=collectd Size=$collectd_data"
curl -i -XPOST 'http://localhost:8086/write?db=mydbsize' --data-binary "database,dbsize=mydbsize Size=$mydbsize_data"

# End

I have created a dedicated database, mydbsize, for these measurements:

root@raspi5:/home/pi# influx
Connected to http://localhost:8086 version 1.8.10
InfluxDB shell version: 1.8.10
> create database mydbsize
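After the script has run at least once, it is worth checking that the points actually arrive; each curl call should answer with HTTP/1.1 204 No Content, and the latest points should be visible in the database:

# run the collection script once by hand
sh /home/pi/test.sh

# show the most recent points in the mydbsize database
influx -database mydbsize -execute 'SELECT * FROM "database" ORDER BY time DESC LIMIT 5'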

Grafana-dashboard

Create a new data source in Grafana pointing to the mydbsize database:

Add one query per database in Grafana:
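In raw-query form, each of the five targets in the panel JSON below corresponds to an InfluxQL statement like this one (shown here for the telegraf series):

SELECT last("Size") FROM "database" WHERE ("dbsize" = 'telegraf') AND $timeFilter GROUP BY time($__interval) fill(null)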


The panel shown as JSON data:

{
  "id": 8,
  "gridPos": {
    "h": 14,
    "w": 23,
    "x": 0,
    "y": 9
  },
  "type": "timeseries",
  "title": "DB size",
  "datasource": {
    "type": "influxdb",
    "uid": "cq2DkaZgk"
  },
  "pluginVersion": "8.4.4",
  "links": [],
  "fieldConfig": {
    "defaults": {
      "custom": {
        "drawStyle": "line",
        "lineInterpolation": "smooth",
        "barAlignment": 0,
        "lineWidth": 2,
        "fillOpacity": 10,
        "gradientMode": "none",
        "spanNulls": true,
        "showPoints": "auto",
        "pointSize": 4,
        "stacking": {
          "mode": "none",
          "group": "A"
        },
        "axisPlacement": "auto",
        "axisLabel": "",
        "scaleDistribution": {
          "type": "log",
          "log": 2
        },
        "hideFrom": {
          "tooltip": false,
          "viz": false,
          "legend": false
        },
        "thresholdsStyle": {
          "mode": "off"
        },
        "lineStyle": {
          "fill": "solid"
        }
      },
      "color": {
        "mode": "palette-classic"
      },
      "mappings": [],
      "thresholds": {
        "mode": "absolute",
        "steps": [
          {
            "color": "green",
            "value": null
          },
          {
            "color": "red",
            "value": 80
          }
        ]
      },
      "unit": "deckbytes"
    },
    "overrides": []
  },
  "options": {
    "tooltip": {
      "mode": "multi",
      "sort": "none"
    },
    "legend": {
      "displayMode": "table",
      "placement": "bottom",
      "calcs": [
        "lastNotNull",
        "max",
        "min"
      ]
    }
  },
  "targets": [
    {
      "alias": "_internal",
      "datasource": {
        "type": "influxdb",
        "uid": "cq2DkaZgk"
      },
      "groupBy": [
        {
          "params": [
            "$__interval"
          ],
          "type": "time"
        },
        {
          "params": [
            "null"
          ],
          "type": "fill"
        }
      ],
      "hide": false,
      "measurement": "database",
      "orderByTime": "ASC",
      "policy": "default",
      "refId": "A",
      "resultFormat": "time_series",
      "select": [
        [
          {
            "params": [
              "Size"
            ],
            "type": "field"
          },
          {
            "params": [],
            "type": "last"
          }
        ]
      ],
      "tags": [
        {
          "key": "dbsize",
          "operator": "=",
          "value": "_internal"
        }
      ]
    },
    {
      "alias": "collectd",
      "datasource": {
        "type": "influxdb",
        "uid": "cq2DkaZgk"
      },
      "groupBy": [
        {
          "params": [
            "$__interval"
          ],
          "type": "time"
        },
        {
          "params": [
            "null"
          ],
          "type": "fill"
        }
      ],
      "hide": false,
      "measurement": "database",
      "orderByTime": "ASC",
      "policy": "default",
      "refId": "B",
      "resultFormat": "time_series",
      "select": [
        [
          {
            "params": [
              "Size"
            ],
            "type": "field"
          },
          {
            "params": [],
            "type": "last"
          }
        ]
      ],
      "tags": [
        {
          "key": "dbsize",
          "operator": "=",
          "value": "collectd"
        }
      ]
    },
    {
      "alias": "openhab_db",
      "datasource": {
        "type": "influxdb",
        "uid": "cq2DkaZgk"
      },
      "groupBy": [
        {
          "params": [
            "$__interval"
          ],
          "type": "time"
        },
        {
          "params": [
            "null"
          ],
          "type": "fill"
        }
      ],
      "hide": false,
      "measurement": "database",
      "orderByTime": "ASC",
      "policy": "default",
      "refId": "C",
      "resultFormat": "time_series",
      "select": [
        [
          {
            "params": [
              "Size"
            ],
            "type": "field"
          },
          {
            "params": [],
            "type": "last"
          }
        ]
      ],
      "tags": [
        {
          "key": "dbsize",
          "operator": "=",
          "value": "openhab_db"
        }
      ]
    },
    {
      "alias": "mydbsize",
      "datasource": {
        "type": "influxdb",
        "uid": "cq2DkaZgk"
      },
      "groupBy": [
        {
          "params": [
            "$__interval"
          ],
          "type": "time"
        },
        {
          "params": [
            "null"
          ],
          "type": "fill"
        }
      ],
      "hide": false,
      "measurement": "database",
      "orderByTime": "ASC",
      "policy": "autogen",
      "refId": "D",
      "resultFormat": "time_series",
      "select": [
        [
          {
            "params": [
              "Size"
            ],
            "type": "field"
          },
          {
            "params": [],
            "type": "last"
          }
        ]
      ],
      "tags": [
        {
          "key": "dbsize",
          "operator": "=",
          "value": "mydbsize"
        }
      ]
    },
    {
      "alias": "telegraf",
      "datasource": {
        "type": "influxdb",
        "uid": "cq2DkaZgk"
      },
      "groupBy": [
        {
          "params": [
            "$__interval"
          ],
          "type": "time"
        },
        {
          "params": [
            "null"
          ],
          "type": "fill"
        }
      ],
      "hide": false,
      "measurement": "database",
      "orderByTime": "ASC",
      "policy": "default",
      "refId": "E",
      "resultFormat": "time_series",
      "select": [
        [
          {
            "params": [
              "Size"
            ],
            "type": "field"
          },
          {
            "params": [],
            "type": "last"
          }
        ]
      ],
      "tags": [
        {
          "key": "dbsize",
          "operator": "=",
          "value": "telegraf"
        }
      ]
    }
  ]
}

Example view of the dashboard:

done!

Daily compression of InfluxDB

InfluxDB is an open-source time series database built by InfluxData and used e.g. in openHAB for data persistence.
For small computers like a Raspberry Pi it is best practice to compress the database regularly.

Why not use cron?

Well, it works. The only (security) drawback is that the influxdb user has to be granted /bin/bash as its login shell (see the /etc/passwd change at the end):

cat /etc/cron.daily/00influx:

#!/bin/bash
# execute the repair/compaction script once a day
/home/pi/1_influxrepair.sh


cat /home/pi/1_influxrepair.sh:

#!/bin/bash
# stop InfluxDB so the data files can be worked on safely
service influxdb stop
echo "before su"
# run the compaction as the influxdb user so file ownership stays correct
su influxdb -c /home/pi/1_subscript.sh
echo "after su"
whoami
service influxdb start

cat /home/pi/1_subscript.sh:

#!/bin/bash
echo "now in sub shell"
whoami
cd /var/lib/influxdb
# rebuild the TSI index and compact the series file
influx_inspect buildtsi -compact-series-file -datadir ./data -waldir ./wal
echo "exiting sub shell"
exit

Finally, the influxdb user needs /bin/bash as its login shell, otherwise the su call above fails. Check the current entry:

cat /etc/passwd | grep infl

and change the shell at the end of the influxdb line to /bin/bash.
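One way to do that without editing /etc/passwd by hand (as root):

# give the influxdb user a login shell
usermod -s /bin/bash influxdb

To revert later, set the shell back to what it was before (often /usr/sbin/nologin or /bin/false).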

done!