Make Proxmox VLAN aware

I’m using Proxmox as a hypervisor to run my virtual machines, and my home network has two VLANs: one for normal traffic and a separate one for IoT traffic. Virtual machines should be connected to one of these networks. The normal network (VLAN ID 20) is typically untagged, while IoT traffic is tagged with VLAN ID 21.

Configuration file: /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr0.20
iface vmbr0.20 inet static
        address 192.168.20.x/24
        gateway 192.168.20.1
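
Assuming ifupdown2 is installed (recent Proxmox releases ship it by default), the new bridge configuration can be applied without a reboot:

ifreload -a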

This should result in the following Proxmox network configuration:

Proxmox host system network configuration

Now you can easily add a network adapter to a virtual machine and tag it with the correct VLAN.

Virtual machine network adapter configuration, including tagged VLAN.
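
For example, from the Proxmox shell a tagged adapter on the IoT VLAN could be added with qm (the VM ID 101 is just an example):

qm set 101 --net0 virtio,bridge=vmbr0,tag=21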

Install Telegraf for monitoring purposes

curl -fsSL https://repos.influxdata.com/influxdata-archive_compat.key -o /etc/apt/keyrings/influxdata-archive_compat.key
echo "deb [signed-by=/etc/apt/keyrings/influxdata-archive_compat.key] https://repos.influxdata.com/debian stable main" | tee /etc/apt/sources.list.d/influxdata.list
apt update
apt -y install telegraf 
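
The package installs a systemd service; enable it so Telegraf starts at boot (it will run with the default configuration until you adjust it below):

systemctl enable --now telegraf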

The Telegraf configuration is located in /etc/telegraf/ (the main file is telegraf.conf).

Example configuration:

[global_tags]
[agent]
  interval = "60s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = "DB152"
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["https://192.168.21.152:8086"]
  database = "Verhaeg_Monitoring"
  username = "xxx"
  password = "xxx"
  insecure_skip_verify = true
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.mem]]
[[inputs.swap]]
[[inputs.net]]
  interfaces = ["ens18"]
[[inputs.netstat]]
[[inputs.kernel]]
[[inputs.system]]
[[inputs.processes]]
[[inputs.diskio]]
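
After editing the configuration, a quick sanity check is a one-shot test run (--test gathers the inputs once and prints the metrics to stdout instead of sending them to the output), followed by a restart of the service:

telegraf --config /etc/telegraf/telegraf.conf --test
systemctl restart telegraf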

Installing Eclipse Ditto

Download Ditto from GitHub and unzip it in your favorite directory:

cd /data/install
wget https://github.com/eclipse/ditto/archive/master.zip
unzip master.zip

Adjust the nginx password:

openssl passwd -quiet
 Password: <enter password>
 Verifying - Password: <enter password>

Append the printed hash to the nginx.httpasswd file (in the same folder as docker-compose.yml), placing the username that should receive this password in front, like this:

ditto:A6BgmB8IEtPTs
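
If you prefer, the same can be done in one line from the folder that contains nginx.httpasswd (the username ditto is just the example from above; openssl prompts for the password as before):

echo "ditto:$(openssl passwd -quiet)" >> nginx.httpasswd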

Configure the Docker data directory in /etc/docker/daemon.json:

{
   "data-root": "/data/docker"
}
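
Docker only picks up the new data-root after a restart of the daemon; afterwards you can verify where it stores its data:

systemctl restart docker
docker info | grep "Docker Root Dir"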

And finally, install Ditto using the Docker compose script:

cd ditto-master/deployment/docker/
docker-compose up -d
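
Once the containers are up you can check their status and, assuming the compose file exposes nginx on its default port 8080, query the HTTP API with the user you configured in nginx.httpasswd:

docker-compose ps
curl -u ditto:<password> http://localhost:8080/api/2/search/things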

To automatically start Ditto at system boot and clean up the related log files, add the following two lines to crontab:

crontab -e

@reboot sleep 30 && cd /data/docker && find . -name "*json.log" -type f -delete
@reboot sleep 60 && cd /data/install/ditto-master/deployment/docker && sudo docker-compose up -d

Done!

Updating Eclipse Ditto

First, kill all docker containers. Then remove them, and remove their related images:

docker kill $(docker ps -q) && docker rm $(docker ps -a -q) && docker rmi $(docker images -q)

Then we download the latest version of the docker-compose setup from GitHub:

wget https://github.com/eclipse/ditto/archive/refs/heads/master.zip
unzip master.zip

After unzipping, browse to the right directory and start it:

cd ditto-master/deployment/docker
docker-compose up -d

Don’t forget to adjust the password:

openssl passwd -quiet
 Password: <enter password>
 Verifying - Password: <enter password>

Append the printed hash to the nginx.httpasswd file (in the same folder as docker-compose.yml), placing the username that should receive this password in front, like this:

ditto:A6BgmB8IEtPTs

Configuring Juniper SRX (100) as a DHCP server

Create a VLAN interface:

set interfaces vlan unit 21 family inet address 192.168.x.2/24

Assign the VLAN interface to a VLAN:

set vlans vlan_name vlan-id 21
set vlans vlan_name l3-interface vlan.21

Assign the VLAN to a physical interface:

set interfaces fe-0/0/0 unit 0 family ethernet-switching vlan members vlan_name

Assign the dhcp-local-server service to the VLAN interface:

set system services dhcp-local-server group IoT interface vlan.21

Create the DHCP pool:

set access address-assignment pool Verhaeg_IoT family inet network 192.168.x.0/24
set access address-assignment pool Verhaeg_IoT family inet range r1 low 192.168.x.200
set access address-assignment pool Verhaeg_IoT family inet range r1 high 192.168.x.250
set access address-assignment pool Verhaeg_IoT family inet dhcp-attributes name-server 192.168.x.1
set access address-assignment pool Verhaeg_IoT family inet dhcp-attributes name-server 8.8.4.4
set access address-assignment pool Verhaeg_IoT family inet dhcp-attributes name-server 8.8.8.8
set access address-assignment pool Verhaeg_IoT family inet dhcp-attributes router 192.168.x.1

Don’t forget to allow DHCP traffic in your security zone:

set security zones security-zone x interfaces vlan.21 host-inbound-traffic system-services dhcp
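
As with any Junos change, validate and activate the configuration afterwards:

commit check
commit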

Validate that clients have received an IP address from the DHCP server:

root@ROU-02> show dhcp server binding

IP address        Session Id  Hardware address   Expires     State      Interface
192.168.x.201    2           b6:54:ca:26:51:ae  70785       BOUND      vlan.21
192.168.x.202    3           c8:34:8e:5f:a4:2d  85932       BOUND      vlan.21

Determine washer program start time based on predicted PV energy production

When we decided to build a new house, I wanted to invest in both passive and active technology to reduce our energy consumption as much as possible. My goal is to reduce our dependency on external energy sources without installing batteries. This means I need to match our energy consumption with its (local) availability (or simply put: solar-powered production).

Impression of the design of our house.

Use case

The washer, dishwasher, and dryer are energy-consuming devices. We used to start these devices at night: the energy was cheaper, there was no noise pollution in the living room, and it made sense at the end of the day.

At the same time, it doesn’t really matter when these devices finish their work. Typically you want them finished within the next 12 hours or so. Therefore, a smart system could nicely plan their consumption based on the next available solar-power production peak, which typically happens around lunch-time anyway. This would increase the energy we consume directly from the solar panels and reduce the energy we need to consume from the grid.

To make things a bit easier I decided to buy devices that support the B/S/H Home Connect system. As we are still building the house I don’t have a dishwasher and at the moment we don’t use a dryer, so I started with the washer.

Challenges

I need a lot of information to make this work. Luckily, there are some public (free-of-charge) cloud services available that helped me here and there. All of the selected services have well-documented APIs that I could implement with ease. My biggest challenge was getting the OAuth 2.0 Device Authorization flow up and running for Home Connect.

In our current apartment I don’t have solar panels, so I’m using data from another (live) solar production site to simulate the behaviour of the concept.

System architecture

To manage expectations: I’m not a professional software engineer. I’m not planning to productise this; I just want to be able to maintain everything myself. I might publish some of the related projects on GitHub, but don’t expect a lot of documentation.

System architecture overview

The image below shows a high-level system architecture. I’m using Ditto as a local digital twin for capturing the current state of all the entities in the system. Additionally, I’m using InfluxDB to store the historical states of the digital twins and Grafana to visualise the historical states. The green services are “sensors”: they retrieve data from their sources and update the digital twin. The blue services are “controllers”: they control the devices based on the status of the digital twin. The orange services are “processors”: they transform data or decide to start actions. The red services are “communicators”: they communicate with the user of the specific use-case.
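
To give an idea of what a "sensor" update looks like, here is a hypothetical example of how HomeConnect.Sensor could push a state change into the washer’s digital twin via Ditto’s HTTP API (the thing ID home:washer, the feature remoteStart, and the port are made-up examples, not my actual configuration):

curl -u ditto:<password> -X PUT -H "Content-Type: application/json" \
  -d 'true' \
  http://localhost:8080/api/2/things/home:washer/features/remoteStart/properties/enabled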

How to use it?

Well, that’s the good news. Instead of pressing the "start" button on the washer, we press the "remote start enabled" (or "app") button. The HomeConnect.Sensor captures this event and updates the digital twin. The HomeConnect.Planner is subscribed to this update and starts calculating the best possible timeslot in the next 16 hours to start the program. It considers the predicted solar-panel production, the historical washer program consumption registered by the Shelly Plug, and the EPEX spot pricing. Once it has calculated the ideal start time, it updates the "scheduledprogram" digital twin. The HomeConnect.Controller is subscribed to this update and sends the start command to the washer at the time the HomeConnect.Planner has defined. Slack (using Slack.Messenger) keeps the user up to date on what is happening, for example when the program is planned, started, finished, or cancelled.
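
As an illustration of how a service can be subscribed to such updates, Ditto offers a server-sent events endpoint that streams twin changes; a planner-like service could follow it roughly like this (port, credentials, and the field selection are again just examples):

curl -N -u ditto:<password> -H "Accept: text/event-stream" \
  "http://localhost:8080/api/2/things?fields=thingId,features"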