Blog

Minimizing grid consumption (2)

Last year, I described my view on home energy consumption and my personal goals here: minimizing grid consumption.

Today, the summer of 2024 has passed and we’re moving into the “dark” season. It’s time to look ahead towards the end of the year, look back at my previous goals and adjust them where necessary.

Before we do that, let’s have another look at my ambitious goals for the coming years:

Year           Grid consumption   Direct consumption
2023 (today)   80%                20%
2024           60%                40%
2025           53%                47%
2026           50%                50%

Overview of yearly goals to reach my vision in 2027 (defined in December 2023).

Status quo

At the end of 2023, roughly 20% of the energy we consumed came directly from our solar panels. Today, at the beginning of October 2024, our direct consumption for 2024 (year-to-date) is 28.4%.

A weekly overview of the direct consumption (0-100%).

The start of the year looked good. However, at the beginning of the year the solar panels deliver far less energy than in the summer months, so the direct-consumption percentage is naturally higher in the “dark” months. In the end, I’m far away from the 40% I was hoping to achieve this year.

Changes

This was not totally unexpected, but still a bit of a disappointment, especially considering the changes I’ve worked on during the year:

Introduction of a price index

I was already monitoring the epex-spot prices and predicting the production of solar energy. I used both data points to schedule energy consumption, but the algorithm was (too) complex to maintain and didn’t always produce optimal results. I’ve therefore created a price index that combines the epex price and the expected solar production into a single value.
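As a rough illustration of the idea, the sketch below combines a normalized epex price with the expected solar production into a single value; the normalization and weighting are simplified assumptions, not the exact formula I run in production.

# Illustrative sketch: combine the epex price and the expected solar
# production into a single price index per hour. Lower = better time
# to consume energy. Normalization and weighting are simplified.
def price_index(epex_price_eur_mwh, expected_solar_kw, max_solar_kw):
    solar_share = expected_solar_kw / max_solar_kw if max_solar_kw else 0.0
    # Normalize the price to 0..1 over an assumed day-ahead range.
    price_norm = max(0.0, min(1.0, epex_price_eur_mwh / 300.0))
    # Expected solar pushes the index down; an expensive grid pushes it up.
    return price_norm * (1.0 - solar_share)

print(price_index(50, 4.0, 5.0))   # cheap, sunny hour:    ~0.03
print(price_index(250, 0.0, 5.0))  # expensive, dark hour: ~0.83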

Energy planning using the price index

To use the price index I needed to adjust the scheduling process. This was a difficult change because the appliance-specific microservices were doing their own planning, which meant I had to change the “controllers” for the HomeConnect appliances, the heatpump, and the car. I took this as an opportunity to create a separate scheduling microservice. The controllers now ask the scheduling service for the best time to start an appliance-specific program, and the scheduler calculates the best start time based on the expected program duration, consumption, and time to complete. The image below visualizes the result of the new scheduling process. Most programs are scheduled during the daily solar peak, when the price index is at its lowest point. However, when I get home from work at the end of the afternoon and need to start driving early the next morning, the scheduler uses the price index to choose the best time to charge the car during the night.

Visualization of the planning process: (green) available energy grid+solar; (blue) price-index; (yellow) planned energy consumption.
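Conceptually, the scheduler slides each program over the price-index forecast and picks the cheapest start time that still finishes before the deadline. A simplified sketch of that search (the hourly resolution and data structures are assumptions, not my actual service code):

from datetime import timedelta

def best_start(index_by_hour, duration_hours, deadline):
    """Return the start hour with the lowest summed price index such
    that the program completes before the deadline.
    index_by_hour maps a datetime (whole hours) to a price index."""
    best, best_cost = None, float("inf")
    for start in sorted(index_by_hour):
        window = [start + timedelta(hours=h) for h in range(duration_hours)]
        if window[-1] + timedelta(hours=1) > deadline:
            continue  # program would finish too late
        if not all(h in index_by_hour for h in window):
            continue  # forecast does not cover the full program
        cost = sum(index_by_hour[h] for h in window)
        if cost < best_cost:
            best, best_cost = start, cost
    return best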

Unplanned car charging

When I don’t know when I’ll need the car, I just connect it to the socket. The controller then follows the actual solar production and redirects that energy to the car. To keep the charging process stable I needed to build in some delays. The curve is not perfect, but I think it’s close enough.

(Orange) solar production; (green) car charging.
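The controller is essentially a slow follower: it only changes the charging current after the new target has been stable for a while, which is where the delays come in. A simplified sketch of that debounce logic (the voltage, current limits, and timings are illustrative):

import time

def follow_solar(read_surplus_w, set_current_a,
                 voltage=230, min_a=6, max_a=16, settle_s=60):
    """Redirect the solar surplus to the car, but only adjust the
    charging current after the target has been stable for settle_s
    seconds. This keeps the charging process stable at the cost of a
    slightly less perfect curve."""
    def target(surplus_w):
        amps = int(surplus_w / voltage)
        return 0 if amps < min_a else min(amps, max_a)

    current, candidate, since = 0, 0, time.monotonic()
    while True:
        t = target(read_surplus_w())
        if t != candidate:
            candidate, since = t, time.monotonic()  # restart the delay
        elif t != current and time.monotonic() - since >= settle_s:
            set_current_a(t)
            current = t
        time.sleep(5)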

There is some discussion about the impact of “smart charging” on power quality. I need to understand this a bit better and will probably need to adjust my algorithm.

What’s going on?

Why am I stuck at 28%? To answer this question I need to look at the two largest consumers: the car and the heatpump.

Car

Let’s start with the car: its direct consumption is 31%. It will be very difficult to improve this further, as it depends on a combination of my work schedule and the weather. The situation has improved since the beginning of the year, but I expect the end of the year to be difficult as solar production goes down in the last months. I might be able to find small optimizations in the scheduler over the coming months.

31% of the car energy consumption is coming from solar production.

Heatpump

The heatpump is even more complex as it uses two energy sources (the compressor and the COP1 heater) and has three different goals: heating, cooling, and domestic hot water. For this analysis I have only looked at the difference between the compressor and the COP1 heater.

The digital twin of the heatpump was already operational at the end of 2023. It gives me full control of when domestic hot water, disinfection, heating and cooling are enabled.

Digital twin of the heatpump

I’ve started to work on a couple of things:

  • Detailed planning of the generation of domestic hot water;
  • Detailed planning of the disinfection process (to prevent bacteria);
  • Reducing the operation hours of the heating and cooling process.

The domestic hot water is now only generated during the day, based on the price index. The scheduler does not wait until the hot water is “empty”: if it expects a sunny day, it generates extra hot water as a buffer for the day after.

The disinfection is typically planned once a week. I added a bit of slack: depending on the predicted availability of solar energy, the scheduler now picks the best time for a disinfection run between 6 and 9 days after the last one. Additionally, the disinfection power is adjusted based on the maximum available solar energy: if at most 3 kW of solar energy is expected, the power is turned down from 9 kW to 3 kW, its lowest value. The disadvantage is that disinfection takes longer, but that has not caused any new problems.
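A simplified sketch of this planning rule (the forecast format and day numbering are assumptions):

def plan_disinfection(last_run_day, solar_peak_kw_by_day,
                      min_gap=6, max_gap=9, min_kw=3, max_kw=9):
    """Pick the day 6-9 days after the last run with the highest
    expected solar peak, and clamp the disinfection power to that
    peak within the heater's 3-9 kW range."""
    candidates = range(last_run_day + min_gap, last_run_day + max_gap + 1)
    day = max(candidates, key=lambda d: solar_peak_kw_by_day.get(d, 0.0))
    power = min(max_kw, max(min_kw, solar_peak_kw_by_day.get(day, min_kw)))
    return day, power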

Last winter, the heatpump ran the entire night to heat the house. Then, on sunny days, the sun started heating up our living room through our glass “walls”, resulting in an inside temperature of 25 degrees Celsius; the energy the heatpump consumed during the night and morning was more or less wasted. The controller now looks at the predicted temperature and solar radiation for the upcoming day and only turns on the heatpump if a combination of low temperature and low solar radiation is expected. Cooling is only enabled when the inside temperature reaches a configured threshold. This mechanism should prevent wasted energy, but it is not yet optimized for generating heat at the best time of day.
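The overnight-heating decision boils down to a simple rule; roughly the following (the thresholds shown are illustrative, not my tuned values):

def heating_allowed_tonight(predicted_min_temp_c, predicted_radiation_wh_m2,
                            temp_threshold_c=5.0,
                            radiation_threshold_wh_m2=2000.0):
    """Only run the heatpump overnight if the coming day is expected to
    be both cold and dark; otherwise, let the sun heat the living room."""
    return (predicted_min_temp_c < temp_threshold_c
            and predicted_radiation_wh_m2 < radiation_threshold_wh_m2)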

Together, these changes have resulted in 20% direct consumption for the heatpump, which is far from the target.

Direct consumption of the heatpump 2024 YTD.

The largest problem is heating in the winter: hardly any solar energy is produced, so the direct consumption is low.

When I only look at the direct consumption over the last 90 days (7 July – 7 October), the situation looks a lot better.

Direct consumption of the heatpump over the last 90 days.

Conclusion

I don’t think the 40% direct consumption target is realistic for 2024. I probably need another year to come to 40%.

Over the next months I need to start understanding in which situations the heatpump consumes energy from the grid. Additionally, I will slowly start looking into the consumption and planning of the WTW (ventilation) unit, the fridge, and the freezer. At the same time I need to guard against the risk of food poisoning caused by a failure of the (control) software.

YTD consumption of appliances.

Install a clean Kubernetes cluster on Debian 12

Ensure the /etc/hosts files are identical on all three machines:

192.168.20.218 K8S01.verhaeg.local K8S01
192.168.20.219 K8S02.verhaeg.local K8S02
192.168.20.220 K8S03.verhaeg.local K8S03

Disable swap:

systemctl --type swap

  UNIT          LOAD   ACTIVE SUB    DESCRIPTION
  dev-sda3.swap loaded active active Swap Partition

systemctl mask dev-sda3.swap
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
reboot

Prepare the installation of containerd:

cat <<EOF | tee /etc/modules-load.d/containerd.conf 
overlay 
br_netfilter
EOF

modprobe overlay && modprobe br_netfilter

cat <<EOF | tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1 
net.bridge.bridge-nf-call-ip6tables = 1 
EOF
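The files in /etc/sysctl.d/ are only read at boot, so apply the settings immediately:

sysctl --system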

Install containerd:

apt-get update && apt-get install containerd -y

Configure containerd so that it works with Kubernetes:

containerd config default | tee /etc/containerd/config.toml >/dev/null 2>&1

Both the kubelet and the underlying container runtime need to interface with control groups to enforce resource management for pods and containers and to set resources such as CPU/memory requests and limits. To interface with control groups, they both need to use a cgroup driver. Set SystemdCgroup to true on all nodes:

nano /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true

Restart and enable containerd on all nodes:

systemctl restart containerd && systemctl enable containerd

Add Kubernetes apt repository:

apt-get install curl gpg -y
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

Install Kubernetes tools:

apt-get update && apt-get install kubelet kubeadm kubectl -y && apt-mark hold kubelet kubeadm kubectl

Install the Kubernetes cluster with kubeadm. The kubelet no longer accepts most command-line options (they are deprecated); instead, I suggest creating a configuration file, say kubelet.yaml, with the following content.

Create the kubelet.yaml file on the master node (K8S01):

nano kubelet.yaml

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.28.11" # Replace with your desired version; it must be available in the apt repository added above
controlPlaneEndpoint: "K8S01"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration

Initialise the cluster:

kubeadm init --config kubelet.yaml --upload-certs

Result:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s01.verhaeg.local:6443 --token 965cpz.xvmun07kjrezlzg9 \
        --discovery-token-ca-cert-hash sha256:3ea38e43e5304e0124e55cd5b3fb00937026a2b53bc9d930b6c2dab95482225a \
        --control-plane --certificate-key e48ada5b6340b8e217bcf4c7c5427ae245704be43eee46c07bfa0b6e1c4abdd8

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s01.verhaeg.local:6443 --token 965cpz.xvmun07kjrezlzg9 \
        --discovery-token-ca-cert-hash sha256:3ea38e43e5304e0124e55cd5b3fb00937026a2b53bc9d930b6c2dab95482225a

To start interacting with the cluster, run the following commands on the master node:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Let the other nodes join the cluster:

kubeadm join k8s01.verhaeg.local:6443 --token bcd2xw.32pzfgroijg1sax3 \
        --discovery-token-ca-cert-hash sha256:0c0f18cf32bc2342024efce9313e0e4fcf8a2b87275fd33e9ceb853d77b41f8b

Result:

root@K8S01:~# kubectl get nodes
NAME    STATUS     ROLES           AGE   VERSION
k8s01   NotReady   control-plane   62s   v1.28.11
k8s02   NotReady   <none>          26s   v1.28.11
k8s03   NotReady   <none>          21s   v1.28.11

Install Calico (container networking and security):

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

Adjusting the configuration of the DFRobot SEN0395

Manual: SEN0395 Manual

Connect to the device using a serial port (115200 baud rate, 1 stop bit, 8 data bits, no parity bit, no flow control):

cu -l /dev/ttyS0 -s 115200

The sensor should start printing status messages like $JYBSS,0, , , * on the screen.

The 0 should change into 1 when the sensor detects human presence.

Send the sensorStop command to stop the sensor and enter configuration mode:

$JYBSS,0, , , *
sensorStop
Done
leapMMW:/>

Adjusting sensitivity

This seems to be possible, but I cannot find any documentation on the related commands.

Adjusting range

Table source: SEN0395 manual. Distances are configured in units of 0.15 m (for example, 3 m = 20 × 0.15 m and 7.5 m = 50 × 0.15 m).

Sensing distance (example)                                           Command
0 m to 3 m (default configuration)                                   detRangeCfg -1 0 20
1.5 m to 3 m                                                         detRangeCfg -1 10 20
1.5 m to 3 m and 7.5 m to 12 m                                       detRangeCfg -1 10 20 50 80
1.5 m to 3 m, 7.5 m to 12 m, and 13.5 m to 15 m                      detRangeCfg -1 10 20 50 80 90 100
1.5 m to 3 m, 7.5 m to 12 m, 13.5 m to 15 m, and 15.75 m to 16.5 m   detRangeCfg -1 10 20 50 80 90 100 105 110
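Since every unit corresponds to 0.15 m, the detRangeCfg arguments are a simple conversion; a small helper for illustration:

def det_range_args(*ranges_m, unit_m=0.15):
    """Convert (start_m, end_m) pairs into detRangeCfg units of 0.15 m."""
    args = []
    for start_m, end_m in ranges_m:
        args += [round(start_m / unit_m), round(end_m / unit_m)]
    return "detRangeCfg -1 " + " ".join(map(str, args))

print(det_range_args((1.5, 3), (7.5, 12)))  # detRangeCfg -1 10 20 50 80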

Save configuration

saveCfg 0x45670123 0xCDEF89AB 0x956128C6 0xDF54AC89

(Re)start sensor operations

sensorStart

Configuration

Entrance: detRangeCfg -1 0 12

Upstairs: detRangeCfg -1 . .

Technical architecture of my digital home

This article describes the components that build up the architecture of my (digital) home.

  • Logical configuration (ArangoDB): the logical configuration includes the areas (rooms) in the house, the sensors and actors located in those areas, and the “scenes” that can be applied to actors and areas.
  • Use-case controllers (custom .NET Core worker services): I’ve built a separate microservice for each use case, for example controlling lights, planning the washer, dishwasher, and dryer, and planning the generation of domestic hot water.
  • Digital Twin (Eclipse Ditto): the Digital Twin stores the state of all connected sensors and is used by the use-case controllers to consume sensor states or push commands back down to the sensors.
  • Messaging (Slack): I’m using Slack as a messaging service to interact with my home. Slack informs me on specific state changes in the Digital Twin and I can feed commands in Slack to influence the behavior of my home. I try to minimize this as most decisions should be fully automated.
  • Sensor services (custom .NET Core worker services): the sensor services read sensor states via open or proprietary protocols. They are also responsible for pushing commands down to actors.
  • Sensor history (InfluxDB): InfluxDB stores the (relevant) history of the Digital Twin as well as the history from the different energy services that feed directly into InfluxDB.
  • Sensor configuration (ArangoDB): ArangoDB stores the information needed to communicate with local and cloud-based sensors.
  • Visualisation (Grafana): I’m using Grafana as a visualisation front-end.

Visualization of the architecture of my digital home.
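As an illustration of the interplay between the controllers and the Digital Twin: a use-case controller can read a sensor state from Ditto’s HTTP API with a single request (the host, credentials, and thing ID below are made up):

import requests

DITTO = "https://ditto.home.local/api/2"  # hypothetical host
THING = "home:livingroom"                 # hypothetical thing ID

# Read one property of the "temperature" feature of a thing.
resp = requests.get(
    f"{DITTO}/things/{THING}/features/temperature/properties/value",
    auth=("controller", "secret"))
resp.raise_for_status()
print(resp.json())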

Book summary: Farther, Faster, and Far Less Drama

Take-aways

1. Orient honestly

To do this ask:

  • Where are we now?
  • Are we all in the same place? Do we have the same understanding?
  • What makes this moment complicated?
  • What is the uncomfortable truth that I’m not allowing myself to see or accept?

2. Value outcomes

When you value plans more than outcomes it’s easy to mistake effort for achievement.

3. Leverage the brains

You only need three kinds of people in any decision meeting: the people with the authority to say yes, the people with the information or expertise, and the people who have to live with the outcome. If this means leaving out lots of people from a decision team, then think of it as representative rather than direct democracy, says Janice.

4. Make durable decisions

Eliminate waste by changing your standards. Two kinds of waste come into decision making, says Janice: decisions are either very slow, or made quickly but chaotically. In the latter case they may be made unilaterally, by the wrong people, or with no commitment from others. While such decisions may be fast, productivity will be slow, Janice says.

Durable decisions should balance speed and strength. Two sentences to look out for here are “Can you live with it?” and “Does it move us forward?”, says Janice; these are different from “Do we all agree?” and “Is it the best?”.

“I think most drama comes from difficulties in decision-making,” Janice concludes. “I think decision making is a leadership skill. If you can facilitate great decision-making in a great decision-making process by bringing this more realistic framing to it, then you’re going to improve the culture of your organisations tremendously.”

Summary

Orient honestly

Vision

Point A and point B. Where are we now? Where do we want to be? Relates to Vision.

What makes a good point A? You need to be brutally honest about your status quo, including any disorder or messiness. Be sure to capture your hardest-won, most durable markers of progress, as well as points of tension, messiness, and uncertainty.

What makes a good point B? Point B should be specific enough to guide your actions, but not so specific that it points you towards a single possible outcome.

Outcomes

Concrete and measurable goals are mostly framed as outputs: writing a book, hitting a sales goal, losing weight. When we orient around outcomes we define what we want to have, change, or be, and put loose parameters around that. We decide on an ideal end-state that could come to pass in a variety of ways.

Desired outcome                                   Possible path                                  Possible result
Be happy going to work in the morning.            Quit job/company.                              New job & company.
Have less stress, drama, conflict.                Find a new situation; talk with supervisors.   New role at same company; new project with different team.
Manage or mitigate the dysfunctional situation.   Build relationship with coworkers.             Support within current role.

A tool for finding the truth: Five times why

Self-deception: we don’t know everything. What do I know? How do I know it?

How do we know whether we’re being honest with ourselves?

1  WHY is there no coffee in the coffeepot?                   Because we’re out of coffee filters.
2  WHY are there no coffee filters?                           Because nobody bought them at the store.
3  WHY didn’t someone buy them on the last shopping trip?     Because they weren’t on the shopping list.
4  WHY weren’t they on the shopping list?                     Because there were still some left when we wrote the list.
5  WHY were there still a few left when we wrote the list?    Uhhhh?

A hyper dynamic world

The OODA loop makes it easier for individuals to decide which actions to take in dynamic situations.

OODA loop example:

Observe: based on many inputs, each day an air mission plan is released that describes what each combat plane will be doing the next day.

Orient: Once the plan is released, a tanker planner team evaluates the needs, assesses where tankers will be needed, and at what exact times in order to keep the combat planes on-mission.

Decide: tankers and personnel are allocated to each of the specified coordinates.

Act: each morning, a tanker plan is submitted for the day’s sorties.

Value outcomes

Everyone takes it on faith that if we execute our little portion of the plan according to the items that were specified up front, then we will necessarily have done the right thing and have delivered the value. When you value planning above outcomes, it’s easy to conflate effort with achievement.

Outcome oriented roadmap

  1. If you spend time doing it, it’s an activity.
  2. The tangible result of that time spent is an output.
  3. The reason for doing those two things is the outcome.

People involved in creating an OORM:

  1. People with the authority to say yes;
  2. People who have relevant knowledge;
  3. People who have to live with the outcome.

Tests to bulletproof the OORM:

  1. Are the outcomes clearly articulated?
  2. How good are your metrics?
  3. Have you articulated risks and mitigations?
  4. Does your roadmap represent an aligned viewpoint?
  5. Is your roadmap easy to discover and consume?

Making outcomes actionable

Provide just enough direction.

OGSM: Objective, Goals, Strategies, Measures.

OKRs: Objectives and Key Results.

V2MOM: Vision, Values, Methods, Obstacles, and Measures.

Leverage the brains

Leverage the brain in three steps:

  1. Frame the problem
  2. Get the right people in the room
  3. Respect your collaborators

Externalize, Organize, Focus

Externalize (go wide): the first step is putting as much of the situation as possible into physical space, so that people can literally see what’s going on in each other’s heads. In most cases, externalizing means writing down our thoughts and ideas where others can see and understand them. This can be done, for example, in a narrative (see Narratives).

Organize (now, decide): next, we make sense of the undifferentiated mess of items we’ve just externalized. The most common approach to organizing is to group items, ideas or subjects into logical sets, for example, by type, urgency, cost, or any other criterion.

Focus (prepare for action): we need to decide what matters based on the organizing. We need to interpret the significance of each group and let that guide us in a decision.

Reinventing “the meeting”

Definition in 1976: a meeting is a place where the group revises, updates, and adds to what it knows as a group. There were no computers, email, text, or video in 1976, so this made sense: our options for communication and collaboration were very limited, and the meeting was the only efficient format.

Tips on how to run a modern meeting:

  • Switch up the language: move from “agenda” or “topics to discuss” to “work plan”, and stop using “meeting” in favor of “decision session” or “work session”. These small language tweaks set different expectations for how the time will be spent.
  • Frame the meeting with an AB3: explain point A (start of the meeting) and point B (end of the meeting), and three agreements / methods that you will use to move from A to B.

Make durable decisions

Questions:

  1. Is this a decision we can all live with?
  2. If we went in this direction, is that something we could all live with?

These questions widen the aperture of possibility, while reducing the chances that someone will back out later.

Decision-making process

Steps in the decision-making process:

  1. Notice there is a decision to make;
  2. Gather inputs (costs, potential solutions, time frame);
  3. Weighing options (pros and cons);
  4. The moment of choice;
  5. Resourcing (who is going to execute?)
  6. Execution.

It’s fine to move back into a previous step if this feels right.

Durable decisions

  1. Avoid absolutes (find a good enough decision instead of the “right” decision);
  2. Get the facts right (to move quickly, first gather facts and inputs);
  3. Consent, not consensus (agree to disagree);
  4. The right people, but not too many (people with the authority to say yes, people who have subject-matter knowledge, people who have to live with the outcome);
  5. Reduce the scope (break down large decisions into incremental moves);
  6. Mental agility and growth mindset required (if stakeholders come with their mind made up, it will be a difficult discussion).

UBAD

Understanding: stakeholders need to understand what it is you’re proposing. Any decision that is made without understanding is highly vulnerable;

Belief: once a stakeholder understands your thinking, you need to help them believe in it. That means hearing objections, exploring concerns, and satisfying curiosity;

Advocacy: when someone is putting their social capital on the line to spread the idea, you know that they must truly support it;

Decision-making: are stakeholders making decisions in support of the idea? This represents the most enduring form of support.

Narratives

Writing narratives forces me to focus on the content and the story. The writing process itself triggers a deep thought process and helps me to iteratively improve the story.

A narrative can have different structures. One example is:

  • In the past it was like this …
  • Then something happened …
  • So now we should do this …
  • So the future might be like this …

Another example of the structure of a narrative:

  1. Context (or question)
  2. Approach (Approaches to answer the question – by whom, by which method, and their conclusions)
  3. Differentiation (How is your attempt at answering the question different or same from previous approaches? Also compared to competitors)
  4. Now what? (that is, what’s in it for the customer, the company, and how does the answer to the question enable innovation on behalf of the customer?)
  5. Appendix

So, why is this 4-6 page memo concept effective in improving meeting outputs?

  • It forces deep thinking. The 6-page data-rich narratives that are handed out are not easy to write. Most people spend weeks preparing them in order to be clear. Needless to say, this forces incredible, deep thinking. The document is intended to stand on its own. Amazon’s leaders believe the quality of a leader’s writing is synonymous with the quality of their thinking.
  • It respects time. Each meeting starts with silent reading time. When I asked why they don’t send out the narratives in advance, the response was, “we know people don’t have the time to read the document in advance.”
  • It levels the playing field. Think of the introverts on your team who rarely speak during a meeting. Introverted leaders at Amazon “speak” through these well-prepared memos. They get a chance to be heard, even though they may not be the best presenter in the organization.
  • It leads to good decisions. Because rigorous thinking and writing is required – all Amazon job candidates at a certain level are required to submit writing samples, and junior managers are offered writing style classes – team members are forced to take an idea and think about it completely.
  • It prevents the popularity bias. The logic of a well thought out plan speaks louder than the executive who knows how to “work the halls” and get an idea sold through influence rather than solid, rigorous thinking and clear decision making.

“I didn’t realize how deeply the document culture runs at Amazon. Most meetings have a document attached to them, from hiring meetings to weekly business reviews. These documents force us to think more clearly about issues because we have to get to a clear statement of the customer problem. They also enable us to tackle potentially controversial topics. As the group sits and reads, each individual gets time to process their feelings and come to a more objective point of view before the discussion starts. Documents can be very powerful.”

Chris, Senior Manager, Amazon Advertising

Minimizing grid consumption

Context

Roughly 20% of Dutch homes are equipped with solar panels; in Germany the adoption rate is roughly 11%. These homes produce power locally, but they hardly use it at the moment it is being produced. Only 30% of the produced energy is consumed directly, which means that 70% of the energy consumption still comes from the grid.

Metrics

Considering my own installation, the numbers are even worse. Only ~20% of the locally produced energy (10 MWh) is directly consumed (2 MWh). The real problem is that almost 80% of the consumed energy comes from the grid.

Vision

In 2026, three years from now, I want to reduce my grid consumption to 50% (on average, per year).

I have no idea if this is achievable, but it’s good to have a concrete goal. If I figure out I can achieve this in one year, I will increase my ambition.

In order to achieve this vision, I need to increase the direct consumption which will result in a decrease of overall grid consumption.

Steps along the way

To reach this vision, I need to understand at which times energy is already directly consumed and which consumers are causing this consumption. Additionally, I need to understand when energy is consumed from the grid and which consumers are causing this consumption. I’m hoping that, by influencing the consumption patterns of the largest consumers, I can make the first large step.

Year           Grid consumption   Direct consumption
2023 (today)   80%                20%
2024           60%                40%
2025           53%                47%
2026           50%                50%

Overview of yearly goals to reach my vision in 2027.

I expect that the first major improvements will already start paying off in 2024: I’m aiming for an increase of 20 percentage points over 2023. After that, the real challenge will probably start.

First analysis of available data

In the first half of the year my system was still in development. After that it became more stable, which has resulted in stable data collection since June.

When is direct consumption low?

At this moment I consider one interesting data point per week:

  • Direct consumption (%) = direct consumption / total production (solar).

The direct consumption (%) per week, in red. It’s clearly visible that when the solar production goes down in the winter months, the direct consumption goes up.

Considering the available data I should start trying to focus on the summer months when solar production is high.

Which are the largest consumers?

Let’s have a look at the usual suspects first: the car, heatpump, washer, and dishwasher.

The car takes roughly 35% of the yearly consumption. The (combined) heatpump follows with 28%.

Next steps

Considering my vision and the available data, I should focus on moving grid consumption for the car and heatpump to direct consumption, especially in the summer months. I will share concrete objectives and key results for this in a next post.

InfluxDB api unavailable after x attempts

The InfluxDB start-up script checks whether the HTTP service is running by trying to connect to it. However, I have disabled HTTP in my configuration and use HTTPS. This behavior is also described on GitHub.

You can work around this issue by adjusting the InfluxDB service configuration file (/etc/systemd/system/influxd.service). The commented-out lines are the old configuration, replaced by the active lines above them.

[Unit]
Description=InfluxDB is an open-source, distributed, time series database
Documentation=https://docs.influxdata.com/influxdb/
After=network-online.target

[Service]
User=influxdb
Group=influxdb
LimitNOFILE=65536
EnvironmentFile=-/etc/default/influxdb
ExecStart=/usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS
#ExecStart=/usr/lib/influxdb/scripts/influxd-systemd-start.sh
KillMode=control-group
Restart=on-failure
Type=simple
#Type=forking
PIDFile=
#PIDFile=/var/lib/influxdb/influxd.pid

[Install]
WantedBy=multi-user.target
Alias=influxd.service
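After editing the unit file, reload systemd and restart the service so the new configuration takes effect:

systemctl daemon-reload && systemctl restart influxd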