In this case we sort based on memory and filter on dotnet8 processes:
top -b -c -w 200 -o %MEM -d 30 | grep dotnet8 >> top-output.log
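Here, -b runs top in batch mode, -c shows the full command line of each process, -w 200 widens the output, -o %MEM sorts by memory usage, and -d 30 sets a 30-second refresh interval.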
Last year, I described my view on home energy consumption and my personal goals here: minimizing grid consumption.
Today, the summer of 2024 has passed and we’re moving into the “dark” season. It’s time to look ahead towards the end of the year, look back at my previous goals and adjust them where necessary.
Before we do that, let's have another look at my ambitious goals for the coming years:
| Year | Grid consumption | Direct consumption |
| --- | --- | --- |
| 2023 (today) | 80% | 20% |
| 2024 | 60% | 40% |
| 2025 | 53% | 47% |
| 2026 | 50% | 50% |
At the end of 2023 roughly 20% of the energy we consumed was directly consumed from our solar panels. Today, in the beginning of October 2024, our direct consumption of the year 2024 (year-to-date) is 28.4%.
The start of the year seemed good. However, at the beginning of the year the solar panels deliver much less energy than in the summer months, so in those "dark" months a larger share of their limited production is consumed directly. In the end, I'm far away from the 40% I was hoping to achieve this year.
This was not totally unexpected, but still a bit of a disappointment. Especially considering the changes I’ve worked on during the year:
I was already monitoring the EPEX spot prices and predicting the production of solar energy. I used both data points to schedule energy consumption, but the algorithm was (too) complex to maintain and wasn't always producing optimal results. I've created a price index that combines the EPEX price and the expected solar production into a single value.
To use the price index I needed to adjust the scheduling process. This was a difficult change, because appliance-specific micro services were doing their own planning. This meant I needed to change the "controllers" for the HomeConnect appliances, heatpump, and car. I took this as an opportunity to create a separate scheduling micro service. The controllers now ask the scheduling service for the best time to start an appliance-specific program. The scheduler calculates the best start time based on the expected program duration, consumption, and time to complete. The image below visualizes the result of the new scheduler process. Most programs are scheduled during the daily solar peak, when the price index is at its lowest point. However, when I get home from work at the end of the afternoon and need to start driving early in the morning again, the scheduler uses the price index to choose the best time to charge the car during the night.
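As an illustration, a minimal sketch of the idea in Python (the names, the weighting, and the hourly granularity are my assumptions for this example, not the actual implementation):
# Hypothetical price index: a low EPEX price and a large expected solar surplus both lower the index.
def price_index(epex_eur_per_kwh, expected_solar_kw, expected_base_load_kw, weight=0.05):
    surplus_kw = max(expected_solar_kw - expected_base_load_kw, 0.0)
    return epex_eur_per_kwh - weight * surplus_kw

# Pick the start hour that minimizes the summed index over the program duration,
# while still finishing before the requested completion hour. Assumes hour 0 is
# "now" and index_per_hour holds at least complete_by_hour entries.
def best_start_hour(index_per_hour, duration_hours, complete_by_hour):
    candidates = range(0, complete_by_hour - duration_hours + 1)
    return min(candidates, key=lambda start: sum(index_per_hour[start:start + duration_hours]))
A controller would then, for example, call best_start_hour(index, 3, 18) to schedule a three-hour dishwasher program that must be done by 18:00.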
When I don’t know when I need the car I just connect it to the socket. The controller then considers the actual solar production and redirects this energy to the car. To ensure a stable charging process I needed to build in some delays. The curve is not perfect, but I think it’s close enough.
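A rough sketch of such a surplus-charging loop, again in Python, with thresholds and timings that are illustrative assumptions of mine:
import time

MIN_SURPLUS_W = 1400  # assumed minimum power at which the charger can operate
HOLD_S = 300          # minimum number of seconds between state changes

def charge_on_surplus(read_surplus_w, set_charging):
    charging, last_switch = False, 0.0
    while True:
        surplus = read_surplus_w()  # current solar production minus household load
        wanted = surplus > (0 if charging else MIN_SURPLUS_W)  # hysteresis band
        if wanted != charging and time.time() - last_switch > HOLD_S:
            charging, last_switch = wanted, time.time()
            set_charging(charging)
        time.sleep(30)  # sample every 30 seconds to avoid rapid toggling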
There is some discussion on the impact of "smart charging" on power quality. I need to understand this a bit better and will probably need to adjust my algorithm.
Why am I stuck at 28%? To answer this question I need to look at the two largest consumers: the car and the heatpump.
Let's start with the car: the direct consumption of the car is 31%. It will be very difficult to improve this further, as it depends on a combination of my work schedule and the weather. The situation has improved since the beginning of the year. I'm expecting the year end to be difficult, as the solar production will go down in the last months of the year. I might be able to find small optimizations in the scheduler over the next months.
The heatpump is even more complex as it uses two energy sources (the compressor and the COP1 heater) and has three different goals: heating, cooling, and domestic hot water. For this analysis I have only looked at the difference between the compressor and the COP1 heater.
The digital twin of the heatpump was already operational at the end of 2023. It gives me full control of when domestic hot water, disinfection, heating and cooling are enabled.
I’ve started to work on a couple of things:
The domestic hot water is now only generated during the day, based on the price index. The scheduler does not wait until the hot water is “empty”, but starts generating hot water if it expects a sunny day to generate a buffer for the day after.
The disinfection is typically planned once a week. I needed to add a bit of slack: depending on the predicted availability of solar energy, it now schedules the best time for a disinfection run between 6 and 9 days after the last one. Additionally, the disinfection power is adjusted based on the maximum available solar energy. If a maximum of 3kW of solar energy is expected, the disinfection power is adjusted down from 9kW to 3kW, its lowest value. The disadvantage is that disinfection takes longer, but that has not proven to create new problems.
Last winter, the heatpump was running the entire night to heat the house. Then, on sunny days, the sun started heating up our living room through our glass "walls", resulting in an inside temperature of 25 degrees Celsius. The energy consumed by the heatpump in the night and morning was, more or less, wasted. The controller now looks at the predicted temperature and solar radiation for the upcoming day and only turns on the heatpump if a combination of low temperature and low solar radiation is expected. Cooling is only enabled if the inside temperature reaches a configured threshold. This mechanism should prevent wasted energy, but it is not yet optimized for generating heat at the best time of day.
This has resulted in 20% direct consumption, which is far from the target.
The largest problem is heating in the winter: hardly any solar energy is produced, so the direct consumption is low.
When I only look at the direct consumption over the last 90 days (7 July – 7 October), the situation looks a lot better.
I don’t think the 40% direct consumption target is realistic for 2024. I probably need another year to come to 40%.
Over the next months I need to start understanding in which situations the heatpump is consuming energy from the grid. Additionally, I will slowly start looking into the consumption and planning of the WTW (ventilation) unit, fridge, and freezer. At the same time I need to prevent the risk of food poisoning caused by (control) software failures.
Ensure the /etc/hosts files are identical on all three machines:
192.168.20.218 K8S01.verhaeg.local K8S01
192.168.20.219 K8S02.verhaeg.local K8S02
192.168.20.220 K8S03.verhaeg.local K8S03
Disable swap:
systemctl --type swap
UNIT LOAD ACTIVE SUB DESCRIPTION
dev-sda3.swap loaded active active Swap Partition
systemctl mask dev-sda3.swap
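Comment out any swap entries in /etc/fstab so swap stays disabled after a reboot: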
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
reboot
Prepare the installation of containerd:
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
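Apply the sysctl settings immediately, without waiting for a reboot:
sysctl --system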
Install containerd:
apt-get update && apt-get install containerd -y
Configure containerd so that it works with Kubernetes:
containerd config default | tee /etc/containerd/config.toml >/dev/null 2>&1
Both the kubelet and the underlying container runtime need to interface with control groups to enforce resource management for pods and containers, such as CPU/memory requests and limits. To interface with control groups, the kubelet and the container runtime need to use the same cgroup driver. Set the cgroup driver to systemd by setting SystemdCgroup = true in the section below, on all nodes:
nano /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true
Restart and enable containerd on all nodes:
systemctl restart containerd && systemctl enable containerd
Add Kubernetes apt repository:
apt-get install curl gpg -y
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
Install Kubernetes tools:
apt-get update && apt-get install kubelet kubeadm kubectl -y && apt-mark hold kubelet kubeadm kubectl
Install the Kubernetes cluster with kubeadm. Kubelet no longer accepts most command-line options (they are deprecated). Instead, I suggest creating a configuration file, say 'kubelet.yaml', with the following content.
Create the kubelet.yaml file on the master node (K8S01):
nano kubelet.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.28.11" # Replace with your desired version; it must match the kubeadm/kubelet version installed above
controlPlaneEndpoint: "K8S01"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
Initialise the cluster:
kubeadm init --config kubelet.yaml --upload-certs
Result:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join k8s01.verhaeg.local:6443 --token 965cpz.xvmun07kjrezlzg9 \
--discovery-token-ca-cert-hash sha256:3ea38e43e5304e0124e55cd5b3fb00937026a2b53bc9d930b6c2dab95482225a \
--control-plane --certificate-key e48ada5b6340b8e217bcf4c7c5427ae245704be43eee46c07bfa0b6e1c4abdd8
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s01.verhaeg.local:6443 --token 965cpz.xvmun07kjrezlzg9 \
--discovery-token-ca-cert-hash sha256:3ea38e43e5304e0124e55cd5b3fb00937026a2b53bc9d930b6c2dab95482225a
To start interacting with the cluster, run the following commands on the master node:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Let the other nodes join the cluster as workers:
kubeadm join k8s01.verhaeg.local:6443 --token bcd2xw.32pzfgroijg1sax3 \
--discovery-token-ca-cert-hash sha256:0c0f18cf32bc2342024efce9313e0e4fcf8a2b87275fd33e9ceb853d77b41f8b
Result:
root@K8S01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 NotReady control-plane 62s v1.28.11
k8s02 NotReady <none> 26s v1.28.11
k8s03 NotReady <none> 21s v1.28.11
Install Calico (container networking and security):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
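After a minute or two, all nodes should report Ready:
kubectl get nodes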
Go to the directory of the .NET project and run dotnet publish:
dotnet publish --os linux --arch x64 /t:PublishContainer -p:ContainerArchiveOutputPath=./images/xxx.tar.gz
Additional information on Stack Overflow: https://stackoverflow.com/questions/58916308/kubernetes-deploy-with-tar-docker-image
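To make such an archive available on a node running containerd, something like the following should work (a sketch, assuming the k8s.io namespace that Kubernetes uses and the placeholder file name from above):
gunzip ./images/xxx.tar.gz
ctr -n k8s.io images import ./images/xxx.tar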
Manual: SEN0395 Manual
Connect to the device using a serial port (115200 baud rate, 1 stop bit, 8 data bits, no parity bit, no flow control):
cu -l /dev/ttyS0 -s 115200
The sensor should start showing information on the screen:
The 0 should change into 1 when the sensor detects human presence.
Send the sensorStop command to stop the sensor and enter configuration mode:
$JYBSS,0, , , *
sensorStop
Done
leapMMW:/>
This seems to be possible, but I cannot find any documentation on the related commands.
Table source: SEN0395 manual
| Example | Command |
| --- | --- |
| (Default configuration) Sensing distance 0 m to 3 m (0 m = 0 × 0.15 m, 3 m = 20 × 0.15 m) | detRangeCfg -1 0 20 |
| Sensing distance 1.5 m to 3 m (1.5 m = 10 × 0.15 m, 3 m = 20 × 0.15 m) | detRangeCfg -1 10 20 |
| Sensing distance 1.5 m to 3 m and 7.5 m to 12 m (7.5 m = 50 × 0.15 m, 12 m = 80 × 0.15 m) | detRangeCfg -1 10 20 50 80 |
| Sensing distance 1.5 m to 3 m, 7.5 m to 12 m, and 13.5 m to 15 m (13.5 m = 90 × 0.15 m, 15 m = 100 × 0.15 m) | detRangeCfg -1 10 20 50 80 90 100 |
| Sensing distance 1.5 m to 3 m, 7.5 m to 12 m, 13.5 m to 15 m, and 15.75 m to 16.5 m (15.75 m = 105 × 0.15 m, 16.5 m = 110 × 0.15 m) | detRangeCfg -1 10 20 50 80 90 100 105 110 |
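Save the configuration and restart the sensor: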
saveCfg 0x45670123 0xCDEF89AB 0x956128C6 0xDF54AC89
sensorStart
Entrance: detRangeCfg -1 0 12
Upstairs: detRangeCfg -1 . .
This article describes the components that make up the architecture of my (digital) home.
To do this ask:
When you value plans more than outcomes it’s easy to mistake effort for achievement.
You only need three kinds of people in any decision meeting: the people with the authority to say yes, the people with the information or expertise, and the people who have to live with the outcome. If this means leaving out lots of people from a decision team, then think of it as representative rather than direct democracy, says Janice.
Eliminate waste by changing your standards. Two kinds of waste come into decision making, says Janice – decisions can either be very slow or made quickly but chaotically. In the latter case this is because they may be made unilaterally, made by the wrong people, or with no commitment from others. While such decisions may be fast, productivity will be slow, Janice says.
Durable decisions should balance speed and strength. Two sentences to look out for here are "Can you live with it?" and "Does it move us forward?", says Janice; these are different from "Do we all agree?" and "Is it the best?".
“I think most drama comes from difficulties in decision-making,” Janice concludes. “I think decision making is a leadership skill. If you can facilitate great decision-making in a great decision-making process by bringing this more realistic framing to it, then you’re going to improve the culture of your organisations tremendously.”
Point A and point B. Where are we now? Where do we want to be? Relates to Vision.
What makes a good point A? You need to be brutally honest about your status quo. Include any disorder or messiness. Be sure to capture your hardest-won, most durable markers of progress, as well as points of tension, messiness, and uncertainty.
What makes a good point B? Point B should be specific enough to guide your actions, but not so specific that it points you towards a single possible outcome.
Concrete and measurable goals are mostly framed as outputs: writing a book, hitting a sales goal, losing weight. When we orient around outcomes we define what we want to have, change, or be, and put loose parameters around that. We decide on an ideal end-state that could come to pass in a variety of ways.
| Outcome | One possible output | Another possible output |
| --- | --- | --- |
| Be happy going to work in the morning. | Quit job/company. | New job & company. |
| Have less stress, drama, conflict. | Find a new situation; talk with supervisors. | New role at same company; new project with different team. |
| Manage or mitigate the dysfunctional situation. | Build relationship with coworkers. | Support within current role. |
Self-deception: we don't know everything. What do I know? How do I know it?
How do we know whether we're being honest with ourselves?
| # | Question | Answer |
| --- | --- | --- |
| 1 | WHY is there no coffee in the coffeepot? | Because we're out of coffee filters. |
| 2 | WHY are there no coffee filters? | Because nobody bought them at the store. |
| 3 | WHY didn't someone buy them on the last shopping trip? | Because they weren't on the shopping list. |
| 4 | WHY weren't they on the shopping list? | Because there were still some left when we wrote the list. |
| 5 | WHY were there still a few left when we wrote the list? | uhhhh? |
The OODA loop makes it easier for individuals to decide which actions to take in dynamic situations.
OODA loop example:
Observe: Based on many inputs, each day an air mission plan is released that describes what each combat plane will be doing the next day.
Orient: Once the plan is released, a tanker planner team evaluates the needs, assesses where tankers will be needed, and at what exact times, in order to keep the combat planes on-mission.
Decide: Tankers and personnel are allocated to each of the specified coordinates.
Act: Each morning, a tanker plan is submitted for the day's sorties.
Everyone takes it on faith that if we execute our little portion of the plan according to the items that were specified up front, then we will necessarily have done the right thing and have delivered the value. When you value planning above outcomes, it’s easy to conflate effort with achievement.
People involved in creating an OORM:
Tests to bulletproof the OORM:
Provide just enough direction.
OGSM: Objective, Goals, Strategies, Measures.
OKRs: Objectives, Key Results.
V2MOM: Vision, Values, Methods, Obstacles, and Measures.
Leverage the brain in three steps:
Externalize (go-wide): the first step is putting as much of the situation as possible into physical space, so that people can literally see what's going on in each other's heads. In most cases, externalizing means writing down our thoughts and ideas where others can see and understand them. This can be, for example, in a narrative (Narratives).
Organize (now, decide): next, we make sense of the undifferentiated mess of items we’ve just externalized. The most common approach to organizing is to group items, ideas or subjects into logical sets, for example, by type, urgency, cost, or any other criterion.
Focus (prepare for action): we need to decide what matters based on the organizing. We need to interpret the significance of each group and let that guide us in a decision.
Definition in 1976: a meeting is a place where the group revises, updates, and adds to what it knows as a group. We didn't have computers, email, text, or video in 1976, so this made sense. Our options for communication and collaboration were very limited, so this was the only efficient format.
Tips on how to run a modern meeting:
Questions:
These questions widen the aperture of possibility, while reducing the chances that someone will back out later.
Steps in the decision-making process:
It’s fine to move back into a previous step if this feels right.
Understanding: stakeholders need to understand what it is you’re proposing. Any decision that is made without understanding is highly vulnerable;
Belief: once a stakeholder understands your thinking, you need to help them believe in it. That means hearing objections, exploring concerns, and satisfying curiosity;
Advocacy: when someone is putting their social capital on the line to spread the idea, you know that they must truly support it;
Decision-making: are stakeholders making decisions in support of the idea? This represents the most enduring form of support.
Writing narratives forces me to focus on the content and the story. The writing process itself triggers a deep thought process and helps me to iteratively improve the story.
A narrative can have different structures. One example is:
Another example of the structure of a narrative:
So, why is this 4-6 page memo concept effective in improving meeting outputs?
“I didn’t realize how deeply the document culture runs at Amazon. Most meetings have a document attached to them, from hiring meetings to weekly business reviews. These documents force us to think more clearly about issues because we have to get to a clear statement of the customer problem. They also enable us to tackle potentially controversial topics. As the group sits and reads, each individual gets time to process their feelings and come to a more objective point of view before the discussion starts. Documents can be very powerful.”
Chris, Senior Manager, Amazon Advertising
Roughly 20% of Dutch homes are equipped with solar panels; in Germany the adoption rate is roughly 11%. These homes produce power locally, but they hardly use it at the moment it's being produced. Only 30% of the produced energy is consumed directly, which means that 70% of the energy consumption still comes from the grid.
For my own installation the numbers are even worse. Only ~20% of the locally produced energy (10 MWh) is directly consumed (2 MWh). The real problem is that almost 80% of the consumed energy comes from the grid.
In 2026, three years from now, I want to reduce my grid consumption to 50% (on average, per year).
I have no idea if this is achievable, but it’s good to have a concrete goal. If I figure out I can achieve this in one year, I will increase my ambition.
In order to achieve this vision, I need to increase the direct consumption which will result in a decrease of overall grid consumption.
To reach this vision, I need to understand at which times energy is already directly consumed and which consumers are causing this consumption. Additionally, I need to understand when energy is consumed from the grid and which consumers are causing this consumption. I’m hoping that, by influencing the consumption patterns of the largest consumers, I can make the first large step.
| Year | Grid consumption | Direct consumption |
| --- | --- | --- |
| 2023 (today) | 80% | 20% |
| 2024 | 60% | 40% |
| 2025 | 53% | 47% |
| 2026 | 50% | 50% |
I expect that the first major improvements will already start paying off in 2024. I'm aiming for a 20-percentage-point increase in 2024 over 2023. Then the real challenge will probably start.
In the first half of the year my system was still in development. After that it became more stable, which has resulted in stable data collection since June.
At this moment I consider one interesting data point per week:
Considering the available data, I should focus on the summer months, when solar production is high.
Let's have a look at the usual suspects first: the car, heatpump, washer, and dishwasher.
Considering my vision and the available data, I should focus on moving grid consumption for the car and heatpump to direct consumption, especially in the summer months. I will share concrete objectives and key results for this in a future post.
I want to move my energy measurement data to another InfluxDB database on the same server to create a new downsampling policy.
select * into Verhaeg_Energy..[measurement_name_destination] from Verhaeg_IoT..[measurement_name_source] group by *
Be aware of the .. between the database name and the measurement name: it means the query uses the database's default retention policy.
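For reference, the statement can also be run non-interactively with the InfluxDB 1.x CLI (a sketch using the placeholder names from above):
influx -execute 'select * into Verhaeg_Energy..[measurement_name_destination] from Verhaeg_IoT..[measurement_name_source] group by *'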