Categories
kubernetes
- My approach to Kubernetes installation & management on bare metal
Tags: kubernetes, flatcar, coreos, hypriotos, kubespray, k3s, pulumi, helmsman, helm
Installation
- x86_64:
  - CoreOS (now Flatcar Container Linux) as a Linux distro
  - Kubespray as a Kubernetes installer
  - MetalLB and the NGINX Ingress Controller for incoming traffic
- arm (Raspberry Pi 3B):
  - HypriotOS as a lightweight, container-oriented, Debian-based Linux
  - k3s as a lightweight Kubernetes distribution with sqlite instead of etcd
  - MetalLB and Traefik v1 for incoming traffic
Configuration
- Pulumi for everything except Helm charts (see the Pulumi sketch right after this list)
- Helmsman for Helm charts
- kubie for using multiple Kubernetes contexts simultaneously in different terminals
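For a flavour of what the Pulumi side looks like, here is a minimal sketch of declaring the MetalLB address pool from Python. It assumes the older, ConfigMap-based MetalLB configuration (pre-v0.13) and a made-up address range; it is an illustration, not the exact program I run.

```python
import textwrap

import pulumi_kubernetes as k8s

# Hypothetical example: MetalLB (ConfigMap-based configuration, pre-v0.13)
# gets a layer2 address pool on the home LAN; the NGINX Ingress Controller
# then sits behind a LoadBalancer Service that takes an IP from this pool.
metallb_config = k8s.core.v1.ConfigMap(
    "metallb-config",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name="config",
        namespace="metallb-system",
    ),
    data={
        "config": textwrap.dedent(
            """\
            address-pools:
            - name: default
              protocol: layer2
              addresses:
              - 192.168.1.240-192.168.1.250  # placeholder range
            """
        )
    },
)
```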
- Automating Helm applications installation and upgrade with Helmsman
Tags: kubernetes, helmsman, helm, sops, helm-secrets, helm-whatup
As I’ve mentioned in my post about Pulumi, I don’t like the helm template approach. In my opinion, it’s better to stick with the tool rather than mimic its behaviour. In the case of Helm, “sticking with the tool” also means out-of-the-box support for the standard helm tool, including its plugins.
My tool of choice is Helmsman.
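To make that concrete, this is roughly what a Helmsman desired state file looks like: a TOML sketch with made-up context, repo, chart and file names, not my actual configuration.

```toml
# Hypothetical desired state file: `helmsman -f dsf.toml --apply`
# drives the regular `helm` binary (and its plugins) under the hood.
[settings]
kubeContext = "home-cluster"   # assumed context name

[helmRepos]
prometheus-community = "https://prometheus-community.github.io/helm-charts"

[namespaces]
  [namespaces.monitoring]

[apps]
  [apps.prometheus]
    namespace = "monitoring"
    enabled = true
    chart = "prometheus-community/prometheus"
    version = "15.18.0"
    valuesFile = "values/prometheus.yaml"   # assumed path
```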
- Kubernetes state management with Pulumi and Python
Tags: kubernetes, pulumi, python, helm
I like the Kubernetes way of declarative workload configuration, but handling cluster state through dozens or hundreds of YAML files is impractical.
Of course, one can just combine them all into a single uber-YAML. But the harsh reality is that, although Kubernetes by design can and will apply this configuration asynchronously and the cluster will eventually converge to the desired state, that “eventually” might be equal to infinity.
There are cases where order matters, for instance when new CRD definitions are added and then new objects of that kind are declared.
Another aspect is complexity, which can be encapsulated by tools such as Helm. While Helm is a good solution for installing third-party apps, it’s not necessarily the right choice for your own services, or for lightweight overall cluster configuration.
And one more thing: I enjoy the Kubernetes architecture, even (and especially!) the fact that numerous abstractions are needed to “canonically” expose a single container to the rest of the world. But that doesn’t mean I enjoy breaking the DRY principle and copy-paste-modifying the same YAML over and over.
So… Pulumi to the rescue!
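Here is a hedged sketch of what that looks like in practice: a tiny Pulumi Python program that wraps the repetitive Deployment-plus-Service boilerplate into a function (no more copy-paste-modify) and uses depends_on to make the CRD-before-custom-resource ordering explicit. The app names, images, CRD manifest path and the cert-manager ClusterIssuer are illustrative assumptions, not taken from my actual stack.

```python
import pulumi
import pulumi_kubernetes as k8s


def web_app(name: str, image: str, port: int) -> None:
    """DRY helper: one function instead of copy-pasted Deployment/Service YAML."""
    labels = {"app": name}
    k8s.apps.v1.Deployment(
        name,
        spec=k8s.apps.v1.DeploymentSpecArgs(
            replicas=1,
            selector=k8s.meta.v1.LabelSelectorArgs(match_labels=labels),
            template=k8s.core.v1.PodTemplateSpecArgs(
                metadata=k8s.meta.v1.ObjectMetaArgs(labels=labels),
                spec=k8s.core.v1.PodSpecArgs(
                    containers=[
                        k8s.core.v1.ContainerArgs(
                            name=name,
                            image=image,
                            ports=[k8s.core.v1.ContainerPortArgs(container_port=port)],
                        )
                    ]
                ),
            ),
        ),
    )
    k8s.core.v1.Service(
        name,
        spec=k8s.core.v1.ServiceSpecArgs(
            selector=labels,
            ports=[k8s.core.v1.ServicePortArgs(port=80, target_port=port)],
        ),
    )


# Each app becomes a one-liner instead of another copy of the same YAML.
web_app("blog", "nginx:1.19", 80)
web_app("grafana", "grafana/grafana:7.3.4", 3000)

# Ordering example: register the CRDs first, then declare objects of that kind.
# "cert-manager-crds.yaml" is a placeholder path to the CRD manifests.
crds = k8s.yaml.ConfigFile("cert-manager-crds", file="cert-manager-crds.yaml")
issuer = k8s.apiextensions.CustomResource(
    "selfsigned-issuer",
    api_version="cert-manager.io/v1",
    kind="ClusterIssuer",
    spec={"selfSigned": {}},
    opts=pulumi.ResourceOptions(depends_on=[crds]),
)
```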
- Home pet cluster. Kubernetes on CoreOS. Part 3: Ingress
Tags: kubernetes, coreos, kubespray, nuc, udoo, metallb, nginx, cert-manager, openwrt, haproxy, oauth2-proxy, helm
My Kubernetes cluster is up and running, and I’ve decided to expose certain services to the Internet while keeping other services inside the home network.
- Home pet cluster. Kubernetes on CoreOS. Part 2: Spraying some kubes with Kubespray
Tags: kubernetes, coreos, kubespray, nuc, udoo
At this point I have two Linux machines running CoreOS Container Linux.
Now it’s time to finally install Kubernetes on them!
- Home pet cluster. Kubernetes on CoreOS. Part 1: don't call us cattle!
Tags: kubernetes, coreos, kubespray, nuc, udoo
I always wanted to run a small Kubernetes cluster at home.
Why not in the cloud? Kubernetes in the cloud is still expensive if it’s just for fun. Besides, I have a couple of mini computers at home, as well as a desire to see how Kubernetes works at the network layer.
I was never satisfied with “fat” Linux distributions for running containers, and I didn’t want to configure OS auto-upgrades. Trying out CoreOS Container Linux sounded like a natural fit for my little pets.
Upd 2020-04-23: I have migrated to Flatcar Container Linux, which is more or less a drop-in replacement for CoreOS Container Linux.
elasticsearch
- Running ElasticSearch on 32-bit Linux machine
Tags: elasticsearch, ELK, 32bit, i586
I have an old piece of hardware with an Atom D2700 CPU which, according to ARK, is capable of running a 64-bit OS. The vendor, however, never released a BIOS with x64 support, and I was unable to find one on the Internet.
Aside from that sad fact, this small PC has decent specs, including 4 GB of RAM, which makes it a good candidate for a single-node ElasticSearch cluster. I have to collect logs from my Kubernetes cluster somewhere, right?
Unfortunately, Elastic dropped official i586 support a long time ago, which totally makes sense from a commercial perspective.
Well, thanks to Debian and Java, we can still run ElasticSearch on top of 32-bit Linux!
This tutorial is written for ElasticSearch OSS v7.5.1 and might not work for newer versions, as the source code might change!