Home
Hi there, my name is Max. Or Maksym, as you wish.
It’s just a homepage. I’m too lazy to fill it with my personal info. Have a look at my blog posts, though.
Recent posts
- My approach to Kubernetes installation & management on bare metal
  Tags: kubernetes, flatcar, coreos, hypriotos, kubespray, k3s, pulumi, helmsman, helm
Installation
- x86_64:
  - CoreOS (now Flatcar Container Linux) as a Linux distro
  - Kubespray as a Kubernetes installer
  - MetalLB and NGINX Ingress Controller for incoming traffic
- arm (Raspberry Pi 3B):
  - HypriotOS as a lightweight container-oriented Debian-based Linux
  - k3s as a lightweight Kubernetes distribution with sqlite instead of etcd
  - MetalLB and Traefik v1 for incoming traffic
Configuration
- Pulumi for everything except Helm charts
- Helmsman for Helm charts
- kubie for using multiple Kubernetes contexts simultaneously in different terminals
- Automating Helm applications installation and upgrade with Helmsman
  Tags: kubernetes, helmsman, helm, sops, helm-secrets, helm-whatup
As I’ve mentioned in my post about Pulumi, I don’t like the helm template approach. In my opinion, it’s better to stick with the tool rather than mimic its behaviour. In the case of Helm, “sticking with the tool” also means out-of-the-box support for the standard helm tool, including plugins. My tool of choice is Helmsman.
- Kubernetes state management with Pulumi and Python
  Tags: kubernetes, pulumi, python, helm
I like the Kubernetes way of declarative workload configuration, but handling cluster state using dozens or hundreds of YAML files is impractical.
Of course, one can just combine them all into a single uber-YAML. But the harsh reality is that although Kubernetes by design can and will apply this configuration asynchronously, and the cluster will eventually reach the desired state, this “eventually” might be equal to infinity.
There are certain cases when order matters, for instance when new CRD definitions are added and then new objects of that kind are declared.
Another aspect is complexity, which can be encapsulated by tools such as Helm. While Helm is a good solution for the problem of installing third-party apps, it’s not necessarily the right choice for your own services, or for lightweight overall cluster configuration.
And one more thing. I enjoy the Kubernetes architecture, even (and especially!) the fact that numerous abstractions are needed to “canonically” expose a single container to the rest of the world. But that doesn’t mean I enjoy breaking the DRY principle and copy-paste-modifying the same YAMLs over and over.
So… Pulumi to the rescue!
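To make the DRY point concrete, here is a minimal sketch of the idea using Pulumi’s Python SDK for Kubernetes (pulumi_kubernetes). The helper and the two sample apps below are hypothetical illustrations of the pattern, not code from the actual post:

```python
# A hedged sketch: a tiny helper that renders a Deployment + Service pair,
# so the usual YAML boilerplate is written once and reused as a function.
import pulumi_kubernetes as k8s


def simple_app(name: str, image: str, port: int) -> k8s.core.v1.Service:
    labels = {"app": name}
    k8s.apps.v1.Deployment(
        name,
        spec=k8s.apps.v1.DeploymentSpecArgs(
            replicas=1,
            selector=k8s.meta.v1.LabelSelectorArgs(match_labels=labels),
            template=k8s.core.v1.PodTemplateSpecArgs(
                metadata=k8s.meta.v1.ObjectMetaArgs(labels=labels),
                spec=k8s.core.v1.PodSpecArgs(
                    containers=[
                        k8s.core.v1.ContainerArgs(
                            name=name,
                            image=image,
                            ports=[k8s.core.v1.ContainerPortArgs(container_port=port)],
                        )
                    ],
                ),
            ),
        ),
    )
    return k8s.core.v1.Service(
        name,
        spec=k8s.core.v1.ServiceSpecArgs(
            selector=labels,
            ports=[k8s.core.v1.ServicePortArgs(port=port)],
        ),
    )


# Hypothetical workloads, purely for illustration.
simple_app("whoami", "traefik/whoami:v1.8.0", 80)
simple_app("echo", "hashicorp/http-echo:0.2.3", 5678)
```

The point is simply that repeated Deployment/Service boilerplate collapses into an ordinary function call, and pulumi up applies the resulting objects for you.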
- Running ElasticSearch on 32-bit Linux machine
  Tags: elasticsearch, ELK, 32bit, i586
I have an old piece of hardware with an Atom D2700 CPU, which according to ARK is capable of running an x64 OS. The vendor, however, never released a BIOS with x64 support, and I was unable to find one on the Internet.
Aside from this sad fact, that small PC has decent specs, including 4 gigs of RAM, which makes it a good candidate for a single-node ElasticSearch cluster. I have to collect logs from my Kubernetes cluster somewhere, right?
Unfortunately, Elastic dropped official i586 support a long time ago, which totally makes sense from a commercial perspective.
Well, thanks to Debian and Java, we can still run ElasticSearch on top of 32-bit Linux!
This tutorial is written for ElasticSearch OSS v7.5.1 and might not work for newer versions, as the source code might change!
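As a quick sanity check once such a node is running, a few lines of stdlib Python can confirm the version, build flavour, and cluster health. This is a minimal sketch of my own, assuming ElasticSearch’s default listener on localhost:9200:

```python
# Minimal smoke test; assumes ElasticSearch listens on http://localhost:9200
# (the default). Adjust the URL for your setup.
import json
from urllib.request import urlopen

with urlopen("http://localhost:9200") as resp:
    info = json.load(resp)
# For the OSS 7.x build this should report build_flavor "oss" and version 7.5.1.
print(info["version"]["number"], info["version"].get("build_flavor"))

with urlopen("http://localhost:9200/_cluster/health") as resp:
    health = json.load(resp)
# A healthy single-node cluster is typically "yellow" (no replicas) or "green".
print(health["status"])
```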
- Home pet cluster. Kubernetes on CoreOS. Part 3: Ingress
  Tags: kubernetes, coreos, kubespray, nuc, udoo, metallb, nginx, cert-manager, openwrt, haproxy, oauth2-proxy, helm
My Kubernetes is up and running, and I’ve decided to expose certain services to the Internet, while keeping other services inside the home network.
subscribe via RSS