While reassessing my priorities, I looked into the essentialist movement, starting with some decluttering exercises. This was not a journey I had planned, and I would not describe myself as a minimalist. It came up when moving to a new apartment - moving stuff we don’t need is very inefficient, regardless of how much one owns. Along the way I noticed many connections to other positive aspects like mindfulness, financial stability, and sustainability. “Less, but better” goes back to Dieter Rams, an industrial designer who prioritized functionality. The principle can be applied to many areas and tends to have a very positive effect.


I subscribed to the idea of owning a few high-quality items, rather than a ton of obsolete junk, many years ago. This is probably rooted in the satisfaction that comes from using well-made things and the idea of getting the most value for money. “Value” is often reduced to economic terms like “as cheap as possible”, but that is not the whole story. I rather see it as “cost-effectiveness”, which can mean many things, for example “cost per usage”, “durability”, “satisfaction” or “additional future use”.

Cost per usage

When purchasing an item, let’s say a mobile phone, one approach is to get the cheapest option that still fulfills my requirements. In most cases, however, this means making concessions in terms of quality, satisfaction and longevity. Spending €1.000 on a phone might seem excessive, since there are €50 phones that will work almost as well - or at least are not so bad that spending 20x the amount seems justified. I use my phone at least 20 times a day and upgrade every 4-5 years. That means roughly 36.000 uses during its lifespan, or about 3ct per usage, while the cheap phone comes in below 0.2ct per usage. Even if the cheap phone lasts 5 years, which it most likely will not, the question is not “is the expensive phone worth €950 more?” but rather “does each use provide a value of 3ct or more?”. I’d rather not use a phone at all if an interaction is worth less than 3ct to me, and I certainly would not change that assessment to save 2.9ct. This way of thinking puts many things into perspective. People who rant about spending €1.000 on a phone, on the other hand, often have no issue spending €50.000 on a car they use for an average of 30 minutes per day.

Future use

Another idea is “additional future use”, which I apply when purchasing tools, for example. There is a saying that “you always pay double for cheap tools”, which can mean many things: a lack of durability, poor safety, bad results, but also functional obsolescence. When getting into a new hobby, say woodworking, it does not make sense to spend excessive amounts on all sorts of tools, because you typically start as a novice. At that point you are not limited by the capabilities of your tools but by your skills. It is also entirely possible to lose interest in the hobby. When progressing, however, the question is whether to buy the “next level” of tools or the “best money can buy”. I usually opt for the latter, because improvement in skill and capability is not linear but exponential. An average tool will start limiting me quite soon, and I’d rather not buy three tools when I can buy the best right away. The other benefit is that I can never blame a tool for a result, just myself.

Reason and impulse

Regardless of these ideas, there is always the question of “reason”, even when gravitating towards high-quality goods. I reject consuming just for the sake of status, which I find terribly stupid and wasteful. There is no reason to spend €50k on a high-quality car or on a gold-plated phone if it does not provide any additional value to me. Such decisions have to be deliberate in any case. I have realized that delaying purchases is a great tool to avoid buying things one would regret. Everything I purchase online sits on my “watch list” for at least 20 days. If I am still convinced that an item adds value after 20 days, it is very likely to be a good decision. This mechanism eliminates about 99% of impulse purchases and saves a lot of money that can be spent on higher-quality goods.

Last year was special in many ways. There were countless things to mourn, but one bright side was the extended opportunity to read and learn. After some weeks of after-hours idleness, I read a couple of fine books that had collected on my wishlist for times like these.

  • Greg McKeown: “Essentialism”
  • Matthew Walker: “Why We Sleep”
  • Nassim Nicholas Taleb: “Skin In The Game”
  • Caroline Wong: “Security Metrics”
  • Ralph Waldo Emerson: “Trust Thyself”
  • CISSP All-in-One Exam Guide
  • CISSP Study Guide
  • CISSP Practice Tests
  • Thomas Sowell: “Basic Economics”
  • Gustave Le Bon: “The Crowd: A Study of the Popular Mind”
  • Greg Van Der Gaast: “Rethinking Infosec”

During the past months I’ve launched several initiatives to improve the security posture of our corporate infrastructure. Like most companies, we have the notion of an “internal” and an “external” network, a distinction that becomes more obsolete every day. For more background, look for good resources on “Zero Trust” networking and try to avoid the marketing material.

Some of our assets are stored within GitLab, for example source code, documentation, configuration, automation and build pipelines. As with most Git server and CI implementations, client access is offered through HTTPS and Git+SSH, with the latter being much more efficient. Some time ago we already moved web-browser access over HTTPS behind an authentication proxy. This means users complete our OpenID-based single sign-on process before being granted access to GitLab’s web interface.

When cloning or pushing Git repositories, however, we still depend on static SSH keys. While SSH authentication using public/private keys is already a lot better than passwords, it still carries the risk of losing the private key and thereby granting a third party elevated access to our repositories. GitLab is exposed to the Internet, as we share lots of code with the open-source community. The main issues with those private keys are eternal trust and the fact that they are “only” protected by client-side security measures, some of which are optional and cannot really be attested, like encryption of the key material.

Teleport to the rescue

We’re using Gravitational Teleport for privileged access management, for example maintenance access to machines through SSH. It’s built around the idea that access is granted in an ephemeral way and that authentication runs through SSO, which means out-of-band techniques like 2FA can be used before access to SSH key material is provided.

Sequence diagram

Teleport also works well with kubectl to control access to Kubernetes deployments. This quickly led to the idea of using Teleport for access management to GitLab, which offers Git+SSH access through OpenSSH. In theory that is pretty straightforward, but it came with some quirks related to GitLab.

In an optimal scenario a GitLab user would not upload any key material but would be authenticated through Teleport and authorized through GitLab. Without knowing the user’s key fingerprint, however, it is hard to map incoming SSH connections to user accounts and subsequently make authorization decisions. As an extra twist, SSH login happens through a generic “git” user, so the user’s name and access permissions have to come from the certificate metadata.
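The certificate mechanics this relies on can be reproduced locally with plain OpenSSH, no Teleport required. The following sketch uses throw-away keys and illustrative names (“martin” as key ID, the principals from later in this article):

```shell
# Work in a throw-away directory.
cd "$(mktemp -d)"

# 1. Create a CA key pair - this plays the role of Teleport's user CA.
ssh-keygen -q -t ed25519 -N '' -f ./user_ca -C 'demo user CA'

# 2. Create a user key pair.
ssh-keygen -q -t ed25519 -N '' -f ./id_demo -C 'demo user key'

# 3. Sign the user key: -I sets the Key ID (the GitLab username),
#    -n sets the principals, -V limits validity to 12 hours.
ssh-keygen -s ./user_ca -I martin -n root,git,gitlab -V +12h ./id_demo.pub

# 4. Inspect the resulting certificate (Key ID, principals, validity).
ssh-keygen -L -f ./id_demo-cert.pub
```

Any sshd that trusts `user_ca.pub` via TrustedUserCAKeys will accept this certificate until it expires - exactly the property the integration below builds on.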

Integrating Teleport

Let’s assume that Teleport is already up and running and users can tsh login to get their key material. On the Teleport side there is only one more thing to change: encoding specific “Principal” information into the key material of users that are eligible to use GitLab. Teleport can obtain this information through the SSO system by checking the user’s “claims”, through an LDAP backend, or through static configuration. For the sake of this example, let’s assume static configuration.

# Part of a Teleport role definition, under "spec.allow"
logins:
  - '{{internal.logins}}'
  - root
  - git
  - gitlab

The next time a user logs in to Teleport and gets access to key material, it will have those “logins” encoded as principals.

$ ssh-add -L | grep cert | ssh-keygen -L -f -
Type: ssh-rsa-cert-v01@openssh.com user certificate
Public key: RSA-CERT SHA256:1XU6aQIA8k2lx0S1oWvh+HbBDu6brERP4ezkO5mlPGQ
Signing CA: RSA SHA256:zH/mlNuyOSQMSerrbXWPVseu1rHHcA1vtQr3KVIkwZ8
Key ID: "martin"
Serial: 0
Valid: from 2020-03-19T09:57:22 to 2020-03-19T21:58:22
Principals:
        root
        git
        gitlab

At this point, also make sure the “Key ID” matches your GitLab username; this is essential for authorization to work.

Integrating GitLab

Public key information provided by users through GitLab’s user settings is stored within the home directory of the “git” user, at /var/opt/gitlab/.ssh/authorized_keys. Examining that file shows that a bit more is going on: for each key, a command is called which maps the key to a user within the GitLab database. This will not work when authenticating with an ephemeral key that is not known to or mapped by GitLab. At the same time, we will not need any integration on GitLab’s side to make this work. It may make sense to restrict HTTPS access to force users onto Git+SSH and to remove all existing user SSH keys, but that is optional.
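For reference, an entry GitLab writes to that file looks roughly like the following - a single line per key; the key ID (key-42), user and key material are made up and the key is truncated:

```
# /var/opt/gitlab/.ssh/authorized_keys - one entry per uploaded key
command="/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell key-42",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... martin@example.com
```

The forced command is what performs the key-to-user mapping, which is exactly the step an ephemeral Teleport certificate cannot pass through.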

Integrating OpenSSH


To solve this, we can configure the OpenSSH daemon to accept all connections that use a valid certificate signed by the Teleport CA. This follows the normal “OpenSSH integration” guideline from Gravitational. Export the public key of the Teleport user CA and add it to the OpenSSH configuration on the GitLab server using the TrustedUserCAKeys parameter.

root@teleport $ tctl auth export --type=user > teleport-user-ca.pub
root@gitlab $ cp teleport-user-ca.pub /etc/ssh/teleport-user-ca.pub
root@gitlab $ vim /etc/ssh/sshd_config
TrustedUserCAKeys /etc/ssh/teleport-user-ca.pub

The other very important part is using AuthorizedPrincipalsCommand to allow sessions of the SSH “git” user to be mapped to GitLab users. The command runs as user “git” and contains the “Principal” to make sure only certificates with that encoded value gain access. The certificate’s “Key ID” value is passed via the %i token to tell GitLab which user shall be authorized. Note that this information can only be encoded into the certificate by Teleport, as only certificates signed by Teleport’s CA are accepted.

root@gitlab $ vim /etc/ssh/sshd_config
Match User git
AuthorizedPrincipalsCommandUser git
AuthorizedPrincipalsCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-principals-check %i gitlab
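To sanity-check this mapping without a full SSH round-trip, the same check that sshd will run can be invoked manually on the GitLab server - the “martin” key ID is an example; if the user exists and the principal matches, it prints an authorized_keys-style line for that user:

```
root@gitlab $ sudo -u git /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-principals-check martin gitlab
```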


Now the user’s OpenSSH client configuration needs to be updated to make sure the key material provided through Teleport is used instead of the user’s default key.

user@workstation $ vim ~/.ssh/config
Host gitlab.heiland.io
PreferredAuthentications publickey
HostName gitlab.heiland.io
IdentityFile ~/.tsh/keys/teleport/martin
User git
This makes sure the key stored at ~/.tsh/keys/teleport/martin is used for SSH connections to gitlab.heiland.io with the git user. Git picks up this configuration when performing remote operations through Git+SSH.
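A quick way to confirm the whole chain, using the example hostname and username from above, is GitLab’s SSH welcome check after a fresh login:

```
user@workstation $ tsh login
user@workstation $ ssh -T git@gitlab.heiland.io
Welcome to GitLab, @martin!
```

If the welcome message shows the expected username, certificate authentication and the principal mapping both work.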

Wrapping up

Now users should be able to git clone and work with the repositories they are authorized for in GitLab - once they have run through Teleport’s authentication process. There is no longer any need to upload per-user key material to GitLab. However, GitLab still allows falling back to SSH keys, which can be very useful for non-interactive access.


This example showcases how access to GitLab can be controlled through Gravitational Teleport. It builds upon the OpenSSH integration and does not require a premium subscription to Teleport. One disadvantage is that access to GitLab is not logged or monitored by Teleport; this can be worked around by monitoring the OpenSSH logs, which contain the same information.
