During the past months I’ve launched several initiatives to improve the security posture of our corporate infrastructure. Like most companies we have the notion of an “internal” and an “external” network, a distinction that becomes more obsolete every day. For more background on this, look for good resources on “Zero Trust” networking and try to avoid marketing material.

Some of our assets are stored within GitLab, for example source code, documentation, configuration, automation and build pipelines. As with most Git server and CI implementations, client access is offered through HTTPS and Git+SSH, with the latter being much more efficient. Some time ago we already moved web-browser access over HTTPS to flow through an authentication proxy. This means users run through our OpenID-based single sign-on process before being granted access to GitLab’s web interface.

When cloning or pushing Git repositories, however, we still depend on static SSH keys. While SSH authentication using public/private keys is already a lot better than passwords, it still comes with the risk of losing the private key and thereby allowing a third party elevated access to our repositories. GitLab is exposed to the Internet as we share lots of code with the open-source community. The main issue with those private keys is eternal trust, and the fact that they are “only” protected by client-side security measures, some of which are optional and cannot really be attested, like encryption of the key material.

Teleport to the rescue

We’re using Gravitational Teleport for privileged access management, for example maintenance access to machines through SSH. It’s built around the idea that access is granted in an ephemeral way and that authentication runs through SSO, which means out-of-band techniques like 2FA can be used before access to SSH key material is provided.

[Figure: Sequence diagram]

Teleport already works well with kubectl to control Kubernetes deployments. This quickly led to the idea of using Teleport to provide access management for GitLab, which offers Git+SSH access through OpenSSH as well. In theory that’s pretty straightforward, but it came with some quirks related to GitLab.

In an optimal scenario a GitLab user would not upload any key material but get authenticated through Teleport and authorized through GitLab. Without knowing the user’s key fingerprint, however, it’s hard to map incoming SSH connections to user accounts and subsequently make authorization decisions. As a bonus, SSH login works through a generic “git” user, so the user’s name and access permissions have to come from the certificate metadata.

Integrating Teleport

Let’s assume that Teleport is already up and running and users can tsh login to get their key material. On the Teleport side there is only one more thing to change: encoding specific “Principal” information into the key material of users that are eligible to use GitLab. Teleport can obtain this information through the SSO system by checking which “claims” the user has, through an LDAP backend, or through static configuration. For the sake of this example, let’s assume static configuration.

spec:
  allow:
    logins:
      - '{{internal.logins}}'
      - root
      - git
      - gitlab

The next time a user logs in to Teleport and gets access to key material, it will have those “logins” encoded as principals.

$ ssh-add -L | grep cert | ssh-keygen -L -f -
(stdin):1:
        Type: ssh-rsa-cert-v01@openssh.com user certificate
        Public key: RSA-CERT SHA256:1XU6aQIA8k2lx0S1oWvh+HbBDu6brERP4ezkO5mlPGQ
        Signing CA: RSA SHA256:zH/mlNuyOSQMSerrbXWPVseu1rHHcA1vtQr3KVIkwZ8
        Key ID: "martin"
        Serial: 0
        Valid: from 2020-03-19T09:57:22 to 2020-03-19T21:58:22
        Principals:
                root
                git
                gitlab

At this point, also make sure the “Key ID” matches your GitLab username, as this is essential for authorization.
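To double-check this, the “Key ID” can be extracted from the certificate output and compared against the expected GitLab username. A minimal sketch, assuming the username is “martin”; a sample line is parsed here so the snippet is self-contained, while in practice the input would come from ssh-add -L | grep cert | ssh-keygen -L -f -:

```shell
# Parse the Key ID field from ssh-keygen -L style output. The sample line
# below mirrors the certificate shown above.
cert_info='        Key ID: "martin"'
key_id=$(printf '%s\n' "$cert_info" | awk -F'"' '/Key ID:/ {print $2}')

# Compare against the expected GitLab username (assumed to be "martin").
if [ "$key_id" = "martin" ]; then
  echo "Key ID matches GitLab user"
else
  echo "Key ID mismatch: $key_id"
fi
```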

Integrating GitLab

Public key information provided by the user through GitLab’s user settings is stored within the home directory of the “git” user, at /var/opt/gitlab/.ssh/authorized_keys. Examining that file shows that a bit more is going on: for each key a command is called which maps the key to a user within the GitLab database. This will not work when authenticating with an ephemeral key that is not known to or mapped by GitLab. At the same time, we won’t need any integration on GitLab’s side to make this work. It may make sense to restrict HTTPS access to force users onto Git+SSH and to remove all existing user SSH keys, but that’s rather optional.
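For reference, an entry in that file looks roughly like the following. Treat this as an illustrative sketch: the key ID, restriction options and key material vary per installation and gitlab-shell version.

```
command="/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell key-42",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... user@example.com
```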

Integrating OpenSSH

Server-side

To solve this, we can configure sshd to positively authenticate all connections that use a valid certificate signed by the Teleport CA. This follows the standard “OpenSSH integration” guideline from Gravitational. Export the public key of the Teleport user CA and reference it in the OpenSSH configuration on the GitLab server using the TrustedUserCAKeys parameter.

root@teleport $ tctl auth export --type=user > teleport-user-ca.pub
root@gitlab $ cp teleport-user-ca.pub /etc/ssh/teleport-user-ca.pub
root@gitlab $ vim /etc/ssh/sshd_config
[...]
TrustedUserCAKeys /etc/ssh/teleport-user-ca.pub

The other very important part is to use AuthorizedPrincipalsCommand to map sessions of the SSH “git” user to GitLab users. This command is run as user “git” and receives the expected “Principal” value to make sure only certificates with that encoded value gain access. The certificate’s “Key ID” is passed in as %i to tell GitLab which user shall be authorized. Note that this information can only be encoded into the certificate by Teleport, as only certificates signed by Teleport’s CA are accepted.

root@gitlab $ vim /etc/ssh/sshd_config
[...]
Match User git
    AuthorizedPrincipalsCommandUser git
    AuthorizedPrincipalsCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-principals-check %i gitlab
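To verify the mapping manually, the principals check can be invoked the same way sshd would, passing the certificate’s Key ID and the expected principal. This is a sketch; the exact output format depends on the gitlab-shell version:

```
root@gitlab $ sudo -u git /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-principals-check martin gitlab
```

On success this should print an authorized_keys-style line that maps the session to the GitLab user “martin”; an empty result means authorization is denied.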

Client-side

Now the user’s OpenSSH client configuration needs to be updated to make sure the key material provided through Teleport is used instead of the user’s default key.

user@workstation $ vim ~/.ssh/config
[...]
Host gitlab.heiland.io
    PreferredAuthentications publickey
    HostName gitlab.heiland.io
    IdentityFile ~/.tsh/keys/teleport/martin
    User git

This makes sure the key stored at ~/.tsh/keys/teleport/martin is used for SSH connections to gitlab.heiland.io with the “git” user. Git will use this configuration when performing remote operations through Git+SSH.

Wrapping up

Now users should be able to git clone and work with repositories for which they are authorized in GitLab - once they have run through Teleport’s authentication process. There is no longer any need to upload per-user key material to GitLab. However, GitLab still allows falling back to SSH keys, which can be very useful for non-interactive access.
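Put together, a day-to-day workflow could look like this; the proxy address and repository path are placeholders:

```
user@workstation $ tsh login --proxy=teleport.example.com
user@workstation $ git clone git@gitlab.heiland.io:group/project.git
user@workstation $ git push origin main
```

The first command runs the SSO (and possibly 2FA) flow and issues a short-lived certificate; subsequent Git operations then authenticate with that certificate instead of a static key.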

Limitations

This example showcases how access to GitLab can be controlled through Gravitational Teleport. It builds upon OpenSSH integration and does not require a premium subscription of Teleport. However, this comes with the disadvantage that access to GitLab is not logged or monitored by Teleport itself; this can be worked around by monitoring the OpenSSH logs, which contain all that information.

There has not been an update on this blog for more than a year. I do not apologize; this is the way I have set my priority.

In fact, during the past years I’ve gradually defined my priority for a given point in time and within the bigger picture; creating blog posts almost nobody will read has not been part of that. At this point I decided to create some content again, because I think there are things worth sharing.

Interestingly enough for someone who is not a native speaker, the English language did not have a plural form of “priority” for centuries. Just some decades ago people decided that one priority is not enough and that there need to be more priorities to help organize our routine. That’s bullshit.

I am convinced that people can excel at one thing at a time, never at a couple of things in parallel. There has been the idea that people, and women in particular, would be able to “multi-task”, like a modern computer. Being familiar with computer technology, I can confirm that there are actually very few real-world applications that inherently benefit from multi-tasking without being optimized for it. Some even suffer from it, and usually there is no need to do “real” multi-tasking. Instead, computers execute things that depend on each other sequentially, but very efficiently.

Indeed there are applications where parallel processing can realize huge benefits, but very often just for tasks that have no dependency on each other. Individual tasks may benefit from experience gained through a previous, maybe similar, task. Our modern lives do not have many equally important things to care about at the same time; they rather depend on each other, and improvement is based on evolution and experience. This of course requires selecting truly important things, otherwise we drown in trivial tasks.

Some of the computer performance gains of the past years are based on “speculative execution”, which is not only dangerous (when done wrong) but also does not scale in the real world, where lifetime is limited and more precious than a couple of wasted CPU cycles. Having ten priorities as a person can be very similar to speculative execution, since we did not care about selecting our priority beforehand. For machines the penalty may be acceptable; for people I think it is devastating in the long run.

This is in fact much closer to reality than what someone would understand as “priorities”. Especially in business there is a competition for having as many priorities as possible to show how important a set of things is. Typical sentences I hear are “these are our top 10 priorities” or “our company strategy aims for three goals” - if everything is a priority, nothing is. This just shows that someone was unwilling or unable to make tough decisions. An artisan, say a blacksmith, excels in performing his craft of making blades. He may totally suck at personal finance or driving a car. This is fine because we live in a society where labor is divided between people. Nobody has to be great at more than one thing to succeed. If you make the best blades in the world, you can hire someone to drive you around or take care of your finances.

Sure, this does not mean we’re all meant to have a singular talent; it just means we need to get our priority straight to reach our goal. This priority might shift over time: our blacksmith might have focused on mastering driving a car for a couple of months, then getting his finances in order, and finally becoming a master of his craft. Had he or she opted for many priorities at the same time, chances are that the result would not be as great, it would have taken longer to achieve a satisfying result, and stress would be immense. There are almost 8 billion people on this planet who grow more closely connected to each other every day, and life expectancy has skyrocketed; it’s absurd to think one person needs to focus on ten things simultaneously. This issue of resource allocation has been solved by the free market for a long time.

Very often the situation of facing ten equally important tasks is based on failed planning or incorrect assessment. Having “many priorities” is a result of not saying “no” or “yes, but later” to something which is not the most important thing at a given time. With proper evaluation, almost always one thing will qualify as “the priority”. This evaluation may take some time and force us to be realistic about our capabilities - but it saves us a lot of trouble in the long run. Nobody will care or judge whether a task was performed simultaneously or sequentially. In most cases it’s about the quality of the result or the economic value rather than “how” something was achieved. I am convinced that one achieves much more throughput and recognition when truly focusing on one thing at a time.

As an engineer I create systems to do things efficiently and reproducibly. This may take some time, but once a system is built it helps to offload things that I then don’t have to actively care about - I spend very little time on the actual operation but care about results and optimize. Over time this means large quantities of tasks become automated to a point where they “just happen” and I step in if something breaks. Creating those systems, not only in a technical sense, has been my priority for the past couple of months. I realized that if I don’t clearly set and own my priority, someone or something else will dump their priorities on me - but under their conditions and for their gain, not mine. This has had a profound effect on many things I do (or don’t do) as part of my daily life.

While writing this, about twenty people messaged, mailed or tried to call me with things that are probably irrelevant. I think about it as “Schrödinger’s Mail” - it’s not my priority until I decide to read it. Reading it has not been my priority for the past hour, and nobody will care. As a result I managed to create a new blog post after more than a year of not starting with a single sentence. How’s that for proving a point?

Looks like they finally managed to fix the vulnerability. As far as I can see, the shop system has been replaced by, or migrated to, the one in use at their US and international shops. Apparently the old system was broken by design and beyond fixing. Great to know that the issue is solved now; very sad to know that it took about a year and that no mitigations were applied in the meantime.
