Introduction to HashiCorp Consul's Kubernetes Authorization



That's right: since the release of HashiCorp Consul 1.5.0 in early May 2019, you can natively authorize applications and services running in Kubernetes.







In this tutorial we will build, step by step, a PoC (proof of concept) demonstrating this new feature. Basic knowledge of Kubernetes and HashiCorp Consul is assumed. And although you can use any cloud platform or on-premises environment, in this guide we will use Google Cloud Platform.







Overview



If we turn to the Consul documentation on authorization methods, we get a brief overview of their purpose and use cases, some technical details, and a general overview of the logic. I highly recommend reading it at least once before continuing, since I am about to explain and break it all down.









Figure 1: Official Consul Authorization Method Overview







Now let's take a look at the documentation for the Kubernetes-specific authorization method.







Of course, there is useful information there, but there is no guide on how to actually use any of it. So, like any sane person, you scour the Internet for guidance. And then... you come up empty. It happens. Let's fix that.







Before we move on to creating our POC, let's go back to the Consul authorization methods overview (Figure 1) and refine it in the context of Kubernetes.







Architecture



In this guide, we will create a Consul server on a separate machine that interacts with a Kubernetes cluster running the Consul client. Then we will create a dummy application in a pod and use our configured authorization method to read from our Consul key/value store.







The diagram below shows in detail the architecture that we create in this guide, as well as the logic of the authorization method, which will be explained later.









Figure 2: Overview of the authorization method in Kubernetes







A quick note: the Consul server does not need to live outside of the Kubernetes cluster for this to work. But yes, it can be run either way.







So, taking the Consul overview diagram (Figure 1) and applying Kubernetes to it, we get the diagram above (Figure 2), where the logic is as follows:







  1. Each pod has a service account attached, containing a JWT token generated by and known to Kubernetes. This token is also mounted into the pod by default.
  2. Our application or service inside the pod initiates a login command against our Consul client. The login request also includes our token and the name of a specially created authorization method (of the kubernetes type). This step 2 corresponds to step 1 of the Consul diagram (Figure 1).
  3. Our Consul client then forwards this request to our Consul server.
  4. MAGIC! This is where the Consul server verifies the authenticity of the request, gathers information about the request's identity, and compares it against any associated predefined rules. Another diagram illustrating this follows below. This step corresponds to steps 3, 4 and 5 of the Consul overview diagram (Figure 1).
  5. Our Consul server generates a Consul token with permissions according to the authorization method rules we defined, matched against the requester's identity. It then sends that token back. This corresponds to step 6 of the Consul diagram (Figure 1).
  6. Our Consul client forwards the token back to the requesting application or service.


Our application or service can now use this Consul token to communicate with our Consul data, as determined by the token privileges.
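
In terms of raw HTTP calls against the local Consul client, the whole flow boils down to roughly two requests. Here is a hedged sketch; the method name, addresses, and key path are placeholders that we will fill in concretely later in this guide:

# Steps 2-6: log in with the pod's service account JWT and the authorization method name
curl --request POST \
  --data '{"AuthMethod": "<auth_method_name>", "BearerToken": "<pod_service_account_jwt>"}' \
  http://<consul_client>:8500/v1/acl/login

# Afterwards: use the SecretID from the login response like any other Consul token
curl \
  http://<consul_client>:8500/v1/kv/<some_key> \
  --header "X-Consul-Token: <SecretID_from_login_response>"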







The magic is revealed!



For those of you who are not satisfied with just a rabbit in a hat and want to know how it all works... let me "show you how deep the rabbit hole goes."







As mentioned earlier, our "magic" step (Figure 2: step 4) is where the Consul server verifies the authenticity of the request, gathers information about the request's identity, and compares it against any associated predefined rules. This step corresponds to steps 3, 4 and 5 of the Consul overview diagram (Figure 1). Below is a diagram (Figure 3) whose purpose is to clearly show what is actually happening under the hood of the Kubernetes-specific authorization method.









Figure 3: The magic is revealed!







  1. As a starting point, our Consul client forwards the login request to our Consul server, along with the Kubernetes service account token and the name of the specific authorization method instance that was created earlier. This step corresponds to step 3 in the earlier diagram's explanation.
  2. Now the Consul server (or leader) needs to verify the authenticity of the received token. So it consults the Kubernetes cluster (via the Consul client) and, given the appropriate permissions, finds out whether the token is genuine and who it belongs to. A sketch of this token review call is shown right after this list.
  3. The verified request then returns to the Consul leader, and the Consul server looks up the authorization method instance with the name given in the login request (and of the kubernetes type).
  4. The Consul leader locates the specified authorization method instance (if one is found) and reads the set of binding rules attached to it. It then reads those rules and compares them against the verified identity's attributes.
  5. Ta-dah! Go to step 5 in the earlier diagram's explanation.
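
To make step 2 a bit more concrete: Consul validates the JWT through the Kubernetes TokenReview API. Below is a rough sketch of that call; Consul performs it internally using the reviewer JWT we hand to the authorization method, so treat this as an illustration rather than something you need to run yourself:

curl -s \
  --request POST \
  --cacert ca.crt \
  --header "Authorization: Bearer <reviewer_service_account_jwt>" \
  --header "Content-Type: application/json" \
  --data '{"kind": "TokenReview", "apiVersion": "authentication.k8s.io/v1", "spec": {"token": "<pod_service_account_jwt>"}}' \
  https://<k8s_api_endpoint>/apis/authentication.k8s.io/v1/tokenreviews

# The response's "status" section reports whether the token is authenticated and a
# username such as "system:serviceaccount:custom-ns:default" - this is the identity
# that the binding-rule selectors are later matched against.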


Run the Consul server on a regular virtual machine



From this point on, I will mostly give instructions for building this POC, often as bullet points, without full explanatory sentences. Also, as noted earlier, I will use GCP to create the entire infrastructure, but you can build the same infrastructure anywhere else.
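
If Consul is not yet installed on the VM, one way to get the binary is sketched below (assuming a Linux x86_64 machine and the 1.5.0 release; adjust the version as needed):

# Download and install the Consul binary (any 1.5.x or newer build supports auth methods)
curl -sLo consul.zip https://releases.hashicorp.com/consul/1.5.0/consul_1.5.0_linux_amd64.zip
unzip consul.zip
mv consul /usr/local/bin/
consul version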

















groupadd --system consul
useradd -s /sbin/nologin --system -g consul consul
mkdir -p /var/lib/consul
chown -R consul:consul /var/lib/consul
chmod -R 775 /var/lib/consul
mkdir /etc/consul.d
chown -R consul:consul /etc/consul.d
      
      







### /etc/consul.d/agent.json
{
  "acl" : {
    "enabled": true,
    "default_policy": "deny",
    "enable_token_persistence": true
  }
}
      
      







consul agent \
  -server \
  -ui \
  -client 0.0.0.0 \
  -data-dir=/var/lib/consul \
  -bootstrap-expect=1 \
  -config-dir=/etc/consul.d
      
      







 consul acl bootstrap
      
      







Launch a Kubernetes cluster for our application with the Consul client as a DaemonSet









kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
./helm init --service-account=tiller
./helm update
      
      







### poc-helm-consul-values.yaml
global:
  enabled: false
  image: "consul:latest"

# Expose the Consul UI through this LoadBalancer
ui:
  enabled: false

# Allow Consul to inject the Connect proxy into Kubernetes containers
connectInject:
  enabled: false

# Configure a Consul client on Kubernetes nodes. GRPC listener is required for Connect.
client:
  enabled: true
  join: ["<PRIVATE_IP_CONSUL_SERVER>"]
  extraConfig: |
    {
      "acl" : {
        "enabled": true,
        "default_policy": "deny",
        "enable_token_persistence": true
      }
    }

# Minimal Consul configuration. Not suitable for production.
server:
  enabled: false

# Sync Kubernetes and Consul services
syncCatalog:
  enabled: false
      
      







./helm install -f poc-helm-consul-values.yaml ./consul-helm --name skywiz-app-with-consul-client-poc
      
      













Configure the authorization method by integrating Consul with Kubernetes





 export CONSUL_HTTP_TOKEN=<SecretID>
      
      







 kubectl get endpoints | grep kubernetes
      
      







kubectl get sa <helm_deployment_name>-consul-client -o yaml | grep "\- name:"
kubectl get secret <secret_name_from_prev_command> -o yaml | grep token:
      
      







 kubectl get secret <secret_name_from_prev_command> -o yaml | grep ca.crt:
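
The token and ca.crt values retrieved above come back base64-encoded. One possible way to decode them into the forms the next command expects (the file name below is just illustrative):

# Decode the Consul client service account JWT (passed as -kubernetes-service-account-jwt below)
kubectl get secret <secret_name_from_prev_command> -o jsonpath='{.data.token}' | base64 --decode

# Decode the cluster CA certificate into the file referenced as @ca.crt below
kubectl get secret <secret_name_from_prev_command> -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt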
      
      







consul acl auth-method create \
  -type "kubernetes" \
  -name "auth-method-skywiz-consul-poc" \
  -description "This is an auth method using kubernetes for the cluster skywiz-app-with-consul-client-poc" \
  -kubernetes-host "<k8s_endpoint_retrieved_earlier>" \
  -kubernetes-ca-cert=@ca.crt \
  -kubernetes-service-account-jwt="<decoded_token_retrieved_earlier>"
      
      







### kv-custom-ns-policy.hcl
key_prefix "custom-ns/" {
  policy = "write"
}
      
      







consul acl policy create \
  -name kv-custom-ns-policy \
  -description "This is an example policy for kv at custom-ns/" \
  -rules @kv-custom-ns-policy.hcl
      
      







consul acl role create \
  -name "custom-ns-role" \
  -description "This is an example role for custom-ns namespace" \
  -policy-id <policy_id>
      
      







consul acl binding-rule create \
  -method=auth-method-skywiz-consul-poc \
  -bind-type=role \
  -bind-name='custom-ns-role' \
  -selector='serviceaccount.namespace=="custom-ns"'
      
      





Final configuration



Access rights





### skywiz-poc-consul-server_rbac.yaml
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: review-tokens
  namespace: default
subjects:
- kind: ServiceAccount
  name: skywiz-app-with-consul-client-poc-consul-client
  namespace: default
roleRef:
  kind: ClusterRole
  name: system:auth-delegator
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-getter
  namespace: default
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: get-service-accounts
  namespace: default
subjects:
- kind: ServiceAccount
  name: skywiz-app-with-consul-client-poc-consul-client
  namespace: default
roleRef:
  kind: ClusterRole
  name: service-account-getter
  apiGroup: rbac.authorization.k8s.io
      
      







 kubectl create -f skywiz-poc-consul-server_rbac.yaml
      
      





Connect to Consul Client





### poc-consul-client-ds-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: consul-ds-client
spec:
  selector:
    app: consul
    chart: consul-helm
    component: client
    hasDNS: "true"
    release: skywiz-app-with-consul-client-poc
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8500
      
      







cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"consul": ["$(kubectl get svc consul-ds-client -o jsonpath='{.spec.clusterIP}')"]}
EOF
      
      





Testing the auth method



Now let's look at the magic in action!











Custom namespace test:





 kubectl create namespace custom-ns
      
      







### poc-ubuntu-custom-ns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poc-ubuntu-custom-ns
  namespace: custom-ns
spec:
  containers:
  - name: poc-ubuntu-custom-ns
    image: ubuntu
    command: ["/bin/bash", "-ec", "sleep infinity"]
  restartPolicy: Never
      
      







 kubectl create -f poc-ubuntu-custom-ns.yaml
      
      







kubectl exec poc-ubuntu-custom-ns -n custom-ns -it /bin/bash

# inside the container:
apt-get update && apt-get install curl -y
      
      







 cat /run/secrets/kubernetes.io/serviceaccount/token
      
      







### payload.json
{
  "AuthMethod": "auth-method-test",
  "BearerToken": "<jwt_token>"
}
      
      







curl \
  --request POST \
  --data @payload.json \
  consul-ds-client.default.svc.cluster.local/v1/acl/login
      
      







echo "{ \
\"AuthMethod\": \"auth-method-skywiz-consul-poc\", \
\"BearerToken\": \"$(cat /run/secrets/kubernetes.io/serviceaccount/token)\" \
}" \
| curl \
  --request POST \
  --data @- \
  consul-ds-client.default.svc.cluster.local/v1/acl/login
      
      







curl \
  consul-ds-client.default.svc.cluster.local/v1/kv/custom-ns/test_key \
  --header "X-Consul-Token: <SecretID_from_prev_response>"
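
Note that the GET above only returns data if the key exists. Since our policy grants write access to the custom-ns/ prefix, one way to exercise it first is to create the key with the same token (a sketch, with a placeholder value):

curl \
  --request PUT \
  --data 'test_value' \
  consul-ds-client.default.svc.cluster.local/v1/kv/custom-ns/test_key \
  --header "X-Consul-Token: <SecretID_from_prev_response>"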
      
      







Custom service account test:





kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-sa
EOF
      
      







### poc-ubuntu-custom-sa.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poc-ubuntu-custom-sa
  namespace: default
spec:
  serviceAccountName: custom-sa
  containers:
  - name: poc-ubuntu-custom-sa
    image: ubuntu
    command: ["/bin/bash", "-ec"]
    args: ["apt-get update && apt-get install curl -y; sleep infinity"]
  restartPolicy: Never
      
      







 kubectl exec -it poc-ubuntu-custom-sa /bin/bash
      
      







echo "{ \
\"AuthMethod\": \"auth-method-skywiz-consul-poc\", \
\"BearerToken\": \"$(cat /run/secrets/kubernetes.io/serviceaccount/token)\" \
}" \
| curl \
  --request POST \
  --data @- \
  consul-ds-client.default.svc.cluster.local/v1/acl/login
      
      







Repeat the previous steps above (a sketch of these commands follows this list):

a) Create an identical policy for the prefix "custom-sa/".

b) Create a role and name it "custom-sa-role".

c) Attach the policy to the role.
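
For completeness, those steps look roughly like this, mirroring the earlier commands (the policy file name is just an example):

### kv-custom-sa-policy.hcl
key_prefix "custom-sa/" {
  policy = "write"
}

consul acl policy create \
  -name kv-custom-sa-policy \
  -description "This is an example policy for kv at custom-sa/" \
  -rules @kv-custom-sa-policy.hcl

consul acl role create \
  -name "custom-sa-role" \
  -description "This is an example role for the custom-sa service account" \
  -policy-id <policy_id_from_prev_command>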









consul acl binding-rule create \
  -method=auth-method-skywiz-consul-poc \
  -bind-type=role \
  -bind-name='custom-sa-role' \
  -selector='serviceaccount.name=="custom-sa"'
      
      







curl \
  consul-ds-client.default.svc.cluster.local/v1/kv/custom-sa/test_key \
  --header "X-Consul-Token: <SecretID>"
      
      







Overlay example (when a service account matches more than one binding rule, the resulting token gets all of the matching roles attached):





consul acl binding-rule create \
  -method=auth-method-skywiz-consul-poc \
  -bind-type=role \
  -bind-name='default-ns-role' \
  -selector='serviceaccount.namespace=="default"'
      
      







Conclusion



TTL token management?



At the time of this writing, there is no built-in way to set a TTL on tokens generated by this authorization method. That would be a fantastic addition for securely automating Consul authorization.







It is possible to manually create a token with TTL:
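
For example, something along these lines (a sketch; the -expires-ttl flag on consul acl token create arrived with token expiration in Consul 1.5.0, so double-check it against your version):

consul acl token create \
  -description "Manually created token with a TTL" \
  -policy-name kv-custom-ns-policy \
  -expires-ttl=1h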









I hope that in the near future we will be able to control how tokens are generated (per binding rule or per authorization method) and add a TTL.







Until then, it is suggested that you use the ACL logout endpoint in your application's logic, destroying tokens created by login once you are done with them.
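
From inside a pod, that call would look roughly like this (a sketch, using the same service name as in the tests above):

curl \
  --request POST \
  consul-ds-client.default.svc.cluster.local/v1/acl/logout \
  --header "X-Consul-Token: <SecretID_from_login_response>"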














