Knative - a k8s-based platform as a service with serverless support







Kubernetes has undoubtedly become the dominant platform for container deployment. It provides the ability to manage almost everything through its APIs and through custom controllers that extend its API via custom resources.







However, the user still has to make detailed decisions about how to deploy, configure, manage, and scale applications. Questions of application scaling, protection, and traffic routing are all left to the user's discretion. This distinguishes Kubernetes from conventional platforms as a service (PaaS), such as Cloud Foundry and Heroku.







Such platforms expose a simplified interface aimed at application developers, who are mostly concerned with configuring individual applications. Routing, deployment, and metrics are transparent to the user and are managed by the underlying PaaS system.







The source-to-delivery workflow is handled entirely by the PaaS: it builds a container image, deploys it, and sets up a new route and a DNS subdomain for incoming traffic. All of this is triggered by a git push.







Kubernetes (intentionally) provides only the basic building blocks for such platforms, leaving the community to do that work itself. As Kelsey Hightower put it:







Kubernetes is a platform for building platforms. It's a better place to start; not the endgame.

As a result, we see a raft of Kubernetes distributions, as well as hosting companies trying to build a PaaS on Kubernetes, for example OpenShift and Rancher. Against the backdrop of this growing Kube-PaaS market, Knative, created in July 2018 by Google and Pivotal, enters the ring.







Knative was the result of a collaboration between Google and Pivotal, with smaller contributions from other companies such as IBM, Red Hat, and Solo.io. It offers much the same PaaS functionality for Kubernetes, with first-class support for serverless workloads. Unlike Kubernetes distributions, Knative is installed as an add-on to any compatible Kubernetes cluster and is configured through custom resources.







What is Knative?



Knative describes itself as "a Kubernetes-based platform to deploy and manage modern serverless workloads." Living up to that billing, Knative automatically scales containers in proportion to concurrent HTTP requests. Unused services are eventually scaled down to zero, providing on-demand scaling in the serverless style.







Knative consists of a set of controllers that can be installed into any Kubernetes cluster, providing capabilities such as serving and autoscaling HTTP workloads (Serving) and wiring event sources to consumers (Eventing).









A key component is Serving, which provides deployment, automatic scaling, and traffic control for the applications it manages. After installing Knative, you still have full access to the Kubernetes API, which lets you manage applications in the usual way and also debug Knative's services by working with the same API primitives those services use (pods, services, and so on).







Serving also automates blue-green traffic routing, splitting traffic between new and old versions of an application when the user rolls out an updated version.
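As a sketch of what that traffic split looks like, a Knative Service can pin percentages to named revisions in its traffic block. The revision names below are hypothetical; actual names are generated per revision:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    # keep 90% of traffic on the old revision while the new one bakes
    - revisionName: helloworld-go-old
      percent: 90
    # send 10% of traffic to the freshly rolled-out revision
    - revisionName: helloworld-go-new
      percent: 10
```

Shifting the percentages toward the new revision completes the blue-green rollout without touching any Ingress objects by hand.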







Knative itself depends on a compatible ingress controller being installed. At the time of writing, the Gloo API Gateway and the Istio Service Mesh are supported. Knative configures whichever ingress is available to route traffic to the applications it manages.







Istio Service Mesh can be a heavy dependency for Knative users who just want to try Knative without operating the Istio control plane, since Knative only depends on the gateway.







For this reason, many users choose Gloo as their gateway to Knative: it provides a feature set comparable to Istio's (for the purposes of Knative alone) while consuming significantly fewer resources and carrying lower operational costs.







Let's see Knative in action on a test stand. I will use a freshly installed cluster running in GKE:







 kubectl get namespace
 NAME          STATUS   AGE
 default       Active   21h
 kube-public   Active   21h
 kube-system   Active   21h





Next we install Knative and Gloo. This can be done in either order:







 # Install Knative-Serving
 kubectl apply -f \
   https://github.com/knative/serving/releases/download/v0.8.0/serving-core.yaml
 namespace/knative-serving created
 # ...

 # Install Gloo
 kubectl apply -f \
   https://github.com/solo-io/gloo/releases/download/v0.18.22/gloo-knative.yaml
 namespace/gloo-system created
 # ...





Check that all Pods are in the "Running" status:







 kubectl get pod -n knative-serving
 NAME                              READY   STATUS    RESTARTS   AGE
 activator-5dd55958cc-fkp7r        1/1     Running   0          7m32s
 autoscaler-fd66459b7-7d5s2        1/1     Running   0          7m31s
 autoscaler-hpa-85b5667df4-mdjch   1/1     Running   0          7m32s
 controller-85c8bb7ffd-nj9cs       1/1     Running   0          7m29s
 webhook-5bd79b5c8b-7czrm          1/1     Running   0          7m29s

 kubectl get pod -n gloo-system
 NAME                                      READY   STATUS    RESTARTS   AGE
 discovery-69548c8475-fvh7q                1/1     Running   0          44s
 gloo-5b6954d7c7-7rfk9                     1/1     Running   0          45s
 ingress-6c46cdf6f6-jwj7m                  1/1     Running   0          44s
 knative-external-proxy-7dd7665869-x9xkg   1/1     Running   0          44s
 knative-internal-proxy-7775476875-9xvdg   1/1     Running   0          44s





Gloo is ready to route traffic, so let's create an autoscaling Knative service (kservice, for short) and direct traffic to it.







Knative services offer an easier path for delivering applications to Kubernetes than the conventional Deployment + Service + Ingress model. We will work with this example:







 apiVersion: serving.knative.dev/v1alpha1
 kind: Service
 metadata:
   name: helloworld-go
   namespace: default
 spec:
   template:
     spec:
       containers:
         - image: gcr.io/knative-samples/helloworld-go
           env:
             - name: TARGET
               value: "Knative user"





I copied this to a file, then applied it to my Kubernetes cluster this way:







 kubectl apply -f ksvc.yaml -n default





We can view the resources Knative created in the cluster after delivering our 'helloworld-go' kservice:







 kubectl get pod -n default
 NAME                                              READY   STATUS    RESTARTS   AGE
 helloworld-go-fjp75-deployment-678b965ccb-sfpn8   2/2     Running   0          68s





The pod with our 'helloworld-go' image starts when the kservice is deployed. If there is no traffic, the number of pods is eventually reduced to zero. Conversely, if the number of concurrent requests exceeds a configurable threshold, the number of pods grows.
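That threshold is controlled by annotations on the revision template. As an illustrative sketch (the target and bound below are arbitrary values, not defaults from the article), the concurrency target and an upper limit can be set like this:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # aim for ~10 concurrent requests per pod before scaling out
        autoscaling.knative.dev/target: "10"
        # never run more than 5 pods for this revision
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```

With no annotations at all, Knative's autoscaler falls back to its cluster-wide defaults, including scale-to-zero as seen above.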







 kubectl get ingresses.networking.internal.knative.dev -n default
 NAME            READY   REASON
 helloworld-go   True





Knative configures its ingress using a dedicated 'ingress' resource in Knative's internal API. Gloo consumes this API as its configuration to provide PaaS-like properties, including a blue-green deployment model, automatic TLS, timeouts, and other advanced routing features.







After some time, we see that our pods have disappeared (since there was no incoming traffic):







 kubectl get pod -n default
 No resources found.

 kubectl get deployment -n default
 NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 helloworld-go-fjp75-deployment   0         0         0            0           9m46s





Finally, let's send the service some traffic. Getting the URL of the Knative proxy is quick and easy with glooctl:







 glooctl proxy url --name knative-external-proxy
 http://35.190.151.188:80





Without glooctl installed, you can look up the address and port in the kube service:







 kubectl get svc -n gloo-system knative-external-proxy
 NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
 knative-external-proxy   LoadBalancer   10.16.11.157   35.190.151.188   80:32168/TCP,443:30729/TCP   77m





Send some data with cURL:







 curl -H "Host: helloworld-go.default.example.com" http://35.190.151.188
 Hello Knative user!





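The 'example.com' in that Host header comes from Knative's default domain suffix, which is stored in the config-domain ConfigMap in the knative-serving namespace. As a sketch of swapping in your own domain (mycompany.example is a placeholder), you would apply:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # each key is a domain suffix; services resolve as <name>.<namespace>.<domain>
  mycompany.example: ""
```

After that, the same service would answer to helloworld-go.default.mycompany.example, provided DNS points that name at the external proxy.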
Knative provides a near-PaaS experience for developers on top of vanilla Kubernetes out of the box, backed by Gloo's high-performance, full-featured API gateway. This note has only scratched the surface of the many Knative features available for customization, as well as its additional capabilities. The same goes for Gloo!







Although Knative is still a young project, its team releases new versions every six weeks, and work has begun on advanced features such as automatic TLS provisioning and autoscaling of the control plane. Given the collaboration among numerous cloud companies, and Knative's role as the basis of Google's new Cloud Run offering, there is a good chance that Knative becomes the default option for serverless computing and PaaS on Kubernetes. Follow the news!






