How working with application servers is changing, using OpenLiberty as an example





Hello, Habr!



A talk by Sebastian Daschner at a Java meetup in the Moscow office of IBM (I found a recording of a similar talk) prompted me to get acquainted with lightweight application servers, in particular with OpenLiberty. Along the way I wondered:



  1. What are the benefits of a lightweight application server?
  2. How does the way you work change when using one?
  3. Why pack an application server into a container?


Answering these questions, I noticed that there is little public information on this topic, so I decided to collect and systematize it here.



I post the results under the cut.



What are the benefits of a lightweight application server?



Enterprise Java EE application servers (such as JBoss AS, Oracle WebLogic, IBM WebSphere AS) have long been considered heavyweight and cumbersome, especially when it comes to startup and deployment times. But cloud technology is taking over an ever larger part of the industry, and the requirements for application servers are changing with it.



And now, in place of full-featured corporate application servers come fast, modular, small application servers focused on a specific task: Thorntail and Payara Micro, the younger brothers of WildFly and Payara; Meecrowave, a lightweight JAX-RS + CDI + JSON server; KumuluzEE, a server that lets you extend Java EE applications with Node.js, Go and other technologies.



This list also includes OpenLiberty, an open-source application server (distributed under EPL-1.0) that supports the latest Java EE / MicroProfile standards and on which WebSphere Liberty is built.



EPL-1.0 Features at a Glance (Eclipse Public License Version 1.0)
EPL-1.0 is based on the CPL and is not compatible with the GPL. It allows combining the covered code with code under other licenses and with patented work used in the project, and it grants the right to license the resulting product under any other license; a copy of the EPL must be included with every copy of the program.



Additions to the main product may be licensed separately, even under a commercial license. However, changes and additions that constitute derivative works must be licensed under the same EPL, which requires their source code to be made available.



Linking a software project against EPL-licensed code (for example, using that code as a library) generally does not make the project a derivative work and does not impose the corresponding obligations.



A Contributor that includes the Program in a commercial offering must do so in a way that avoids potential liability for the other Contributors (that is, it must expressly disclaim warranties and liability on behalf of all authors).



For example: a Contributor may include the Program in a commercial offering, product X. That Contributor is then a Commercial Contributor. If this Commercial Contributor then makes performance claims or offers warranties for product X, those claims and warranties are that Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor must defend the other Contributors against claims relating to performance and warranties, and if a court requires any other Contributor to pay damages as a result, the Commercial Contributor must pay those damages.



Let's see what we gain by deploying an application in a container with OpenLiberty. (You need any version of Java installed; the steps below are run from the wlp directory.)



Speed



Speed is the most important property of a cloud application: for it to scale quickly, be manageable and cope with growing load, it must start in seconds. A lightweight application server can do this. To check, download the OpenLiberty server and run bin/server run (full list of commands):



    $ bin/server run
    Launching defaultServer (Open Liberty 19.0.0.6/wlp-1.0.29.cl190620190617-1530) on Java HotSpot(TM) 64-Bit Server VM, version 1.8.0_212-b10 (ru_US)
    [AUDIT   ] CWWKE0001I: The server defaultServer has been launched.
    [AUDIT   ] CWWKZ0058I: Monitoring dropins for applications.
    [AUDIT   ] CWWKF0012I: The server installed the following features: [el-3.0, jsp-2.3, servlet-3.1].
    [AUDIT   ] CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 1.709 seconds.





Modularity and flexibility



Most applications do not need all of Java EE; they require a particular set of standards most often used in enterprise applications. Thanks to OSGi, we can pick exactly the set of Java EE and/or MicroProfile standards we need and ignore everything else.



For example, let's declare JAX-RS from Java EE and mpHealth from MicroProfile by adding a couple of lines to the featureManager block of usr/servers/serverName/server.xml:



    <?xml version="1.0" encoding="UTF-8"?>
    <server description="new server">
        <!-- Enable features -->
        <featureManager>
            <feature>jsp-2.3</feature>
            <feature>mpHealth-1.0</feature>
            <feature>jaxrs-2.1</feature>
        </featureManager>

        <!-- To access this server from a remote client add a host attribute
             to the following element, e.g. host="*" -->
        <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443" />

        <!-- Automatically expand WAR files and EAR files -->
        <applicationManager autoExpand="true"/>
    </server>

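The enabled features can also be read back out of server.xml with standard command-line tools. A minimal sketch; the copy under /tmp exists only to make the snippet self-contained, and in a real installation you would point it at usr/servers/serverName/server.xml:

```shell
# Write a self-contained copy of the featureManager block shown above
# (the /tmp path is illustrative).
cat > /tmp/server.xml <<'EOF'
<server description="new server">
  <featureManager>
    <feature>jsp-2.3</feature>
    <feature>mpHealth-1.0</feature>
    <feature>jaxrs-2.1</feature>
  </featureManager>
</server>
EOF

# List one enabled feature per line by stripping the surrounding tags.
grep -o '<feature>[^<]*</feature>' /tmp/server.xml | sed 's/<[^>]*>//g'
```

Removing a line from the featureManager block is all it takes to stop the server from loading that standard.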




Dynamic update



During development, program code and configuration change constantly.

The application server watches for changes and, when necessary, reconfigures itself and redeploys the application on the fly. For example, its reaction to the changes above:



    [AUDIT   ] CWWKG0016I: Starting server configuration update.
    [AUDIT   ] CWWKG0017I: The server configuration was successfully updated in 0.475 seconds.
    [AUDIT   ] CWWKF0012I: The server installed the following features: [cdi-2.0, jaxrs-2.1, jaxrsClient-2.1, jndi-1.0, json-1.0, jsonp-1.1, mpHealth-1.0, servlet-4.0].
    [AUDIT   ] CWWKF0013I: The server removed the following features: [servlet-3.1].
    [AUDIT   ] CWWKF0008I: Feature update completed in 0.476 seconds.





Image size and build



Application servers have become significantly smaller, which now makes it possible to deploy each application on its own application server and package them together in containers, giving the greatest flexibility. Moreover, since an image consists of layers, only the application layer is copied when the image is rebuilt and the artifacts are distributed; the OS, runtime and application server layers stay cached.



Docker Hub has prebuilt images containing a preconfigured OpenLiberty server. We will use one of them and create a Dockerfile:



    FROM open-liberty
    COPY usr/servers/defaultServer /opt/ol/wlp/usr/servers/defaultServer
    ENTRYPOINT ["/opt/ol/wlp/bin/server", "run"]
    CMD ["defaultServer"]





Let's build the image:



 docker build -t app .
      
      





Run it as a container:



 docker run -d --name app -p 9080:9080 app
      
      





Check the result at http://localhost:9080/health/


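The same check can be scripted. A sketch, assuming the container from the previous step is listening on port 9080; the fallback JSON only lets the command degrade gracefully when nothing is listening:

```shell
# Query the MicroProfile Health endpoint; substitute a synthetic "DOWN"
# payload if no server is reachable on port 9080.
response=$(curl -s http://localhost:9080/health/ || echo '{"checks":[],"outcome":"DOWN"}')

# mpHealth-1.0 reports the overall state in the "outcome" field.
echo "$response" | grep -o '"outcome":"[A-Z]*"'
```

With mpHealth-1.0 a healthy server answers with JSON along the lines of {"checks":[],"outcome":"UP"}; later MicroProfile Health versions rename the field to "status".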





To stop the container:



 docker stop app
      
      





There are many further scenarios for using containers, and each deserves an article of its own, so let's return to the questions.



How does the way you work change?



Provisioning



A container image is built only once and then run in all environments. It is therefore recommended to ship each application together with its application server. This simplifies the application's life cycle and deployment and fits perfectly into the modern world of container technology.



Build



It is no longer necessary to pack different technical blocks into separate archives: all business logic, together with web services and cross-cutting functionality, is packaged into a single WAR file. This greatly simplifies project setup as well as the build procedure. You no longer have to pack the application into several levels of hierarchy and then unpack it again into a single server instance.


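The dropins directory mentioned in the server log is what makes the single-WAR approach so convenient: anything copied there is picked up automatically. A sketch of the layout using a scratch directory, where the /tmp paths and the app.war name are purely illustrative:

```shell
# Recreate the relevant part of the wlp directory tree under /tmp.
mkdir -p /tmp/wlp/usr/servers/defaultServer/dropins

# Stand-in for a real build artifact such as target/app.war.
touch /tmp/app.war

# Dropping the archive into dropins is the whole deployment step;
# a running server would notice the file and deploy it on the fly.
cp /tmp/app.war /tmp/wlp/usr/servers/defaultServer/dropins/
ls /tmp/wlp/usr/servers/defaultServer/dropins
# -> app.war
```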

Launch



Both the application server and the application itself are added to the image during the build. Any required configuration of data sources, drivers or server modules is also specified at build time by adding the corresponding configuration files. Differences in configuration should be managed from outside the application, not from within it. Therefore, the application should not be deployed into an already running container; it should be added to the image at build time, in the automatic deployment directory, so that it starts when the container starts.


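Open Liberty supports this externalized configuration directly: values in server.xml can be resolved at startup from bootstrap.properties or from environment variables, so one image can run with different settings per environment. A minimal sketch; the variable names here are just a convention, not anything built in:

```xml
<!-- usr/servers/defaultServer/server.xml: the ports are resolved
     at startup rather than baked into the file -->
<httpEndpoint id="defaultHttpEndpoint"
              httpPort="${default.http.port}"
              httpsPort="${default.https.port}" />
```

The values are supplied in a bootstrap.properties file next to server.xml (for example, default.http.port=9080 and default.https.port=9443), or taken from the environment via the ${env.VARNAME} syntax.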

Extending functionality



Containers, deployment and scaling systems, monitoring services and service meshes let us configure service discovery, monitoring, management, authentication, scaling, tracing and resilience on top of the application, transparently moving a large amount of logic to another level, making the application lighter and its development simpler.



Why pack an application server in a container?



First of all, by packing an application server into a container, you give each team the ability to configure its own application server independently and focus on implementing features, saving developers time on manual operations, middleware chores and various approvals.



As a bonus, you get to fully enjoy the benefits of containers and all the tooling built around them: the application becomes easy to manage, scale and update, and the build and deployment of artifacts can be automated with zero downtime.



You will find more practice here.
In addition to the documentation, the project site has a large number of tutorials: how to create web applications with Maven/Gradle, package and deploy applications, deploy and configure microservices in Kubernetes, manage traffic with Istio, deploy to IBM Cloud and other popular cloud providers, and much more.



Sebastian Daschner (Java & OSS enthusiast, Java Champion, IBMer, JCP member, Jakarta EE committer) publishes useful articles on working with OpenLiberty, such as monitoring Open Liberty with Prometheus and Grafana, in his blog, and his book Architecting Modern Java EE Applications was used in preparing this article.



The liberty-maven-plugin (there is also a Gradle alternative) can greatly simplify your work. By the way, the manuals have good examples of its use.



You can also contribute to the project.



