What can be done with the annotations of microservice contracts?

In a previous post, we talked about how and why we at Acronis write annotations for our microservices, and promised to share how we use a single API format across the entire Acronis Cyber Platform. Today we will talk about our experience with static annotation checks, the first step on the path to adopting annotations in a company.







So, suppose you already have annotations for all or most of your APIs. A reasonable question, one our colleagues also asked us, is: “What can be done with these magical objects?”



In fact, there are two types of checks that can be performed on annotations. Static checks work directly with the text of Swagger or RAML annotations. They need nothing else, and they can rightfully be considered the first benefit of formalized microservice contracts.



Dynamic checks can only be run against a working service. They are somewhat more complicated, since they let you dive deeper: check how valid the annotation actually is, test the stability of the service, and do much more. But that is the topic of the next post; today we will focus on static checks.



Long live the API Guideline!



In order not to confuse yourself or others, it is worth saying right away that static annotation checks exist to verify that API annotations (and, hopefully, the APIs themselves) comply with corporate requirements. Those requirements can be either informal practices adopted by the company or a full-fledged API Guideline that formalizes the rules for designing the APIs of your services.






In the colorful world of microservices this is especially important, because each team can have its own framework, its own programming language, a unique stack, or special libraries. Under the influence of practices specific to each microservice, the way the API looks to an outside observer starts to drift, which creates unnecessary variety. For the elements of an ecosystem (or platform) to interact effectively, the APIs need to be aligned as much as possible.



Here is an example: in one component's API, code 404 is returned only if the resource does not exist. Another team ties this error to application logic, so its API returns 404 when a user tries to buy a product that is out of stock. Such problems are well described by atygaev. This inconsistency in the understanding of code 404 slows down development and leads to errors, which means extra support calls or extra hours of debugging, and in any case problems measured in money.



The company's conventions for syntax and naming are supplemented by various aspects specific to REST itself. The developer must answer questions such as:





Now imagine that each team has to puzzle over these answers on its own.



The format of search queries can also differ completely. For example, there are about a dozen ways to build a query that returns only those users who have a MacBook Pro and experience frequent virus attacks. When working on a large project consisting of dozens of microservices, the search query syntax needs to be common across all of the company's versions and products. If a person is used to one of your products or services, they expect to find the same request and response structure in another; otherwise, surprise (and grief) cannot be avoided.
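
To make the point concrete, here are two hypothetical styles in which different teams might express that same filter. The paths and parameter names are invented purely for illustration and are not real Acronis endpoints.

```python
# Two invented styles for the same query:
# "users with a MacBook Pro and frequent virus attacks".
flat_style = "/api/v2/users?device_model=MacBookPro&alert_frequency=high"
json_filter_style = '/api/v2/users?filter={"device":"MacBookPro","alerts":{"frequency":"high"}}'
```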



Many companies, especially giants such as Microsoft, PayPal, and Google, have their own guidelines, and they are very well thought out. But adopting someone else's guideline is not always possible, because it is largely tied to the specifics of that company. Besides, everyone thinks differently, and the rules may differ simply because it is more convenient for people to work one way and not another.



The understanding that Acronis needs its own API Guideline did not come immediately; it came with the growing number of developers, microservices, and, in fact, external clients and integrations. At some point, the team of architects had to establish uniform rules for declaring APIs, taking into account both the experience of the IT giants mentioned above and the de facto practices already established in our development teams.



One of the problems with adopting the API Guideline this late is that there are already published APIs that do not meet the requirements, which in turn leads to additional costs for redesigning interfaces and maintaining backward compatibility. So if your business model involves future integrations and a published API, you need to think about uniform rules as early as possible.







Of course, adopting the API Guideline did not automatically make all APIs compliant. Every API had to be analyzed, and every developer had to understand the new requirements. That is why we automated checking our RAML annotations against the API Guideline we had developed.

Static analysis



First, we had to decide what exactly to analyze statically. Not every point of the API Guideline can be formalized, so we first had to single out the set of rules that can be checked directly from an API annotation.



In the first version, we identified the following rules:



  1. Checking for descriptions. As we said in our previous article, you can generate beautiful documentation from an API annotation. This means that every resource, query parameter, and response should have a description that gives any user of our API all the necessary information. It seems like a trifle, but it is so easy to show developers that their annotation is poor in descriptions! (A sketch of several of these checks follows the list.)
  2. Checking for the presence and correctness of examples. API annotation languages also let you describe examples; for instance, for a response we can add an example of a real service reply, something from real life. Examples show how an endpoint is supposed to be used and how it behaves, and we verify that they are present through static annotation analysis.
  3. Validating HTTP response codes. The HTTP standard defines a large number of codes, but they can be interpreted in different ways. Our guideline formalizes their use and also limits which codes apply to which HTTP method. For example, according to the HTTP specification, code 201 is returned when a resource is created, so a GET returning 201 is a red flag (either the code is used incorrectly, or the GET creates a resource), and our API Guideline prohibits such use. Moreover, the architects have fixed a set of codes for each method, and we now check them statically at the annotation level.
  4. Checking HTTP methods. Everyone knows that HTTP has a set of standard methods (GET, POST, PUT, etc.) and also allows custom methods. Our API Guideline describes which methods are allowed, which are allowed but undesirable (OPTIONS), and which are forbidden (usable only with the architects' blessing). The forbidden ones include the standard HEAD method as well as any custom methods. All of this is also easy to verify in the contract annotations.
  5. Verifying access rights. We hope that in 2019 the need for authorization support requires no additional explanation. Good documentation should state which roles the API supports and which API methods are available to each role. Beyond documentation, role-model information recorded at the annotation level lets you do far more interesting things dynamically, but we will talk about that in the next article.
  6. Validating naming. This relieves the headache caused by different approaches to naming entities. In general, there are two camps: supporters of CamelCase, where each word starts with a capital letter, and supporters of snake_case, where words are separated by underscores and written in lowercase. Different languages have different customs; for example, snake_case is standard in Python, while CamelCase is standard in Go or Java. We made one choice for all projects and fixed it in the API Guideline, so we statically check the names of resources and parameters through the annotations.
  7. Validating custom headers. This is another naming check, but tied specifically to headers. At Acronis it is customary to use the X-Custom-Header-Name format (even though this format is deprecated), and we control compliance with it at the annotation level.
  8. Verifying HTTPS support. Any modern service should support HTTPS, and some people believe that working over plain HTTP these days is simply bad form. Therefore, the RAML or Swagger annotation should state that the microservice supports HTTPS, without HTTP.
  9. Verifying the URI structure. In prehistoric times, that is, before the API Guideline was released, request URIs looked different in different services: /api/2 in one place, /api/service_name/v2 in another, and so on. Now our guideline defines a standard URI structure for all our APIs and describes how URIs should look so as not to create confusion.
  10. Checking the compatibility of new versions. Another factor the author of any API should keep in mind is backward compatibility. It is important to check whether code built against the old API can work with the new version. Changes can thus be divided into two categories: those that break backward compatibility and those that are compatible. Everyone knows that breaking changes are unacceptable, and if you want to dramatically “improve” something, you should release a new version of the API. But everyone makes mistakes, so another goal of the static check stage is to let through only compatible changes and complain about incompatible ones. The annotation, like the code, is assumed to be stored in the repository, so it has the full history of changes, and we can check our HTTP REST API for backward compatibility. For example, adding a method, parameter, or response code does not break compatibility, while removing one may well do so. And this can already be checked at the annotation level (a sketch of such a diff follows below).
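
Most of these rules can be checked with nothing more than a YAML parser, since a RAML annotation is just a YAML file. Below is a minimal, purely illustrative sketch of such a checker in Python: it covers descriptions, per-method response codes, allowed methods, snake_case query parameters, custom header naming, HTTPS-only protocols, and a versioned baseUri. The rule tables, regular expressions, and file handling here are assumptions made for the example; they are deliberately simpler than a real guideline and are not Acronis' actual tooling.

```python
import re
import sys

import yaml  # PyYAML; a RAML 1.0 file parses as YAML, the "#%RAML 1.0" header is just a comment

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "options", "head"}
ALLOWED_METHODS = {"get", "post", "put", "patch", "delete"}  # assumption: simplified whitelist
ALLOWED_CODES = {  # assumption: illustrative per-method code sets, not the real guideline tables
    "get": {200, 400, 401, 403, 404},
    "post": {201, 202, 400, 401, 403, 409},
    "put": {200, 204, 400, 401, 403, 404},
    "delete": {204, 400, 401, 403, 404},
}
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")
CUSTOM_HEADER = re.compile(r"^X(-[A-Z][A-Za-z0-9]*)+$")  # assumption: X-Custom-Header-Name style
VERSIONED_URI = re.compile(r"/v\d+$")  # assumption: baseUri must end with a /v<major> segment


def check_method(path, verb, spec, problems):
    spec = spec or {}
    if not spec.get("description"):
        problems.append(f"{verb.upper()} {path}: missing description")
    for code in (spec.get("responses") or {}):
        numeric = int(code) if str(code).isdigit() else None
        if verb in ALLOWED_CODES and numeric not in ALLOWED_CODES[verb]:
            problems.append(f"{verb.upper()} {path}: response code {code} is not allowed")
    for name in (spec.get("queryParameters") or {}):
        if not SNAKE_CASE.match(name):
            problems.append(f"{verb.upper()} {path}: query parameter '{name}' is not snake_case")
    for header in (spec.get("headers") or {}):
        if header.lower().startswith("x-") and not CUSTOM_HEADER.match(header):
            problems.append(f"{verb.upper()} {path}: custom header '{header}' violates the naming rule")


def walk(resource, path, problems):
    for key, value in (resource or {}).items():
        if key.startswith("/"):  # nested resource
            walk(value, path + key, problems)
        elif key in HTTP_METHODS:
            if key not in ALLOWED_METHODS:
                problems.append(f"{key.upper()} {path}: method is forbidden by the guideline")
            check_method(path, key, value, problems)


def main(raml_file):
    with open(raml_file, encoding="utf-8") as fh:
        api = yaml.safe_load(fh)
    problems = []
    if api.get("protocols") != ["HTTPS"]:
        problems.append("the annotation must declare HTTPS (and only HTTPS) in 'protocols'")
    if not VERSIONED_URI.search(api.get("baseUri", "")):
        problems.append("baseUri does not end with a version segment such as /v2")
    for key, value in api.items():
        if key.startswith("/"):  # top-level resource
            walk(value, key, problems)
    for problem in problems:
        print(problem)
    return 1 if problems else 0


if __name__ == "__main__":
    # usage: python raml_lint.py api.raml
    sys.exit(main(sys.argv[1]))
```

A script like this is cheap to run in CI on every commit that touches an annotation, which is the whole point of static checks: the feedback arrives before the service is even built.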
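
The compatibility check from point 10 can be sketched in the same spirit: take two revisions of the same annotation from the repository and report anything that existed in the old one but is missing from the new one. Again, this is a hedged illustration under the same assumptions as the previous sketch; "breaking" is reduced here to removed endpoints, query parameters, and response codes, which is narrower than a real compatibility analysis.

```python
import sys

import yaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "options", "head"}


def collect(resource, path, acc):
    """Flatten an annotation into {(path, method): {"params": ..., "codes": ...}}."""
    for key, value in (resource or {}).items():
        if key.startswith("/"):
            collect(value, path + key, acc)
        elif key in HTTP_METHODS:
            spec = value or {}
            acc[(path, key)] = {
                "params": set(spec.get("queryParameters") or {}),
                "codes": {str(c) for c in (spec.get("responses") or {})},
            }


def breaking_changes(old_api, new_api):
    old, new = {}, {}
    for api, acc in ((old_api, old), (new_api, new)):
        for key, value in api.items():
            if key.startswith("/"):
                collect(value, key, acc)
    problems = []
    for (path, method), old_spec in old.items():
        if (path, method) not in new:
            problems.append(f"{method.upper()} {path}: endpoint removed")
            continue
        new_spec = new[(path, method)]
        for removed in old_spec["params"] - new_spec["params"]:
            problems.append(f"{method.upper()} {path}: query parameter '{removed}' removed")
        for removed in old_spec["codes"] - new_spec["codes"]:
            problems.append(f"{method.upper()} {path}: response code {removed} removed")
    return problems


if __name__ == "__main__":
    # usage: python compat_check.py old.raml new.raml
    with open(sys.argv[1], encoding="utf-8") as old_fh, open(sys.argv[2], encoding="utf-8") as new_fh:
        problems = breaking_changes(yaml.safe_load(old_fh), yaml.safe_load(new_fh))
        for problem in problems:
            print(problem)
        sys.exit(1 if problems else 0)
```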






When there are no descriptions



When there are descriptions



Conclusions



Static analysis of annotations is needed to check, no, not the quality of the service, but the quality of the API. This stage lets you align programming interfaces with one another, so that people work in a clear, understandable environment where everything is reasonably predictable.



Of course, such formalism only makes sense in sufficiently large companies, where checking compliance "manually" is slow, expensive, and unreliable. A small startup simply does not need it, at least for the time being. But if your plans for the future include becoming a unicorn like Acronis, then static checks and the API Guideline will help you.


