Workflow Core - a business process engine for .Net Core








Hello!







We decided to pick up the topic of migrating projects that use Windows Workflow Foundation to .Net Core, started by our colleagues from DIRECTUM, because we ran into a similar problem a couple of years ago and went our own way.







Let's start with the story



Our flagship product, Avanpost IDM, is a system for managing account lifecycles and employee access. It can manage access both automatically, based on a role model, and through requests. At the dawn of the product, we had a fairly simple self-service system with a straightforward step-by-step workflow, which did not require an engine at all.














However, once we encountered large customers, we realized that a much more flexible tool was required: their requirements for access-rights approval processes approached those of a full-fledged, heavyweight workflow. After analyzing the requirements, we decided to develop our own process editor in BPMN notation, tailored to our needs. We will cover the development of that editor (React.js + SVG) a bit later; today we will discuss the backend side - the workflow engine, or business process engine.







Requirements



At the beginning of the development of the system, we had the following requirements for the engine:









After analyzing the market (as of 2014), we settled on what was essentially the only option for .Net at the time: Windows Workflow Foundation.







Windows Workflow Foundation (WWF)



WWF is Microsoft's technology for defining, executing, and managing workflows.







At the core of its logic is a set of containers for actions (activities) and the ability to build sequential processes out of these containers. A container can be an ordinary one - a particular step in the process where an activity is performed - or a managing one, containing branching logic.







You can draw a process directly in Visual Studio. The compiled business process diagram is stored in Xaml, which is very convenient - the format is documented, so it is possible to build a custom process designer. That is one side of it. On the other hand, Xaml is not the most convenient format for storing a description: the compiled schema of a more or less realistic process turns out to be huge, not least because of redundancy. It is very hard to understand, but understand it you must.







But while sooner or later you can reach zen with the schemas and learn to read them, the opacity of the engine itself keeps adding hassle once users start operating the system. When an error comes from deep inside WF, it is not always possible to determine with full certainty what exactly caused the failure. The closed source and the relative bulkiness of the engine do not help. Bugs often had to be fixed by their symptoms.







In fairness, it is worth clarifying that the problems described above plagued us largely because of the heavy customization we built on top of WF. Some readers will surely say that we created a pile of problems for ourselves and then heroically solved them, and that we should have built our own engine from the very beginning. By and large, they would be right.







Bottom line: the solution worked stably enough and successfully went into production. But the transition of our products to .Net Core forced us to abandon WWF and look for another business process engine, because as of May 2019 Windows Workflow Foundation had not been ported to .Net Core. How we searched for a new engine is a topic for a separate article; in the end we settled on Workflow Core.







Workflow Core



Workflow Core is a free business process engine developed under the MIT license, which means it can safely be used in commercial development.







It is actively developed by one person, with several others periodically submitting pull requests. There are ports to other languages (Java, Python, and a few more).







The engine is positioned as lightweight. Essentially, it is just a host for sequentially executing actions grouped according to whatever business rules you define.







The project has wiki documentation. Unfortunately, it does not describe all of the engine's features. However, demanding complete documentation would be cheeky - the open-source project is maintained by a single enthusiast. The wiki is quite enough to get started.







Out of the box there is support for storing process state in external storage (persistence storage). Standard providers exist for:









Writing your own provider is not a problem: take the source of any standard one as an example.







Horizontal scaling is supported: you can run the engine on several nodes at once while keeping a single point of storage for process state (one persistence store). In that case the engine's internal task queue must also live in shared storage (RabbitMQ, for example). To prevent the same task from being executed by several nodes at once, a lock manager is provided. By analogy with the persistence providers, there are standard implementations:









The easiest way to get to know something new is through an example, so that is what we will do. I will describe building a simple process from the very beginning, with explanations along the way. The example may seem impossibly simple. I agree - it is simple. Just right for a start.







Go.







Step



A step is a stage of the process at which some actions are performed. The whole process is built from a sequence of steps. A single step can perform many actions and can be executed repeatedly, for example in response to some external event. There is a set of steps that come with logic out of the box:









Of course, you cannot assemble a real process out of built-in primitives alone - we need steps that carry out business tasks. So let's set the primitives aside for now and write steps with our own logic. To do that, you inherit from the StepBody abstraction.







public abstract class StepBody : IStepBody
{
    public abstract ExecutionResult Run(IStepExecutionContext context);
}





The Run method is executed when the process enters the step; this is where you place your logic.












Steps support dependency injection. It is enough to register them, along with the dependencies they need, in the same container.
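As a rough sketch (assuming a Serilog-style ILogger, which is what the step examples below use, and referring ahead to the CustomStep and OutputStep classes defined later in the article), the registration might look like this:

using Microsoft.Extensions.DependencyInjection;
using Serilog;

// Illustrative only: steps and their dependencies go into the same container.
Log.Logger = new LoggerConfiguration().CreateLogger();

var services = new ServiceCollection();
services.AddSingleton(Log.Logger);    // the ILogger the steps ask for in their constructors
services.AddTransient<CustomStep>();  // each step type is registered so the engine can resolve it
services.AddTransient<OutputStep>();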







Obviously, a process needs its own context - a place where intermediate results of execution can be stored. Wf core has a built-in execution context that stores information about the current state of a process; you can access it through the context parameter of the Run() method. In addition to the built-in one, we can use our own context.







We will look at how processes are described and registered in more detail below; for now let's simply define our own context class.







public class ProcessContext
{
    public int Number1 { get; set; }
    public int Number2 { get; set; }
    public string StepResult { get; set; }

    public ProcessContext()
    {
        Number1 = 1;
        Number2 = 2;
    }
}





The Number1 and Number2 variables hold the numbers; StepResult holds the result of the step.







The context is settled. Now we can write our own step:







public class CustomStep : StepBody
{
    private readonly ILogger _log;

    public int Input1 { get; set; }
    public int Input2 { get; set; }
    public string Action { get; set; }
    public string Result { get; set; }

    public CustomStep(ILogger log)
    {
        _log = log;
    }

    public override ExecutionResult Run(IStepExecutionContext context)
    {
        Result = "none";
        if (Action == "sum")
        {
            Result = (Input1 + Input2).ToString();
        }
        if (Action == "dif")
        {
            Result = (Input1 - Input2).ToString();
        }
        return ExecutionResult.Next();
    }
}





The logic is extremely simple: two numbers and the name of an operation come in, and the result of the operation is written to the output variable Result. If the operation is not recognized, the result will be none.







We have a context and a step with the logic we need. Now we have to register our process with the engine.







Process description. Registration in the engine.



There are two ways to describe a process. The first is describing it in code - hardcoding it.







The process is described through a fluent interface. You need to implement the generic IWorkflow<T> interface, where T is the context model class - in our case, ProcessContext.







It looks like this:







public class SimpleWorkflow : IWorkflow<ProcessContext>
{
    public void Build(IWorkflowBuilder<ProcessContext> builder)
    {
        // process description goes here
    }

    public string Id => "SomeWorkflow";
    public int Version => 1;
}





The description itself goes inside the Build method. The Id and Version fields are also required. Wf core supports process versioning - you can register n versions of a process under the same identifier. This is convenient when you need to update an existing process while letting already running instances live out their lives.







We describe a simple process:







public class SimpleWorkflow : IWorkflow<ProcessContext>
{
    public void Build(IWorkflowBuilder<ProcessContext> builder)
    {
        builder.StartWith<CustomStep>()
            .Input(step => step.Input1, data => data.Number1)
            .Input(step => step.Input2, data => data.Number2)
            .Input(step => step.Action, data => "sum")
            .Output(data => data.StepResult, step => step.Result)
            .EndWorkflow();
    }

    public string Id => "SomeWorkflow";
    public int Version => 1;
}





Translated into human language, it reads roughly like this: the process begins with the CustomStep step; the step's Input1 field takes its value from the context field Number1; Input2 takes its value from Number2; the Action field is hard-coded to "sum"; the output from the Result field is written to the StepResult context field; the process ends.







You will agree the code turned out quite readable - you can make sense of it even without much knowledge of C#.







Let's add one more step to our process, which will write the result of the previous step to the log:







public class OutputStep : StepBody
{
    private readonly ILogger _log;

    public string TextToOutput { get; set; }

    public OutputStep(ILogger log)
    {
        // the logger comes from the DI container
        _log = log;
    }

    public override ExecutionResult Run(IStepExecutionContext context)
    {
        _log.Debug(TextToOutput);
        return ExecutionResult.Next();
    }
}





And update the process:







public class SimpleWorkflow : IWorkflow<ProcessContext>
{
    public void Build(IWorkflowBuilder<ProcessContext> builder)
    {
        builder.StartWith<CustomStep>()
            .Input(step => step.Input1, data => data.Number1)
            .Input(step => step.Input2, data => data.Number2)
            .Input(step => step.Action, data => "sum")
            .Output(data => data.StepResult, step => step.Result)
            .Then<OutputStep>()
                .Input(step => step.TextToOutput, data => data.StepResult)
            .EndWorkflow();
    }

    public string Id => "SomeWorkflow";
    public int Version => 2;
}





Now, after the step with the addition, comes a step that writes the result to the log. As its input we pass the StepResult context variable, into which the previous step wrote its result. I will take the liberty of asserting that describing processes in code (hardcoding) is of little use in real systems, except perhaps for some internal office processes. It is far more interesting to be able to store the schema separately: at the very least, we don't have to rebuild the project every time we need to change a process or add a new one. Wf core provides this by supporting json schemas. Let's keep extending our example.







Json process description



From here on I will not show the code-based description - it is not particularly interesting and would only inflate the article.







Wf core supports describing schemas in json. In my opinion, json is more readable than xaml (a fine topic for a holy war in the comments :)). The file structure is quite simple:







 { "Id": "SomeWorkflow", "Version": 1, "DataType": "App.ProcessContext, App", "Steps": [ { /*step1*/ }, { /*step2*/ } ] }
      
      





The DataType field contains the fully qualified name of the context class and the name of the assembly in which it is defined. Steps holds the collection of all the steps in the process. Let's fill in the Steps element:







 { "Id": "SomeWorkflow", "Version": 1, "DataType": "App.ProcessContext, App", "Steps": [ { "Id": "Eval", "StepType": "App.CustomStep, App", "NextStepId": "Output", "Inputs": { "Input1": "data.Number1", "Input2": "data.Number2" }, "Outputs": { "StepResult": "step.Result" } }, { "Id": "Output", "StepType": "App.OutputStep, App", "Inputs": { "TextToOutput": "data.StepResult" } } ] }
      
      





Let's take a closer look at the structure of the step description via json.







The Id and NextStepId fields store the identifier of the current step and a pointer to the step that should come next. The order of the elements in the collection does not matter.







StepType is similar to the DataType field: it contains the full name of the step class (a type that inherits from StepBody and implements the step's logic) and the assembly name. The Inputs and Outputs objects are more interesting - they are defined as mappings.







For Inputs, the name of a json element is the name of a field of our step class, and its value is an expression over the process context.







For Outputs it is the other way around: the name of a json element is the name of a field of the process context class, and its value is an expression over the step's fields.







Why are context fields referenced as data.{field_name}, and step fields in Outputs as step.{field_name}? Because wf core evaluates the element value as a C# expression (a dynamic expressions library is used). This is quite a handy thing: with it you can put some business logic right inside the schema - provided, of course, that your architect approves of such an outrage :).
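For example, a hypothetical input mapping (purely illustrative, using the fields of our context) could compute its values right in the schema:

"Inputs": {
  "Input1": "data.Number1 * 2",
  "Input2": "data.Number1 + data.Number2",
  "Action": "\"sum\""
}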







Let's diversify the schema with standard primitives: add the conditional If step and the handling of an external event.







If


The If primitive. This is where the difficulties begin. If you are used to bpmn and draw your processes in that notation, do not expect an easy ride. According to the documentation, the step is described as follows:







 { "Id": "IfStep", "StepType": "WorkflowCore.Primitives.If, WorkflowCore", "NextStepId": "nextStep", "Inputs": { "Condition": "<<expression to evaluate>>" }, "Do": [ [ { /*do1*/ }, { /*do2*/ } ] ] }
      
      





Do you get the feeling something is missing here? I do. The step takes Condition, an expression, as its input, and then we list the steps to execute inside the Do array. So where is the False branch? Why is there no Do array for False? There is one, in a sense: the False branch is simply the continuation of the process, i.e. following the pointer in NextStepId. At first this confused me constantly. Fine, we sorted that out. Or did we? If the actions for the True case have to go inside Do, imagine what "beautiful" json we end up with - and what if a dozen of these Ifs are nested inside one another? Everything goes sideways (and people say xaml schemas are hard to read). There is a small hack, though, short of simply buying a wider monitor. As mentioned a little above, the order of steps in the collection does not matter - transitions follow the pointers. We can use that. Add one more step:







 { "Id": "Jump", "StepType": "App.JumpStep, App", "NextStepId": "" }
      
      





Can you guess what I am getting at? That's right: we introduce a service step that does nothing itself and simply passes the process through to whatever step is specified in NextStepId.
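The article's schema references App.JumpStep but does not show the class itself; a minimal sketch of such a transit step could look like this - it performs no work and simply yields control to whatever NextStepId points at:

public class JumpStep : StepBody
{
    public override ExecutionResult Run(IStepExecutionContext context)
    {
        // no business logic: the step exists only so that its NextStepId
        // can redirect the process to an arbitrary step in the schema
        return ExecutionResult.Next();
    }
}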







Let's update our schema:







 { "Id": "SomeWorkflow", "Version": 1, "DataType": "App.ProcessContext, App", "Steps": [ { "Id": "Eval", "StepType": "App.CustomStep, App", "NextStepId": "MyIfStep", "Inputs": { "Input1": "data.Number1", "Input2": "data.Number2" }, "Outputs": { "StepResult": "step.Result" } }, { "Id": "MyIfStep", "StepType": "WorkflowCore.Primitives.If, WorkflowCore", "NextStepId": "OutputEmptyResult", "Inputs": { "Condition": "!String.IsNullOrEmpty(data.StepResult)" }, "Do": [ [ { "Id": "Jump", "StepType": "App.JumpStep, App", "NextStepId": "Output" } ] ] }, { "Id": "Output", "StepType": "App.OutputStep, App", "Inputs": { "TextToOutput": "data.StepResult" } }, { "Id": "OutputEmptyResult", "StepType": "App.OutputStep, App", "Inputs": { "TextToOutput": "\"Empty result\"" } } ] }
      
      





The If step checks whether the result of the Eval step is empty. If it is not empty, we output the result; if it is empty, we output the message "Empty result". The Jump step takes the process to the Output step, which sits outside the Do collection. This way we have preserved the "verticality" of the schema. The same trick can be used to jump n steps backwards, i.e. to organize a loop. Wf core has built-in primitives for loops, but they are not always convenient; in bpmn, for example, loops are organized through If.







Whether to use this approach or the standard one is up to you. For us, this way of organizing steps turned out to be more convenient.







WaitFor


The WaitFor primitive lets the outside world influence a process that is already running - for example, when at some stage a user has to approve how the process should proceed. The process waits at the WaitFor step until it receives the event it has subscribed to.







Primitive structure:







 { "Id": "Wait", "StepType": "WorkflowCore.Primitives.WaitFor, WorkflowCore", "NextStepId": "NextStep", "CancelCondition": "If(cancel==true)", "Inputs": { "EventName": "\"UserAction\"", "EventKey": "\"DoSum\"", "EffectiveDate": "DateTime.Now" } }
      
      





I will explain the parameters a little.







CancelCondition is a condition for interrupting the wait. It makes it possible to stop waiting for an event and move on through the process. For example, if a process is waiting for n different events at once (wf core supports parallel execution of steps), it does not have to wait for all of them to arrive - this is where CancelCondition helps. We add a boolean flag to the context variables and, when an event arrives, set the flag to true so that all the WaitFor steps complete.
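As a sketch (the flag name Cancel is my own, not something from the engine), the context would gain a boolean property, and the WaitFor step's CancelCondition would then reference it, e.g. with an expression along the lines of "data.Cancel":

public class ProcessContext
{
    public int Number1 { get; set; }
    public int Number2 { get; set; }
    public string StepResult { get; set; }

    // set to true when the awaited event arrives so that all parallel
    // WaitFor steps finish through their CancelCondition
    public bool Cancel { get; set; }
}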







EventName and EventKey are the name and key of the event. These fields are needed so that in a real system, with a large number of processes running at the same time, the engine can tell events apart - i.e. understand which event is meant for which process and which step.







EffectiveDate is an optional field that adds a timestamp to the event. It comes in handy if you need to publish an event "into the future". To have it published immediately, leave the parameter empty or set it to the current time.







It is not always convenient to dedicate a separate step to handling reactions from the outside; in fact, it is usually redundant. The extra step can be avoided by adding the wait for an external event, and the logic to handle it, to an ordinary step. Let's extend our CustomStep with a subscription to an external event:







public class CustomStep : StepBody
{
    private readonly ILogger _log;

    public string TextToOutput { get; set; }

    public CustomStep(ILogger log)
    {
        _log = log;
    }

    public override ExecutionResult Run(IStepExecutionContext context)
    {
        // business logic goes here

        return ExecutionResult.WaitForEvent("eventName", "eventKey", DateTime.Now);
    }
}





We used the standard WaitForEvent() helper method. It takes the EventName, EventKey and EffectiveDate parameters mentioned earlier. After the step's logic completes, the process will wait for the described event and call the Run() method again at the moment the event is published on the engine's bus. In its current form, however, we cannot tell the initial entry into the step apart from the entry after the event, and we would like to separate the before and after logic at the step level. The EventPublished flag helps with this. It lives inside the general process context and can be read like this:







var ifEvent = context.ExecutionPointer.EventPublished;





Based on this flag, you can cleanly split the logic into what happens before and after the external event.
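A sketch of what that might look like inside a step's Run method (the event name and key are taken from the WaitFor example above; that the event payload is exposed via ExecutionPointer.EventData is my assumption):

public override ExecutionResult Run(IStepExecutionContext context)
{
    if (!context.ExecutionPointer.EventPublished)
    {
        // first entry: do the "before" work and subscribe to the event
        return ExecutionResult.WaitForEvent("UserAction", "DoSum", DateTime.Now);
    }

    // second entry: the event has arrived - run the "after" logic
    // (assumption: the payload is available through ExecutionPointer.EventData)
    var eventData = context.ExecutionPointer.EventData;
    return ExecutionResult.Next();
}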







An important clarification: as conceived by the engine's author, a step can subscribe to only one event and react to it only once. For some tasks this is a very unpleasant limitation; we even had to patch the engine to work around it. We will skip the description of those changes here, otherwise the article would never end :). More advanced usage patterns and examples of such modifications will be covered in future articles.







Registering the process in the engine. Publishing events to the bus.



So, we have figured out how to implement step logic and describe a process. What remains is the most important part, without which the process will not run: the description has to be registered.







We will use the standard AddWorkflow() extension method, which registers the engine's dependencies in our IoC container.







It looks like this:







 public static IServiceCollection AddWorkflow(this IServiceCollection services, Action<WorkflowOptions> setupAction = null)
      
      





IServiceCollection is the contract for a collection of service descriptors. It lives in Microsoft's built-in DI (you can read more about it in the documentation).







WorkflowOptions holds the basic engine settings. You do not have to set them yourself; the defaults are perfectly fine for a first acquaintance.
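A minimal sketch of that registration: with no options the engine falls back to its built-in in-memory providers; the commented-out UseMongoDB call comes from a separate persistence provider package and is shown purely as an illustration.

// Register the engine with default (in-memory) providers.
services.AddWorkflow();

// Or, with an external persistence store (requires the corresponding
// provider package; illustrative only):
// services.AddWorkflow(options =>
//     options.UseMongoDB("mongodb://localhost:27017", "workflows"));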







If the process was described in code, then registration takes place like this:







var host = _serviceProvider.GetService<IWorkflowHost>();
host.RegisterWorkflow<SimpleWorkflow, ProcessContext>();





If the process is described via json, it is registered like this (the json description must of course be loaded from wherever it is stored first):







var host = _serviceProvider.GetService<IWorkflowHost>();
var definitionLoader = _serviceProvider.GetService<IDefinitionLoader>();
var definition = definitionLoader.LoadDefinition(json); // json - the schema string loaded from storage





Further, for both options, the code will be the same:







host.Start();                                        // start hosting processes
host.StartWorkflow(definitionId, version, context);  // start an instance of the registered process
host.Stop();                                         // stop hosting





The definitionId parameter is the process identifier - what is written in the process's Id field. In our case, id = SomeWorkflow.







The version parameter specifies which version of the process to run. The engine lets you register n versions of a process under one identifier at the same time. This is convenient when you need to change the process description without breaking tasks that are already running: new instances are created from the new version, while old ones quietly live out their lives on the old one.







The context parameter is an instance of the process context.
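For our example, a concrete call would look something like this (illustrative only):

// Start version 1 of the process registered under the "SomeWorkflow" id,
// passing in a fresh context instance.
await host.StartWorkflow("SomeWorkflow", 1, new ProcessContext());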







The host.Start() and host.Stop() methods start and stop process hosting. If launching processes is an occasional task performed periodically, hosting can be stopped in between. If executing processes is the application's main job, there is no need to stop hosting at all.







To send messages from the outside world to the engine's bus, which then distributes them to subscribers, there is the following method:







Task PublishEvent(string eventName, string eventKey, object eventData, DateTime? effectiveDate = null);





Its parameters were described earlier in the article (see the part on the WaitFor primitive).
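For example, to wake up the WaitFor step described above, the call might look like this (the anonymous payload object is just an illustration - any object can be passed as eventData):

// Publish the event the "Wait" step is subscribed to; omitting effectiveDate
// (or passing the current time) delivers it immediately.
await host.PublishEvent("UserAction", "DoSum", new { ApprovedBy = "admin" });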







Conclusion



We definitely took a risk when we decided in favor of Workflow Core - an open-source project actively developed by one person, and with rather sparse documentation at that. You are also unlikely to find real-world accounts of using wf core in production systems (apart from ours). Of course, by carving out a separate abstraction layer we insured ourselves against failure and the need to quickly roll back to WWF or to a home-grown solution, but everything went quite well and that failure never came.







Switching to the open-source Workflow Core engine solved a number of problems that kept us from living peacefully on WWF. The most important of them, of course, is support for .Net Core - and the absence of it, even in the plans, for WWF.







Next comes the open source code itself. While working with WWF and getting various errors from its depths, the ability to at least read the source would have helped a lot, never mind changing anything in it. With Workflow Core there is complete freedom (including the MIT license). If an error suddenly surfaces from the depths of the engine, you just pull the sources from github and calmly look for the cause. Even the mere ability to run the engine in debug mode with breakpoints makes life much easier.







Of course, while solving some problems, Workflow Core brought new ones of its own. We had to make a significant number of changes to the engine core. But adapting it to our needs still cost less time than developing our own engine from scratch. The final solution turned out quite acceptable in terms of speed and stability; it let us forget about engine problems and focus on developing the business value of the product.







P.S. If the topic turns out to be interesting, there will be more articles on wf core, with a deeper look at the engine and at solving more complex business problems.







