FP vs OOP

Not long ago, several posts appeared on Habr contrasting the functional and object-oriented approaches, which sparked a heated discussion in the comments about what object-oriented programming really is and how it differs from functional programming. I, albeit with some delay, want to share what Robert Martin, also known as Uncle Bob, thinks about this.

Over the past few years, I have repeatedly had the chance to pair-program with people learning functional programming who were biased against OOP. The bias usually came out in statements like: "Well, that looks too object-ish."

I think this comes from the belief that FP and OOP are mutually exclusive. Many seem to think that if a program is functional, then it is not object-oriented. I believe such an opinion is a natural consequence of learning something new.

When we adopt a new technique, we often begin to avoid the old techniques we used before. This is natural, because we believe the new technique is "better", and therefore the old technique is probably "worse".

In this post I will defend the view that while OOP and FP are orthogonal, they are not mutually exclusive concepts. That a good functional program can (and should) be object-oriented. And that a good object-oriented program can (and should) be functional. But to make that argument, we first have to define our terms.

What is OOP?

I will approach the question from a reductionist perspective. There are many valid definitions of OOP that cover many concepts, principles, techniques, patterns, and philosophies. I intend to ignore them and focus on the essence itself. Reductionism is needed here because all the wealth of features surrounding OOP is not really specific to OOP; it is just part of the wealth of features found in software development in general. Here I will focus on the part of OOP that is defining and irreducible.

Look at two expressions:

1: f(o);
2: o.f();

What is the difference?

There is clearly no semantic difference. The whole difference is in the syntax. But one looks procedural, and the other looks object-oriented. This is because we are used to expression 2 implicitly carrying a special behavioral semantics that expression 1 does not have. That special semantics is polymorphism.

When we see expression 1, we see a function f being called, with the object o passed into it. This implies that there is only one function named f, and that it is probably not a member of a cohort of functions surrounding o.

On the other hand, when we see expression 2, we see an object named o to which a message named f is sent. We expect that there may be other kinds of objects that can receive the message f, and therefore we do not know which specific behavior to expect from the call. The behavior depends on the type of o; that is, f is polymorphic.
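To make this concrete, here is a minimal sketch in Java (the type and method names are hypothetical, not from the original article):

    // Two kinds of objects both receive the message f.
    interface Receiver {
        void f();
    }

    class A implements Receiver {
        public void f() { System.out.println("A's behavior"); }
    }

    class B implements Receiver {
        public void f() { System.out.println("B's behavior"); }
    }

    class Demo {
        public static void main(String[] args) {
            // The caller cannot tell from this line which f will run;
            // the behavior depends on the runtime type of o.
            Receiver o = (args.length > 0) ? new A() : new B();
            o.f();
        }
    }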

This expectation of polymorphic behavior from methods is the essence of object-oriented programming. This is a reductionist definition, and the property cannot be removed from OOP: OOP without polymorphism is not OOP. All the other properties of OOP, such as data encapsulation and methods tied to that data, and even inheritance, relate more to expression 1 than to expression 2.

Programmers using C and Pascal (and to some extent even Fortran and COBOL) have always created systems of encapsulated functions and data structures. You do not even need an object-oriented language to create such structures. Encapsulation, and even simple inheritance, is obvious and natural in those languages. (More natural in C and Pascal than in some others.)

Therefore, what really distinguishes OOP programs from non-OOP programs is polymorphism.

You might want to object that polymorphism can be achieved simply by using a switch statement or long if/else chains inside f, as sketched below. This is true, so I need to add one more constraint to OOP.
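For illustration, a hedged sketch of that switch-based approach (the Shape and Kind names are hypothetical); note how the dispatching function must enumerate every variant itself:

    enum Kind { CIRCLE, SQUARE }

    class Shape {
        final Kind kind;
        final double size;
        Shape(Kind kind, double size) { this.kind = kind; this.size = size; }
    }

    class Geometry {
        // "Polymorphic" behavior via a switch: area() varies with the kind,
        // but this source file must know about every kind of Shape.
        static double area(Shape s) {
            switch (s.kind) {
                case CIRCLE: return Math.PI * s.size * s.size;
                case SQUARE: return s.size * s.size;
                default: throw new IllegalArgumentException("unknown kind");
            }
        }
    }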

The use of polymorphism must not create a dependency of the caller on the callee.

To explain this, let's look at the expressions again. Expression 1: f(o) seems to depend on the function f at the source-code level. We draw this conclusion because we assume that there is only one f, and that the caller must therefore know about the callee.

However, when we look at expression 2: o.f(), we assume something else. We know there can be many implementations of f, and we do not know which of those functions will actually be called. Therefore, the source code containing expression 2 does not depend on the called function at the source-code level.

More specifically, this means that the modules (source files) containing polymorphic function calls should not refer to the modules (source files) containing the implementations of those functions. There can be no include or use or require, or any other such keyword, that makes the calling source files depend on the implementing ones.
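In Java terms that could look like the following sketch (the file, package, and type names are hypothetical). The file with the polymorphic call imports nothing from the files that implement the function; the implementations depend on the abstraction instead:

    // file app/Shape.java: the abstraction the caller depends on
    package app;
    public interface Shape {
        double area();
    }

    // file app/Report.java: contains the polymorphic call, yet has no
    // import of any module that implements Shape
    package app;
    public class Report {
        public static double totalArea(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area(); // polymorphic call
            return sum;
        }
    }

    // file impl/Circle.java: the source-code dependency points toward
    // the abstraction, not from the caller to the implementation
    package impl;
    import app.Shape;
    public class Circle implements Shape {
        private final double radius;
        public Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }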

So, our reductionist definition of OOP is:

A technique that uses dynamic polymorphism to call functions, without creating a source-code dependency of the caller on the callee.

What is FP?

Again, I will take the reductionist approach. FP has a rich tradition and history whose roots run deeper than programming itself. There are principles, techniques, theorems, philosophies, and concepts that permeate the paradigm. I will ignore all of that and go straight to the essence, to the irreducible property that separates FP from other styles. Here it is:

f(a) == f(b) if a == b.

In a functional program, calling a function with the same argument gives the same result no matter how long the program has been running. This is sometimes called referential transparency.
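As a small hedged illustration in Java (the class and function names are hypothetical):

    class Pure {
        // Referentially transparent: square(2) is 4 on every call,
        // no matter how long the program has been running.
        static int square(int x) { return x * x; }
    }

    class Impure {
        static int callCount = 0;
        // Not referentially transparent: hidden state leaks into the
        // result, so f(2) now may differ from f(2) a moment later.
        static int f(int x) { return x * x + callCount++; }
    }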

It follows from the above that f must not change any part of the global state that affects the behavior of f. Moreover, if we say that f stands for every function in the system (that is, every function in the system must be referentially transparent), then no function in the system can change the global state. No function can do anything that would cause any other function in the system to return a different value when called with the same arguments.

This has a deeper consequence: no named value can be changed. That is, there is no assignment operator.

If you think this statement through, you may conclude that a program consisting only of referentially transparent functions cannot do anything at all, since any useful behavior of a system changes the state of something, even if it is just the state of a printer or a display. However, if we exempt the hardware, and all the elements of the world around us, from the requirement of referential transparency, it turns out that we can build very useful systems.

The trick, of course, is recursion. Consider a function that takes a state structure as its argument. That argument holds all the state information the function needs to do its job. When the work is done, the function creates a new state structure whose contents differ from the old one. And as its last action, the function calls itself with the new structure as the argument.
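A minimal sketch of this trick in Java (the State record and names are hypothetical; note that, unlike some functional languages, Java does not perform the tail-call optimization mentioned in the footnote below):

    // All the state the program needs, carried as an immutable value.
    record State(int tick) {}

    class Loop {
        // Each step derives a fresh State and recurses with it;
        // no named value is ever reassigned.
        static State run(State s) {
            if (s.tick() >= 3) return s;             // stop condition for the demo
            System.out.println("tick " + s.tick());
            return run(new State(s.tick() + 1));     // tail call with the new state
        }

        public static void main(String[] args) {
            run(new State(0));
        }
    }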

This is just one of the simple tricks a functional program can use to deal with changing state without ever changing state [1].

So, the reductionist definition of functional programming:

Referential transparency: you cannot reassign values.

FP vs OOP

At this point, both the proponents of OOP and the proponents of FP are looking at me through rifle scopes. Reductionism is not the best way to make friends. But sometimes it is useful. In this case, I think it is useful to shed some light on the never-ending FP vs. OOP holy war.

It is clear that the two reductionist definitions that I have chosen are completely orthogonal. Polymorphism and Referential Transparency have nothing to do with each other. They do not intersect in any way.

But orthogonality does not imply mutual exclusion (ask James Clerk Maxwell). It is entirely possible to create a system that uses both dynamic polymorphism and referential transparency. It is not only possible, it is right and good!

Why is this combination good? For exactly the same reasons that each of its components is good! Systems built on dynamic polymorphism are good because they are loosely coupled. Dependencies can be inverted and placed on opposite sides of architectural boundaries. Such systems can be tested with Mocks and Fakes and other kinds of Test Doubles. Modules can be modified without forcing changes in other modules. That makes such systems easier to change and improve.
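For example (a hedged sketch, names hypothetical), a polymorphic interface makes a hand-written Fake trivial to substitute in tests:

    interface Clock {
        long now();
    }

    // A Fake: a test double standing in for the real implementation
    // on the other side of the architectural boundary.
    class FakeClock implements Clock {
        private final long fixed;
        FakeClock(long fixed) { this.fixed = fixed; }
        public long now() { return fixed; } // deterministic in tests
    }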

Systems built on referential transparency are also good because they are predictable. Immutable state makes such systems easier to understand, change, and improve. It also greatly reduces the likelihood of race conditions and other concurrency problems.

The main idea here is this:

There is no FP vs. OOP holy war

FP and OOP work well together. Both are good and proper to use in modern systems. A system built on a combination of OOP and FP principles maximizes flexibility, maintainability, testability, simplicity, and robustness. Removing one in order to add the other would only worsen the structure of the system.
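To make the combination concrete, here is a hedged sketch (all names hypothetical): a polymorphic interface whose implementations are immutable, giving callers both dynamic dispatch and referential transparency.

    // The OOP half: callers depend only on this abstraction.
    interface Account {
        Account deposit(long amount);
        long balance();
    }

    // The FP half: an immutable implementation; "changing" the balance
    // means returning a new value, never mutating the old one.
    final class SimpleAccount implements Account {
        private final long balance;
        SimpleAccount(long balance) { this.balance = balance; }
        public Account deposit(long amount) {
            return new SimpleAccount(balance + amount);
        }
        public long balance() { return balance; }
    }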

[1] Since we use machines with a von Neumann architecture, we assume they have memory cells whose state actually changes. In the recursion mechanism I described, tail-call optimization will prevent new stack frames from being created, and the original stack frame will be reused. But this violation of referential transparency is (usually) hidden from the programmer and does not affect anything.