Advice from the creator of RimWorld: cognitive biases in predicting how fun a game will be

(2007 article)



To develop games, we need to evaluate whether they will be fun. Given a description of a game, it would be good to know whether it will work before we build it.



This post is about the common naive method that is often used to evaluate unbuilt game designs. It is what I call mental modeling, and it is the source of one of the most basic mistakes in game design.



Mental modeling is the process of imagining yourself playing a game in your head and then judging the game by how you felt during that imaginary play. For example, when evaluating a first-person shooter, you might imagine an intense shootout in which the player triumphs in some particularly exciting way.



Game previews and advertisements are designed to provoke exactly this kind of imagining. They often describe specific game scenarios in poetic detail. The goal is to make you imagine yourself playing the game and enjoying the experience. This is misleading, because the quality of one possible micro-experience in a game says very little about the quality of the game design as a whole.



The main problem is that mental modeling lets you evaluate only a very short slice of the gameplay. People naturally imagine the coolest of the possible gameplay moments, which means the rest of the game is ignored entirely. In almost all cases, that imagined experience cannot be extrapolated to the rest of the gameplay. A game has to be fun throughout its playing time to succeed. Mental modeling therefore makes a game that is interesting for only 5% of its length look really good, even though the remaining 95% will be very boring.



Mental modeling also ignores the learning curve. Since the game is in your head, you understand it fully, and that understanding comes automatically. Unfortunately, there are many possible game designs that become incredibly good only after the player has figured out how to play them well. Evaluating games by mental modeling hides the cost of learning the game.



Mental modeling also leads to design ideas that lack rigor and internal coherence. The transition from mental model to code almost always reveals gaping holes in the game logic that cannot be patched elegantly.



There are two cognitive biases that push people toward mental modeling. First, people respond very strongly to storytelling. Games, however, are not stories; they are systems from which stories can emerge. By evaluating only one possible story, we completely miss the quality of the system that produces those stories.



The second bias is confirmation bias. This is a flaw in how people test hypotheses: they tend to look for evidence that supports the hypothesis, when trying to falsify it is far more useful. In this case, a person using mental modeling looks for "evidence" to confirm their belief that the design in question will be good. They almost always come up with some idealized scenario and then extrapolate the resulting emotions to the entire game design. That reasoning is flawed.



Instead, try to falsify the game design. Do not try to come up with the coolest possible scenario; try to come up with the most boring scenario possible. You can usually produce plenty of them, because most game designs do not survive this kind of attack. If your design does, you know you have a real gem.



It is almost impossible to avoid mental modeling. Just understand that it will lead you astray if you do it naively.
