Wizards and warriors, part five

We’ve been struggling in the last four episodes to encode the rules of our business domain — which, recall, could be wizards and warriors or papers and paycheques or whatever — into the C# type system. The tool we’ve chosen seems to be resisting our attempts, and so maybe it’s a good time to take a step back and ask if we’re on the right track in the first place.

The fundamental idea in the first and second episodes was to use the type system to detect and prevent violations of the rules of the business domain at compile time. That effort has largely failed, due to the difficulty of representing a subtype with a restriction, like “a Wizard is a Player that cannot use a Sword”. In several of our attempts we ended up throwing exceptions, so that the rule was enforced by the runtime rather than the compiler. What is the nature of this exception?

I classify exceptions as fatal, boneheaded, vexing and exogenous. Plainly the exception here is neither fatal nor exogenous. I can’t advocate for creating vexing exceptions — that is, exceptions that force the caller to wrap every call that sets the Weapon of a Player in a try-catch that catches the entirely-expected exception.

If it is boneheaded then there must be a way for the caller that is trying to set the Weapon of a Player to know that they are about to do something illegal so that they can avoid it. There are two ways to do that.

First, the caller could know all the rules of the system to make sure that they’re not giving a Sword to a Wizard — and now we are encoding the rules of the business domain in many places, which isn’t DRY. And it puts a high burden on the developer, which leads to exactly the correctness problems we are attempting to avoid.

Second, Player could provide a TryChangingWeapon method that returns a bool instead of throwing, and then the caller has to deal with the resulting failure case.

No matter how we slice it, if the compiler can’t prevent the rules violation then somehow the code has to manage “I tried to do something illegal and failed” at runtime.
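
In sketch form, that second option looks something like this; the wizard-and-sword check is hard-coded inline purely for illustration, since a real game would consult its rules elsewhere:

```csharp
public enum PlayerKind { Wizard, Warrior }
public enum WeaponKind { Dagger, Sword, Staff }

public sealed class Player
{
    public PlayerKind Kind { get; }
    public WeaponKind Weapon { get; private set; }

    public Player(PlayerKind kind, WeaponKind weapon)
    {
        Kind = kind;
        Weapon = weapon;
    }

    // Returns false instead of throwing when the change is disallowed,
    // so the caller is forced to handle the failure case explicitly.
    public bool TryChangingWeapon(WeaponKind weapon)
    {
        if (Kind == PlayerKind.Wizard && weapon == WeaponKind.Sword)
            return false;
        Weapon = weapon;
        return true;
    }
}
```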

Any time I think about dealing with failure at runtime I ask myself “is the failure condition truly exceptional?” What if we said, hold on a minute, it is not exceptional to want a wizard to wield a sword. Doing so might be disallowed by our policies about allowable weapons, but it is not exceptional to make the attempt.

I thought about this a lot when designing the semantic analyzer for Roslyn. We could have used exceptions as our model for reporting compiler errors, but we rejected that immediately. When you’re writing code in the IDE, it is correct code that is the exception! Code as you’re typing it is almost always wrong; the business domain of the analyzer is dealing with incorrect code and analyzing it for IntelliSense purposes. The last thing we wanted to do was to make it impossible in the type system to represent incorrect C# programs.

In the third and fourth episodes of this series, we saw that it was also difficult to figure out first, how to invoke the right code to handle various specific rules, and second, where to put that code. Even leaving aside the problems with the highly verbose and complex Visitor Pattern, and the dangerous dynamic invocation pattern, we’ve still got the fundamental problem: why is resolving “a Paladin in the Church attacks a Werewolf with a Sword” a concern of any one of those types, over any other? Why should that code go in the Paladin class as opposed to, say, the Sword class?
The fundamental problem is my initial assumption that the business rules of the system are to be expressed by writing code inside methods associated with classes in the business domain — the wizards and daggers and vampires. We keep on talking about “rules”, and so apparently the business domain of this program includes something called a “rule”, and those rules interact with all the other objects in the business domain. So then should “rule” be a class? I don’t see why not! It is what the program is fundamentally about. It seems likely that there could be hundreds or thousands of these rules, and that they could change over time, so it seems reasonable to encode them as classes.

Once we realize that “rule” needs to be a class, suddenly it becomes clear that starting our design with

  • A wizard is a kind of player.
  • A warrior is a kind of player.
  • A staff is a kind of weapon.
  • A sword is a kind of weapon.
  • A player has a weapon.

completely misses the actual business of the program, which is maintaining consistent state in the face of user attempts to edit that state. It would have been better to start with:

  • The fundamental concerns of the program are users, commands, game state, and rules.
  • A user provides a sequence of commands.
  • A command is evaluated in the context of the rules and current game state, and produces an effect.

What are some things we know about effects?

  • Doing nothing is an effect.
  • Mutation of game state is an effect.
  • Playing a sound is an effect.
  • Sequential composition of any number of effects is an effect.
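
That last point suggests the classic composite pattern. A minimal sketch of what effects might look like; all the names here are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

public sealed class GameState { /* players, weapons, positions, ... */ }

public interface IEffect
{
    void Execute(GameState state);
}

// Doing nothing is an effect.
public sealed class NoEffect : IEffect
{
    public void Execute(GameState state) { }
}

// Any state mutation (or sound, or animation) can be wrapped up as an effect.
public sealed class ActionEffect : IEffect
{
    private readonly Action<GameState> action;
    public ActionEffect(Action<GameState> action) => this.action = action;
    public void Execute(GameState state) => action(state);
}

// Sequential composition of any number of effects is an effect.
public sealed class CompositeEffect : IEffect
{
    private readonly IReadOnlyList<IEffect> effects;
    public CompositeEffect(params IEffect[] effects) => this.effects = effects;
    public void Execute(GameState state)
    {
        foreach (var effect in effects)
            effect.Execute(state);
    }
}
```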

And what are some of the things we know about rules?

  • Rules determine the effects that result from a particular player taking a particular action; an action may involve arbitrarily many other game elements.
  • Some rules describe universally applicable state invariants that must never be violated.
  • Some rules describe “default” handling of commands; the actions of these rules can be modified by other rules.
  • Some rules weaken other rules by causing the other rule to not apply to a specific situation.
  • Some rules strengthen other rules by adding additional restrictions.

Now all our previous problems fade away. A player has a weapon, great, that’s fine, we’ll make a Player class with a property of type Weapon. That code makes no attempt to try to represent that a wizard can only wield a staff or a dagger; all that code does is keep track of game state, because state is its concern.

Then we make a Command object called Wield that takes two game state objects, a Player and a Weapon. When the user issues a command to the system “this wizard should wield that sword”, then that command is evaluated in the context of a set of Rules, which produces a sequence of Effects.

We have one Rule that says that when a player attempts to wield a weapon, the effect is that the existing weapon, if there is one, is dropped and the new weapon becomes the player’s weapon. We have another rule that strengthens the first rule, that says that the first rule’s effects do not apply when a wizard tries to wield a sword. The effects for that situation are “make a sad trombone sound, the user loses their action for this turn, no game state is mutated”.

When a user issues a command “this paladin should attack that werewolf” then again, the relevant rule objects are consulted in the context of the game state (namely, the paladin is wielding a sword and standing in a church), and the effects are produced (make the sword glow, the werewolf is destroyed, add ten points to Gryffindor, whatever).
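
In sketch form, the shape of that design might be as below; every name is invented for illustration, and an effect is represented as a plain delegate for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed class Player
{
    public bool IsWizard { get; set; }
    public string Weapon { get; set; } = "";
}

// A command is a dumb description of what the user asked for;
// it makes no attempt to enforce any rules itself.
public sealed class WieldCommand
{
    public Player Player = new Player();
    public string Weapon = "";
}

public sealed class Rule
{
    public Func<WieldCommand, bool> Applies = c => false; // when does this rule fire?
    public Action<WieldCommand> Effect = c => { };        // what effect does it produce?
    public int Specificity;                               // crude stand-in for rules strengthening rules
}

public sealed class RuleBook
{
    private readonly List<Rule> rules = new List<Rule>();
    public void Add(Rule rule) => rules.Add(rule);

    // Evaluate a command in the context of the rules: here, the most
    // specific applicable rule wins. Real resolution logic would be richer.
    public void Evaluate(WieldCommand command) =>
        rules.Where(r => r.Applies(command))
             .OrderByDescending(r => r.Specificity)
             .First()
             .Effect(command);
}
```

The two rules described above then become two entries in the rulebook: a default rule whose effect makes the commanded weapon the player’s weapon, and a more specific wizard-and-sword rule whose effect plays the sad trombone and mutates no game state.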

What problems have we solved?

We no longer have the problem of trying to fit “a wizard can only use a staff or dagger” into the type system of the C# language. We have no reason to believe that the C# type system was designed to have sufficient generality to encode the rules of Dungeons & Dragons, so why are we even trying?

We have solved the problem of “where does the code go that expresses rules of the system?” It goes in objects that represent the rules of the system, not in the objects that represent the game state; the concern of the state objects is keeping their state consistent, not evaluating the rules of the game.

And we have solved — or rather, sketched a solution for solving — the problem of “how do you figure out which rules apply in a given situation?” Again, we have no reason to suppose that the rules of overload resolution and the rules of Dungeons & Dragons attack resolution have anything in common; if we’re building the latter, then we need to design a system that correctly chooses the valid rules out of a database of rules, and composes the effects of those rules sensibly. Yes, you need to build your own resolution logic, but resolving these rules is the concern of the program, so of course you’re going to have to write code to do it.

And what new scenarios have we enabled? Rules are now more like data than like code, and that is powerful!

  • We can persist rules into a database, so that rules can be changed over time without writing new code. And we get all the nice benefits of a database, like being able to roll back to previous versions if something goes wrong.
  • We can write a little Domain Specific Language that encodes rules as human-readable text.
  • We can do experiments, trying out tweaks to the rules without recompiling the program.
  • We saw in a previous episode that it might be hard to know which of several rules to choose from, or how to combine the effects when multiple rules apply. We can write test engines that try billions of possible scenarios and see if we ever run into a situation where the choice of applicable rules becomes ambiguous or violates a game invariant, and so on.

This kind of system, where rules are data, not code, seems like it would be quite a bit more heavyweight than just encoding the rules in C# and its type system, but it is also more flexible. I say that when the business domain of the program actually is evaluating complex rules and determining their actions, and particularly when those rules are likely to change over time more rapidly than the program itself changes, then it makes a lot of sense to make rules first-class objects in the program itself.

One of those points was to make a DSL that represents the rules. In fact there are DSLs where these sorts of rules are first-class elements in the language itself. A number of commenters have mentioned Inform7, a brilliant programming language for writing interactive fiction (aka “text adventures”). This series of posts was inspired in part by how Inform7 handles this problem. As some commenters noted in a previous episode, Inform7 lets you write code like this to solve our first problem (somewhat abridged from the original):

A wizard is a kind of person.
A warrior is a kind of person.
A weapon is a kind of thing.
A dagger is a kind of weapon.
A sword is a kind of weapon.
A staff is a kind of weapon.

Wielding is a thing based rulebook. The wielding rules have outcomes 
allow it (success), it is too heavy (failure), it is too magical (failure).
The wielder is a person that varies.
To consult the rulebook for (C - a person) wielding (W - a weapon):
        now the wielder is C;
        follow the wielding rules for W.

Wielding a sword: if the wielder is not a warrior, it is too heavy.
Wielding a staff: if the wielder is not a wizard, it is too magical.
Wielding a dagger: allow it.

Instead of giving a weapon (called W) to someone (called C):
        consult the rulebook for C wielding W;
        if the rule failed:
                let the outcome text be "[outcome of the rulebook]" in sentence case;
                say "[C] declines. '[outcome text].'";
        otherwise:
                now C carries W;
                say "[C] gladly accepts [the W]."

And you can write rules which modify other rules like this:

Rule for attacking a werewolf when the time is after 
midnight: decrease the chance of success by 20.

Rule for attacking a werewolf which is not the Werewolf King 
when the player is a paladin and the player wields the Holy Moon Sword: 
increase the attack power by 8.

There is no need to figure out “which class does the rule about paladins vs werewolves go in?” The rule goes in a rulebook, end of story. Like I said, Inform7 is amazing.

I started this series by saying “let’s write some classes without thinking”; the moral of the story here is of course: think about what the primary concern of your program really is before you start writing it. The classic paradigm for OOP still makes perfect sense: encode the fundamental, unchanging relationships between business elements into the type system. Here the fundamental unchanging relationships are things like “commands are evaluated in the context of state and rules, to produce a sequence of effects”, so that’s where the design should have started in the first place.

78 thoughts on “Wizards and warriors, part five”

  1. Pingback: Wizards and warriors, part four | Fabulous adventures in coding

  2. Nice 😀

    Do you know of any other implementations of a rulebook system besides Inform 7’s? How would you go about implementing (or finding off the shelf) a solution for finding the relevant rules for a situation?

    And next challenge: handle the floating intelligent sword 😉 (though I would expect you’d use object composition rather than inheritance in that case)

    • I do not know of a ready-made such system. I do know how I would approach building one. Rules form a graph; from that graph we extract a set of *applicable* rules. Of the applicable rules we then eliminate rules which are *superseded* by other rules in the set. Of the remaining rules, we examine them to see if any *conflict*, and potentially produce an error condition if the conflict is not expected by design of the rules. And of the remaining non-conflicting rules we determine how their actions compose together, for instance, what order the combined sequence of actions executes in.

      This is essentially a superset of the overload resolution algorithm, where the nodes in the graph are methods and the edges are things like the “overrides” relationship.
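
      In sketch form (all names invented, and with conflict detection and effect composition elided), the elimination of superseded rules might look like:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// "Applies" and "Supersedes" stand in for real graph queries
// against the command and the game state.
public sealed record Rule(string Name, Func<bool> Applies, Func<Rule, bool> Supersedes);

public static class Resolver
{
    // Step 1: keep only applicable rules.
    // Step 2: drop any rule superseded by another applicable rule.
    public static List<Rule> Resolve(IEnumerable<Rule> rules)
    {
        var applicable = rules.Where(r => r.Applies()).ToList();
        return applicable
            .Where(r => !applicable.Any(o => !ReferenceEquals(o, r) && o.Supersedes(r)))
            .ToList();
    }
}
```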

      • I hope you’ll go a bit more in-depth into this in a future article, sounds interesting but I don’t think I’ve quite got my head around it.

        Also with rules as data, this sounds like it could easily open up in game spell-crafting and other similar things.

      • I think tools for building expert systems are other such systems – CLIPS would be the most well known, and there is also Jess, which runs in the JVM. I used an expert system built in Jess to encode the rules of a RockPaperScissors-like game in Java, which seems very similar to the idea presented here.

  3. An interesting point that I’m surprised was left out of this article is that you have perfectly illustrated why we need to follow the Single Responsibility Principle. Players and weapons are objects; they track what they are and their condition/state. Their interactions with other objects are not their concern. That should be left to some object which only concerns itself with those interactions.

    • One of my complaints about object-oriented languages is that they impose too strong a coupling between the combinations of features seen by clients and the code structure of the underlying implementations. If client code has references to two player objects and a weapon object, it should be able to ask the first player object to attack the second player object with the weapon without having to worry about the existence of a rulebook object.

      This sort of issue comes up in a lot of real-world situations involving graphics. If I have an object whose `Paint` method needs to draw some shapes on a `Graphics` object, and most of those shapes represent operations built into `Graphics` but a few don’t, it can be awkward to have a paint method which accepts `Graphics gr` as a parameter say

        gr.DrawLine(...);
        gr.DrawLine(...);
        GraphHelpers.DrawRoundRect(gr, ...);
        gr.DrawLine(...);
      

      Extension methods can sometimes help, but unfortunately C# requires that they be public; in many situations like this it would be more useful to be able to declare them private, or else have a means of declaring a struct type which would encapsulate an immutable reference of type `Graphics` and have its outward-facing interface include all the methods of `Graphics` plus any additional methods it wished to define. Unfortunately, I know of no way to code such a thing in C# without having to write boilerplate code to chain to every single method to be proxied [it’s possible to use a dynamic proxy, but that adds other kinds of complexity].

      • C# does not require extension methods to be public. They can be internal. They can even be private, though that is not so useful because private extension methods cannot be used outside the declaring class. The declaring class, in turn, can also be internal as well as public.

        • I somewhat misspoke. My intended point was that while it makes sense to require that public extension methods be in public static classes, that should not imply that all extension methods should be so constrained, there should be support for non-public extension methods *not bound by that constraint*. In the drawing example, it would have been helpful to be able to define “DrawRoundRect” as an extension method within the type whose code calls it, without having to make the method visible in any other context. Being able to embed the extension method within the consuming class would allow it to use private and protected members of that class, something which isn’t possible with extension methods isolated in separate public static classes.

  4. I don’t understand what you have against “vexing” exceptions, as you call them.

    Why is:

    if (player.canWieldWeapon(weapon)) {
        player.wieldWeapon(weapon);
    }
    else {
        print("Unable to wield " + weapon.name());
    }

    worse than:

    try {
        player.wieldWeapon(weapon);
    }
    catch (PlayerWieldFailure pwf) {
        print("Unable to wield " + weapon.name());
    }

    Exceptions are generally used to signal that part of your program is trying to do something it is not able to do, such as violate some kind of rule or invariant. This includes your “vexing” exceptions, where your post includes a similar suggestion for parsing ints. And while you say these exceptions should be avoided, you don’t ever describe *why* you think this is the case. To me, exceptions seem like the perfect fit for the use case you’re describing.

    • In the first case you can do something fantastic: check if the player can wield a weapon without actually equipping it, so the user interface may show a list of possible weapons for the player to choose from, or enable/disable a “use” submenu, etc. So throwing an exception without offering an option to check is vexing.

      But in the case of int.Parse I agree with you, TryParse is not all that different from throwing an exception, except by the awkward way C# deals with exception handling.

      • Exceptions are expensive… In the following benchmark, a valid string and an invalid string are parsed 1,000,000 times with Parse and TryParse

        Parse, Valid input 00:00:00.1165636
        TryParse, Valid input 00:00:00.1060652
        Parse, Invalid input 00:00:30.4630575
        TryParse, Invalid input 00:00:00.0728342

        As you can see, it takes more than 400 times longer to detect an invalid input with Parse than with TryParse.

        • Excuse me? Exceptions are expensive?

          You just did a million of them in 30.4 seconds. Thus an exception “costs” 0.0000304 seconds. Anything that I can do over 10,000 times a second is difficult to call expensive without some other modifier.

          Exceptions are _more_ expensive than returning a value: I agree.
          Exceptions are 400 times slower than returning a value: sure.
          Exceptions should not be thrown in tight loops: probably, but let me profile to be sure.
          But just saying “exceptions are expensive so don’t use them” (which you did not say, by the way) is just silly.

          It does not help me to say a construct is expensive without a comparison or a context. In the final analysis every language innovation after assembly language has been trading processor cycles (which are cheap and getting cheaper) for programmer time (which is expensive and getting more expensive). Most of the time, I can afford the hit and never notice it since most computers spend most of their time waiting for the user anyway.

          If throwing exceptions makes you productive as a programmer then use them! As long as you don’t throw exceptions in tight loops, then you are unlikely to ever notice the performance difference.

          • Obviously I’m not saying to never use exceptions… I’m just mentioning performance as one of the reasons to try to avoid them when possible. But unless performance is critical, I agree that it’s not a sufficient reason not to use exceptions.

        • The current implementation of exceptions is expensive, but that is an implementation detail. .NET currently implements exceptions by throwing a hardware exception, which is pretty expensive on x86, but there are other ways of implementing exceptions. For example, the runtime could pass an extra value in a register or flag which the caller checks: if it is not zero, go to the nearest catch block, and if there is no catch block, return to the caller. Of course, all this extra checking would make the common case (no exception, in the current .NET philosophy) slower, so I understand why the .NET designers chose to do no checking on method return and instead rely on hardware exceptions (still, I wonder how the Micro Framework handles this).
          Finally, the comparison between Parse and TryParse isn’t totally fair: the first not only throws an exception in the case of invalid input but also tells you why it is invalid (overflow vs. invalid format), while the second only tells you whether the conversion succeeded.

          • There are two things at play with Parse versus TryParse, one philosophical, and one practical.

            The philosophical issue is that the rule that “exceptions should be used for exceptional circumstances” gets broken if you are using Parse to test whether a user-provided string is indeed the int that you expected. A user typing something wrong is *far* from an exceptional condition, and as a result this case should not be handled with an exception.

            The practical issue is that the .NET implementation of exceptions (rightly) optimizes the non-exceptional code flow. Putting try/catch or try/finally statements in your code costs very little. Actually throwing and catching exceptions, however, is *very* expensive. If you go back and read Chris Brumme’s blog from way back when, he has a good description of some of the rationale for how exceptions were designed in .NET.

          • There are many scenarios here, and that’s only talking about int.Parse! For example, you may want to parse user input; since humans make errors all the time, an invalid input is far from the exceptional case, and the program should be able to handle it, probably by telling the user how to fix it. Let’s call this case “data validation”; frankly, there are no decent built-in “data validation” features in C#, and ASP.NET alone created at least two mechanisms for that.
            Another case is when you are not parsing user input, maybe reading a configuration file. In this case, what do you do when the value is invalid? Throw an exception? To reach this point something went really bad; there is no point in trying to handle this situation, because if a file is corrupted the exe file may be corrupted too!
            And there may be cases in between the two above, for example, a web service call or return: it is data coming from another computer, but maybe not a trusted one. The invalid data is an exceptional case, for sure it is not supposed to happen, but it may happen and you are supposed to handle it.

            Now in all three cases there is something common to them: none will continue with the normal flow. And all involve the simple case of int.Parse and a single value; how about parsing multiple values? Or something more complicated, like parsing an XmlDocument? Once the use case becomes more complicated and there is code all around to handle the non-exceptional-non-normal flow, the code also becomes harder to read, so hard that about every example comes with a note saying “All error checking removed for expository purposes”. I think this is a clear indication that code outside normal control flow should not be mixed with the normal control flow.

            And finally, I don’t think the exception model of C#, Java or any existing language is good; this is one area where programming languages have a lot to evolve. I consider the current state “better than nothing” but still really bad, with some languages having a default error handling mechanism so bad that it’s totally unusable.

          • A fundamental difficulty with languages’ concepts of exceptions is that they very strongly associate “take action when X occurs” with “consider X resolved”, and assume both should use the same monolithic X; there’s no semantically-clean way to specify that code will have no hope of resolving an exception but will need to know about it anyway, nor for such code to notify upstream recipients of the exception that additional problems have arisen, nor to have “compound exceptions” trigger the actions associated with the multiple parts within them without considering themselves “resolved” until all parts have been acknowledged.

            The lack of such features means that the most semantically-useless way of handling unexpected exceptions from nested methods (let them bubble up the call stack) is also the most common, and that recipients of exceptions thrown from nested method calls often have no way of knowing whether the exceptions represent “The requested action could not be completed; the attempt had no side-effects” or something more serious.

    • There are basically three reasons:

      1) It violates the design philosophy of exceptions. Exceptions are intended for unexpected exceptional circumstances. User giving invalid input is not at all exceptional.

      2) Because of (1), exceptions are expensive. The .Net runtime optimizes for non-exceptional circumstances because it is expected that, most of the time, no exceptions will be thrown. Plus, the semantics of exceptions makes them difficult to implement efficiently. The time it takes to throw and catch a single exception is small in terms of our perception of time, but if you’re doing something that can fail often (like checking millions of rows of data analyzing which strings contain valid numbers), it makes a significant difference. By reporting this type of “error” as an exception instead of a return value (like the original Int32.Parse did), you give users no efficient alternative.

      3) The semantics of exceptions make it much more difficult to pin down exactly why an exception was thrown. If Int32.TryParse returns false, you can be certain that the string you passed to TryParse was not a valid representation of an int. On the other hand, if Parse throws a FormatException, you can ONLY be certain of that if you know there are absolutely no other calls in the try block that might also throw the same exception. If, for example, you called Int32.Parse(MyClass.CurrentInstance.GetStringToParse()), it’s possible that MyClass.CurrentInstance could also throw FormatException for an unrelated reason (maybe loading MyClass.CurrentInstance involves reading something from the config file that it expected to be a number but was not). This makes it easy to introduce subtle bugs. Have you ever gotten an error message from a program telling you that your user input was invalid, when, in fact, the problem had nothing to do with your input? This is one way that happens.

  5. The rulebook system is a very indirect solution to the problem and hard to understand. If you have hundreds of rules with possible interactions, you need good tooling so that your designers can still understand them.

    When I was faced with these problems I went for a more direct and practical approach. My rules form a hierarchic system and the implementations of the rules know about each other. Now when implementing a rule I have to decompose complex/abstract rules into more specific/simpler rules: a rule of “player attacks monster” is implemented by invoking rules for “calculate attack value for player vs monster” and “calculate defense value for monster vs player” and then comparing the results. The calculation rules are composed of rules for looking into the equipment of the player, and so forth. To make the rules more dynamic, game objects can be “annotated” with modifiers/effects which are consulted by various rules, basically adding hooks to modify rules and their resolution.

    Had to implement transactional game state though, for the rules to be able to “try out stuff” and do partial rollbacks if it didn’t work out. Though non-persistent transactions (which don’t need to survive crashes) are a very cool thing to have available when doing game programming, was totally worth it.

    • I agree that as the rules become more and more complex, you need ways to organize them. Inform7 has the concept of multiple rulebooks, but it is also optimized for text adventure scenarios, where there are typically single developers working on relatively small projects with fairly simple rule systems. A rule system for a more complex game, or for really complex scenarios like policies of actual businesses, I agree would likely need better organizational and debugging tools.

      The problem of transactional state is a tricky one. When I was working on adding lambdas to C# 3 we needed to fit into the compiler a system for doing “trial” bindings of lambda parameters to see if they produced compiler errors, but without actually surfacing the error to the user. Since errors were at the time modeled as mutable state it was quite tricky.

      Something I would like to try is to use immutable objects in a game engine, as “undo / redo” operations become much more straightforward in an immutable universe.
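
      In sketch form, with all names invented: because every edit of immutable state produces a new state object and the old one is never mutated, undo is just keeping old references around.

```csharp
using System;
using System.Collections.Generic;

// State is immutable: every edit produces a new state object.
public sealed record GameState(int WizardHitPoints)
{
    public GameState Damage(int amount) =>
        this with { WizardHitPoints = WizardHitPoints - amount };
}

public sealed class History
{
    private readonly Stack<GameState> past = new();
    public GameState Current { get; private set; }
    public History(GameState initial) => Current = initial;

    public void Apply(Func<GameState, GameState> edit)
    {
        past.Push(Current);     // the old state is never mutated...
        Current = edit(Current);
    }

    public void Undo() => Current = past.Pop();  // ...so undo is just a pop
}
```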

      • > … immutable objects … “undo / redo”

        I once designed a rules-based engine for tracking risk (exposure) in commodities trading. Moving to an immutable model was the best thing we did. Apply the trader’s desired actions. Did we bust the limits (break the rules)? Reject the action, tell the trader, discard the new state. Everything’s OK? Keep the new state. Immutable ftw.

    • Well, having an experience with modding several games, I can only say that games usually take “rulebook” approach even further — they simply drop in a sort of a scripting language to move “EngineObject”s around. Sometimes it’s something like Z machine (GTA: Vice City has a huge bytecode-like scenario file), sometimes it’s actually a language: Lua (WoW’s client), or C# (pretty much anything written in Unity).

      • I’m thinking the same thing. Back in the mid 2000s I taught myself Neverwinter Nights modding, and it had its own C-like scripting language. I don’t know exactly how the rule system was implemented, but I could imagine that whenever some rules needed to be evaluated the core passed a bunch of objects to the scripting language VM for evaluation.

  6. Does this approach largely abandon encapsulation? After all, “rule” sounds an awful lot like “function”, and “state” sounds an awful lot like “structure with a bunch of publicly accessible, mutable fields.”

    • Well, let’s take a step back and remind ourselves what the purpose of encapsulation is. Encapsulation lowers the costs associated with large-team software development, so that the authors of subsystems can edit the internal details without breaking consumers of the public interface.

      Does my proposed architecture continue to maintain that desirable outcome?

      • To be honest, I’m not sure. I haven’t tried this on a large scale project, and I don’t think arguing over hypothetical rules and states will be productive. I have tried this on some small toy projects, and it raised a yellow flag for me. It’s something I worry about.

      • One difficulty I see with this approach is that unless the rulebook is a singleton there’s a major constraint which would seem like it would be useful to handle in the type system: a player should only be able to attack other players *that are bound to the same rulebook*. Similar issues come up in the real-world if one tries to have a `Dictionary` class support joining multiple instances that don’t happen to use the same comparer. If one tries to join a case-sensitive dictionary which maps “Eric Lippert” to 5, and a case-insensitive dictionary that maps “Fred Jones”, “FRED JONES”, “Fred Jones”, “FReD JoNES”, etc. all to 23, there’s no clean way to have the combination yield a single dictionary except by having the union of two dictionaries encapsulate two separate dictionaries (and have unions of unions encapsulate even more dictionaries).

        An idea I’ve thought of, though I have no idea how well it would work in practice, would be to have the Player type (or dictionary) be a generic whose type parameter identifies a class with a parameterless constructor which can be queried for a reference to a rulebook (or equality comparator). Such an approach would make it hard for code to create arbitrary numbers of rulebooks, but would allow type checking to ensure that only objects with matching rulebooks could interact with each other. What would you think of that idea?
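        That idea might look roughly like this in C#. All of the names (IRulebookProvider, StandardRules, Player) are hypothetical, invented just to show the type-parameter trick:

        ```csharp
        using System;

        // The rulebook is identified by a type parameter rather than passed
        // as an instance, so the compiler can check that interacting players
        // share the same rulebook.
        public interface IRulebookProvider
        {
            string Name { get; }
        }

        public sealed class StandardRules : IRulebookProvider
        {
            public string Name => "standard";
        }

        public class Player<TRules> where TRules : IRulebookProvider, new()
        {
            // One shared rulebook instance per TRules type; code cannot
            // create arbitrary numbers of rulebooks this way.
            static readonly TRules Rulebook = new TRules();

            public string RulebookName => Rulebook.Name;

            // 'other' must be bound to the *same* rulebook type; mixing two
            // rulebook types is a compile-time error, not a runtime check.
            public void Attack(Player<TRules> other) { /* consult Rulebook here */ }
        }
        ```

        Two `Player<StandardRules>` instances can attack each other, while attacking a player bound to a different rulebook type would simply not compile.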

      • “Encapsulation lowers the costs associated with large-team software development, so that the authors of subsystems can edit the internal details without breaking consumers of the public interface.”

        I’ve been pondering this, and I don’t think it’s true. That’s what abstraction does, not encapsulation. The purpose of encapsulation is to bind data and behaviours together with the end goal of making it easier to maintain invariants. If the only module that can alter my head and tail pointers is LinkedList.cpp, then it’s much easier to ensure that they always point to the correct nodes.

        So, no, I don’t think the proposed architecture maintains this. In a real game, I would expect things like Players to have lots of properties and would expect there to be invariants that must be maintained between them. It would be very easy for different rules to break these invariants or to make conflicting updates.

        I guess the real question is: does this matter?

        • Your comment about encapsulation reminds me of one of the reasons I often favor bunch-of-variables-stuck-together-with-duct-tape structures over “immutable” structures that try to behave like objects. Types are generally made immutable not because the entities which create them want to be unable to change them, but rather because they want to ensure that nothing else can change them, and they are willing to give up their own ability to change them in order to achieve that. Since the fields of a structure-type variable can only be changed by code which has direct access to the variable in question, such variables are inherently guarded from outside change; they can thus give the owner the ability to change them without having to give such ability to anyone else.

          Having the actions of players and weapons controlled through an external rule book makes it harder for them to enforce certain invariants; if those invariants are important, an inability to enforce them may be a problem. If, however, the supposed “invariants” are not only stronger than are needed to get the job done, but also inconsistent with some things code might need to do, then an inability to enforce them might not be a problem at all.
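          The point about structures can be made concrete with a small sketch (the types here are invented): the fields of a struct variable are only reachable through the variable itself, so a private struct field is guarded from outside change even though its owner can mutate it freely.

          ```csharp
          // "Bunch of variables stuck together with duct tape":
          public struct Range
          {
              public int Lo;
              public int Hi;
          }

          public class Owner
          {
              private Range range;           // only Owner's code can reach this variable
              public int Hi => range.Hi;
              public void Extend(int by) => range.Hi += by; // the owner can still mutate it
          }
          ```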

    • Yes, this approach breaks encapsulation, because encapsulation is wrong for this use case. The knowledge of how to manipulate the state of the objects in the system is not and cannot be local to those objects.

  7. I know nothing about Inform7, but I have a few questions. With that rule set, can I check if a player can equip a weapon? Can I list which weapons a player can equip? Can I query the rules themselves? Games usually have a human-readable game guide explaining each class, so can I query the rules to build strings like “Wizard – can use staffs and daggers / can only wear robes”?

    • “Can I check if a player can equip a weapon?”

      Yes, this is what “consult the rulebook for C wielding W” does: after setting up some global state, it runs through the wielding rulebook until one of the rules signals success or failure. The outcome is stored in global state, which is then checked with the condition “if the rule failed”.

      Updated example: http://pastebin.com/9yP7m4pp

      In the updated code, some new syntax is defined so you can write, equivalently:

      if the rules allow Gandalf to wield the sword
      if Gandalf can wield the sword
      if the sword can be wielded by Gandalf

      “Can I list which weapons a player can equip?”

      Yes, by iterating through all weapons and running each one through the rulebook.

      In the updated code, “[the list of weapons that can be wielded by C]” expands to some appropriate text listing the weapons. “weapons that can be wielded by C” is what Inform calls a description, which is more or less a query that enumerates objects matching some criteria, in this case type (“weapons”) and relation to C (“can be wielded”).
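      For readers who don’t know Inform7, the consultation described above could be approximated in C# along these lines. This is a sketch of the idea only; the rule shapes and names are invented and are not Inform’s actual semantics:

      ```csharp
      using System;
      using System.Collections.Generic;
      using System.Linq;

      // Consulting a rulebook: run through the rules in order until one of
      // them makes a decision (Allow or Deny).
      public enum Verdict { NoDecision, Allow, Deny }

      public static class WieldingRulebook
      {
          static readonly List<Func<string, string, Verdict>> Rules = new()
          {
              (player, weapon) => player == "wizard" && weapon == "sword"
                  ? Verdict.Deny : Verdict.NoDecision,
              (player, weapon) => Verdict.Allow, // default rule: allow
          };

          public static bool CanWield(string player, string weapon) =>
              Rules.Select(rule => rule(player, weapon))
                   .First(v => v != Verdict.NoDecision) == Verdict.Allow;

          // "weapons that can be wielded by C" as a query over all weapons:
          public static IEnumerable<string> WieldableBy(string player, IEnumerable<string> weapons) =>
              weapons.Where(weapon => CanWield(player, weapon));
      }
      ```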

  8. I’ve got to echo Karellen here. What’s wrong with “vexing” exceptions?

    I once wrote a simple recursive descent compiler where the design was “If you run into a single error, just print the error message and quit.” So whenever I find an error I just throw a CompilerException. The stack unwinds and cleans up everything the compiler was doing. A single try/catch block at the top catches the exceptions and prints out the error.

    This code has been working for several years with no apparent problems. The exception is a perf expensive flow control construct, but in this case I can afford it. If I used a TryParseXXX pattern, then every step of my recursive descent parser would need to handle the case where any of its sub-parsing calls failed. Littering the code with error handling is exactly the problem structured exception handling was designed to solve.

    This “I’ve come to a point where I just want to quit this whole operation and go home” is a common one. Other than the performance impact (which may or may not matter after I profile) what problems do I run into by throwing an exception essentially as a “super return” that unwinds the stack and gets me out of the current activity?
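    The pattern being described, one expected exception type and one handler at the top, can be sketched like this. CompilerException and the toy digit grammar are invented for illustration:

    ```csharp
    using System;

    public class CompilerException : Exception
    {
        public CompilerException(string message) : base(message) { }
    }

    public static class TinyCompiler
    {
        // Deep inside the recursion, any error is a throw: a "super return"
        // that unwinds straight to the top.
        static int ParseDigit(char c) =>
            char.IsDigit(c)
                ? c - '0'
                : throw new CompilerException($"expected digit, got '{c}'");

        // The single try/catch at the top reports the error.
        public static string Compile(string source)
        {
            try
            {
                int sum = 0;
                foreach (var c in source) sum += ParseDigit(c);
                return $"ok: {sum}";
            }
            catch (CompilerException ex)
            {
                return $"error: {ex.Message}";
            }
        }
    }
    ```

    None of the intermediate parsing steps have to check or propagate failure; the unwind does it for them.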

    • That’s fine. The problem is with things like int.Parse, where you would prefer not to die just because the user made a typo.

    • I think this is primarily a philosophical issue. Just like we can ask “where does this code belong?”, we can ask “what is the purpose of this tool?”. Eric’s classifications are consistent with the idea that exceptions mean (OUGHT to mean, by nature) “I simply cannot go on.” At that point, “normal” behavior just isn’t possible. That means different things to different functions; “Give me the square root of this number” will simply fail with a negative number; “Process this user input to a square root calculator” can be done with negative numbers; the perfectly legitimate result is “tell the user about the problem”. In the first case, returning a number is “wrong”, because no number is correct. Throw an exception to represent the impossibility of producing a result. In the second case, present the user an error message and don’t “return” a number. The crucial idea is that the bad data should never get to the sqrt function; the exception “should” only be there for the case when some programmer makes the mistake of calling a function with invalid arguments. Ultimately the distinction comes down to “what is unacceptable input and what is acceptable input for which the result is a representation of the problems”. In a compiler, in general, an invalid program is acceptable input for which the output is a list of errors.

      But there are practical issues as well. Long-range exceptions used as control flow (and not true can’t-go-on errors) mess with encapsulation. Now the ultimate caller and the ultimate callee have to agree about which exceptions to deal with. You don’t just have to know what could go wrong with the function you’re calling, but every function that will ever be called by that function. You must either know intimate details about the callee’s callees (bad encapsulation), the callee must only throw one kind of exception (inflexible design), or you must catch very broad exceptions in the caller (uninformative errors). At that point you might spit out a general error return value or blindly spit the error message out to the user.

      If you’re lazy and your error description can be formed as a single string, that’s fine. Lazy is the mother of invention, and whatever works works. YAGNI. Whatever. Just know that you are being lazy, and that if the project gets bigger or user errors get more complex (as is likely in an informative compiler), you might want more than a string to deal with it!
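      The square-root distinction a few paragraphs up can be sketched as follows: the user-facing layer treats bad input as an acceptable case whose result is an error message, so the exception inside the math layer only fires on a programmer mistake. The names here are illustrative:

      ```csharp
      using System;

      public static class Calculator
      {
          // "Give me the square root of this number" simply fails with a
          // negative number; no number would be a correct result.
          public static double Root(double x) =>
              x >= 0
                  ? Math.Sqrt(x)
                  : throw new ArgumentOutOfRangeException(nameof(x)); // boneheaded if ever hit

          // "Process this user input": a negative is acceptable input whose
          // perfectly legitimate result is telling the user about the problem.
          public static string HandleInput(string text) =>
              double.TryParse(text, out var x) && x >= 0
                  ? Root(x).ToString()
                  : "Please enter a non-negative number.";
      }
      ```

      The bad data never reaches Root, so its exception exists only to catch a caller who skips the validation.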

    • I think it certainly does have some inspiration from functional and logical styles. But the approach I’m suggesting here is still object oriented. What is fundamental about object-orientation is not that we have classes that re-use implementation from each other via inheritance, but rather that we have distinct lumps of state that communicate with each other via “messages” — method calls — that can query or modify that state.

    • I would say it is *more* object oriented. You have objects representing distinct concepts in code. Just because a rule is abstract in nature doesn’t mean that it isn’t distinct. A rule is an object, and it has a behavior and responsibility. The first few approaches were hiding that fact, and we were seeing that merging the functionality into the state-holding objects (warriors, wizards, etc.) only served to complicate the application. Having this separation of concerns reveals much about what the code is trying to accomplish. The application functions more smoothly, and the code is easier to read and, therefore, maintain.

      I work with this rule-based system at my job, and it is really neat to see it in action.

  9. But you haven’t explained ‘When could you safely feed a mogwai, anyway?’ !!!

    At age 10 I saw the movie and that bit made my head go “Does not compute” and it annoyed me for the rest of the movie and long after. It was hugely problematic because I was also told grown-ups know what they’re doing. But grown-ups made the movie, so it was like “Does not compute” to the power of itself. My head is still hurting…. 30+ years on do I declare myself grown up or not?

    • I suspect that it was a deliberate and subtle joke by Chris Columbus. If I recall correctly, someone in Gremlins 2 does ask the question “isn’t it always after midnight?” but I might be misremembering.

      • Back when I saw the movie (I was a young engineer back then) I took it to mean “after midnight and before dawn” – though I always thought that midnight was a pretty arbitrary time. Dusk to dawn, or (dusk + N time units) till dawn would seem more reasonable. Then again, there are a lot of movies that require that the viewer suspend his/her understanding of physics and nature.

  10. I may be a little uneducated on this, but regarding your question of how to get the weapons for each player, try working it like a proof in algebra 2. (It’s been a while since I took algebra 2.)
    First state that every character can carry a weapon, then address each class of each character, then which weapon goes to which character/class, and then address how they would use it. But I don’t know how anyone could just come along and change your code unless you use a lock.

  11. Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1858

  12. I would say that this is a question of structure vs content. The types ought to describe the structure instead of the content. I mean, you would probably like to be able to add a new kind of weapon without having to write new code to handle this situation.

  13. This series really got me thinking. Once I was committed that the attack has to be resolved on a player or monster, I was stuck and solutions I was considering went further down a path of adding complexity to player etc. Fantastic Series, Thanks Eric!

  14. While I love this series (and agree with the sentiments therein for the most part), there’s a problem I keep on running into when trying to do this kind of thing in a “real” business domain. I think it’s best described by Alex Papadimoulis’s excellent “Soft Coding” article: http://thedailywtf.com/articles/Soft_Coding

    You say “Rules are now more like data than like code.” But code has all sorts of useful properties, like there being an existing unit testing framework, there’s an IDE, source control’s already integrated, and there are preexisting libraries that are easy to call. Building a new DSL is a *lot* of work, especially one that has all the stuff that’s built into all existing “normal” languages. A business rule change may be as likely to require a change to your DSL parser, unless it just happens to be one of the few that’s as simple as replacing one number with another. And as hard as it can be to find developers, you’re more likely to find one for whatever “normal” language your business code is in than for your custom DSL. (And I suspect that no matter how good your DSL is, you basically need somebody with the skills of a developer to translate all the real business needs into code that a computer can understand.)

    Adding a bunch of code-as-data just means that you have twice the issues (at least) with developing, building, testing, documenting, and deploying your system. Why is it a problem that “those rules are likely to change over time more rapidly than the program itself changes”, when the purpose of the program is to execute the rules? Shouldn’t any part of the software be just as easy to change?

    At the same time, as you say, it’s not like the “normal” languages were explicitly designed to really represent your business domain by themselves. I see a lot appealing about building a language to do so, and I’m sure it’s the right answer for some things. I just don’t think it really solves all the real problems involved with the fact that business rules constantly change.

    • First off, yes, it is very expensive to develop an actual DSL, particularly one as complex as Inform7. For most applications that use object-oriented rules, I would imagine that they’d just have objects maybe with some serialization to a configuration file, but not a full-blown custom language.

      Second, your point is well taken, and one that I have made myself. When we were doing the initial planning on Roslyn it was by no means obvious what language we should write it in. Some people seriously suggested F#, which is an awesome language to write a compiler in. But the C# and VB compiler teams had, unsurprisingly, a large number of C# and VB experts and a small number of F# experts; it was simply more expedient to go with C# and VB — which are certainly more than adequate to write a compiler! — than it was to have a bunch of people learn F#, for a very small marginal benefit. Having an existing body of expertise with the language and other tools is very valuable.

        • I think you have gone too far with the DSL. In this statement “Again, we have no reason to suppose that the rules of overload resolution and the rules of Dungeons & Dragons attack resolution have anything in common; if we’re building the latter, then we need to design a system that correctly chooses the valid rules out of a database of rules, and composes the effects of those rules sensibly. Yes, you need to build your own resolution logic, but resolving these rules is the concern of the program, so of course you’re going to have to write code to do it.”, you seem to completely dismiss the power of embedding the structure of the problem domain in a proper, OOP-structured, compile-time-error-reporting language like C#. I think that, just like the other ‘kinds’ (classes) like wizards and such, the appropriate ORGANIZATION of the rules, game state, etc. into effective, and *helpful*, compiler-checked classes (kinds – Rule=>InteractRule=>AttackRule|SpellRule|WieldRule), virtual dispatch and polymorphic reuse, encapsulation, etc. would be better. For instance, how do I run the debugger into the DSL? I have to write my own. And how do I design the type class hierarchy in the DSL? I have to design my own. I suspect it will be amazingly similar to the C# type system (polymorphic, hierarchical, ORGANIZED…). Basically I respect your instinct, but I think this knee-jerk is off-base – “we have no reason to suppose that the rules of overload resolution and the rules of Dungeons & Dragons attack resolution have anything in common.” Why in the world not? Respectfully…

        • I agree that going to a DSL is quite expensive, for the reasons you state. The path there would be to start with an object-based system as you describe, and then realize that you need to add the ability to dynamically configure the system, and now you have to serialize the objects. If you’re serializing the objects, might as well do it to XML, and hey, now you have a DSL. So I think we agree there.

          My point about the rules of overload resolution is that we have no reason to suppose that the rule “a wizard can use a silver dagger against a werewolf after midnight in a church” has a clear, unambiguous mapping to a problem in overload resolution. Why should “a method which has at least one more specific argument type and no less specific argument type than another method is considered the better method” have a clear, unambiguous mapping to the wizards-and-warriors-rules domain? It doesn’t, so don’t try.

          • Right on. Same page. As far as overload resolution goes, I think the rest of the C# dialect and object programs organized by polymorphism, specialization, generalization and re-use would be highly effective in expressing these ‘midnight battle’ problems. The whole dialect of C#, the ways that it can hide complexity, yet express complexity when needed; the way you can create generalizations effectively, then refine them as needs change, without breaking everything — is uniquely suited to MANY MANY problem domains. That something a little bit tricky like Dungeons & Dragons is outside the scope of C# and OOP in general, I don’t agree with. I will go here: if you really think the system will change minute to minute and you don’t mind a boatload of contradictions that will be resolved by ‘reasonableness’ by the players, then yeah, a souped-up rules engine would be fine. But if you want to KNOW how the system works, want to program a system that works, the same way, every time, and you can SHOW how it works, and WHY it DOESN’T work any other way: well, I suggest the C# guidelines. The rules about abstract classes, the rules about generic type resolution, the rules, the structure. Everything that developers and language designers have discovered since Turing. Maybe Greenspun’s Tenth is in order here: “Every sufficiently full-featured D&D rules engine contains an informally specified, bug-ridden, slow implementation of half of Common C#.”

    • I think “soft coding” for configuration as referenced in the dailywtf article is only one aspect of removing hard coding. The other is to achieve D.R.Y. (“don’t repeat yourself”); you’re replacing multiple hardcoded locations with references to a single soft coded value. On the other hand you can certainly have a bad soft coded system which violates D.R.Y. (e.g. relying on a database in which the same value is duplicated in multiple different tables, i.e. not normalized).

      Once you make the distinction between hardcoded vs softcoded and D.R.Y./normalized vs. not-normalized, you can also use the concept of database normalization as a special case of a more general technique of analysis about folding together parts of a system that can apply to source code as well as databases. It is interesting to think about what that can tell us about the Wizards and Warrior example.

  15. “The last thing we wanted to do was to make it impossible in the type system to represent incorrect C# programs.”
    I tried imagining this, but my head exploded.
    [insert some appropriate meme GIF here]

  16. Looking around a bit more on this topic, I discovered a thing called rules engines and the Rete Algorithm. Are these related to this topic, or a not quite similar thing but with a similar name?

    • I’ve heard of the Rete Algorithm but never looked into it. Turns out, I developed essentially that algorithm for a rule-based system that I built a few years ago. Works well!

  17. It’d be cool to see a more full code example of all of this in C#. The theory is nice, but I’m not totally sure how the implementation of a rule would look.

  18. This post perfectly illustrates Greenspun’s Tenth Rule. Rules engines are very complex. That’s like saying, want faster code? Just build your own compiler!

  19. Let us do a little math here, … let us assume, just for arguments sake, that I were a young person and I wanted to become a programmer.

    How long will it take me to just learn what the “dynamic” keyword really does? How long until I can really handle virtual/override/new or explicitly implemented interface methods? Honestly, who can tell me what this code does without running it?

    public interface TheInterface
    {
        void Go();
    }
    public class ParentClass : TheInterface
    {
        void TheInterface.Go() { System.Console.WriteLine("a"); }
        public virtual void Go() { System.Console.WriteLine("b"); }
    }
    public class ChildClass : ParentClass, TheInterface
    {
        void TheInterface.Go() { System.Console.WriteLine("c"); }
        public new virtual void Go() { System.Console.WriteLine("d"); }
    }
    public class GrandChildClass : ChildClass, TheInterface
    {
        public override void Go() { System.Console.WriteLine("e"); }
    }

    GrandChildClass grandchild = new GrandChildClass();
    ChildClass child = grandchild;
    ParentClass parent = grandchild;
    TheInterface inter = grandchild;

    grandchild.Go();
    child.Go();
    parent.Go();
    inter.Go();

    Now, you can of course say: “If you don’t like it, don’t use it”. Fair enough, but I usually have to read other people’s code, so this is not an option.

    What I am worried about is not that someone may not know what is going on here. What I am worried about is that I can write such code in the first place. I dare say 90% of programmers don’t know what is going on here.

    Don’t get me wrong: I really admire Eric Lippert’s in-depth knowledge. I assume most programmers do, at least most readers of this blog.

    What troubles me is that a simple question such as the wizard-warrior-player-sword-staff-weapon issue we began with eventually results in a five-part blog series that not many people would fully understand and that hardly anyone on this planet could have come up with on her/his own.

    Whatever we learn here, one thing is obvious to me: there probably were many goals that were addressed when the C# language was designed, but I dare say simplicity was not one of them. No offence to anyone who might have been involved here, this is just my impression: I miss simplicity.

    What would we lose if there was no “new” at all, no explicit interface methods, no “dynamic”, etc.? Yes, one can easily come up with an example of where one happily uses those keywords. On the other hand, Java shows that you can very well live without them (which is not to say that Java has no such issues). What could we gain? Simplicity. Fewer errors.

    But that’s not all. After I laboured my way through the language, I start having fun with the runtime environment. Can anyone out there explain to me under what circumstances the “finally” code within a try-catch-finally is executed and when it is not? Anyone? Please!

    And after all that, we start having fun with libraries …

    Ok, let us get back to the math. How long will it take until I can really work with a language, self-confident enough to say “I really know what I am doing”. Let us call this period of time “a”. In comparison to that: How long will it take for the next programming language to show up? How long until I have to make myself familiar with the next programming language in order to keep myself up to date? Let us call this period of time “b”. Am I the only one who suffers from b being considerably shorter than a?

    Now, you can obviously say: “What do you expect? You can’t work with a language without learning it in the first place. That does not work with natural languages. How can you expect it to work with programming languages?” Right, I agree. Which is exactly why I would love programming languages to be designed for simplicity, also because I don’t seem to have to learn a new natural language every few years.

    • There’s definitely a trade-off between simplicity of understanding the code and simplicity of writing (good) code. Consider your example; how can that be fixed? Part of the confusion comes from new vs. override and from explicit interface implementation. If you just take out the new/override keywords, it’s easier to write, and maybe easier to understand in the general case, but you’re reducing the ability to do one or the other. Suddenly everything is either override or everything is new. At least using “override” makes the programmer (and the reader) acknowledge that the function name now refers to a function in the child. The only other option is to remove the feature entirely, but that takes away a lot of power from the language at the expense of “simplicity”. Where’s the line? What features should be sacrificed so new programmers can understand any code?

      I think the real issue here is that your example itself is very complex. If you’re using new, override, and explicit interface implementations multiple times each in the same hierarchy, it might be time to consider a different design. If your colleagues are writing hard-to-understand code, blame the code and not the language.

      I’m not sure I’d say time period “b” is all that short; languages come and go, but they’re not often replaced by another one. Java and C# are great for general purpose application development, and they have been around for a while. They may have all but replaced C++, but I consider that a step in the simpler direction. New languages seem to be popping up, but not generally displacing those that came before them, because they cover different usage scenarios.
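      For readers following along, here is a minimal pair isolating new versus override, without the explicit interface implementations that make the earlier puzzle hard to trace:

      ```csharp
      using System;

      public class Base
      {
          public virtual string M() => "Base";
      }

      public class WithOverride : Base
      {
          public override string M() => "WithOverride"; // replaces Base.M in the virtual slot
      }

      public class WithNew : Base
      {
          public new string M() => "WithNew"; // hides Base.M; which runs depends on the static type
      }
      ```

      Through a Base reference, the override is still found: `((Base)new WithOverride()).M()` returns "WithOverride". A hidden method is not: `((Base)new WithNew()).M()` returns "Base", and you only get "WithNew" when the static type is WithNew.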

      • C++ is still tremendously popular and under active development as a language; I don’t think C# and Java have “all but” replaced it. I think the trend towards safer, higher-level languages is a very welcome one, but there is a huge installed base of C++ out there, and C# and Java do not provide an easy migration path. (This is one of the reasons why I am impressed by the design of TypeScript; it does present a very straightforward migration path for existing bodies of code.)

    • You raise many interesting points that the language design team takes very seriously; I’d have to write a whole article to respond at length (and perhaps someday I will.)

      The C# language was designed to be simple — it says so in the first sentence in the specification! I often have pointed out to Mads that the “simple” really should be removed because anything with an 800 page specification is not simple.

      When thinking about “simplicity” I think it is wise to consider that power is work done divided by time spent. Is it “simple” to move to Australia? I have a friend moving to Australia right now and it seems very complicated, what with the medical certificates and student visas and everything you have to obtain. But compare moving to Australia today to how it was 120 years ago, when you could count on a several week long journey by tall ship, and it seems quite simple.

      Modern languages come with some costs in terms of user education, but the hope is that the tasks which the language allows you to perform are worth those costs.

      • Your comment about the “simple” language with the 800-page spec suggests another principle that may be worth writing about: when designing something (a system, a language, an API, or whatever), one should decide early on how “simple” or “complex” it should become; if one has decided on “simple” and later wishes to provide features that would increase the complexity, it may be better to fork off a new system/language/whatever which is designed to ease migration from users of the simpler system, but is distinct from it.

        C# has many aspects that would, if one were evaluating it as a new language, almost certainly be construed as design defects. Using strictly-inside-out expression evaluation simplifies overload resolution, and if one is trying to design a language so even a simple compiler can handle it, the fact that expression evaluation can be broken down into distinct steps that imply clear and unambiguous actions is a major plus. Further, saying that most operators will always return a result of the same type as their operands simplifies the language spec even though in many cases it would make more sense to have different operators behave differently (e.g. given `UInt32 a,b;` have `a-b` and `a*b` yield results of type `Int64` and `UInt64`, respectively, but allow those results to be implicitly coerced to `UInt32`; given `UInt32 a; Byte b;`, have `a & b` yield `Byte`, since the result is guaranteed to fit in that type).

        Such design decisions may in the early days have allowed C# to be a much simpler language than it could otherwise be. Given the extent to which the later versions of the language require the compiler to solve NP-hard problems (as you’ve blogged about), however, such decisions no longer serve to “simplify” the language in any meaningful way, but instead serve to complicate life for programmers.

  20. The first 4 parts were great, with very concrete examples that were easy to follow along with. Then the last point was made with no actual examples, which feels like the ending is missing because I don’t yet understand the point that was reached.

  21. Pingback: Basic OOP is easy, isn’t it? | gerleim

  22. Ahh, this brings me back to some very ancient days — 1978! Bartle “recently” wrote about his troubles in 2001:
    http://www.skotos.net/articles/dawnof02.shtml
    through (and including)
    http://www.skotos.net/articles/dawnof08.html

    I found PostScript to be a very good prototyping tool for this sort of stuff since I can push and pop dictionaries in arbitrary order, and thus implement a design-time multiple-inheritance system and enjoy a run-time single-inheritance system. Too bad that’s just for prototyping and not actual production.

  23. I don’t understand this post at all. You started with wanting compile time validation of the invariant (“A wizard cannot wield a sword”) – then talked about how runtime solutions were not good and then ended up with a solution that **does not verify that invariant in compile time at all**.

    You just ended up with a language where all code is written in a specific language programmers have to learn and errors are in another language.

    The generic constraint solution you had in the middle (part 2) was actually pretty decent because hey: if Wizards and Warriors can’t wield the same weapons – and the weapon is a property of the supertype – then __they are not interchangeable as players__. You need a different interface for a player and **not expose the Weapon property**.

  24. Definitely agree that using the OOP language to represent rules, commands, and effects rather than directly encode the esoteric specific rules is the superior approach. However I’m curious how you bound the recursive nature of your argument? Even in the meta-system there will be situations which don’t map cleanly to the chosen programming language; one could replace the wizards and warriors in the initial posts with rules and effects and then use similar examples and argumentation to show how what is really called for is a meta-meta-system which gets around the problems of representing the meta-system directly.

    One might dismiss this as absurd, but consider that abstraction and representation are a matter of perspective. We’re already working at least seven layers of abstraction up just with C#, and two or more layers of abstraction below the point of view of the users. So this is not a question of doubling from N=1 to N=2 layers of “meta”, but rather of asking why the argument applies to go from N=7 to N=8 but does not continue to N=9 or N=10 layers.

    Here are the possible implications I can think of regarding this difficulty with the recursive application of the argument (not necessarily mutually exclusive, nor necessarily all true):

    1. We do continue to gain representational power and potential effectiveness as programmers as we target meta-problems of higher-order abstraction, but our limited ability to understand abstraction and our closed-mindedness prevent us from seeing that.

    2. We do not really gain (as much) benefit (as we think we do) from encoding the meta-problems.

    3. Perhaps once you’re representing rules, conditionals, and effects (logic programming?) you’ve hit some kind of terminal condition for abstraction?
    3a. It’s a terminal condition because of diminishing returns for abstracting rule based systems.
    3b. It’s a terminal condition because it’s a fixed point. (I.e. abstracting a rule based system just produces another rule based system.)
    3c. Because logic programming is some kind of fundamental… something

    4. Current OOP languages are not sufficient to represent the kinds of problems we really want to solve
    4a. …but a better OOP language might be
    4b. …but no possible OOP language might be

    Again I’m not saying I believe all or any of these things. Just that I think there are some philosophical difficulties here and I think that people 100 years from now with the benefit of hindsight will know that we used to hold some self-contradictory idea. I just don’t know which preconception is the one that needs to be thrown out.

    • I also note an uncanny symmetry between making a rules system that is “too”-meta versus stubbornly abusing generics and other tricks to encode everything about wizards and warriors at the type-system level regardless of the consequences. Both are difficult engineering efforts undertaken as much to prove a point as to achieve a business goal. Both are likely to result in the hapless maintenance programmer being nonplussed when s/he looks into the source files. Where the one requires him or her to understand the workings of a goofball custom rules engine, the other requires understanding all the arcane details of the chosen programming language. And to the extent that the programming language allows encoding these rules directly, to that extent the language comes to resemble a rules engine itself.

  25. I recently read a few articles on implementing the command pattern in C#, which definitely seems to be a good way to go re: creating an RPG that takes Commands from a User, processes them, &c. What I’m curious about, then, is (a) for a simple system like in part one, how do you go about creating a “Rule” interface/object, especially since some rules take precedence over others? and (b) other than a DSL, how would you recommend creating a database of rules that can be parsed and loaded at runtime?
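    One possible answer to (a), sketched under assumptions of my own (the `IRule`, `Command`, and rule names below are hypothetical, not from the posts): give each rule a precedence number and let a rule decline to have an opinion, so the highest-precedence rule with an opinion wins.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    sealed class Command
    {
        public string Verb;
        public string Actor;
        public string Target;
    }

    interface IRule
    {
        int Precedence { get; }          // lower number = consulted first
        bool? Evaluate(Command command); // null = this rule has no opinion
    }

    sealed class NoSwordsForWizards : IRule
    {
        public int Precedence => 10;
        public bool? Evaluate(Command c) =>
            c.Verb == "wield" && c.Actor == "wizard" && c.Target == "sword"
                ? (bool?)false
                : null;
    }

    static class RuleBook
    {
        // Consult rules in precedence order; the first rule with an opinion
        // wins, and if no rule has an opinion the command is allowed.
        public static bool IsAllowed(IEnumerable<IRule> rules, Command command) =>
            rules.OrderBy(r => r.Precedence)
                 .Select(r => r.Evaluate(command))
                 .FirstOrDefault(v => v != null) ?? true;
    }
    ```

    For (b), a rule shaped like this is close to a data row (precedence, verb, actor, target, verdict), which suggests rules could be loaded from a table at runtime without a full DSL.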

  26. With the rules pattern we still have a Player class with a Weapon property, and a Wizard class that inherits Player. I assume that Weapon must still be publicly settable, so that Wield can set it, assuming the rules pass. In that case, is there anything stopping me from writing `wizard.Weapon = sword;` other than self-discipline?
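    One way out, sketched with made-up names (and with a virtual method standing in for the post’s rule system): the property needs only a *private* setter, because `Wield` lives on the same class. Then `wizard.Weapon = sword;` is a compile-time error everywhere outside Player, and self-discipline is no longer the only guard.

    ```csharp
    abstract class Weapon { }
    sealed class Sword : Weapon { }
    sealed class Staff : Weapon { }

    class Player
    {
        // Private setter: Wield is the only way to change the weapon
        // from outside this class.
        public Weapon Weapon { get; private set; }

        // Stand-in for consulting the rule system.
        public virtual bool CanWield(Weapon weapon) => true;

        public bool Wield(Weapon weapon)
        {
            if (!CanWield(weapon))
                return false;   // rules failed; the weapon is unchanged
            Weapon = weapon;
            return true;
        }
    }

    sealed class Wizard : Player
    {
        public override bool CanWield(Weapon weapon) => !(weapon is Sword);
    }
    ```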

  27. Pingback: Attribute modifiers in games - c# .net - Programming questions and answers

  28. I’m thinking naively here. There are probably two approaches.

    One is the rule system you mentioned. So I’d assume that there is a huge global rule class that snoops into everything else, or a queue system, whatever; the key point is that it gets consulted whenever a rule question comes up (Can A “wear” B? Can A “lick” A’s “paw” and consume items owned by “paw”?). This immediately looks like, as you said, a simplified Inform 7. Inform is more complicated in that not only does it have to resolve the rules, it also has to parse different surface representations of a rule (“wear B” should have the same semantic meaning as “put on B”, and so on). I’m not really good at programming so I don’t know exactly how to implement one, but I assume, as shown by pretty much every complex RPG from the Gold Box games onward, that a home-grown scripting language (or Lua) is a good solution.

    The other is an ECS (entity-component-system) design. So entities own some attributes, and whenever the core system evaluates one attribute against another in a certain way (Can A cast spell B stored in armor C?), it automatically wakes the relevant system to evaluate it. I’m even less familiar with ECS, so maybe it’s just a variant of the first method I mentioned?

    Sorry to make a mess here, but I’ll try to find some source code of older RPGs to read. Something that is not as complex as Baldur’s Gate or Neverwinter Nights, which span the whole AD&D/D&D 3.0 rulebook, but simpler, so that I can understand it.
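    A toy sketch of the ECS-flavored idea above, with entirely made-up names: entities are plain ids, capabilities live in component tables, and a “system” answers questions by querying data rather than by walking a class hierarchy. This is only a sketch of the data-driven shape, not a real ECS framework.

    ```csharp
    using System.Collections.Generic;

    static class World
    {
        // entity id -> set of capability tags (component data, not types)
        static readonly Dictionary<int, HashSet<string>> Capabilities =
            new Dictionary<int, HashSet<string>>();

        public static void Tag(int entity, string capability)
        {
            if (!Capabilities.TryGetValue(entity, out var set))
                Capabilities[entity] = set = new HashSet<string>();
            set.Add(capability);
        }

        // "Can A wield B?" becomes a data lookup rather than a type check.
        public static bool CanWield(int actor, string weaponKind) =>
            Capabilities.TryGetValue(actor, out var set) &&
            set.Contains("wields:" + weaponKind);
    }
    ```

    Under this shape, changing what a wizard may wield is a data change, not a code change, which is one reason the data-driven and rules-pattern approaches end up feeling like variants of each other.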
