Earlier this month I wrote about how Glyph and I have been trying to fix a bug in Imaginary. Since then we’ve worked on the problem a little more and made some excellent progress.
If you recall, the problem involved being shown articles of clothing as though they were lying on the floor rather than being worn by nearby people. I mentioned that our strategy to fix this was to make the “look at something” action pay attention to more of the structure in the simulation graph.
That’s just what Glyph and I did when we got together to work on this some more last week.
The version of the “look at something” action in trunk operates in two steps. First, it searches around the player in the simulation graph for an object satisfying a few simple criteria (roughly sketched just after this list):
- The object is something that can be seen at all - for example, a chair or a shirt, not the wind or your rising dread at the scritch scritch scritch noise that always seems to be coming from just out of your field of vision.
- The object is something the player’s senses actually allow them to see - for example, objects not draped in a cloak of invisibility, objects not sitting in a pitch black room.
- The object answers to the name the player used in the action - “look at hat” will not consider the Sears tower or a passing dog.
- The object is reasonably nearby (which I’ll just hand-wave over for now).
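Here is a rough sketch of what that filtering step looks like. All of the names in it (can_be_seen, answers_to, and so on) are made up for illustration; they are not Imaginary’s actual API.

```python
# A rough, self-contained sketch of the search step. Every name here is
# invented for illustration; this is not Imaginary's real code.

class AmbiguousOrMissingTarget(Exception):
    """Zero or several objects matched the player's description."""

def find_target(player, name, nearby_objects):
    """Pick the single object near the player that matches ``name``."""
    matches = [
        obj for obj in nearby_objects
        if obj.can_be_seen()          # a chair or a shirt, not the wind
        and player.can_see(obj)       # not invisible, not in a pitch black room
        and obj.answers_to(name)      # "look at hat" ignores the passing dog
    ]
    if len(matches) != 1:
        # Zero or several matches lead to a different outcome entirely.
        raise AmbiguousOrMissingTarget(name, matches)
    return matches[0]
```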
Having found one and only one object satisfying all of these criteria (the cases where zero or more than one result is found each lead to a different outcome), the action proceeds to the second part of its implementation. It invokes a method that all objects capable of being seen are contractually obligated to provide, a method called visualize, which is responsible for representing that thing to the player doing the looking. The most common implementation of that method is a stack of special cases (roughly sketched after this list):
- is the thing a location? if so, include information about its exits.
- does the thing have a special description? if so, include that.
- is the thing in good condition or bad condition? include details about that.
- is the thing wearing clothing? if so, include details about those.
- is the thing a container and open? if so, include details about all the things inside it.
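In rough (and, again, entirely invented) form, that stack of special cases looks something like this:

```python
# A condensed sketch of the special-case stack, with made-up helper names
# rather than Imaginary's real visualize implementation.

def visualize(thing):
    """Describe ``thing``, one special case at a time."""
    parts = [thing.name]
    if thing.is_location():                       # locations list their exits
        parts.append("Exits: " + ", ".join(thing.exit_names()))
    if thing.special_description:                 # hand-written flavour text
        parts.append(thing.special_description)
    parts.append(thing.condition_text())          # good or bad condition
    if thing.clothing():                          # worn items
        parts.append("Wearing: " + ", ".join(thing.clothing()))
    if thing.is_container() and thing.is_open():  # open containers show contents
        parts.append("Contains: " + ", ".join(thing.content_names()))
    return "\n".join(parts)
```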
Much of this logic is implemented using a plugin system so, while somewhat gross, it at least fits with some of Imaginary’s goals of generality. However, it has some problems beyond just being gross. One of those problems is that the full path from the player doing the looking to all of the things that appear in the visualize method’s result is not known. This is because the path is broken in half: the part from the player to the direct target (whatever “something” names) and the part from the direct target to each of the (for lack of a better word) indirect targets (if Bob looks at Alice then Alice is a target of the action, but so are the axe Alice is holding behind her back, Alice’s hat, etc.). In most cases the problem is even worse because the second part, the path from the direct target to each of the indirect targets, is ignored entirely. Breaking up the path or ignoring part of it like this has problematic consequences.
If Alice is carrying that axe behind her back and Bob looks at her, then unless you know the complete path through the simulation graph from Bob to the axe, you can’t actually decide whether Bob can see the axe: is Bob standing in front of Alice or behind her?
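One way to picture the check we want is to walk every link on the path from the observer to the target and ask whether any link hides whatever lies beyond it. This is purely illustrative; none of these names or annotations come from Imaginary:

```python
# Purely illustrative: model a path as a list of links, where a link may be
# annotated as hiding whatever lies beyond it from the observer.

def path_is_visible(path):
    """Return True if nothing along the path hides its far end."""
    return not any(link.get("obscured", False) for link in path)

# Bob is standing in front of Alice, so the link from Alice to the axe she
# holds behind her back is marked as obscured and Bob cannot see the axe.
bob_to_axe = [
    {"from": "Bob", "to": "Alice"},
    {"from": "Alice", "to": "axe", "obscured": True},
]
print(path_is_visible(bob_to_axe))   # False
```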
The idea that Glyph and I are pursuing to replace the visualize method solves this problem, cuts down significantly on the grossness involved (that is, on the large amount of special-case code), and shifts what remaining grossness there is to a more suitable location: the presentation layer (where, so far as I can tell, you probably do want a lot of special-case code because deciding exactly how best to present information to people is really just a lot of special cases).
Perhaps by now you’re wondering how this new idea works.
As I’ve mentioned already, the core of the idea is to consider the full path between the player taking the action and the objects (direct and indirect) that end up being targets of the action.
An important piece of the new implementation is that instead of collecting just the direct target and then acting on it, we now collect the direct target and all of the indirect targets as well, and we save the path to each of these targets. So, whereas before resolve_target (the method responsible for turning u"something" into some kind of structured object, probably an object from the simulation graph) would just return the object representing Alice, it now returns the path from Bob to Alice, the path from Bob to Alice’s hat, the path from Bob to Alice’s axe, the path from Bob to the chair Alice is sitting in, and so forth.
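In spirit, and reusing the hypothetical find_target helper from the sketch above, the change in shape looks something like this (these are not the real signatures):

```python
# A sketch of the difference in shape, not Imaginary's actual resolve_target.

def resolve_target_before(player, name, graph):
    # Old behaviour: return just the one directly named object (Alice).
    return find_target(player, name, graph.objects_near(player))

def resolve_target_after(player, name, graph):
    # New behaviour: return the path from the player to the direct target
    # and the paths to every indirect target reachable from it.
    direct = find_target(player, name, graph.objects_near(player))
    paths = [graph.path_between(player, direct)]
    for indirect in graph.reachable_from(direct):   # hat, axe, chair, ...
        paths.append(graph.path_between(player, indirect))
    return paths
```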
With all this extra information in hand, the presentation layer can spit out something both nicely formatted and correct like:
Alice is here, wearing a fedora, holding one hand behind her back.
Or, should it be more appropriate to the state of the simulation:
Alice is here, her back turned to you, wearing a fedora, holding an axe.
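To make that concrete, here is a heavily simplified sketch of how a presentation function might choose between those two phrasings, building on the hypothetical path_is_visible helper from earlier; Imaginary’s real presentation layer does not look like this:

```python
# A toy presentation-layer decision, using the invented path representation
# from the earlier sketch. Not Imaginary's actual presentation code.

def describe_alice(paths_from_bob):
    """Choose phrasing based on which paths Bob can actually see along."""
    parts = ["Alice is here"]
    for path in paths_from_bob:
        target = path[-1]["to"]
        if target == "fedora":
            parts.append("wearing a fedora")
        elif target == "axe":
            if path_is_visible(path):
                parts.append("holding an axe")
            else:
                parts.append("holding one hand behind her back")
    return ", ".join(parts) + "."
```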
Of course, we haven’t implemented the “holding something behind your back” feature of the “holding stuff” system yet, so Imaginary can’t actually reproduce this example exactly. And we still have some issues left to resolve in the branch where we’re working on this bug. Still, we’ve made more good progress: on this specific issue, on documentation explaining how the core Imaginary simulation engine works, and on developing useful idioms for simulation code that has yet to be written.