• October 13th 2016

    Introducing Gall and Blotter

    Earlier this year when Inkle released their Ink scripting language, there was a lot of excitement from the IF community around using it as a tool for building sophisticated branching stories in the vein of Choicescript. Today I'm releasing a couple of open-source tools that should help people who want to experiment with Ink without having to build a full game UI to go around the story engine.

  • October 7th 2016

    IFComp 2016 Capsule Reviews: The Little Lifeform that Could, The Shoe Dept.

    For these two pieces, I'm writing shorter capsule reviews. See my initial post for info on my approach to writing about those.

  • October 2nd 2016

    IFComp Review: 500 Apocalypses

    500 Apocalypses is essentially a stochastic novel, a large collection of loosely-connected vignettes meant to be read in a random order. Its blurb should probably come with a broad content warning for mature and disturbing themes, which I am bringing up here because I’ll be discussing some of it.

    Each vignette is an impression of the death of some alien civilization, ranging from the wry (a society that dies because they just never figured out the wheel), to the poignant (various emotive glimpses into lives at the edge of their worlds), to the aggressively miserable (a hopeless survivor carries their cancer-ridden partner around the wasteland in a wheelchair, trading her body for painkillers to ease her suffering). The stuff in the latter category is only a fraction of the content, but I found that after a while it suffuses everything. To me it seems too much like trafficking in the sort of misery tourism that stuff like Threads is built on. The entire piece seems built to be a tonal melange, but “saddest shit” is an overpowering flavor. Eventually, even the wry or absurd or thoughtful entries take on the faint smell of misery porn. It’s as though all the different vignettes were uncovered food in the same fridge, slowly taking in one another’s flavors.

    It is, after all, about the end of the world; it says so right in the title. About the world ending over and over again in different ways, all of them unhappy in their own way. The problem with structuring that as a collage of vignettes is that it’s all build-up and no conclusion. Maybe there’s an ending to 500 Apocalypses, but I didn’t want to stick around to find it, and the piece is very clearly structured to dispel the idea that there is one. And so, reading it is like going down an Escher staircase into gloom; there’s no point of catharsis or release to all the pathos, there’s just more pathos. Even the implicit ending that every apocalypse has, in which the suffering is finally over as death comes for everyone, is denied the reader: Click through and read another entry. There’s another apocalypse two clicks away. There are 500 of them. The abstract field of dots it uses as an index to its entries comes across as a minefield: How long until it presents you with something aggressively unpleasant and unsettling that you were unprepared for?

    The harsh, dissonant tonal shifts feel like a cheap shot: You can go from a wry or absurdist entry to one that is terrifyingly miserable, and every time I can hear the gearbox scraping inside my head. Eventually I learned to just expect the worst from every entry, which blunted the subtler poignancy that a lot of entries have. It feels like it could have done with a narrower band of tones, like it should be playing a chord or a scale rather than mashing its hands all over the keyboard.

    This is not to say that the different entries are disjointed. Far from it; they have currents of theme running through them. Many of the vignettes fall under the rubric of body horror; many others trade in body discomfort, quietly reminding the reader that they are about alien lives situated in alien bodies only hinted at. Re- and deconstruction of the body. The relationship between a civilization’s semantics and its infrastructure. Sex recurs too, as is obligatory in literature about death; masturbation, reproduction, and the regulation of physical desire often make appearances.

    In the end, it’s hard to dispute that 500 Apocalypses seems important, and it’s undeniably well-written and affecting. But I’m not sure if that equates to good. The affect, for me, is mostly of creeping nihilistic misery, or maybe lingering anxiety. And this puts me in a bit of a bind; I’d be a pretty poor critic if I could only appreciate art that makes me feel good, but at the same time I eventually came to find this work unpleasant. Not artistically flawed, not ill-conceived, not badly written, but unpleasant. And if the goal was to create revulsion, even dread, then it succeeds at that; it’s not ineffective, even if I’m not fond of the effect. But I’m not sure it’s necessarily correct to say that being effective is the same thing as being good, either. If the question I’m supposed to be answering is whether I like this, well, the answer is no; I do not like this one bit. But I can’t tell you how much that matters, or if it matters at all.

    My bottom line with 500 Apocalypses is that I don’t feel better off for having read it. But there is a lot to it, and there is definitely a fascination that it exerts, morbid though it may be. There are fascinating and powerful ideas there, but I’m not a partisan of the idea that the best way to get those across is by causing the reader distress; and, inasmuch as fiction writing can be distressing, I find 500 Apocalypses pretty distressing. It is, above all else, a work of horror fiction. But without the focusing of a singular narrative, it feels too much like touching the third rail. It’s horror without enough of a direction. It doesn’t have the cathartic, clarifying quality that good horror has; instead, it’s just mounting dread without release.

    Or maybe it’s not so much that the meal is overdone, but rather that I choked on it. Maybe this is just a singularly affecting piece of work that I encountered in the wrong mental state. There’s too much subjectivity here, far too much for my liking. I think that people who want to understand will have to play this for themselves. And maybe they should; maybe this is an important piece, one that we will be coming back to years from now to talk about hypertext. But I can scarcely recommend something that doesn’t pass this basic test: do I feel better off for having read this?

    Grade: Not recommended.

  • September 30th 2016

    IFComp '16: Review Notes

    The Interactive Fiction Competition is once again upon us; the deadline for submissions is today, in fact.

    This year, I’m busy. But I did promise I would make an effort to review some of the games. As is tradition, before we see the list of comp games, I thought I should take a moment to go over my approach to reviewing these, partly to set expectations, and partly because no collection of comp reviews is complete without a self-important essay talking about the correct way to write reviews.

    Practical Considerations

    I am writing impressions, not full reviews. Unless I’m really taken with something, don’t expect a 2,000-word rundown of what it’s about. I might even group some pieces into smaller roundups of capsule reviews.

    I am probably not going to get through the entire comp. The number of entrants last year was huge, and unless an enormous drop-off happens this year, it’s unlikely I’ll have the time to go over everything.

    I am doing this in no particular order. If your title or blurb really grabs me I’ll probably get to your thing first, but otherwise I will be playing things in a random order.

    I will be following the two-hour rule for judging, but I won’t guarantee it for reviews. Which is to say, if I want to keep playing past two hours, I’ll mark down a judging grade for your piece and get back to it, and review it after I feel like I’m done with it.

    As a final side note: I might be covering IFComp elsewhere as well, in which case I’ll forgo writing about certain games here in favor of that, though I will probably go back and write some additional notes to cover the stuff that I didn’t think was valuable in a review aimed at a more general public.

    Grading Considerations

    Review scores are bad, but I think a grade is useful for summing up how I feel about something, especially for people who don’t want to read the full review because they don’t have the time or want to go in blind. So I’m going to be putting grades into one of three categories:

    • Exceptional: this piece is an instant classic that everyone should play;
    • Recommended: this is a good piece doing interesting things;
    • Not recommended: this piece is either significantly flawed, or doing something that will only appeal to fans of a particular subgenre.

    There are four additional categories that I don’t really expect to encounter, but want to bring up ahead of time for the sake of thoroughness:

    • Broken: I wasn’t able to view most of the content of this piece, or the effect was seriously hampered, because of technical issues; i.e., I wasn’t able to really play it. I’ll probably just write that up as a side note to another review.
    • Objectionable: This piece seems to espouse, support, or normalize a viewpoint that I find deeply objectionable (e.g. racism, misogyny), which for me overrides its technical or literary merit, if any. I might write at length about it, or I might not, but the bottom line is I wouldn’t recommend it to someone without a very large caveat.
    • Category Error: Even though I promote a fairly broad definition of interactive fiction, this piece doesn’t seem to belong here.
    • Won’t review: For some other reason, I can’t or won’t review this, for instance if it’s exclusive to some platform I don’t have access to.

    Philosophical Considerations

    I am not interested in how well something meets the (sometimes-arbitrary) trad-IF standards of whether something is interactive enough, “polished” enough, or IF enough. Instead, I’m interested in what a piece has to say, and how effective it is in saying it, in terms of its content and interaction. I’m not interested in re-litigating whether dynfic or hybrid IF are interactive fiction (they are), but I am interested in whether a particular story benefits from a given format. I’m not concerned with the (somewhat calcified) standards of world model architecture that have been extremely prevalent in criticism within the IF community over the years, but I am interested in the ways the parser, trait-based narrative, and cybertext toolkits can be semantically productive.

    This isn’t to say that I am giving traditional parser games a free pass on “mimesis” or whatever, but it does mean that things which deliberately deviate from that construction will be evaluated on their own terms, and understood in those terms.

    I’m particularly interested, these days, in procedurality, prose generation, narrative systems, and dynamic fiction, so you can expect to see a little more attention paid to these subjects or to pieces that give me an excuse to write about them. This isn’t to say that I like those pieces better; just that they happen to fall under my current theoretical and critical priorities. Particularly, I’m looking forward to seeing what people do with hypertext interaction, which seems to fall quite well under the IFComp’s purview.

    Personal Considerations

    I took part in the IFComp last year, and I know it can be a bit of a harrowing experience. I know that it can be a twitchy tug between feeling like you haven’t gotten the attention or recognition you merit, and feeling blown out by too much scrutiny. So it’s absolutely not my goal to shame anyone for the work they put into the comp in good faith, which is why I’m trying to stay away from stack ranking people. Yes, some pieces are going to stand out, and some are going to not succeed. But it’s not my goal here to snark, or to act as a gatekeeper of who is worthy of being in the space.

    This is, secretly, the real goal of the rating system: I might give some pieces little more than a sentence and a rating. I’m not sure if that is a good balance to strike between staying totally silent about something because I don’t have anything too positive to say, and writing a full-on negative review (and I reserve the right to write negative reviews if I think they would be interesting or useful). But I’m not trying to turn anyone’s creative failure into entertainment, here, which I think is the standard you have to apply. At the same time, I do want to at least mention every game I get around to playing.

    Above everything, I implore authors to remember: Your value as a person is totally orthogonal to your creative success. This is not a competition to determine how much you matter or how good you are.

  • June 29th 2016

    Scraping DBPedia for Fun and Corpora

    One of the main challenges in procedural text generation is obtaining big enough corpora to produce surprising results. Hand-writing corpora is a good approach, but sometimes too time-consuming or unlikely to produce surprising enough results.

    Another common approach is the use of machine learning to make use of unstructured data as a corpus. Markov chains and neural networks have their uses, but they’re not for every application either.
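    To make the first of those concrete: a word-level Markov chain takes only a few lines of JavaScript. This is a toy sketch for illustration, not a recommendation of any particular library:

```javascript
// Toy word-level Markov chain: record, for each word, the words that
// follow it in the source text, then walk those transitions at random.
function buildChain (text) {
  const words = text.split(/\s+/)
  const chain = {}
  for (let i = 0; i < words.length - 1; i++) {
    if (!chain[words[i]]) chain[words[i]] = []
    chain[words[i]].push(words[i + 1])
  }
  return chain
}

// `random` is injectable so the walk can be made deterministic in tests.
function generate (chain, start, length, random = Math.random) {
  const out = [start]
  let current = start
  for (let i = 1; i < length; i++) {
    const followers = chain[current]
    if (!followers) break // dead end: the last word in the corpus
    current = followers[Math.floor(random() * followers.length)]
    out.push(current)
  }
  return out.join(' ')
}
```

    The output only ever contains word-to-word transitions that exist in the corpus, which is precisely why Markov text feels locally plausible but globally incoherent.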

    The third approach (which Emily Short aligns with the principle of Beeswax) is scraping open access data. Wikipedia editors have done a lot of work structuring information about the world, and that data exists in a surprisingly machine-friendly format, assuming one knows how to coax it out.

    Writing ad-hoc web scraping scripts is a valid and useful technique, but there’s a more convenient (well, for a certain value of “convenient”) alternative: SPARQL queries.

    DBPedia is a “semantic web” collation of Wikipedia, joining together Wikipedia’s information into a database of machine-friendly relationships. It uses RDF as a format, which can be queried through SPARQL.

    SPARQL is a query language for RDF databases. For those of you with database experience, it is similar to SQL, the much more broadly used language for managing relational databases. For those of you without database experience (like yours truly), you can rest assured that RDF and SPARQL are totally unlike relational databases or key-value stores, so you’re on the same footing as the MongoDB nerds.

    RDF, or resource description framework, is a format for describing metadata. I realise your eyes are glazing over by now but bear with me. An RDF database, like DBPedia, is a big unordered pile of triples.

    A triple is essentially a statement in the subject-predicate-object form we’re used to from English. However, all three components can be resources, i.e. web URLs that represent something — in the case of DBPedia, Wikipedia pages or the “ontologies” that are used as predicates. The same resource can be the object in one triple and the subject in another, forming a web of interconnected statements which can be searched.
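    For instance, here are two triples with the same subject, one giving a ship’s type and one its country (the ship’s resource name here is illustrative; the shorthand prefixes are explained further down):

    dbr:HMS_Trafalgar rdf:type dbo:Ship .
    dbr:HMS_Trafalgar dbo:country dbr:United_Kingdom_of_Great_Britain_and_Ireland .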

    A SPARQL query is a series of conditions, such as “find me the names of British sailing ships launched after 1820, with their launch dates”. SPARQL is a language for expressing that.

    SPARQL is also clumsy, not very intuitive even for technologists from outside the database realm, and obscurely documented. So this is my attempt at wresting it out of the hands of dedicated data nerds. I’ll be going step by step until we have a list of British sailing ships launched in the 19th century, in the form of a JSON file that looks like this:

    [
      {
        "name": "HMS Plantagenet",
        "launched": 1801
      },
      …
    ]

    Making queries

    You can use dedicated software to talk to SPARQL endpoints (ie, the servers that receive and respond to queries for a database) but DBPedia has a number of web interfaces to endpoints that are very convenient, such as this one.
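    If you’d rather script it, any HTTP client will do: a SPARQL endpoint takes the query as a URL parameter. Here’s a minimal sketch in Node — the endpoint address and the format parameter value are assumptions based on DBPedia’s public endpoint, so check its documentation before relying on them:

```javascript
// Build a request URL for a SPARQL endpoint. The endpoint address is
// an assumption (DBPedia's public endpoint); the format parameter asks
// the server for results as JSON rather than the default HTML table.
const ENDPOINT = 'https://dbpedia.org/sparql'

function buildQueryUrl (query) {
  const params = new URLSearchParams({
    query: query,
    format: 'application/sparql-results+json'
  })
  return ENDPOINT + '?' + params.toString()
}

// GET the resulting URL with any HTTP client to receive JSON results:
// buildQueryUrl('select distinct ?ship where { ?ship rdf:type dbo:Ship } limit 10')
```
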

    First, a note about prefixes: In reality, every part of a triple is either a resource (i.e., a URL) or a literal value (a string, number, or date). But typing out fully qualified URLs by hand gets tiresome fast. As such, SPARQL queries often start with a list of prefixes, shorthand for naming resources in specific domains. The DBPedia web query interface comes with a preloaded list of prefixes, and we’ll mostly be using that.
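    A prefix declaration binds a short name to a URL stem. The four this tutorial relies on look like this:

    prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    prefix dbo: <http://dbpedia.org/ontology/>
    prefix dbp: <http://dbpedia.org/property/>
    prefix dbr: <http://dbpedia.org/resource/>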

    So when I write dbo:Ship, what that really means is <http://dbpedia.org/ontology/Ship>; when using a literal URL in SPARQL, we enclose it in angle brackets. Note that these names are case-sensitive, even though the keywords in the SPARQL language themselves aren’t. So let’s start with a simple query:

    select distinct ?ship
    where { ?ship rdf:type dbo:Ship }

    This will get us a long list of every ship in Wikipedia, which unfortunately also includes things such as ship classes — so you’ll find specific U-Boats listed alongside models of U-Boats. Wikipedia’s data is often messy and noisy, and going one step at a time helps in not missing anything as you filter data.

    Let’s go over this line by line, since SPARQL is probably unfamiliar even to programmers.

    select distinct ?ship

    This first select statement tells the database what we are looking for, that is, the columns in the table we’ll get as a result. For now, we’re looking only at ships; eventually we’ll want to connect ship names to ship launch dates. This isn’t as simple as finding a list of triples; it’s essentially finding a list of paths through the database that satisfy the particular query, since the name (a literal value) and the date (another literal value) are not in fact directly connected to each other, but rather are both objects of two different predicates with the same subject, the resource for a given ship.

    ?ship is a variable; variables in SPARQL are prefixed with ?, because the W3C designed this thing, so using a character that was already in common use as a variable sigil was out of the question.

    where { ?ship rdf:type dbo:Ship }

    The where statement contains a list of conditions that have to be fulfilled for a valid path to be found. This one simply states that we’re looking for ?ship where every possible value of ?ship relates to dbo:Ship via the rdf:type predicate.

    rdf:type is a commonly-used predicate used to mean “is a”; dbo:Ship is an ontology, one of many objects created in DBPedia for the purpose of acting as categories. I’ll talk about how to figure out what resources to reference at the end of this tutorial.

    We can add another column to our table:

    select distinct ?ship ?propulsion
    where {
      ?ship rdf:type dbo:Ship .
      ?ship dbp:shipPropulsion ?propulsion
    }
    Note the . used as a separator between statements. This won’t refine the search, but it’ll give us a table of ships with their propulsion methods, which is useful for finding out how propulsion is specified in the data. Looking over the entries, we find that both “Sail” and “Sails” are often used to denote a sailing vessel. We don’t need our corpus to be perfectly comprehensive (Wikipedia scraping won’t get you that anyway), so let’s treat either value as our qualification.

    select distinct ?ship
    where {
      ?ship rdf:type dbo:Ship .
      ?ship dbp:shipPropulsion "Sails"@en
    }

    "Sails"@en is a string literal. Strings in RDF come with a specified language, so just Sails wouldn’t match; we need the language tag (@en) in there. This is only half the equation, though; “Sails” isn’t “Sail”; curse Wikipedia editors for their inconsistency.

    Here’s how we look up both together:

    select distinct ?ship
    where {
      ?ship rdf:type dbo:Ship .
      { ?ship dbp:shipPropulsion "Sails"@en } union
      { ?ship dbp:shipPropulsion "Sail"@en }
    }

    union is a SPARQL operator. It means a set union, of course, and it’s infix, because why would the syntax make sense. This gets us all the sailing ships, at last.

    By looking at the data, we can find the right names to use in order to further select only British ships:

    select distinct ?ship
    where {
      ?ship rdf:type dbo:Ship .
      { ?ship dbp:shipPropulsion "Sails"@en } union
      { ?ship dbp:shipPropulsion "Sail"@en } .
      ?ship dbo:country dbr:United_Kingdom_of_Great_Britain_and_Ireland
    }

    Finally, we want to know when those ships were launched, and filter out the ones that were launched before or after the 19th century:

    select distinct ?ship ?launched
    where {
      ?ship rdf:type dbo:Ship .
      { ?ship dbp:shipPropulsion "Sails"@en } union
      { ?ship dbp:shipPropulsion "Sail"@en } .
      ?ship dbo:country dbr:United_Kingdom_of_Great_Britain_and_Ireland .
      ?ship dbo:shipLaunch ?launched .
      filter (
        ?launched > xsd:dateTime('1820-1-1') &&
        ?launched < xsd:dateTime('1900-1-1')
      )
    }

    Note how we can have two variables in a predicate: ?ship dbo:shipLaunch ?launched. This lets us traverse the network of triples, going arbitrarily far and deep across the relationships; it’s possible to ask elaborate questions such as “Football players under 25 who play for countries that took part in WWII”, because we can draw indirect relationships like that.
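    As a sketch of what such a question might look like (the class and predicate names below are illustrative, not checked against the actual data; as always, you’d have to dig the real ones out of DBPedia first):

    select distinct ?player
    where {
      ?player rdf:type dbo:SoccerPlayer .
      ?player dbo:nationalTeam ?team .
      ?team dbo:country ?country .
      ?country dbo:participatedIn dbr:World_War_II .
      ?player dbo:age ?age .
      filter ( ?age < 25 )
    }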

    The contents of the filter statement should make sense to people with some programming familiarity; the one notable thing is that to write out a date literal, we use a function to create it from a string. Simply writing “1820-1-1” wouldn’t work.

    Now we have a table of ships (that is, web resources representing ships) and their launch dates. But we want a table of ships’ names and their launch dates, information that we can actually use. For neatness’ sake, we’ll also sort the results by date:

    select distinct ?ship ?name ?launched
    where {
      ?ship rdf:type dbo:Ship .
      { ?ship dbp:shipPropulsion "Sails"@en } union
      { ?ship dbp:shipPropulsion "Sail"@en } .
      ?ship dbo:country dbr:United_Kingdom_of_Great_Britain_and_Ireland .
      ?ship dbo:shipLaunch ?launched .
      filter (
        ?launched > xsd:dateTime('1820-1-1') &&
        ?launched < xsd:dateTime('1900-1-1')
      ) .
      ?ship dbp:shipName ?name
    }
    order by asc(?launched)

    asc means ascending, of course. At this point, we can change the “results format” setting on the web interface to JSON and download a nice machine-readable JSON file.
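    The downloaded file follows the standard SPARQL results JSON format, shaped roughly like this (values abbreviated; the exact type labels vary a little between servers):

    {
      "head": { "vars": [ "ship", "name", "launched" ] },
      "results": {
        "bindings": [
          {
            "ship": { "type": "uri", "value": "http://dbpedia.org/resource/…" },
            "name": { "type": "literal", "xml:lang": "en", "value": "HMS Plantagenet" },
            "launched": { "type": "typed-literal", "value": "1801-…" }
          }
        ]
      }
    }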

    The JSON includes a lot of metadata we don’t need, but it’s easy to clean that up with a simple script. You can use whatever tool you like for this; I wrote a dirty ES6 script that runs on babel-node:

    import jetpack from 'fs-jetpack'

    // SPARQL JSON results nest the rows under results.bindings; each
    // binding maps a variable name to an object with a value field.
    const ships = jetpack.read('ships.json', 'json')
      .results.bindings
      .map(entry => ({
        name: entry.name.value,
        launched: entry.launched.value.split('-')[0]
      }))

    // Write the cleaned list out to a new file.
    jetpack.write('ships_clean.json', ships)
    You can see the final result in this gist.

    Finding Resources

    Here’s the problem with SPARQL: Even if you know the syntax and semantics of it, you don’t necessarily know what resources to use in queries, which is to say the right names to express the relationships you want to search for.

    So far, the best way I’ve found of figuring this out is by using the DBPedia faceted browser. With it, you can search for the DBPedia resources that are counterparts to Wikipedia pages, and see how their relationships are structured and what predicates are used. For instance, when I started writing this example, I first looked at the page for the HMS Trafalgar, which is where I found out how the different relationships are structured in the data: dbo:country is used to express country of origin, for instance, and ships have rdf:type dbo:Ship. Some experimentation is required to get useful queries, and I’m still figuring out how best to use this tool myself.

    Now go out there and make some Twitter bots.

  • February 17th 2016


    Yesterday, on &if, someone asked whether we were attracted to IF because of its status as "outsider art."

    I don't really want to define outsider art, or get into the discussion over whether IF qualifies. But I responded that I felt I was attracted to IF because it's unsettled.

    And then I had to go and write a post about what, exactly, I mean by that.

  • February 15th 2016

    The Future of Raconteur

    I'm not really ready for a release of this just yet -- it'll be a while, probably at least a week -- but I wanted to give people an update of where I'm at with Raconteur. Here's the current (rough) roadmap.

  • January 27th 2016

    Improv, a javascript library for generative text

    I’m currently working on a project involving some fairly demanding procedural generation of text. While that project isn’t ready to be announced yet, one of the first core pieces of functionality I wrote for it was a text-generating library. Said library had to be powerful, flexible, and fulfil the following needs:

    • Like Tracery, it needs to randomly choose text from nested webs of corpora, recursing itself.
    • Also like Tracery, it needs some basic templating functionality.
    • Unlike Tracery, it needs to run with the backing of a world model that can guide text generation.

    Most of the ideas used to build this initial version of the tool were taken from Emily Short’s Annals of the Parrigues, which contains a long and extremely useful discussion of generative text in its epilogue.

    Since this library is a separate module, I’ve decided to open source it. Improv has been released under the MIT license and can be viewed on Github. It’s an npm module, but it’s built so that it will work in a browser environment using a module-bundling tool like webpack or browserify. Improv is currently in active development, but the latest (0.4.2) version is one I consider to be reasonably usable.

    Assuming you have node (v4 or newer), npm, and gulp installed, you can see a demo of Improv in action by doing:

    $ git clone https://github.com/sequitur/improv.git
    $ cd improv
    $ npm install
    $ gulp demo
    $ node demo_build/hms.js

    This demo produces descriptions of fictional ships, along the lines of:

    The HMS Reliable is a clipper commissioned 6 years ago.

    Using a whale oil engine, she can reach speeds upwards of 32 knots. The Reliable is one of the new generation of vessels built to fight against the Arkodian fleet in the Short War. Her crew is known to be one of the more disciplined in the Navy. She is currently serving as a colonial troop transport.

    The most obvious place to play around with Improv, at first, is Raconteur projects, since those are already friendly to including npm modules. NanoGenMo and ProcJam are some time away, but I look forward to seeing what people do with this tool in the meantime. Bug reports and pull requests are welcome.

  • January 6th 2016

    Impressions: What Fuwa Bansaku Found

    What Fuwa Bansaku Found (Chandler Groover), released today through sub-Q magazine, is a free-verse ghost story set in an abandoned shrine in Sengoku Japan.

  • December 31st 2015

    2015 in Review: Thank-yous and shout-outs

    As the year draws to a close I have been busy sending thank-you notes (well, emails). This list is in no particular order and, inevitably, incomplete; if you feel like I have missed you, I am sorry.

    • My testers, too numerous to name but incredibly important; any remaining bugs and typos are entirely my fault.
    • Tory Hoke, Devi Acharya, Kerstin Hall, and the rest of the sub-Q team: You’ve made a dramatic change to how I look at writing IF. This has been an incredible year, and sub-Q is responsible for a lot of that.
    • Carolyn van Eseltine, Aaron Reed, Neil Butters, and Jason McIntosh: That is, the people whose competitions I entered this year. People consistently underestimate how much work organising those events is, and the least I can do is thank the people who inexplicably continue to do it, expecting no reward whatsoever.
    • The good people at &if, including furkle, Brendan Patrick Hennessy, Chandler Groover, Emily Short and others, who’ve made the last couple of months a terrifying delight. There’s a million things we haven’t done; but just you wait.
    • greenie, chromakode, and intortus, the Euphoria crew, for giving me this wonderful space to do terrible things with.
    • Last but not least: Cat Manning. You know what you did (and continue to do).