Grant Custer

Feed  Index  Twitter

2020.09.21 Tile: release notes
2020.09.12 The benefits of limitations in application launchers
2020.09.01 Sift: release notes
2020.08.20 Fantasy consoles and framing
2020.07.30 Automadraw: release notes
2020.07.28 Bushido Blade 2: a design appreciation
2020.07.12 Swapping color schemes across all terminals and Vim with Pywal and Base16
2020.06.25 Vimlike


Tile: release notes

Tile is an experimental image editor that lets you lay out images using a tiling tree layout. You can move, split, and resize images using keyboard controls.


1. Tiling window managers

I’ve been using the tiling window manager i3wm for around six months now. For my purposes, tiling windows are a much better and more intuitive experience than the dominant “floating windows” desktop metaphor. A big part of the motivation for making Tile was wanting to dive in and explore how tiling logic works at the code level.

2. Dividing up space from the outside in

In Tile (and i3wm) you divide up space from the outside in. You start from the window size, then you portion it up. It reminds me of folding a piece of paper into sections. This is a different way of thinking about layout than I’m used to, though it’s hard to articulate exactly why. People may have recently become more familiar with tiling layouts from seeing them in Zoom video layouts.

Proportional splitting is built into web layout in the form of percentage units and the newer viewport units. CSS Grid also has you specify how to split up space. I think tiling feels different to me because the splitting up of space is the primary interaction, and you can do the splitting up incrementally with immediate feedback: “first let’s split this, ok and now this section, now let’s stop and take a look.” Even with CSS grid I feel more like I’m building from the inside out.

The feel of tiling

The main benefit of tiling is that it uses all the available screen space. In all of the Constraint Systems experiments I’m interested in trying to go “with the grain” of the computer. Tiling works off the screen as the space of possibility, and encourages you to act within it. The tiles in a tiling window manager are like additional screens (the recursive nature of it feels very computer-y to me): here is a rectangular area that you can fill with programs as you see fit (as they fit).

A lot of designs make use of an “offscreen” metaphor borrowed from the physical world, where you can swipe to see different applications. Mobile design is full of these. I think the offscreen metaphor can be useful. But I think it’s also useful to go the other direction and not pretend. In i3wm you have multiple desktop spaces, but (by default) there are no swiping transitions between them. Each numbered workspace is just a space that can be immediately switched to, and the same goes for the applications within the space. For me, it makes the computer feel more tool-like. I feel like we have a more honest relationship. An additional benefit is that because there are no complex transitions, you aren’t distracted by the occasional transition failure or interruption.

Stumbling blocks: the tree

I find the initial divvying up of a tiling layout to be pretty intuitive. It is making changes to that initial layout that can be confusing. The reason for this confusion is the tree data structure that underlies the layout. Specifically, it is the discrepancy between the actual layout structure and how that structure appears on the screen: sometimes movements that look like they should behave the same act differently due to the underlying structure. The easiest way to demonstrate this is to look at a layout split into four equal sections:

The quad dilemma

How a tile can be moved depends on its relationship to its parent container.

Let’s look at an example where the layout has been split into four equal sections. The underlying tree structure for this layout is:

- Root - horizontal layout
  - Container 1 - vertical layout
    - Tile 1
    - Tile 2
  - Container 2 - vertical layout
    - Tile 3
    - Tile 4
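As a sketch, the outside-in division of space can be modeled as a recursive walk over that tree. This is a hypothetical structure, not Tile’s actual code: each container divides its rectangle evenly among its children along its split direction.

```javascript
// The quad layout as a tree: containers have a split direction and
// children; leaves are tiles.
const quad = {
  dir: 'horizontal',
  children: [
    { dir: 'vertical', children: [{ id: 1 }, { id: 2 }] },
    { dir: 'vertical', children: [{ id: 3 }, { id: 4 }] },
  ],
};

// Walk from the outside in: split the current rectangle evenly among
// the children along the container's direction, collecting tile rects.
function layout(node, x, y, w, h, out = []) {
  if (!node.children) {
    out.push({ id: node.id, x, y, w, h });
    return out;
  }
  const n = node.children.length;
  node.children.forEach((child, i) => {
    if (node.dir === 'horizontal') {
      layout(child, x + (w / n) * i, y, w / n, h, out);
    } else {
      layout(child, x, y + (h / n) * i, w, h / n, out);
    }
  });
  return out;
}
```

Running `layout(quad, 0, 0, 100, 100)` yields the four equal 50×50 sections of the example.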

In Tile (as in i3wm) you can move existing tiles using shift and the arrow keys. If you want to swap Tile 1 and Tile 2 you can move Tile 1 with shift + ↓ for a swap. The issue comes when you want to swap Tile 1 and Tile 3. It looks like you should just be able to move Tile 1 with shift + → and swap it with Tile 3, but because of the tree data structure (and choices I made in the movement logic), the first movement changes the orientation of the Container 1 split, and pressing it again moves it out of Container 1, resulting in:

- Root - horizontal layout
  - Tile 1
  - Tile 2
  - Container 2 - vertical layout
    - Tile 3
    - Tile 4

Tile 1 has popped out of Container 1 and now you have a three-across horizontal layout. If you press shift + → again you are presented with a choice: do you want to move Tile 1 next to Tile 3 or Tile 4? Choosing Tile 3 results in this structure:

- Root - horizontal layout
  - Tile 2
  - Container 2 - vertical layout
    - Container 3 - horizontal layout
      - Tile 1 
      - Tile 3
    - Tile 4

To achieve the original swap you can move the cursor to Tile 3 and repeat the steps in reverse (pressing shift + ← several times) until you achieve the swap:

- Root - horizontal layout
  - Container 4 - vertical layout
    - Tile 3
    - Tile 2
  - Container 2 - vertical layout
    - Tile 1 
    - Tile 4

So a swap in the vertical direction requires only one keypress, while a horizontal swap requires 8+. The reason for the difference is “logical” in that it reflects the underlying tree data structure. I try to give a hint about that structure by giving the parent container a wider gray background in Tile. It helps as a hint, but the movement effects can still surprise you if you don’t have the structure in mind. I think this unpredictable movement is the biggest obstacle in getting used to a tiling layout. I tried some strategies to overcome it in Tile. I believe those strategies help, but I still wonder if there isn’t some other solution that could make things even more intuitive.

Why does it have to be a tree?

If the mismatch between the tree layout and the user’s mental model is the major stumbling block in moving and changing content in a tile layout, can we switch out the tree structure? I thought hard about this for a while, and my conclusion was… no. I’d certainly be interested to see any tiling layout experiments that use a different structure (some tiling window managers do use a “list” structure instead, where the layout is almost completely automated, but I wanted something where you could manually adjust the layout).

The tree structure also determines how a tile can be resized. It ensures that there are no gaps between tiles.

So what does a tree layout get us and why is it hard to replace? The simplest example I came up with to demonstrate its value is resizing children. Like moving a tile, resizing acts differently depending on the relationship of the tile to its parent. If we look at the same quad again, we see that if you resize vertically it behaves as you would expect: the selected tile takes space from its vertical neighbor. If you resize horizontally, because the parent’s split is vertical, it resizes the parent, taking space from the parent’s horizontal neighbor.
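One way to sketch that tree-based resize, assuming each container stores split ratios for its two children (a hypothetical structure, not Tile’s actual code): climb the tree until you find an ancestor split along the resize axis, then shift space between that child and its sibling.

```javascript
// Resize a tile along an axis ('x' or 'y') by adjusting the split
// ratios of the nearest ancestor container split along that axis.
// If the tile's own parent is split the other way, the whole parent
// gets resized instead, which is the behavior described above.
function resize(node, axis, delta) {
  let current = node;
  while (current.parent) {
    const parentAxis = current.parent.dir === 'horizontal' ? 'x' : 'y';
    if (parentAxis === axis) {
      // Found the ancestor split along the resize axis:
      // shift space between this child and its sibling.
      const i = current.parent.children.indexOf(current);
      const sibling = i === 0 ? 1 : 0;
      current.parent.ratios[i] += delta;
      current.parent.ratios[sibling] -= delta;
      return current.parent;
    }
    current = current.parent; // resize the parent instead
  }
  return null; // reached the root: nothing to take space from
}
```

In the quad, resizing Tile 1 vertically adjusts Container 1’s ratios directly, while resizing it horizontally climbs up and adjusts the root’s ratios, resizing all of Container 1.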

What if the tiles weren’t constrained by the relationship to their parents? You could imagine that resizing horizontally would instead act the same as vertically, taking space only from the neighboring tile. I’m not sure exactly how that data structure would work; maybe instead of reworking the data structure you’d just change the resize logic, so that it operated more like collision detection on whatever was visually near. But here’s the conceptual issue I ran into: say you resize Tile 1 so it is vertically shorter, then you make it wider. If you try and go tile by tile, it would push only the horizontally neighboring tile, and you would be left with a gap in the middle, between the bottom-right corner of Tile 1 and Tile 4 below it. The tree parent relationship ensures these gaps don’t exist, and that is why it is necessary, even if it causes issues with our mental models.

(I do still wonder if you could do something like a tree layout that reconfigured on the fly, where it looked at the visual arrangement, and if it was such that it wouldn’t cause a gap, it would modify the structure so that resizing would perform more like you visually expect. There would be ‘tracks’ that resize could travel along. I still haven’t decided if that approach is really viable; it would, at the least, require a lot of edge-case handling.)

My movement philosophy

I did try and implement some logic to make movement more intuitive for the user. For cursor (not tile) movement, I look exclusively at the on-screen position of the tile: moving right moves to the neighboring tile that contains the y coordinate of the current tile (regardless of how many parent containers are involved). This differs from i3wm, where moving to a container focuses the last active tile, a convention that has a logic to it but still feels unpredictable to me. My approach does create a default favoritism toward tiles nearer the left and the top (because I look for the tile that contains the top or left position).
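A sketch of that position-based lookup, using assumed names rather than Tile’s actual code: to move right, find the tile whose left edge touches the current tile’s right edge and whose vertical span contains the current tile’s top y coordinate.

```javascript
// Find the tile to the right of `current`, judged purely by rendered
// position. Using the current tile's top y is what produces the
// default favoritism toward tiles nearer the top.
function rightNeighbor(tiles, current) {
  return tiles.find(
    (t) =>
      t !== current &&
      t.x === current.x + current.w && // left edge touches our right edge
      current.y >= t.y &&
      current.y < t.y + t.h // vertical span contains our top y
  );
}
```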

The cursor movement approach of looking at the rendered position of the tiles also complicates the code logic a bit. It causes a kind of divergence between the movement logic and the data structure. I think it was the right choice here, but it is a trade-off, and I still think about whether some approach couldn’t avoid the divergence altogether.

The cursor movement is also just generally easier to reason about because you’re moving within a static layout. When you move the tile itself you’re altering the layout and, as demonstrated in the quad example above, things get much more complicated very quickly. One question that might be lingering from the quad example: for cases like the horizontal swap, why not build special logic to swap immediately without going through the intermediate steps? The answer is that if I aggressively tried to predict and fulfill any swap intentions, I would prevent the user from having access to all possible layouts. In the quad example, the second shift + → action moves Tile 1 out of the container and creates a 3-across grid. If I instead tried to guess that you wanted to swap Tile 1 and Tile 3, there would not be a way to access that 3-across layout. In all the movement options, I tried to strike a balance between smoothing the way for the user and making sure I wasn’t over-predicting their intentions.

Another interesting thing you can see in the swap example is that movement in and out of parents is not exactly symmetrical. The first move changes Container 1’s orientation to horizontal, the second moves Tile 1 out of the container. If things were symmetrical the next move would move Tile 1 into Container 2 in a new horizontal container, like this:

- Root - horizontal layout
  - Tile 2
  - Container 2 - horizontal layout
    - Tile 1
    - Container 3 - vertical layout
      - Tile 3
      - Tile 4

Instead, I assume you want to move into one of the children (Tile 3 or Tile 4) and give you a choice between them. The choose-a-child mechanism was a late addition to the layout, and one I’m quite pleased with. Usually I try and avoid intermediate choice steps, preferring immediate feedback and easy reversibility, but I found that being able to choose the child to join cut down the steps in moving tiles between parents in a way that I rarely found annoying or frustrating. The idea is that when you’re moving a tile towards a parent with lots of nested children, you’re actually trying to move it within those children, whereas if you’re moving a child towards the edge of its current parent, you’re trying to move it out. It does rely on some prediction of the user’s intentions, which I try to stay away from, but it felt like the right solution in this case. There were a lot of trade-offs like that in this project, which made it both interesting and stressful.

Choosing which child to move into and then backing back out.

Deciding on splits

Closely related to the tile movement trade-offs is the issue of how I decide how a container is split. In i3wm you have to set whether to do a horizontal or vertical split. This has the virtue of being clearly understandable logic, but when you do multiple successive splits it feels unintuitive – it slices the space into smaller and smaller vertical or horizontal strips. It turns out what you actually (usually) want is for the split to be as even as possible. I use an autotiling script for i3 that looks at the tile dimensions and splits against the larger dimension. As mentioned above, I’m usually against ‘magical’ algorithmic prediction in software, but I was impressed by how “correct” this simple split prediction felt in almost all settings.

Tiles are split on their longest dimension.

I implemented the same dimension-based split logic in Tile, and it feels right to me there as well. Another approach would have been to add different key combos for splitting horizontally and vertically. I love the simplicity of enter being the one and only mechanism for splitting, though. I think it encourages the user to split first and adjust the layout after (if they actually wanted the opposite split, it is one tile move away).
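The dimension-based heuristic is tiny as code; a sketch:

```javascript
// Split against the longer dimension so the resulting tiles stay
// close to even. A wide tile gets a horizontal split (children side
// by side); a tall one gets a vertical split (children stacked).
function splitDirection(tile) {
  return tile.w >= tile.h ? 'horizontal' : 'vertical';
}
```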

Image-specific adjustments

Most of the decisions I made could apply to either a tiling window manager or a tiling image assembler. But some things are specific to Tile because it deals with images. One late change that made Tile more interesting was copying the image when a tile is split. This decision creates a forking, multiplying effect as you split tiles that is fun and interesting. It wouldn’t apply in a window manager, because there you generally choose the application as you open the new space.

Image fill types include: stretch, contain, and cover.

Another image-specific decision was choosing the default image fill type. The default is stretch, where the image is stretched over the dimensions of the tile. Stretch does not respect the image’s original aspect ratio; the other two fills, contain and cover, do. Stretch is probably an unusual default choice, because it distorts the original image, but I thought it was the most interesting.

Where to next

Tile was an opportunity for me to explore tiling layout logic, something I plan to continue to experiment with. A lot of the Constraint Systems experiments work from the “inside-out” with a cursor inside of a grid. Tiling gives me another method of navigating and partitioning space, either to explore on its own or in combination with the grid-cursor. I’d like to continue to refine the code logic for the tile layout and movement, hopefully condensing it down into something that I can drop in across projects.



The benefits of limitations in application launchers

In my Linux set-up, I use dmenu as an application launcher. dmenu is basically autocomplete for applications and scripts. In many ways, it’s not so different from launching things using Spotlight on a Mac.

Opening with dmenu and a launcher script.

Since I started using it, dmenu has been a convenient way to launch apps. But I’ve only recently started to realize some of the interesting things it makes possible. A lot of the possibilities come down to the limitations of the interface, and how agnostic it is about what it launches.

Lately, I’ve started putting scripts that run apps, or a combination of apps, or a combination of apps and websites and terminals, in my .local/bin folder, where they are exposed through dmenu. Some of the things I’ve added:

  • planner launches my calendar, personal email, and work email all in different firefox windows.
  • 750words launches in a new firefox window.
  • blog launches a terminal set to my blog directory, a terminal window running npm run dev to run the local versions, and a firefox window open to localhost.
  • record starts a screen recording. gif_2x converts the last screen recording into a GIF running at 2x speed. When it’s done it opens a window showing the GIF and listing the file size.

Some of these are rather involved scripts, some are very simple. What I’ve been surprised by is how different even the simple ones feel when launched from the application launcher. Something like 750words is given its own space and weight as an activity, promoted from being a website I visit among other websites. It also makes my intention going in very clear: if I open 750words my intention is to do it, versus getting lost browsing (even though, functionally, all it is doing is opening 750words in a web browser).

I used to go after the same effect on macOS, especially for web applications. There were several programs that would let you run a website within its own application container. That meant you could launch it from Spotlight. There were always rough edges that made it clear it was a bit of a hack, though. The difference between the wrapped web apps and the true apps was noticeable. This is partly because apps on macOS are expected to be polished, with their own nice-looking icon.

The lesson I’m interested in with dmenu is how the limitations (an application is only a name among names) make it much easier and more satisfying to add user-configured applications and scripts. I am surprised by how different it feels to have my scripts sit right alongside the other applications. As with my experience of lots of Linux-related stuff, it is a feeling of empowerment. It feels like a level of customization above what I’m used to: the computer as a tool I am in control of, rather than something I’m wrestling with.

Content limitations in web previews

I’ve noticed a similar effect in making websites. Often you end up making a website that you want to display a preview of on another website. I do this on Constraint Systems and for the Cloudera Fast Forward prototypes. Through experience, I’ve learned you want to make the number of assets needed for the list preview as few as possible. In the case of Constraint Systems, it is a title, a description, a preview image, and a preview GIF. It is tempting to require more elements (like several preview images) to provide a fancier preview. Whatever you require, however, you’re going to have to provide for every link going forward. A surprising amount of friction can come from needing to create preview assets (and also the deploy process for those assets). Enough friction to cause you to make fewer tools or blog posts because you’re dreading that part of the process.

Website meta tags have been an interesting development in relation to this. Used for creating previews on shared Twitter and Facebook links, the main meta tags are limited enough (title, description, preview image) that they’re worth doing, and now that they’re being regularly done across sites, even more services can dependably use them to unfurl links.



Sift: release notes

Sift is an experimental image editor that slices an image into layers. You can offset the layers to produce interference patterns and pseudo-3D effects. It uses an additive blending mode and pixel-based light splitting algorithm.


I started planning Sift while standing in the ocean, thinking about waves and how to use a wave effect on the pixels of an image. I’ve gotten used to thinking of images as a grid of pixels, and I’ve done some experiments using HTML canvas and JavaScript to move, or even flow, pixels around. I started trying to imagine how pixels could cycle “below the surface” and then pop up on top.

For a wave effect I needed pixel depth. I needed to figure out a way to transform an array of pixels from 2D into 3D. I thought about RGB values. Could I use the color value as the third dimension by using it to make a pixel stack? What if for a pixel with a red value of 100 I stacked 100 red pixels?

I ended up using the pixel stack idea (and cutting the wave idea, for now), but I had to get the right blending mode and slicing algorithm to get things working.

Slicing colors

One of the issues with making pixel-specific stacks was that a pixel doesn’t have just one color value, it has three (red, green, and blue). I decided to put the “brightest combo” at the top. So for an RGB value of [10,16,24] I would start the stack with 10 [1,1,1] pixels. Then 16 - 10 = 6 for 6 [0,1,1] pixels, and, finally, 24 - 16 = 8 for 8 [0,0,1] pixels. This means the white-ish pixels are on top, and then, as those are finished, you see a kind of exhaust trail of color.

(For performance reasons, the finished app bins values according to the number of layers. So instead of 192 stacked [1,1,1] pixels for a [192,192,192] value, a 16 layer edit in Sift bins 192 / 16 = 12, for 12 [16,16,16] pixels.)
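The slicing and binning steps can be sketched in JavaScript. This is a sketch of the approach as described above, not Sift’s actual code:

```javascript
// Order the channels, then emit runs of layers from the brightest
// combination down. Returns [count, [r, g, b]] pairs; for [10, 16, 24]
// that's 10 layers of [1,1,1], 6 of [0,1,1], and 8 of [0,0,1].
function slicePixel([r, g, b]) {
  const channels = [['r', r], ['g', g], ['b', b]].sort((p, q) => p[1] - q[1]);
  const active = { r: 1, g: 1, b: 1 }; // all channels lit in the top slices
  const slices = [];
  let prev = 0;
  for (const [name, value] of channels) {
    const count = value - prev;
    if (count > 0) slices.push([count, [active.r, active.g, active.b]]);
    active[name] = 0; // this channel is used up below this depth
    prev = value;
  }
  return slices;
}

// Binning for performance: divide values by the bin size and make each
// lit channel contribute the bin size instead of 1. With 16 layers,
// [192, 192, 192] becomes 12 layers of [16, 16, 16].
function slicePixelBinned(rgb, layers = 16) {
  const unit = 256 / layers;
  return slicePixel(rgb.map((v) => Math.round(v / unit))).map(
    ([count, color]) => [count, color.map((c) => c * unit)]
  );
}
```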

Blend mode

Overlapping red, green, and blue squares in additive blend mode.

The color slicing only works with the right blend mode, where each layer’s RGB values are added together. For canvas, the blend setting is called globalCompositeOperation and the value for additive blending is lighter.
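As a pure-pixel sketch of what lighter does: each channel of the overlapping layers is added, clamped at 255.

```javascript
// Channel-wise additive blend of any number of [r, g, b] pixels,
// clamped at 255, mirroring what the 'lighter' composite op does
// with fully opaque layers.
function additiveBlend(...pixels) {
  return pixels.reduce(
    (acc, p) => acc.map((channel, i) => Math.min(255, channel + p[i])),
    [0, 0, 0]
  );
}

// Pure red, green, and blue overlap to white:
// additiveBlend([255, 0, 0], [0, 255, 0], [0, 0, 255]) → [255, 255, 255]
```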

Additive blending, while not what I’m used to working with for computers, is actually how our eyes perceive light. I vaguely knew this, but it’s been fun to play with it in the app and get a real feel for the consequences.

Layers and offsets

Originally, I thought I’d build this app in Three.js, where you could rotate around the pixel (actually voxel) stacks in 3D. I thought maybe I could get the perspective such that it appeared as a whole image at the start, but as you zoomed in you could see cracks between the stacks. I’m still not totally sure if the math for that could be worked out. But I quickly ran into performance issues from trying to render even binned values for every pixel in an image, so I switched over to HTML canvas. (I’m sure there are ways to do this in Three.js, possibly utilizing shaders? If you have ideas let me know.)

I knew performance might be a struggle in canvas as well, but I had a plan. I knew canvas can redraw image files (with drawImage()) quickly. On image load I split the image into a set number of layers (16 by default), doing all the bin calculations. The render function (performed whenever the x and y offsets are changed) then just draws those layer images on top of one another, and the blend mode takes care of the rest.
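A sketch of that render function. The detail that each layer is shifted by the offset times its layer index is my assumption about how the stack gets spread out; drawImage and globalCompositeOperation are the real canvas APIs:

```javascript
// Redraw all pre-sliced layer images onto one canvas context.
// drawImage with cached layer images is fast; the 'lighter' blend
// mode recombines the colors additively.
function render(ctx, layers, offsetX, offsetY) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.globalCompositeOperation = 'lighter';
  layers.forEach((layer, i) => {
    // Deeper layers get shifted further, producing the pseudo-3D effect.
    ctx.drawImage(layer, offsetX * i, offsetY * i);
  });
}
```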

The result

Sometimes I have specific goals for Constraint Systems projects, other times I just follow an effect to its end. Other than the “stack” idea, I didn’t really have a goal for what Sift should make images into. I was pleasantly surprised by the early results, right after I flipped the blend mode switch. For certain images, it produces a pseudo-3D effect. Ezekial suggested it’s like an aerogel. It is also kind of like badly calibrated color separation in a TV (but different because of how it is stacked). With other images you can get kind of an otherworldly thing. I watched Twin Peaks: The Return recently, and it reminded me of some of the face distortions from that.

Future experiments

I’m definitely intrigued by the possibilities of additive blending. Partly due to Tyler’s suggestion, I want to try layering video frames on top of each other using a similar process.

Larger goals

One of the goals of the Constraint Systems projects is to get really used to thinking of images as a collection of pixels, and work “with the grain” of how computers store images. I think the stacking and layer ideas are good signs that that part of the project is working. My intuition for what might produce interesting image results has gotten better. The mix of having a good idea of what I wanted but not being sure of the final effect is a fun one – it feels like a collaboration with the computer.



Fantasy consoles and framing

I’ve been thinking a lot about Joseph White’s talk on his motivations for making the PICO-8 fantasy console. There’s so much in the talk that resonates with what I’ve been thinking about for Constraint Systems: about how carefully selected constraints change the feel of working, making it feel more focused, and even cozy.

Since viewing the talk I’ve been thinking a lot about how he frames PICO-8 with the idea of a fantasy console and cartridges, and what I could do for framing Constraint Systems. I’ve toyed with the idea of making the Constraint Systems homepage into a simulation of a fantasy operating system with each experiment as an application. Part of the feeling I want to capture is going to the middle school computer lab in the mid 90s and trying out the strange collection of software the school had preloaded (even though the variety of the internet is great, there is something comforting and cozy in the idea of a finite number of programs to explore).

I had been thinking of the operating system metaphor as a fun, possibly attention-attracting thing that I should get around to sometime. After viewing White’s talk, however, I think it’s something I should prioritize. Framing Constraint Systems as a fantasy computer/operating system could (done well) communicate my vision of the project, and communicate it not in a long text somebody has to read, but as a general vibe. In the best case, they would “get” the project just by looking at the homepage. This is what “branding” is, I suppose, it just feels more tied to the core of the project here than I’m used to thinking of it.

Extensions of the idea:

  • The simplest version is just presenting the Constraint System experiments as different apps on a fantasy operating system. I could also try and make them behave as apps. Possibly using iframes and a tiling window management system. A further step (that I’ve always wanted to do) would be to let you pipe the output of one application into another.
  • Picking up on the middle school computer lab vibe, I wonder if Constraint Systems could someday be a physical computer lab, where computers limited to only CS software are available free for anyone to use, and I administer the lab and get to see what people make and can adjust or make new applications based on what people are doing with it. (I think this is at least a good idea for an installation or area at a hackerspace.)


Automadraw: release notes

Automadraw is a new experimental app I made for my Constraint Systems project. It lets you draw, and evolve your drawing with cellular automata, using two keyboard-controlled cursors.

What is it for

I think there are two main uses for Automadraw:

  1. Get more familiar with the cellular automata (Conway’s Game of Life and Langton’s Ant) that it runs. You can quickly experiment with lots of different patterns.
  2. Draw something collaboratively with the automata. The interaction design aims to make working with the automata intuitive. These design techniques (two cursors, keyboard controls) could be applied to a wide range of creative apps.
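For reference, one step of Conway’s Game of Life on a 1-bit grid like Automadraw’s can be sketched as follows (a sketch of the rules, not the app’s actual code):

```javascript
// Advance a 2D array of 0/1 cells by one Game of Life generation.
// Cells outside the grid count as dead.
function lifeStep(grid) {
  const h = grid.length;
  const w = grid[0].length;
  return grid.map((row, y) =>
    row.map((cell, x) => {
      let neighbors = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const ny = y + dy;
          const nx = x + dx;
          if (ny >= 0 && ny < h && nx >= 0 && nx < w) {
            neighbors += grid[ny][nx];
          }
        }
      }
      // A live cell survives with 2 or 3 neighbors;
      // a dead cell is born with exactly 3.
      return neighbors === 3 || (cell === 1 && neighbors === 2) ? 1 : 0;
    })
  );
}
```

Running this repeatedly over the cells under the act cursor is the kind of evolution the app applies.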

Two cursors

I had originally planned to use just one cursor, and have it shift between draw mode and “act” (run automata) mode. As I experimented I found that for act mode I usually wanted to cover a large area, and for draw mode a smaller one. Having to resize when switching between modes ruined the flow, so I split the cursors up.

Splitting them up opened up some new possibilities. I realized I could set it up so that I could use each cursor’s actions (draw or act, respectively) regardless of which one was in focus. This set up a couple of interactions I really liked:

Sweeping: draw some lines then use a long, narrow act cursor to sweep over the lines, running Game of Life over each sweep step. This usually produces intricate symmetrical designs that really feel like they're evolving through each sweep.

Active environment: resize the act cursor over a large area, use the draw cursor and have it move in and out of the act area as the automata is run. The act area becomes an environment where different rules apply. It feels like a physics or chemistry simulation.

This set-up is uniquely suited to keyboard cursor controls, where each cursor’s position is fully visible and fully predictable (versus a touch interface where you would have to use multiple fingers and the fingers themselves would obscure your view of the changes taking place). I use Vim-like keyboard controls because I honestly prefer them. My suspicion is that they may enable modes of interaction other methods do not. I was happy to find an interaction that fit them so well. I’m looking forward to seeing how even more multiple cursors feel in future experiments.

Stamp is a different example of the possibilities of multiple cursors: two cursors across two canvases.

Keyboard events

Part of the reason the two cursor interaction is interesting is because of an accident of keyboard event handling. A lot of the Constraint System experiments let you hold down multiple keys. This is tricky to handle in JavaScript for everything except modifier keys. The main issue is that if you’re holding down one key and start holding an additional one, the new one will take over the keyDown event. The solution is to make a keymap object, store each key on keyDown, and remove it on keyUp. You then use the keymap object as the source of truth about what is pressed on each keyDown event.
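A minimal sketch of the keymap technique. The handlers are written as plain functions here so the idea stands alone; in an app they would be attached with addEventListener for keydown and keyup:

```javascript
// Source of truth for which keys are currently held.
const keymap = {};

function onKeyDown(e) {
  keymap[e.key] = true;
  // Decide what to do from the full set of held keys,
  // not just the key that fired this event.
  return Object.keys(keymap);
}

function onKeyUp(e) {
  delete keymap[e.key];
  return Object.keys(keymap);
}
```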

This technique mostly just works, but there turns out to be an issue, arguably a bug, that makes things like the “sweep” technique I discussed above possible. If you are pressing one key, add another key, then let up on the second key, keyDown events stop firing. For Automadraw, this behavior enables this interaction:

  • Hold down ‘a’ to run automata, press a direction key to move the act cursor. When you let up on the direction key the automata will pause running… until you press a direction key again. Using this technique you can run “sweeps”, moving the act cursor across a set of pixels, automatically running the automata once each step.

This interaction was a happy accident, and I’m looking forward to thinking about how to expand and support it more in future experiments.

Limitations and future possibilities

I had been wanting to experiment with cellular automata and a drawing app for a long time. For this experiment, I needed to really scope things down in order to get started. I restricted the drawing app colors to 1-bit (on or off). This usefully limited the number of cellular automata I could use and the number of interactions I needed to support. I also made the app ‘pixels’ large, at 16 actual pixels. This makes drawing quick and the automata actions more legible, but also restricts the fidelity of the final image. Someday I would like to build a cellular automata app more focused on image editing, where you could evolve parts of an image at a higher fidelity. That would also involve using automata that use color information; there are some interesting examples of those in this CA Lab demo video.


The code for Automadraw is available on GitHub.

Slowly recreating React

I built the early Constraint Systems experiments using React, but have moved off of it to vanilla JavaScript for the most recent ones. I do find myself recreating a lot of the set-up of React. I’ve found out firsthand that a lot of the React boilerplate I questioned is in there to work around the constraints of JavaScript itself. I may switch back to React sometime, but right now I’m still enjoying experimenting on my own. It is also true that a lot of the benefits of React don’t mesh well with HTML canvas, which is where most of the action for this app takes place.

ES6 modules

This was the first project where I used ES6 modules. It was nice to be able to organize the code into sections like keyboard and state. I’ll continue to use them and refine my organization going forward. Maybe someday I’ll have a true base starter kit I can reuse across projects.

Canvas compositing

One switch I’ve made that I’ve been very happy with is moving from rendering multiple canvas DOM elements on top of each other to placing only one canvas in the DOM and compositing the different layers (in this case: cursor, grid, art) onto that single canvas on each render. My rendering code is a little knotty, but it still feels a lot cleaner than stacking the canvases in the DOM.



Bushido Blade 2: a design appreciation

Bushido Blade 2 was a PlayStation game I played a lot in high school. It was a fighting game with swords, and its main hook was that instead of health bars, damage was based on where you struck your opponent. You could injure limbs or finish an opponent with one strike if you hit the right spot.

Design-wise, Bushido Blade rethought the premise of a fighting game from first principles. I love what this approach allowed them to do in terms of immersion: during a fight, nothing is visible on the screen except the two characters.

This is what I want to do when I design something: communicate everything through the core action. Design things so well that you don’t need to bring in health bars and labels.

Bushido Blade 2, like most of the games I played, was well-reviewed but never really that popular. I remember it having a pretty good story-mode with fun voice acting.



Swapping color schemes across all terminals and Vim with Pywal and Base16

Switching between light and dark colorschemes in all terminals using a hotkey.

I recently got instant light and dark color scheme toggle working for all open terminals, including those running Vim. I used a combination of techniques from Pywal and Base16 shell, and learned some things about scripting in Linux and escape sequences along the way.


Pywal is a package for switching color schemes system wide. Mostly it is known for generating those color schemes from images, but it also comes bundled with a bunch of predefined themes. I wanted to use it to switch between gruvbox light and dark themes.

Pywal can change the color schemes for all open terminals automatically. It can also switch colors for several other Linux applications.

How Pywal works

This is what the gruvbox dark theme looks like in Pywal’s colorschemes directory:

# Pywal gruvbox colorscheme
{
  "special": {
    "background": "#282828",
    "foreground": "#a89984",
    "cursor": "#ebdbb2"
  },
  "colors": {
    "color0": "#282828",
    "color1": "#cc241d",
    "color2": "#d79921",
    "color3": "#b58900",
    "color4": "#458588",
    "color5": "#b16286",
    "color6": "#689d6a",
    "color7": "#a89984",
    "color8": "#928374",
    "color9": "#cc241d",
    "color10": "#d79921",
    "color11": "#b58900",
    "color12": "#458588",
    "color13": "#b16286",
    "color14": "#689d6a",
    "color15": "#a89984"
  }
}

A JSON file declaring each color. OK, but how do those colors get communicated to the applications? The customization instructions mention ~/.cache/wal a lot, so let’s see what’s in there:

# ls ~/.cache/wal
colors                      colors-putty.reg        colors.Xresources
colors.css                  colors-rofi-dark.rasi    colors-wal-dmenu.h   colors.yml
colors.hs                   colors-rofi-light.rasi   colors-wal-dwm.h     sequences
colors.json                 colors.scss              colors-wal-st.h      wal
colors-kitty.conf                 colors-wal-tabbed.h
colors-konsole.colorscheme  colors-speedcrunch.json  colors-wal.vim
colors-oomox                colors-sway              colors-waybar.css

Ah! It’s using the JSON color schemes to generate application-specific color scheme files. This is a great example of figuring out which level of abstraction to intervene at: Pywal defines a standard color scheme spec and uses application-specific templates to generate files from it. If anyone wants to add a new color scheme or application template, the procedure for doing so is clear and self-contained.
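The template step is simple enough to sketch. This is not Pywal’s actual template engine, just an illustration of the idea, with a made-up Xresources-style template:

```python
# Sketch of the idea behind Pywal's templates: one color-scheme dict,
# many application-specific files generated from placeholder templates.
# (Illustrative only -- Pywal's real templates live in its own package.)
scheme = {
    "background": "#282828",
    "foreground": "#a89984",
    "color1": "#cc241d",
}

# A hypothetical Xresources-style template with {placeholder} slots.
xresources_template = (
    "*.background: {background}\n"
    "*.foreground: {foreground}\n"
    "*.color1: {color1}\n"
)

def render(template, scheme):
    """Fill a template's placeholders from the color-scheme dict."""
    return template.format(**scheme)

print(render(xresources_template, scheme))
```

Adding support for a new application is then just a matter of writing one more template file against the same placeholder names.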

Live reload and escape sequences

To change color schemes in most applications, Pywal builds the color config file and sends a message to the application to reload. For terminals, it does something different. It uses ANSI escape codes, invisible character sequences that give a terminal color and formatting instructions, to instantly swap out the colors.

You can see how this works in Pywal’s source. The conversion from a JSON hex color to a terminal-readable escape sequence:

# from pywal/
def set_color(index, color):
    """Convert a hex color to a text color sequence."""
    if OS == "Darwin" and index < 20:
        return "\033]P%1x%s\033\\" % (index, color.strip("#"))

    return "\033]4;%s;%s\033\\" % (index, color)

Escape sequences, which I’ve only seen otherwise in terminal prompt customizations, are not easy to parse or write for a human, but as part of a script they’re a powerful way to achieve instant terminal color palette swaps. I don’t think anyone would design an API featuring anything like escape sequences today, but in this case they make for a much smoother experience than a “change config and reload” cycle.
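To make that concrete, here is roughly what the non-Darwin branch of set_color above produces: an OSC 4 sequence that tells the terminal to set one palette index (a small sketch for illustration):

```python
# What the format string in set_color produces: an OSC 4 escape
# sequence meaning "set palette index N to this color".
def osc4(index, color):
    """Build the palette-change sequence (non-Darwin branch above)."""
    return "\033]4;%s;%s\033\\" % (index, color)

seq = osc4(1, "#cc241d")
# ESC ] starts the control string, ESC \ (string terminator) ends it.
```

Printed to a supporting terminal, nothing visible appears, but palette entry 1 is instantly remapped for everything already on screen.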

Now let’s look at how the escape sequences get sent to the terminal:

# from pywal/
def send(colors, cache_dir=CACHE_DIR, to_send=True, vte_fix=False):
    """Send colors to all open terminals."""
    if OS == "Darwin":
        tty_pattern = "/dev/ttys00[0-9]*"
    else:
        tty_pattern = "/dev/pts/[0-9]*"

    sequences = create_sequences(colors, vte_fix)

    # Writing to "/dev/pts/[0-9]*" lets you send data to open terminals.
    if to_send:
        for term in glob.glob(tty_pattern):
            util.save_file(sequences, term)

    util.save_file(sequences, os.path.join(cache_dir, "sequences"))

This shows the power of Unix’s “everything is a file” approach. The script locates the file for each open terminal and writes the sequences directly to it (same as you would write to a text file). And it just works.

Vim issues

Pywal worked beautifully for me except for Vim. It may not be an issue depending on how your Vim and terminal color schemes are configured, but in my case, to get the proper color scheme I needed to not only swap the terminal colors but also toggle the background setting in Vim between light and dark. I eventually got this working using xdotool to trigger a toggle hotkey in Vim, but it was not nearly as clean a process as Pywal’s main write-directly-to-the-terminal approach. So I went hunting for other solutions.


Base16 is a standardized format for creating 16-color terminal color schemes. Those color schemes can then be combined with templates to produce color configurations for a wide range of applications. Base16 shell is a set of scripts that converts those color schemes into escape sequences to be applied to terminals.

The main draw of Base16 for me was that their Vim package lets you set a base Vim color scheme that works wonderfully with any Base16 terminal color scheme, no background setting change needed. (Pywal also has a version of this, but I was much less impressed with the base Pywal Vim color scheme.)

Applying Base16 to all open terminals

Base16 shell, unlike Pywal, only applies the new color scheme to your current terminal. This set-up has its own interesting possibilities (different color schemes for terminals you’ve sshed into; a random color scheme for each new terminal) but I wanted the color scheme to be applied globally. So I frankensteined a bit of Pywal into the Base16 shell script:

# Modified Base16 shell script
terms=`ls /dev/pts/[0-9]*`
terms="${terms} $HOME/.cache/base16/sequences"
for term in $terms; do
  # 16 color space
  put_template 0  $color00
  ...
done

I converted the Pywal send function into Bash, and wrapped the part of the shell script that sent the escape sequences. I also set it to save the sequences to a cache, to be run for each new terminal. This got me the exact terminal and Vim color swap I wanted. I set up a toggle script and assigned a hotkey using my window manager i3wm. If I want to swap color palettes on other applications, I can add the necessary steps into the toggle script. I like knowing exactly what the toggle script is doing, vs. Pywal’s “we’ll try and take care of everything we can”.
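The toggle logic itself is small. Here is a hypothetical Python version of it (my actual script is shell; the file names and cache layout here are assumptions, just to show the shape):

```python
import glob
import os

# Hypothetical toggle sketch: flip a light/dark state file, then write
# the matching cached escape sequences to every open terminal.
# Paths and file names are assumptions for illustration.
CACHE = os.path.expanduser("~/.cache/base16")

def toggle(cache=CACHE, tty_pattern="/dev/pts/[0-9]*"):
    """Flip the light/dark state and broadcast the matching sequences."""
    state_file = os.path.join(cache, "state")
    state = (open(state_file).read().strip()
             if os.path.exists(state_file) else "dark")
    state = "light" if state == "dark" else "dark"

    sequences = open(os.path.join(cache, "sequences-" + state)).read()
    # "Everything is a file": each open terminal is just a path to write to.
    for term in glob.glob(tty_pattern):
        with open(term, "w") as f:
            f.write(sequences)

    with open(state_file, "w") as f:
        f.write(state)
    return state
```

Bound to an i3wm hotkey, a script like this is the whole toggle; any extra applications to recolor just become more lines in the same function.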

The final result.

I just modified the shell scripts for the specific gruvbox color schemes I wanted, but the cleaner way to do it would be to modify the shell template and regenerate them all. For now, I’m happy I got everything working and learned more about escape sequences and the structure of the Linux file system in the process.

Lessons learned

Part of why I’m exploring Linux and scripting is to get a feel for how software could be more customizable. A few things were especially interesting to me here:

  1. Writing to all open terminals is a great example of the power of “everything is a file”. Being able to locate all the open terminals and send the escape sequences to them through the file system interface shifted my mental model about scripting possibilities. I usually think of applications and files as very separate, and this blurred that a bit. I’d seen people talk about the power of the file concept before, but this is one of the first times it’s been useful for something I was trying to do. I will spend some time thinking about how the file system concept could be applied to the software I make.
  2. Escape sequences. I’m trying to think if you would ever want to include them (or a concept like them) in an application created from scratch. I don’t think so. They’re useful when you want to do formatting and the only interface you have with the program is that you can write text characters to it. The style is embedded in the text, but because the embedding is invisible it’s going to be pretty unpredictable if you try and move it between programs.
  3. The power of plain text, and of being able to manipulate it directly. Lots of the config files for Linux applications are in a simple, plain text format. Coming from Javascript, I’m more used to taking in data as JSON and doing the manipulation in Javascript. In Linux you’re more likely to manipulate the text directly, and there’s a bunch of tools to help you do this. I’m sure that in some respects this leads to more formatting edge-case errors, but there’s also a beauty to the simplicity. You can see this in how Pywal handles changing color config for a lot of applications: generate a color config in the proper format, then just include that in the larger application configuration.



This thread, by Zach Gage, on how genre conventions serve as interaction shortcuts, got me thinking about how I use Vim conventions in my creative tools at Constraint Systems.

7/ A big part of making games involves working with genre literacy. In game design a key concept is the idea of weight: Every rule you add has a cognitive load on the player, and you must balance the weight of your rules against how meaningful they are to the play experience.

8/ An idea might be great, but if it makes the game unwieldy, ditch it. But genre-conventions are different – they’re weightless. They allow for an increased complexity and nuance in games, because they let designers include a huge number of rules without adding any weight.

Almost all of the experiments on Constraint Systems use Vim conventions: at least the hjkl characters for movement. One of the big reasons I started the experiments was my fascination with how I felt using Vim in the terminal. The combination of a strict character grid and keyboard controls provides a feeling of stability, and through that a calm, that I don’t feel in other programs, or using a computer in general.

This was especially in contrast to how I’ve felt when making gestural interfaces, or ones that simulate physics. Building those often felt like piling edge-case handler on top of edge-case handler. If you did it well you could make a pleasing user experience, as long as users stuck to the path you had prepared. If they wanted to go a different direction, or you wanted to take the program in a new direction, you had to deal with that unwieldy tower, either by rearchitecting it or by adding even more code to handle the new edge cases.

I wanted to strip things down and see if I could start from a more stable foundation, and I turned to Vim conventions to do that. It was a natural choice because I was chasing that feeling from Vim. Choosing Vim also gave me the interaction bootstrapping effect that Zach is talking about. Rather than asking the user to start from interaction scratch, I had the Vim foundation. That’s not directly relevant for the majority of people (Vim is only used by a subset of programmers), so it doesn’t solve everything, but it is a place to start.

Even for users not familiar with Vim conventions, I think there’s a benefit to starting the experiments there, rather than trying to introduce a new paradigm. Vim has proven itself to at least be useful to many people (and inspired a lot of loyalty). So there’s an implicit promise that even if this looks weird, you know it can be learned and at least some people have found it useful.

There’s a whole series of Vim-like programs, mostly terminal-based, that use similar key combinations. There’s also a number of browser extensions that let you use Vim keybindings in the browser. Tiling window managers (I use i3wm) also share a lot of conventions. Putting these all together, you can assemble a system for daily use that is keyboard-focused and mostly Vim-based. I’ve started referring to the Constraint Systems experiments as “alternative” interfaces. Vimlike interfaces are arguably the longest running, most fully fleshed out alternative interface for computers. I want to add to and learn from that system, and keep it alive in the face of the conventions (often imported from mobile/touchscreen design) that are dominating today.



Grant Custer is a designer-programmer interested in alternative interfaces.

You can see work and inspiration in progress on my Feed and my alternative interface experiments on Constraint Systems. I also design and build prototypes for Cloudera Fast Forward. I’m happy to talk on Twitter, email: grantcuster at gmail dot com, or Mastodon. You can see a full list of projects on my Index.