Sequences

Designed and developed while at Chatterbug

The Problem

At Chatterbug, we helped people learn new languages, such as German, Spanish and French. Specifically, Chatterbug was a web and mobile application that helped you discover and memorise new words. At some point, we wanted to start teaching students phrases instead of just single words. In our first attempt, we simply added full sentences into the existing curricula.

Two painful problems immediately arose:

  • Translation Prediction Problem. As creators of the curriculum, it was expensive and difficult for us to predict every correct translation of a phrase (in some cases, there are potentially dozens!).
  • Student Experience Problem. As a student, it was tiring to type the phrase out in full, every time it appeared in your memorisation session.

In this article, I'll share what I learned while solving each of these problems and how I created 'Sequences', a core feature of Chatterbug's platform.

Translation Prediction Problem

When you're asked to translate a German phrase into English such as:

🇩🇪 Wo ist hier die nächste U-Bahn Station?

How many different English translations could someone provide? There are probably around 30 permutations:

  • Where is the nearest subway station?
  • Where is the nearest subway stop?
  • Where is the closest subway station?
  • Where is the closest subway stop?
  • Where is the next subway station?
  • Where is the next subway stop?
  • Where is the nearest train station?
  • Where is the nearest train stop?
  • Where is the closest train station?
  • Where is the closest train stop?
  • Where is the next train station?
  • Where is the next train stop?
  • Where is the nearest Tube station?
  • Where is the nearest Tube stop?
  • Where is the closest Tube station?
  • Where is the closest Tube stop?
  • Where is the next Tube station?
  • Where is the next Tube stop?
  • Where is the nearest Underground station?
  • Where is the nearest Underground stop?
  • Where is the closest Underground station?
  • Where is the closest Underground stop?
  • Where is the next Underground station?
  • Where is the next Underground stop?
  • Where is the nearest metro station?
  • Where is the nearest metro stop?
  • Where is the closest metro station?
  • Where is the closest metro stop?
  • Where is the next metro station?
  • Where is the next metro stop?

We were seeing hundreds of issues where the student would respond with a translation that we did not anticipate. When students were penalised for failing to produce the specific translation we expected, they would typically write in to customer support and, in some cases, abandon their study session altogether.

Solution Pt. 1: Limit The Problem Space

To reduce the problem space, I started with two important constraints on how we would test phrasal content:

  1. We would break phrases into individual words, or groups of words, to be tested sequentially but always in the context of the phrase.
  2. We would test you only for target language input, never input in your own native language (which has far less pedagogical value).

I started with numbered steps, so that a phrase could be chunked up and tested in up to 4 distinct stages.

An early doodle of the core idea of sequences

Then the curriculum team needed a way to leave certain bits of the phrase out altogether, so I added a "never test" step. Later, to cement the whole phrase after each chunk had been tested, I added a final "review" step.

This solved the main problem of reducing the space for translations by the student and meant nobody needed to type full phrases over and over again.
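As a sketch of this stepping scheme (the names here are hypothetical, not Chatterbug's actual schema), each word in a phrase carries a step: a number for sequential testing, or :never for words that are shown but never tested:

```ruby
# A minimal sketch of the stepping scheme (hypothetical names).
# Each word is assigned a step: 1-4 for sequential testing, or :never
# for words that are displayed but never tested. A final review pass
# over the whole phrase would follow once every chunk has been answered.
SequenceWord = Struct.new(:text, :step)

phrase = [
  SequenceWord.new("Wo", 1),
  SequenceWord.new("ist", 1),
  SequenceWord.new("hier", :never),
  SequenceWord.new("die", 2),
  SequenceWord.new("nächste", 2),
  SequenceWord.new("U-Bahn", 3),
  SequenceWord.new("Station", 3),
  SequenceWord.new("?", :never)
]

# Group the words into the chunks a student would be tested on, in order.
testable_chunks = phrase
  .reject { |w| w.step == :never }
  .group_by(&:step)
  .sort
  .map { |_step, words| words.map(&:text).join(" ") }
# => ["Wo ist", "die nächste", "U-Bahn Station"]
```

Each chunk is small enough that the set of plausible translations shrinks dramatically, which is exactly what the constraint was designed to achieve.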

Solution Pt. 2: Predict Alternative Translations

Even with the problem space reduced, the individual chunks of a phrase could still be translated in several different ways. This is a problem that I needed to solve while creating the authoring tool for Sequences. In the screenshot below, you'll see how synonyms can be added below each word in the sequence. I also created some language-wide rules to automatically generate non-standard spellings of words, such as permitting the substitution of "ss" for "ß" in German.
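To illustrate how such a language-wide rule can work (a sketch with hypothetical names, not Chatterbug's actual implementation), a small substitution table can be expanded into a set of accepted spellings for any word:

```ruby
# Hypothetical language-wide substitution rules for German: each mapping
# generates an extra accepted (non-standard) spelling of a word.
GERMAN_RULES = { "ß" => "ss", "ä" => "ae", "ö" => "oe", "ü" => "ue" }

def accepted_spellings(word)
  variants = [word]
  GERMAN_RULES.each do |from, to|
    # For every variant found so far, also accept it with this rule applied.
    variants |= variants.map { |v| v.gsub(from, to) }
  end
  variants
end

accepted_spellings("Straße")   # => ["Straße", "Strasse"]
accepted_spellings("nächste")  # => ["nächste", "naechste"]
```

Because each rule is applied over all variants found so far, words containing several special characters produce every combination of standard and non-standard spellings.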

The resulting experience for the student is one where they can still progress, but are gently warned that it's a non-standard translation when they trigger one of these planned responses:


Results

Sequences was a success for both our students and the linguistics team.

  • Students reported 54% fewer missing answers for phrases than before.
  • Our linguists receive far fewer customer support requests about phrases, and have a tool they love to use and continue to innovate with.

Footnote on Building Ergonomic Tools

Internal tooling is a domain where the software developer can bring huge leverage, multiplying the thousands of hours of output from her colleagues. Yet it is also, sadly, the area that rarely receives its fair share of design attention.

With Sequences, my primary users were the linguistics team, who were responsible for creating the content.

From their workflows, it was clear the linguistics team preferred visual, drag-and-drop tools where the entire situation can be seen from a single screen grab. That's how they shared ideas with each other in Slack.

Contrast that with (for example) engineers, who typically prefer to work keyboard-first and often prioritise integration and automation potential when appraising new tools.

These preferences informed how I designed Sequences. Numeric steppers are colour-coded; they go up when you click them and down when you right-click them.

You can move words around by clicking and dragging them, to form new phrases.

The UI is simple, forgiving and self-explanatory.

Footnote on Tokenisation

Tokenisation is one of those problems that initially seems trivial... can't you just .split(" ")? But in fact, there are countless edge cases and nuances. This is especially true when you're working with non-English languages.

In written French, for example, there is typically a space before colons, question marks and other tall punctuation marks (e.g. « Comment ça va ? »).

Furthermore, quotes are marked « like this » in French, capitalisation in German is not limited to proper nouns and sentence-initial words, Spanish has ¿ and so on...

If you're using Python, there are powerful packages such as spaCy and NLTK to handle this linguistic complexity.

In Ruby, however, there were no well-maintained, multilingual tokenisation gems at the time of writing. So we settled on a lengthy but well-tested regex to scan and tokenise the string, with additional rules based on the features of the language in question:

# Pass 1: break the input string into whitespace-separated tokens.
# /[^\s]+/ only matches runs of non-whitespace, so spaces never appear as tokens.
tokens = initial_string.scan(/[^\s]+/)
# Pass 2: split punctuation (question marks, exclamation marks, quotes, etc.)
# away from the words it is attached to, so each run becomes its own token.
tokens = tokens.flat_map { |token| token.scan(/[\]\[¡!"&()*+,.\/:;<=>¿?@\^_`{|}~-]+|[^\]\[¡!"&()*+,.\/:;<=>¿?@\^_`{|}~-]+/) }
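For example, running these two passes over a Spanish phrase splits the inverted and closing question marks into their own tokens:

```ruby
# Tokenising a Spanish phrase with the two-pass approach described above.
initial_string = "¿Dónde está la estación?"

# Pass 1: split on whitespace.
tokens = initial_string.scan(/[^\s]+/)
# Pass 2: separate punctuation runs from word runs.
tokens = tokens.flat_map { |token| token.scan(/[\]\[¡!"&()*+,.\/:;<=>¿?@\^_`{|}~-]+|[^\]\[¡!"&()*+,.\/:;<=>¿?@\^_`{|}~-]+/) }
# => ["¿", "Dónde", "está", "la", "estación", "?"]
```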

As a result, we never need to store the phrase as a full string. Instead, it is stitched back together according to the order of the words, which makes it easy to manipulate.
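That reconstruction could look something like this (a hypothetical helper, not the production code): join the ordered tokens with spaces, then tighten the spacing around punctuation for languages that don't want it:

```ruby
# Hypothetical reconstruction of a phrase from its ordered tokens.
# French keeps a space before tall punctuation (?, !, :, ;); most other
# languages want that space removed.
def detokenize(tokens, language: :en)
  text = tokens.join(" ")
  return text if language == :fr
  text.gsub(/\s+([?!.,;:])/, '\1')
end

detokenize(["Where", "is", "the", "station", "?"])           # => "Where is the station?"
detokenize(["Où", "est", "la", "gare", "?"], language: :fr)  # => "Où est la gare ?"
```

Keeping spacing rules in one place like this means each language's punctuation conventions are applied consistently wherever a phrase is rendered.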

Footnote on deciding to use React and GraphQL

Although internal tooling is often a domain where you can comfortably rely on Rails views (with a sprinkle of JS), Sequences was an exception. As you can see from the interactions above, there is a non-trivial amount of editable, dynamic state, with implications for both function and form. Moreover, each word within the sequence has subcomponents (the word itself, the coloured stepper and the list of responses) which are themselves interactive and conditionally rendered. It didn't take long after choosing React for it to become a net saving in cognitive overhead.

GraphQL (using Robert Mosolgo's graphql-ruby gem) was a complementary choice, for similar reasons. Sequences have lots of nested collections from which we pluck specific fields, and GraphQL handles this cleanly and intuitively. Adding new features (like groups) becomes little more than adding a new field to the query string and wiring up the flow of data. By batching with Shopify's graphql-batch gem, we were also able to minimise N+1 queries.