Rapid Keyless Character Entry

Imagine that you open the virtual keyboard on your mobile phone or tablet and get a blank rectangle. There is no keyboard. There are no keys. Now imagine that you place a hand over the blank area and tickle it with your fingers. Your hand doesn't move; only your fingers move, like you're tickling. They move really fast. Now imagine that you aren't looking at your fingers. Not at all, not ever. You're only looking at the text that your fingers are entering, and the text is coming fast. Now imagine that you are entering a chapter of your book while pacing at a bus stop.

Status

I took this proposal and prototype through the 2018 and 2019 Founder Institute programs in Austin and identified the following impediments to realizing this product:

  • Users need to see demos of people typing faster than they can already type using existing virtual keyboards. I spent 2020 optimizing gestures to adapt to each user's hand and to how the user's gesturing evolves while typing, and I got a dramatic speed improvement. However, I would also need to write a training program to help people develop high speed.
  • Very few Android devices have touchscreens of high enough quality to support five simultaneously gesturing fingers. Android is therefore not a viable platform, and yet because iOS did not allow custom keyboards when I started the project, the entire system is written in Java for Android. It would need to be ported to Swift or Rust for use on iOS.
  • An additional year of full-time development remains even on just the existing Android implementation, and I don't have time for that.
  • History has shown again and again that users expect their keyboards to be free. The only viable option appears to be OEMing. But OEMing a product that requires training seems like an uphill battle.
  • Gesturing is difficult on existing touchscreen technology (even on Apple devices) when the air is humid or the fingers are damp, such as when the user is sweating; under these conditions, input becomes effectively impossible.
  • I've spent all the money I'm willing to spend on it.

Therefore, DEVELOPMENT IS INDEFINITELY ON HOLD, except for a pending patent.

Introduction

See this demonstration video I made when taking my hypothetical company, Consonant I/O, through the Founder Institute program.

The typewriter has jammed our thinking. We have been equating character entry with pressing keys. When we needed more characters, we thought to add more keys or more combinations of pressing keys. We have to align our hands to the keyboard and train our fingers to the locations of the keys. What if instead of having to target keys, we just flicked our fingers forward or backward different distances? This way, we would only need to know which fingers move and how far they move and not have to worry about exactly where fingers touch. There would be no keys to miss. Not having to target well-spaced keys, the fingers would hardly have to move at all. Characters could be entered quite fast. This is how the character entry system I designed works.

Product

I am developing a gesture-based keyboard replacement for Android and for iOS 8. The system is currently prototyped on Android and working as expected. I am in the process of productizing the implementation to support multiple users and to provide a virtual keyboard option for people who are not familiar with the system. The system is designed to be easy to use and easy to learn, but it does take training to use effectively. It is therefore important to also produce a training program that will allow people to try out the system and learn it before installing it as the default keyboard.

Patented

The technology underlying this character entry system is patented as US Patent 10,082,950 B2, "Finger-Mapped Character Entry Systems." Additional patents are pending. This article describes the product in development rather than the patent or patents pending.

How It Works

Suppose you want to enter a text note into your touchscreen phone or tablet. Normally a virtual keyboard would pop up and you'd begin pressing buttons. With this gesture system installed, the space for the virtual keyboard appears, but there is no keyboard in the space. Instead you might see a few buttons at the top for help or managing users, but most of the input area is blank. Now place a hand over the blank area so that it has the following posture:

This is roughly the posture a hand takes when resting on the home row to touch-type on a conventional keyboard. Don't let the fingers touch yet, however. If the touchscreen is large enough, you may rest the thumb on the touchscreen, as in the following illustration:

You may enter text with either hand, but before you can begin entering characters, you must tell the gesture system which hand you're using and where your fingers are on the touchscreen. You do this by drumming your fingers, starting roughly with the pinky finger and ending roughly with the index finger. If you're using your left hand, the fingers drum from left to right; if you're using your right hand, they drum from right to left.
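To make the drum concrete, here is a minimal sketch of how hand identification might work, assuming the platform reports the four touch-down x coordinates in drum order. The class and method names are hypothetical, not the shipped Java implementation:

```java
// Hypothetical sketch: identifying the drumming hand from touch order.
// Assumes four touch-down events arrive in drum order, each with an x coordinate.
import java.util.List;

final class DrumDetector {
    enum Hand { LEFT, RIGHT, UNKNOWN }

    // Left hand drums pinky-to-index left to right; right hand right to left.
    static Hand identifyHand(List<Float> touchDownXs) {
        if (touchDownXs.size() != 4) return Hand.UNKNOWN;
        boolean increasing = true, decreasing = true;
        for (int i = 1; i < touchDownXs.size(); i++) {
            if (touchDownXs.get(i) <= touchDownXs.get(i - 1)) increasing = false;
            if (touchDownXs.get(i) >= touchDownXs.get(i - 1)) decreasing = false;
        }
        if (increasing) return Hand.LEFT;   // pinky landed leftmost
        if (decreasing) return Hand.RIGHT;  // pinky landed rightmost
        return Hand.UNKNOWN;                // ambiguous drum; just drum again
    }

    public static void main(String[] args) {
        // Right hand: pinky first at x=300, index last at x=60.
        System.out.println(identifyHand(List.of(300f, 220f, 140f, 60f))); // RIGHT
    }
}
```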

Let's suppose you're using your right hand and that you drummed your fingers pinky first and index last. Each finger touches a spot on the touchscreen, as in the following illustration, which shows the order in which each spot was touched:

The gesture system now divides the input area into regions so that each finger has its own region, as in the following illustration:

In this simple example, the boundaries between regions are vertical and split the horizontal distance between the points touched. Never mind exactly how the regions are allocated to fingers. All that matters is that each region is associated with a particular finger. In this case, we have the following association of regions with fingers:

It no longer matters exactly where you touched when you drummed your fingers, so the points don't appear in the diagram. The gesture system uses these regions to figure out which finger is gesturing. For example, if the system detects a gesture starting in the index region, it assumes that the index finger is performing the gesture. Likewise, if a finger starts gesturing in the ring finger region, the system assumes that you are gesturing with the ring finger. These assumptions normally hold, and even when they don't, the gesture system can sometimes guess the right finger anyway. If you find the gesture system guessing fingers wrong, all you have to do is drum your fingers again to realign the system with your hand—you don't even have to look down at your fingers to see what the problem might be.
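As a rough illustration of this vertical-region case, the following sketch splits the input area at the midpoints between adjacent drum points and looks up the finger for a gesture's starting x coordinate. The names and details are assumptions for illustration, not the actual code:

```java
// Hypothetical sketch: boundaries bisect the horizontal distance between
// adjacent drum points, and a gesture's starting x selects the finger.
final class FingerRegions {
    private final float[] boundaries = new float[3]; // three cuts -> four regions
    private final String[] fingers;                  // region index -> finger name

    FingerRegions(float[] drumXs, boolean rightHand) {
        // drumXs are the four drummed touch points, in any order.
        float[] xs = drumXs.clone();
        java.util.Arrays.sort(xs); // arrange regions left to right
        for (int i = 0; i < 3; i++) boundaries[i] = (xs[i] + xs[i + 1]) / 2f;
        fingers = rightHand
                ? new String[] { "index", "middle", "ring", "pinky" }
                : new String[] { "pinky", "ring", "middle", "index" };
    }

    String fingerAt(float x) {
        int region = 0;
        while (region < 3 && x > boundaries[region]) region++;
        return fingers[region];
    }

    public static void main(String[] args) {
        FingerRegions regions = new FingerRegions(new float[] {300, 220, 140, 60}, true);
        System.out.println(regions.fingerAt(150)); // "middle"
    }
}
```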

It is not ideal to vertically align regions as shown in the preceding diagrams; vertical alignment is best suited to narrow phones, where it maximizes the space available to each finger for gesturing. Normally we orient the device to our eyes, leaving our hands touching the device at an angle. Where there is room to angle the regions on the touchscreen, such as on a tablet or a large-screen phone, that is what we would do. Here again we can drum the fingers, this time at an angle relative to the device, and the gesture system can guess a way to divide the input area into regions. However, it is also possible for the gesture system to collect more information about the hand in order to maximally tailor the regions to the way the fingers move on that hand. The following diagram shows some of the additional information that the gesture system may collect:

Don't worry about what all of this means. Just realize that there is a lot of room for the gesture system to optimize input to each particular hand. This optimization for a hand is called a "hand calibration." The gesture system maintains a separate calibration for each hand, and each user has his or her own set of hand calibrations. There are ways for the gesture system to collect this information gradually without you realizing it, but there are also ways for you to enter this information more explicitly to optimize the calibration in a hurry. For example, you can tickle your fingers up and down rapidly on the gesture surface. This won't cause character input. Instead, it tells the gesture system how far your fingers tend to move and the angles at which they move.
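Here is a minimal sketch of what a hand calibration might accumulate from tickle strokes, assuming each stroke arrives as a per-finger displacement. The real calibration surely stores more; these fields and names are illustrative:

```java
// Hypothetical sketch of a per-finger hand calibration, accumulating the
// travel distance and angle of "tickle" strokes as running means.
final class HandCalibration {
    static final int FINGERS = 4;                          // index..pinky
    private final float[] meanLength = new float[FINGERS];
    private final float[] meanAngle = new float[FINGERS];  // radians from vertical
    private final int[] samples = new int[FINGERS];

    // Record one observed stroke for a finger, as a displacement (dx, dy).
    // Screen y grows downward, so a forward (up-screen) brush has dy < 0.
    void addSample(int finger, float dx, float dy) {
        float length = (float) Math.hypot(dx, dy);
        float angle = (float) Math.atan2(dx, -dy);  // 0 = straight forward
        int n = ++samples[finger];
        meanLength[finger] += (length - meanLength[finger]) / n;
        meanAngle[finger] += (angle - meanAngle[finger]) / n;
    }

    float typicalLength(int finger) { return meanLength[finger]; }
    float typicalAngle(int finger)  { return meanAngle[finger]; }

    public static void main(String[] args) {
        HandCalibration cal = new HandCalibration();
        cal.addSample(1, 4f, -48f);                 // middle finger, mostly forward
        System.out.println(cal.typicalLength(1));   // ~48.2
    }
}
```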

If the device allows you to input a calibration that is angled relative to the device, as in the previous illustration, the gesture system will infer regions for each finger that are also at angles. For example, an angled calibration might produce the following input regions:

The gesture system normally will not show these input regions. The input regions adapt to your needs, and you should never have to attempt to line your fingers up to match them. If your fingers ever appear to be out of alignment, all you have to do is drum your fingers again, or perhaps tickle them briefly to further refine the calibration. The gesture system will orient to your fingers.

Okay, let's assume that you've put your hand in the correct posture and input a calibration for the hand. The regions are there for you to use, even if invisible. Now it's time to input characters. To input a character, leave the hand where it is relative to the device and brush a finger up or down on the touchscreen. It helps to anchor the thumb on the side of the device or on the touchscreen; a thumb on the touchscreen won't interfere with the gestures. Anchoring keeps the fingers positioned over the regions that you calibrated for them. The fingers can move as follows:

Each finger moves by brushing the touchscreen. Don't press the fingers into the touchscreen; brush them by dragging each fingertip up or down. A finger uncurls (or extends) to brush forward and curls (or flexes) to brush backward. Not many touchscreens are sensitive to gesturing with the tips of fingernails, so for now, the gesture system effectively requires short nails. The first drawing shows that the thumb may optionally touch, with no effect. The second drawing shows that the pinky finger may also optionally touch. When the pinky finger touches without gesturing, it's like holding a shift key down: the characters that the other fingers gesture change. This is called the "pinky shift." The number of times that the thumb or pinky taps before remaining in contact with the touchscreen affects the meaning of the gestures.
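The modifier state described here might be modeled roughly as follows. The layer names and the treatment of touch counts are assumptions for illustration:

```java
// Hypothetical sketch of modifier state: an anchored, non-gesturing pinky acts
// as a shift, and the taps before it settled (its "touch count") select layers.
final class GestureModifiers {
    boolean thumbAnchored;  // resting thumb: permitted, no effect on characters
    boolean pinkyShift;     // pinky down and motionless while other fingers gesture
    int pinkyTouchCount;    // taps before the pinky settled into its shift

    // A character map would be chosen from these modifiers, e.g.:
    String layer() {
        if (!pinkyShift) return "base";
        return pinkyTouchCount <= 1 ? "number-pad" : "extra-" + pinkyTouchCount;
    }

    public static void main(String[] args) {
        GestureModifiers mods = new GestureModifiers();
        mods.pinkyShift = true;
        System.out.println(mods.layer()); // "number-pad"
    }
}
```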

Let's look at an example gesture. In the following illustration, the middle finger touches in the middle finger region and brushes backward. How far the finger moves determines the character that is input. If the middle finger moves a short distance backward, it inputs the letter 'e'. If it moves a long distance backward, it inputs the letter 'c'. The character is input when you lift the finger. Which finger you use, the direction in which you move it, and how far you brush it from the time you touch to the time you lift determine the character. In this illustration, if the finger lifts at the arrowhead, the gesture inputs a 'c':

When a single finger brushes the touchscreen, the gesture is called a "stroke." Here are the characters that strokes input in the current implementation:

In this diagram, a finger begins gesturing at the circled dot and brushes either up or down. How far it brushes in that direction determines the character. Although this diagram shows fingers starting in the middle of their regions, they can actually start anywhere in their regions. If the device allows it, they can even end the gestures outside of all of the regions. A finger may even start in its own region and end in another finger's region. All that matters is where the finger starts, the direction in which the finger moves, and how far the finger moves before lifting.
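Here is a minimal sketch of stroke recognition as just described, assuming a calibrated distance threshold separates the two gesture levels. Only the middle-finger 'e'/'c' pair comes from the example above; everything else is a placeholder:

```java
// Hypothetical sketch: the finger, the brush direction, and the distance
// traveled before lift-off select the character.
final class StrokeRecognizer {
    enum Direction { FORWARD, BACKWARD }

    // Level 1 = short brush, level 2 = long brush, split at a calibrated threshold.
    static int gestureLevel(float distance, float calibratedThreshold) {
        return distance < calibratedThreshold ? 1 : 2;
    }

    static char charFor(String finger, Direction dir, int level) {
        if (finger.equals("middle") && dir == Direction.BACKWARD) {
            return level == 1 ? 'e' : 'c';  // from the example above
        }
        return '?';  // remaining entries come from the full layout tables
    }

    public static void main(String[] args) {
        // A middle-finger brush 90 px backward with a 60 px threshold: long stroke.
        int level = gestureLevel(90f, 60f);
        System.out.println(charFor("middle", Direction.BACKWARD, level)); // 'c'
    }
}
```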

A few letters are missing from the previous diagram. That diagram implements only two gesture lengths in each direction; that is, each gesture has two "gesture levels." We could add more characters by increasing the number of gesture levels, but it turns out that two gesture levels are easy to learn and three are hard to learn, so it's best to stay at two. We could also add more characters by giving meaning to movement left and right of the forward and backward directions indicated, but this too is problematic. Character input is fastest when the hand remains stationary and only the fingers move, as we strive to do when touch-typing on a conventional keyboard. Besides, if the hand starts moving left and right, we risk the fingers falling out of alignment with their input regions, requiring the user to constantly drum to recalibrate.

Instead, we add the remaining characters by detecting adjacent fingers that brush together in the same gesture. Don't worry, the fingers don't have to touch or lift at exactly the same time. It's easy to learn to brush adjacent fingers together. We just don't want to get into brushing arbitrary combinations of fingers, as that's too much to ask of most people. For example, pairs of fingers might brush forward together as in the following drawings:

A gesture in which two or more fingers brush at the same time, in roughly the same direction, is called a "sweep." You can think of the length of a sweep as being the average of the lengths of individual strokes that compose the sweep. For example, in the following illustration, the middle and ring fingers touch their regions at the circled points. They gesture forward together along their respective dashed lines. The heavy line between these dashed lines represents the average of these two gestures—no finger is actually gesturing along this heavy line. The length of the heavy line determines the gesture level. In this example, if the length is short, the gesture inputs 'v', and if the length is long, it inputs 'y'.
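A sketch of sweep measurement under the averaging rule just described, using the 'v'/'y' example; the 60-pixel threshold is an assumption:

```java
// Hypothetical sketch: a sweep's length is the average of its component
// strokes' lengths, and that average sets the gesture level.
final class SweepRecognizer {
    static float sweepLength(float[] strokeLengths) {
        float sum = 0f;
        for (float len : strokeLengths) sum += len;
        return sum / strokeLengths.length;
    }

    public static void main(String[] args) {
        // Middle finger travels 40 px, ring finger 70 px: the sweep measures 55 px.
        float length = sweepLength(new float[] {40f, 70f});
        // With an assumed 60 px threshold, 55 px is a short sweep: 'v' rather than 'y'.
        System.out.println(length < 60f ? 'v' : 'y');
    }
}
```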

Allowing sweeps of adjacent fingers in two directions at two gesture levels gives us quite a few more characters:

In fact, we have now mapped the entire English alphabet to gestures, and we didn't even have to map any three- or four-finger sweeps. Those can be used for other application functions. However, we are still missing numbers and quite a few punctuation characters. To get the numbers, we just have the pinky shift give us a number pad. That is, if you hold the pinky finger down, without gesturing it, and gesture with one of the other fingers, you input characters as follows:

We can summarize the characters so far mapped using the following compact table:

For angled regions, we might represent the table as follows:

Don't let the partitioning in these tables deceive you; everything you have learned so far still applies. Fingers can begin their gestures anywhere in their respective regions and end the gestures pretty much anywhere, provided that they go in the proper direction. These tables just provide a compact way to depict which character the combinations of fingers, the gesture direction, and the gesture level indicate. Each is a good cheat sheet. Notice that the most frequently typed letters of English (a, e, i, o, u, n, s, and t) are available to the fastest gestures—the short strokes. Notice also that, aside from the short strokes, once you know the letter or number associated with the index finger, you automatically know the letters or numbers associated with the remaining fingers because they are sequential for the same direction and gesture level.
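Structurally, such a compact table might be keyed by the finger combination, the direction, and the gesture level, as in the following sketch. Only the entries stated earlier in this article are filled in, and the key format is an assumption:

```java
// Hypothetical sketch of the layout table's shape: a key combining the finger
// set, direction, and gesture level indexes the character.
import java.util.HashMap;
import java.util.Map;

final class Layout {
    private final Map<String, Character> table = new HashMap<>();

    Layout() {
        table.put("middle|backward|1", 'e');      // short stroke
        table.put("middle|backward|2", 'c');      // long stroke
        table.put("middle+ring|forward|1", 'v');  // short sweep
        table.put("middle+ring|forward|2", 'y');  // long sweep
    }

    Character lookup(String fingers, String direction, int level) {
        return table.get(fingers + "|" + direction + "|" + level);
    }

    public static void main(String[] args) {
        System.out.println(new Layout().lookup("middle+ring", "forward", 2)); // y
    }
}
```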

While the above table appears to only be for the right hand, it also applies to the left hand. This facilitates learning to gesture with both hands, as our hands tend to behaviorally mirror each other. It's helpful to be able to periodically change hands, should your hands need rest.

These tables show that we have gestures for the lowercase letters, for the numbers, and for the period and the space. Here are some other ways to select additional characters:

  • To get an uppercase character, you perform the gesture for the lowercase character, but before you lift the finger or fingers, briefly reverse the direction of the gesture. This is called a "backbrush" qualifier. For example, to input a capital 'E', gesture the middle finger a short distance backward and then a short distance forward before lifting. (See the sketch after this list.)
  • Because the numbers, the period, and the space do not have uppercase equivalents, backbrushing the gestures for these characters provides access to alternative characters.
  • Any gesture can be qualified by tapping the fingers one or more times immediately prior to performing a gesture with the same fingers. The number of taps that precedes the gesture further selects the character indicated. This is called a "touch count" qualifier. For example, you can input a colon by rapidly tapping the pinky finger twice and then brushing the pinky forward a short distance before lifting from the second tap. Brushing the pinky a long distance forward in this case produces an exclamation point.
  • The index finger can provide a shift function just as described for the pinky finger, allowing the middle, ring, and pinky fingers to select additional characters.
  • The thumb, the index finger, or the pinky finger may tap a number of times immediately before touching the touchscreen and remaining motionless for a shift. The number of times tapped—the "touch count"—further determines the character indicated.
  • The gesture system can provide conventional shift, control, option, and function buttons on the input area. The user can press these buttons with the hand that is not gesturing.
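As a rough sketch of the backbrush qualifier from the first item above, detection might compare the gesture's peak travel with its travel at lift-off. The threshold and names are assumptions:

```java
// Hypothetical sketch: a brief direction reversal before lift-off marks a
// backbrush, which (for letters) selects the uppercase character.
final class Qualifiers {
    // Net travel peaked, then retreated by at least minReversal before lift.
    static boolean isBackbrush(float peakTravel, float travelAtLift, float minReversal) {
        return peakTravel - travelAtLift >= minReversal;
    }

    static char applyCase(char base, boolean backbrushed) {
        return backbrushed ? Character.toUpperCase(base) : base;
    }

    public static void main(String[] args) {
        // The finger brushed 70 px backward, then returned 15 px before lifting.
        boolean back = isBackbrush(70f, 55f, 10f);
        System.out.println(applyCase('e', back)); // 'E', as in the example above
    }
}
```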

Finally, we have so far only described ways to perform character entry via forward and backward strokes and sweeps. We can assign some infrequently used characters or behaviors to left and right strokes and sweeps; for example, it is useful to input an em-dash as a long stroke to the right. The gesture system also implements gestures that are neither strokes nor sweeps and that do not use the finger regions, providing a number of intuitive gestures for moving the cursor and performing clipboard operations. Many of these gestures function even without an established hand calibration. There is also a way to auto-repeat character input at a selectable repeat rate.

As you might imagine, everything here is subject to change prior to releasing the product.

Future Applications

Replacing the virtual keyboard on a mobile phone or tablet is only the most immediate application of the underlying technology. The gesture system can also be used in place of a conventional keyboard with a desktop or laptop computer, should it prove more pragmatic than the keyboard for some people. A user could do all character entry at a computer with only one hand, a situation that is ideal for some people with disabilities. The gesture system can also be made more convenient on the go by making it a patch you wear on your pants, perhaps communicating via Bluetooth with your phone, visor, or smart watch. Because it does not require any visual attention, the gesture system may also be useful for controlling functions in a car, for controlling robots, or for controlling surgical instruments.

Perhaps most futuristically, the gesture system allows users to walk up to a large touchscreen table from any direction, touch any place on the table at any angle, and begin rapid character entry. Multiple users could be engaged in character entry at different points on the table. The patent application describes a "squeeze" calibration gesture that accommodates this scenario.