Chen Liu, H. A., Patel, S., Verma, A., Ramesh, K., and MacKenzie, I. S. (2025). A comparison of four sensor-based input methods for scanning keyboards. Proceedings of the 15th International Conference on Human Interaction & Emerging Technologies -- IHIET 2025, pp. 377-387. USA: AHFE Open Access. doi:10.54941/ahfe1006730.
ABSTRACT
A Comparison of Four Sensor-based Input Methods for Scanning Keyboards
Haobin Alturo Chen Liu, Sarika Patel, Aakanksha Verma, Keerthana Ramesh, I. Scott MacKenzie
Dept. of Electrical Engineering and Computer Science
York University, Toronto, Canada
Four sensor-based methods for computer input were compared. The methods were button, accelerometer, flex sensor, and pressure sensor. The sensors were held in the user's hand in a grip position (button, pressure) or attached via a Velcro band either to the index finger (flex) or to the back of the hand (accelerometer). The methods were used for the select operation with a single-switch scanning keyboard using a Qwerty letter arrangement. The setup used a 700 ms scanning interval, a 200 ms scanning delay (after each selection), and auditory feedback for switch activations. Twelve participants completed five text-entry tasks with each sensor. The text-entry rates were slow, but in the expected range for single-switch input. Button selection was the fastest at 2.35 words per minute (wpm) and had the highest efficiency at 82%. The flex sensor followed at 2.26 wpm with 77% efficiency, followed by the pressure sensor at 2.07 wpm with 74% efficiency. The accelerometer was the slowest at 1.89 wpm and had the lowest efficiency at 68%. Statistical tests indicated a significant effect of sensor type on entry speed and efficiency, though post hoc comparisons revealed no pairwise significance, potentially due to the limited sample size. Qualitative results supported the findings: The button sensor received the most favourable user ratings across comfort, fatigue, and preference, with six of twelve participants selecting it as their preferred choice. Four participants selected the flex sensor as their preferred choice.

Keywords: Accessible technologies, interaction devices, scanning keyboard, sensor devices and platforms, text input
INTRODUCTION
As technology evolves, so too does the need for accessible methods for users who face challenges using a conventional keyboard or pointing device. Individuals with poor motor control can greatly benefit from scanning keyboards, which allow single-switch selections while a scanning highlight cycles through options. Although these keyboards enable interaction with digital systems, the efficiency and comfort of the interactions depend heavily on the input mechanism. Different sensor types offer various modes of interaction for controlling scanning keyboards, and choosing the most effective sensor can significantly enhance usability, speed, and accuracy in communication.
Research in human-computer interaction (HCI) has explored sensor-based input methods as tools for accessibility, focusing on improving usability for diverse demographics. Studies indicate that these systems boost input speed and accuracy when customized to meet the needs of users, especially those relying on assistive technologies (Cowan et al., 2012). By examining various sensor types, researchers aim to develop input solutions that are both user-friendly and tailored to specific physical limitations.
In this context, the present study compares four sensors – button, accelerometer, flex, and pressure – to determine their effectiveness in controlling a scanning keyboard. Each sensor type has unique benefits that make it suitable for different motor abilities.
With the assistance of users with full motor control, this study evaluates the sensors' performance in terms of speed, accuracy, efficiency, and user satisfaction. The goal is to identify the most effective sensor for facilitating interaction with a scanning keyboard based on these criteria. The findings assist the future development of accessible input techniques in HCI, helping designers create assistive devices that better meet users' needs.
BACKGROUND
Scanning Keyboards and Accessibility
Scanning keyboards are essential tools for individuals with disabilities, providing an alternative input method for those with limited motor function. Traditional keyboards require direct key presses, which are impractical for individuals unable to perform precise or repetitive hand or finger movements. Scanning keyboards enable these users to interact with digital content by physically or cognitively selecting options as they are sequentially highlighted on the screen. The user signals their choice when the desired option is highlighted, typically by activating a specific sensor or input device. However, this method's effectiveness heavily depends on the input mechanism's ease of use, comfort, and accuracy.
Sensor Technologies in HCI
A sensor acts as a channel, detecting changes in its environment and transmitting the information to other electronic devices, typically a computer processor. Active sensors, which are the focus of this study, convert physical phenomena into quantifiable digital signals that are displayed, interpreted, or further processed (Javaid et al., 2021). Figure 1 provides a conceptual representation of a sensor's operation.
Advancements in sensor technology have expanded the range of input methods for users with limited motor abilities. Each sensor type varies in terms of physical activation requirements, making each suitable based on user-specific needs and abilities. For instance, button inputs, commonly activated by pressing, are known for their simplicity and high accuracy, although extensive use may lead to fatigue. Accelerometers, which detect mechanical motion, allow hands-free interaction but may be less precise than direct-touch methods due to potential involuntary motions and environmental challenges. Flex sensors offer a more intuitive input method for those with limited mobility yet sufficient control to bend a joint. Alternatively, pressure sensors provide an option requiring less precise control, potentially making them easier for users with reduced motor ability.
Figure 1: Diagram illustrating the basic operation of a sensor (after Javaid et al., 2021, Figure 1).
RELATED WORK
Scanning Keyboards
A study by Bhattacharya et al. (2008) reports two types of errors that occur when users enter text with a scanning keyboard: timing errors and selection errors. Timing errors occur when the user fails to press the key at the appropriate time; selection errors occur when the user selects the incorrect key. The two error types are clearly correlated.
Sensors as Inputs
Elsahar et al. (2019) discuss a variety of possible options for individuals with speech disabilities for using signal sensing and acquisition methods in combination with high-tech augmentative and alternative communication (AAC) solutions. The possible options to aid with communication discussed in their paper include imaging methods, touch-enabled inputs, mechanical and electro-mechanical access, breath-activated methods, and brain-computer interfaces. Mechanical and electro-mechanical AAC devices include both direct and indirect selection access. A scanning keyboard is an example of an indirect selection method. Some strengths of mechanical and electro-mechanical AAC devices include requiring minimal motor control, providing instant feedback to users when a key is pressed, and being inexpensive. As for disadvantages, they require voluntary muscle control and are generally slow.
In another study, Abrams et al. (2018) developed alternatives to the mouse and keyboard for individuals with a neuromuscular motor disorder. They used piezo sensors, inertial measurement units, and force resistance sensors as input devices. Their study involved the creation of multiple devices, including two headbands equipped with a piezo sensor and a force-sensing resistor to detect muscle contractions from blinking or frowning, a five-panel touchpad with force-sensing resistors to detect pressure, and a headgear containing an inertial measurement unit to measure head tilt angle. In a case study with one potential user, all devices were deemed viable alternatives to the mouse and keyboard; however, the user did not consider the head-mounted devices likely personal options due to lack of comfort.
METHOD
An empirical evaluation of four input methods with a scanning keyboard was conducted to compare text-entry speed, accuracy, and user satisfaction. All input methods were gesture-based sensors.
Participants
Twelve local university students were recruited on a voluntary basis. None had physical impairments; all had reasonable computer skills and little to no experience with scanning keyboards. Participants were three males and nine females, aged 21 to 27 years.
Apparatus
Four sensor types (see Figure 2) were tested:
- Button → a handheld pen-like sensor with a pressable knob at the top; detects presses.
- Accelerometer → a wearable sensor wrapped around the hand; detects rotational movement along the z-axis.
- Flex Sensor → a wearable sensor wrapped around a finger; detects finger bending.
- Pressure Sensor → a handheld balloon-like sensor; detects the change in air pressure when squeezed.
Figure 2: Sensors used in the evaluation: (a) button, (b) accelerometer, (c) flex, and (d) pressure.
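The accelerometer, flex, and pressure sensors produce continuous readings that must be converted into discrete select events. A minimal sketch of one way to do this in Python, assuming hypothetical 10-bit ADC values and illustrative thresholds (the study's actual threshold values are not reported here):

```python
class SelectDetector:
    """Convert a continuous sensor reading into discrete select events.

    Uses hysteresis (separate press and release thresholds) so that
    noise around a single threshold does not register as repeated
    selections. Threshold values are illustrative only.
    """

    def __init__(self, press_threshold=600, release_threshold=400):
        self.press_threshold = press_threshold
        self.release_threshold = release_threshold
        self.pressed = False

    def update(self, value):
        """Feed one raw reading (e.g., a 10-bit ADC value, 0-1023).

        Returns True exactly once per activation, on the rising edge.
        """
        if not self.pressed and value >= self.press_threshold:
            self.pressed = True
            return True
        if self.pressed and value <= self.release_threshold:
            self.pressed = False
        return False
```

The two-threshold design means a reading hovering near the activation point cannot oscillate between pressed and released states, which would otherwise produce spurious selections during scanning.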
The experiment ran on a Windows 11 laptop. The sensors connected to an Arduino UNO R3, which converted the sensor signals into serial data. See Figure 3. A Python program read and interpreted the serial data as keyboard input. In addition, the ScanningKeyboardExperiment Windows application presented a virtual Qwerty scanning keyboard (see Figure 4). Phrases were selected randomly from the 500-phrase set of MacKenzie and Soukoreff (2003). The setup used a scanning interval of 700 ms, a scan delay of 200 ms, and auditory feedback for keyboard navigation. The keyboard used two-tier scanning: row-by-row (Figure 4a), then character-by-character after a row was selected (Figure 4b). With the second selection, the corresponding character was added to the text message.
Figure 3: Setup showing the pressure sensor, breadboard, and Arduino controller.
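On the host side, the Python program must interpret the Arduino's serial stream as select events. The sketch below assumes a hypothetical "<sensor>:<value>" line protocol; the actual wire format used in the study is not specified:

```python
def parse_serial_line(line, threshold=512):
    """Interpret one line of Arduino serial output as a select event.

    Assumes a hypothetical protocol of "<sensor>:<value>" per line,
    e.g. b"pressure:731\n"; the study's actual wire format is not
    documented here. Returns True if the reading counts as a select.
    """
    try:
        sensor, value = line.decode("ascii").strip().split(":")
        return int(value) >= threshold
    except ValueError:
        return False  # malformed or partial line: ignore
```

In practice, the lines would be read from a pyserial connection (e.g., `serial.Serial("COM3", 9600)`), with each detected select injected as a keystroke for the scanning keyboard software.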
The scanning keyboard software recorded a variety of human performance measures. Entry speed is the text-entry rate in words per minute (wpm). Accuracy was logged using the minimum string distance (MSD) between the entered text and the presented text, expressed as a percent of the length of the text phrase. Efficiency was calculated as the minimum number of scan steps divided by the actual number of scan steps, expressed as a percent. Efficiency is 100% if each character was entered correctly at the earliest opportunity, and below 100% if opportunities were missed and characters were selected late, or if an error was made, deleted, and then correctly entered on a later scanning cycle.
Figure 4: Screenshots of scanning keyboard software. (a) row-by-row scanning, (b) character-by-character scanning.
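The three measures follow standard text-entry definitions. The illustrative implementation below (not the study's own analysis code) uses the convention of one word = five characters and computes MSD with the Levenshtein algorithm:

```python
def entry_speed_wpm(transcribed, seconds):
    """Entry speed in words per minute (1 word = 5 characters)."""
    return (len(transcribed) / 5) / (seconds / 60)

def msd(a, b):
    """Minimum string distance (Levenshtein): fewest insertions,
    deletions, and substitutions to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def error_rate(presented, transcribed):
    """MSD error rate as a percent of the presented phrase length."""
    return 100 * msd(presented, transcribed) / len(presented)

def efficiency(min_steps, actual_steps):
    """Scan-step efficiency as a percent."""
    return 100 * min_steps / actual_steps
```

For example, a 9-character phrase entered in 54 seconds gives 2.0 wpm, and a trial needing 12 scan steps where 9 would have sufficed gives 75% efficiency.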
Procedure
Each participant performed a series of trials using each sensor, selecting characters on the scanning keyboard by activating the assigned sensor. These trials were to assess the usability of each sensor in terms of speed, accuracy, efficiency, and user satisfaction.
Participants received a brief introduction to the scanning keyboard software and its functionality.
For each sensor, participants completed five text-entry trials, selecting characters as the system cycled through the rows and columns. Phrases were of similar length and complexity to ensure consistency across trials. Figure 5 shows a participant interacting with the test system using the pressure sensor.
Short breaks were provided between each sensor to reduce fatigue and its potential impact on performance. This time was also conveniently used to swap sensors.
Figure 5: Experiment procedure showing a participant using the pressure sensor.
The entire procedure lasted from 50 to 80 minutes per participant.
Design
The user study employed a 4 × 5 within-subjects design. The four sensors were tested using a balanced Latin Square order to reduce order effects.
The independent variables and levels were as follows:
- Sensor type (button, accelerometer, flex, pressure)
- Trial (1, 2, 3, 4, 5)
The following dependent variables were used:
- Entry speed (wpm)
- Error rate (%)
- Efficiency (%)
- User satisfaction (5-point Likert scale)
The total number of trials was 12 participants × 4 sensor types × 5 trials/sensor = 240.
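A balanced Latin Square for the four sensor conditions can be generated programmatically. The sketch below uses the standard construction (row r: r, r+1, r-1, r+2, ...), which for an even number of conditions also balances first-order carryover effects:

```python
def balanced_latin_square(n):
    """Generate an n x n balanced Latin Square (n even).

    Each condition appears once per row and once per column, and each
    condition immediately precedes every other condition equally often.
    """
    square = []
    for r in range(n):
        row = []
        for i in range(n):
            if i % 2 == 1:
                row.append((r + (i + 1) // 2) % n)  # step forward
            else:
                row.append((r - i // 2) % n)        # step backward
        square.append(row)
    return square
```

With n = 4 the rows are [0, 1, 3, 2], [1, 2, 0, 3], [2, 3, 1, 0], [3, 0, 2, 1]; with twelve participants, each of the four orderings is used by three participants.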
RESULTS AND DISCUSSION
While conducting the experiment, a few problems occurred that required adjustments during the analyses. The experiment was originally intended to have 16 participants. At participant 15, the flex sensor stopped functioning and was unable to send data to the computer. As a solution, the number of participants was reduced to 12, and the data for the last three participants were removed when calculating the results.
Another problem was a contradiction in the statistical tests of sensor type on entry speed and efficiency. The ANOVA showed a statistically significant effect, but the post hoc test showed no pairwise significance. This contradiction could stem from the small number of participants, and thus participants per group, leaving the data with insufficient statistical power. Thus, we conclude that a significant difference likely exists between one or more pairs of sensors, but we are unable to draw a confident conclusion as to which ones.
Results are not reported for error rate except to note that, for all but one trial, the final phrase entered was error free. The one errant phrase had a single-character error, for an error rate of 3.2%.
Entry Speed
The grand mean for the entry speed per trial was 2.14 wpm. See Figure 6. The fastest sensor was the button with a mean of 2.35 wpm. The slowest sensor was the accelerometer, with a mean of 1.89 wpm. The effect of the sensor on the entry speed was statistically significant (F3,33 = 6.95, p < .001). No statistical significance was found with the pairwise comparisons.
Although slow, the entry speeds observed are consistent with other research on text entry using scanning keyboards. Waddington et al. (2017) reported an entry speed of 2.67 wpm using the Microsoft OSK (onscreen keyboard) in scanning mode.
Figure 6: Entry speed (wpm) by sensor. Error bars show ±1 SD.
Efficiency
The grand mean for the efficiency per trial was 74.8%. See Figure 7. The most efficient of the sensors was the button with an efficiency of 81.7%, and the least efficient of the sensors was the accelerometer with an efficiency of 67.7%. The effect of the sensor on the efficiency was statistically significant (F3,36 = 6.85, p < .001). No statistical significance was found with the pairwise comparisons.
Efficiency was calculated using data only from trials with a zero percent error rate – that is, trials where the participant entered the phrase exactly as presented. This filtering could affect each sensor's reported efficiency, since a trial with high efficiency but a small nonzero error rate was excluded from the average.
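The filtering described above can be sketched as follows, with illustrative (error rate, efficiency) pairs standing in for the logged trial data:

```python
def mean_efficiency_error_free(trials):
    """Mean efficiency over only the error-free trials.

    Each trial is an (error_rate_percent, efficiency_percent) pair;
    trials with any error are excluded, mirroring the analysis choice
    described above. Returns None if no trial qualifies.
    """
    kept = [eff for err, eff in trials if err == 0.0]
    return sum(kept) / len(kept) if kept else None
```

Note how a trial such as (3.2, 95.0), despite its high efficiency, contributes nothing to the mean because of its small error rate.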
Figure 7: Efficiency (%) by sensor. Error bars show ±1 SD.
Efficiency was affected by the common scanning keyboard errors identified in the study by Bhattacharya et al. (2008). We noticed that a sensor's efficiency went down when a timing error or a selection error occurred. These errors decrease efficiency because the scanning keyboard must go through a higher number of scan steps for the participant to fix the error.
User Feedback
To determine which sensor was the most and least preferred, participants were asked to choose the sensor they liked most and the sensor they disliked most. The button was liked the most with six votes; the accelerometer and flex sensor were disliked the most with five votes each. See Figure 8.
As well, 5-point Likert scale questions were posed for likeness, fatigue, and comfort for each sensor individually. The results are seen in Figure 9.
Figure 8: Most and least liked sensor by number of votes.
Button was the most liked sensor, followed by flex and pressure; accelerometer was the least liked. The button caused the least fatigue by a wide margin, with the remaining sensors causing similar levels of fatigue. Button also appears to be the most comfortable, followed by flex, pressure, and accelerometer.
The button appears to be the preferred sensor overall, as it scored favourably on likeness, fatigue, and comfort. On the other hand, the accelerometer seems to be the least preferred overall.
Figure 9: Qualitative responses using a 5-point Likert scale. (a) likeness, (b) fatigue, (c) comfort. Higher scores are better.
These statistics were collected to gauge the sensors' potential for use in accessible technology based on the user experience. We acknowledge that the likeness, fatigue, and comfort data are not an accurate representation for people with disabilities and limited motor control, as these qualities will vary across individuals and across disabilities.
CONCLUSION
Our study evaluated and compared the speed and efficiency of four sensor-based inputs for controlling a Qwerty scanning keyboard. The experiment revealed that the button sensor was the most effective: it had the highest efficiency and the fastest entry speed, along with very positive user feedback. The accelerometer had the lowest efficiency and the slowest speed, and received poor user feedback. The flex and pressure sensors also proved effective, with higher efficiency and speed than the accelerometer. In conclusion, the most effective sensor was the button, followed by the flex sensor, the pressure sensor, and finally the accelerometer.
Future work will add word completion and will also compare the Qwerty layout to a scanning ambiguous layout where entry rates around 5 wpm are reported (MacKenzie and Felzer, 2010).
As HCI continues to develop, the need for accessible alternative technology increases. The diversity of sensor-based inputs makes them a flexible and affordable option for accessible keyboards. There are disadvantages, such as input delay, sensor glitches, user fatigue, and low satisfaction. Overall, however, sensor-based input keyboards enhance accessibility and inclusivity in the HCI domain.
REFERENCES
Abrams, A. M., Weber, C. F., and Beckerle, P. (2018). Design and testing of sensors for text entry and mouse control for individuals with neuromuscular diseases, Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility – ASSETS '18. New York: ACM. pp. 398-400. https://doi.org/10.1145/3234695.3241012
Bhattacharya, S., Samanta, D., and Basu, A. (2008). User errors on scanning keyboards: empirical study, model and design principles, Interacting with Computers, 20(3), pp. 406-418. https://doi.org/10.1016/j.intcom.2008.03.002
Cowan, R. E., Fregly, B. J., Boninger, M. L., Chan, L., Rodgers, M. M., and Reinkensmeyer, D. J. (2012). Recent trends in assistive technology for mobility. Journal of Neuroengineering and Rehabilitation, 9(1), 20. https://doi.org/10.1186/1743-0003-9-20
Elsahar, Y., Hu, S., Bouazza-Marouf, K., Kerr, D., and Mansor, A. (2019). Augmentative and alternative communication (AAC) advances: A review of configurations for individuals with a speech disability. Sensors, 19(8), 1911. https://doi.org/10.3390/s19081911
Javaid, M., Haleem, A., Rab, S., Singh, R. P., and Suman, R. (2021). Sensors for daily life: A review. Sensors International, 2, 100121. https://doi.org/10.1016/j.sintl.2021.100121
MacKenzie, I. S. (2009). The one-key challenge: Searching for a fast one-key text entry method. Proceedings of the ACM Conference on Computers and Accessibility – ASSETS '09, pp. 91-98. New York: ACM. https://doi.org/10.1145/1639642.1639660
MacKenzie, I. S., and Felzer, T. (2010). SAK: Scanning ambiguous keyboard for efficient one-key text entry. ACM Transactions on Computer-Human Interaction, 17(3), Article 11.
MacKenzie, I. S., and Soukoreff, R. W. (2003). Phrase sets for evaluating text entry techniques. Extended Abstracts of the ACM SIGCHI Conference on Human Factors in Computing Systems – CHI '03, pp. 754-755. New York: ACM. https://doi.org/10.1145/765891.765971
Waddington, C. T., MacKenzie, I. S., Read, J. C., and Horton, M. (2017). Comparing a scanning ambiguous keyboard to the on-screen QWERTY keyboard. Proceedings of the 31st International British Computer Society Human-Computer Interaction Conference – HCI 2017. London: British Computer Society. https://doi.org/10.14236/ewic/HCI2017.103