New Standards For A New Paradigm: Resolving Conflicts in Accessible Practices

July 23, 2024

VoteHub: Digital Absentee Voting

Our client, Tusk Philanthropies, hired Nearform to build the beta version of their Mark-It app, now named VoteHub, which empowers disenfranchised, absentee, and UOCAVA (overseas) voters to cast their ballots in U.S. elections.

For many of us on the team, this was an exciting new opportunity to work with Tusk in the civic space and lend our skills to fascinating, complicated, and socially important work. The potential of such an app to drastically and positively impact voter turnout, by serving segments of the population otherwise unable or less able to participate, underscored the gravity of our job.

Their previously completed alpha version, partly because of its underlying tech stack, fell short on key accessibility and usability fronts. While many parts of its workflows and business logic were sound, our team was in some ways invited to reimagine the voter-facing product from the ground up.

At the outset of our engagement with Tusk, Nearform spent a two-week discovery phase conducting accessibility and usability audits, during which we identified several key opportunities for improving the app’s organization, design, and execution.

Before we could act on those opportunities, though, our team had some hurdles to overcome: resolving conflicts between voting system requirements and mobile best practices.

New Standards For a New Paradigm

The Voluntary Voting System Guidelines (VVSG) are a set of security, usability, and accessibility standards against which any system designed to cast and record votes in a U.S. election is tested.

We weren’t certain how these standards applied to a mobile voting app’s interface, as opposed to the machines used at physical polling places, for which the standards were written.

We combed through the VVSG’s sections pertaining to interface and interaction, and collected our findings as plainly as possible in a Figma document. This was essentially our baseline challenge: we had to create an interface that met the criteria and standards the VVSG laid out, on a screen much smaller than the ones used in voting booths, all while accounting for the many accessibility affordances mobile devices provide.

[Image: VVSG app behavior]

Some standards were to be expected:

  • “Touch area size” (VVSG 7-2.I). Tap targets should be at least ½ inch (roughly 48px) square, with at least that much distance between adjacent targets; see the sketch below.
  • “Voting system must use a sans-serif font” (VVSG 7-1.J). “Choice of typeface must take into consideration internationalization concerns, including but not limited to extensibility of character sets or alternate forms for low vision/low reading capable voters.”
[Image: Typography]
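
As a minimal sketch of how that touch-area minimum can be enforced, the ½-inch rule becomes a shared constant applied to every pressable element. The component name and styling here are illustrative, not VoteHub’s actual code.

```tsx
import React from 'react';
import { Pressable, StyleSheet, Text } from 'react-native';

// VVSG 7-2.I: tap targets (and the gaps between them) should be at least
// half an inch, roughly 48 density-independent pixels on most devices.
const MIN_TAP_TARGET = 48;

const styles = StyleSheet.create({
  target: {
    minWidth: MIN_TAP_TARGET,
    minHeight: MIN_TAP_TARGET,
    // Half the minimum on each side keeps neighbouring targets ~48dp apart.
    margin: MIN_TAP_TARGET / 2,
    alignItems: 'center',
    justifyContent: 'center',
  },
});

// Illustrative pressable wrapper that bakes the minimum dimensions into
// every button-like element.
export function TapTarget({ label, onPress }: { label: string; onPress: () => void }) {
  return (
    <Pressable accessibilityRole="button" style={styles.target} onPress={onPress}>
      <Text>{label}</Text>
    </Pressable>
  );
}
```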

Some standards, however, were fairly surprising:

  • “The system can’t adjudicate on behalf of a voter” (VVSG 7-2.C). We learned that certain interaction patterns we would otherwise take for granted in an app or web context are, according to the VVSG, anti-patterns. Auto-deselecting a candidate when another is selected, for instance, is disallowed depending on the contest type.
  • “Navigation elements must always be visible” (VVSG 7-2.D). For the presumably small viewports this app would be used on, this requirement was a bit of a shock. Could we not just show a navigation header or footer when the user scrolls back up? Could we really not free up more visual real estate? The point, it turned out, was that by tying fixed parts of the screen to relatively stable actions (e.g. the lower right-hand corner is Next/Yes, the lower left-hand corner is Back/No), users with low motor function and those with less cognitive ability, among others, can be expected to physically and intellectually reach those actions. A sketch of this fixed-footer pattern follows this list.
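
Here is a rough sketch of that always-visible navigation pattern, again with illustrative names rather than VoteHub’s real components: the Back/Next actions live in a footer rendered as a sibling of the scrolling ballot content, so they can never scroll out of view, and each action stays pinned to its corner.

```tsx
import React from 'react';
import { ScrollView, StyleSheet, View } from 'react-native';
import { TapTarget } from './TapTarget'; // the illustrative button from the earlier sketch

const styles = StyleSheet.create({
  screen: { flex: 1 },
  content: { flex: 1 },
  // The footer is a sibling of the scrolling content, not a child of it,
  // so it remains visible regardless of scroll position (VVSG 7-2.D).
  footer: {
    flexDirection: 'row',
    justifyContent: 'space-between',
    alignItems: 'center',
  },
});

// Illustrative screen shell: Back/No is always the lower-left action,
// Next/Yes is always the lower-right action.
export function BallotScreen({
  children,
  onBack,
  onNext,
}: {
  children: React.ReactNode;
  onBack: () => void;
  onNext: () => void;
}) {
  return (
    <View style={styles.screen}>
      <ScrollView style={styles.content}>{children}</ScrollView>
      <View style={styles.footer}>
        <TapTarget label="Back" onPress={onBack} />
        <TapTarget label="Next" onPress={onNext} />
      </View>
    </View>
  );
}
```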

On top of the VVSG’s standards, we had web and app best practices to consider.

  • AAA contrast, at minimum AA.
    • We first evaluated contrast using APCA, then checked against AAA/AA; a rough sketch of the AA/AAA math follows this list.
  • Use React Native’s native components where possible. Users are accustomed to their operating system’s default interface, and using what’s familiar removes the additional mental load of learning a new one.
  • Proximity gestalt principle + settings-driven scaling ⇒ proportional type and spacing values. For example, when the interface is scaled for low-vision/low reading capable voters, VoteHub preserves the typographical hierarchy for ballot contest headings and subheadings, candidate names, and their affiliations; in other words, it proportionally scales elements and layouts. We accomplished this by way of a design token controlling the interface scale, which in turn informed our typographical and spacing design tokens (a sketch of this token setup follows this list).
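
Our actual contrast tooling isn’t reproduced here, and APCA uses its own perceptual model, but as a rough illustration of the AA/AAA backstop, the WCAG 2.x contrast-ratio math looks like this:

```ts
// WCAG 2.x contrast check used as the AA/AAA backstop; APCA scoring is a
// different perceptual model and is not reproduced here.
type RGB = [number, number, number]; // 0-255 per channel

// Relative luminance per WCAG 2.x.
function luminance([r, g, b]: RGB): number {
  const [rs, gs, bs] = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * rs + 0.7152 * gs + 0.0722 * bs;
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// AA for normal text requires 4.5:1; AAA requires 7:1.
export function meetsWcag(fg: RGB, bg: RGB) {
  const ratio = contrastRatio(fg, bg);
  return { ratio, AA: ratio >= 4.5, AAA: ratio >= 7 };
}
```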
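
The token names and values below are hypothetical, but they sketch the proportional-scaling idea: a single interface-scale token multiplies into the typography and spacing tokens, so turning the scale up preserves ratios (and therefore hierarchy) instead of enlarging text alone.

```ts
// Hypothetical token sketch: one interface-scale token drives type and
// spacing so hierarchy is preserved when the UI is scaled up.
const baseTokens = {
  fontSize: { contestHeading: 24, contestSubheading: 18, candidateName: 16, affiliation: 14 },
  spacing: { xs: 4, sm: 8, md: 16, lg: 24 },
};

type Tokens = typeof baseTokens;

// interfaceScale = 1 is the default; low-vision settings might raise it to 1.5 or 2.
export function scaledTokens(interfaceScale: number): Tokens {
  const scaleGroup = <T extends Record<string, number>>(group: T): T =>
    Object.fromEntries(
      Object.entries(group).map(([key, value]) => [key, Math.round(value * interfaceScale)]),
    ) as T;

  return {
    fontSize: scaleGroup(baseTokens.fontSize),
    spacing: scaleGroup(baseTokens.spacing),
  };
}

// e.g. scaledTokens(2).fontSize.contestHeading === 48, while the ratio between
// headings, names, and affiliations (the visual hierarchy) stays constant.
```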

Why create our own wireframing component library?

While by no means exceptional, the custom component library we used to wireframe the beta version of VoteHub is worth mentioning here, because of how the set of components and compositions ended up advancing the app’s overall accessibility goals.

By following the subset of the VVSG’s recommendations pertinent to the app, eschewing superfluous decoration, and treating legibility, scalable type, tap target sizes, and fixed navigation as fundamental interface concerns, we arrived at a streamlined library that was in some senses an accessibility win: because our interface building blocks already accounted for the range of accommodations our users needed, we saved considerable time and effort later, when we moved on to high-fidelity mocks and, eventually, began developing the app.

[Image: buttons and form elements]

The wireframe-components-first strategy specifically helped us vet our workflow revisions and the viability of atomic interface elements with voting system and accessibility experts, along with a small set of users representative of the app’s target groups. UX testing with real humans was an ongoing feature of our design and development cycle, and while the results of review meetings with those individuals were anecdotal, their collective insight into the particularities of screen reader behavior on a variety of devices and operating systems, for instance, helped us produce a more robust and elegant experience.

Our engineers used Nearform’s open-source library react-native-ama to create an accessible UI primitives component library, which allowed us to confidently scaffold out the full set of components defined in our Figma library and bake accessibility into each component by default. You can read more about this approach, along with some pointers and tips related to React Native, in our blog post, “Empowering Users: developing Accessible Mobile Apps using React Native.” Additional details on react-native-ama can be found on our website.
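
react-native-ama’s own API isn’t reproduced here, but the sketch below, using plain React Native accessibility props, illustrates what “accessible by default” means for a primitive: the label, role, and state are part of the component’s required contract rather than optional afterthoughts.

```tsx
import React from 'react';
import { Pressable, StyleSheet, Text } from 'react-native';

// Illustrative primitive (not react-native-ama's actual API): the accessibility
// role, label, and state are wired up by the component itself, so no screen can
// ship a control that a screen reader cannot announce.
type AccessibleButtonProps = {
  label: string; // always required; becomes the accessibilityLabel
  onPress: () => void;
  disabled?: boolean;
};

const styles = StyleSheet.create({
  button: { minWidth: 48, minHeight: 48, justifyContent: 'center', alignItems: 'center' },
});

export function AccessibleButton({ label, onPress, disabled = false }: AccessibleButtonProps) {
  return (
    <Pressable
      accessibilityRole="button"
      accessibilityLabel={label}
      accessibilityState={{ disabled }}
      disabled={disabled}
      style={styles.button}
      onPress={onPress}
    >
      <Text>{label}</Text>
    </Pressable>
  );
}
```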
