
Porting scientifically validated cognitive tests to mobile devices
Project Overview
The company I worked for maintained a battery of scientifically validated cognitive tests assessing different areas of cognition, such as memory, reaction time, and reasoning. These tests had traditionally been used in-clinic, first on paper, then on desktop computers, and later on handheld tablets.
With the increasing demand for decentralised clinical trials (DCTs), we aimed to port our most widely used tests to mobile devices — allowing participants to complete assessments remotely, on their own smartphones.
Problem
Unlike standard software or web application ports, these cognitive tests needed to maintain scientific integrity across different devices. Data collected from mobile users had to be directly comparable to data collected via desktop, without any compromise to validity or reliability.
Even small differences in UX could significantly affect results — for example, in reaction time or spatial memory tasks — making the design process critical to the scientific success of the product.
Constraints & Challenges
Several key technical and scientific constraints shaped the project:
- Legacy Voice Instructions:
Existing voice prompts instructed users to collect tokens “in the sidebar on the right.” These prompts were professionally translated into multiple languages. Re-recording them would add significant time and cost, so we needed to maintain this instruction exactly.
- UI Design for Mobile:
During the task, up to 12 tokens needed to be collected. Each token had to be a comfortable tap target on a small screen. This requirement influenced the overall layout and screen orientation.
- Screen Orientation:
Given the need for large, tappable tokens stacked vertically, combined with the constraint of keeping the sidebar on the right, the test would need to be taken in portrait mode (a rough sizing check follows this list).
- Validated Layouts:
The original desktop versions used scientifically validated layouts optimised for landscape orientation. We needed to adapt these layouts for portrait without introducing any new cognitive bias or usability issues that could impact results.
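To make the orientation constraint concrete, here is a rough sizing check. It is a minimal sketch: the token count comes from the constraint above, but the 48 dp target size, the 8 dp gap, and the viewport heights are illustrative assumptions rather than the shipped test's dimensions.

```python
# Rough sizing check: can 12 comfortably sized tap targets stack vertically?
# 48 dp is Android's recommended minimum touch target (44 pt on iOS); the gap
# and viewport heights below are illustrative assumptions.

TOKEN_COUNT = 12
MIN_TARGET_DP = 48   # recommended minimum touch target on Android
MIN_GAP_DP = 8       # assumed minimum spacing between adjacent tokens

def fits_vertically(available_height_dp: int) -> bool:
    """Return True if all tokens fit as a single vertical stack with gaps."""
    needed = TOKEN_COUNT * MIN_TARGET_DP + (TOKEN_COUNT - 1) * MIN_GAP_DP
    return needed <= available_height_dp

print(fits_vertically(740))  # portrait height of a typical phone -> True
print(fits_vertically(360))  # the same phone in landscape -> False
```

Even with conservative numbers, a stack of 12 comfortable targets only fits when the long edge of the screen runs vertically, which is what pushed the design towards portrait alongside the right-hand sidebar.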
Process
Initial Approach
The first idea was to rotate the existing box layouts by 90 degrees. This was the simplest approach technically and would retain relative box positioning and spacing. We hypothesised that if box locations remained consistent, the cognitive demand would remain comparable between mobile and desktop. I also made sure that the boxes met the minimum tap-target requirements for iPhone and Android.
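For illustration, the transform behind that first approach can be sketched as below. The box coordinates are hypothetical and normalised to the play area; this is a sketch of the idea, not the production layout code.

```python
# Sketch of the first approach: rotate the validated landscape layout
# 90 degrees clockwise so relative box positions and spacing carry over.
# Coordinates are hypothetical, normalised to the play area (0..1),
# with the origin at the top-left and y increasing downwards.

from typing import NamedTuple

class Box(NamedTuple):
    x: float  # normalised horizontal centre
    y: float  # normalised vertical centre

def rotate_layout_cw(boxes: list[Box]) -> list[Box]:
    """Rotate normalised box centres 90 degrees clockwise.

    In screen coordinates (y pointing down), a clockwise quarter turn
    maps (x, y) -> (1 - y, x), preserving relative positions and spacing.
    """
    return [Box(x=1.0 - b.y, y=b.x) for b in boxes]

# Three hypothetical boxes from a landscape layout.
landscape = [Box(0.15, 0.25), Box(0.50, 0.70), Box(0.85, 0.40)]
print(rotate_layout_cw(landscape))
```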
We set up a study via Prolific, recruiting participants to complete either the desktop or mobile version. We then analysed their performance data side by side.
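As a rough illustration of what that side-by-side comparison can look like, here is a minimal sketch with made-up per-participant error counts; the choice of a Mann-Whitney U test is my assumption for the example, not a description of the study's actual analysis plan.

```python
# Illustrative only: comparing per-participant error counts between platforms.
# The data below is made up for the sake of the example.

from scipy import stats

desktop_errors = [0, 1, 0, 2, 1, 0, 1, 0, 0, 1]  # hypothetical counts
mobile_errors = [2, 1, 3, 2, 0, 2, 1, 3, 2, 1]   # hypothetical counts

result = stats.mannwhitneyu(mobile_errors, desktop_errors, alternative="two-sided")
print(f"U = {result.statistic:.1f}, p = {result.pvalue:.3f}")
```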
Results
While initial results were promising, a noticeable difference emerged:
- Mobile users made more errors than desktop users.
I conducted usability reviews and hypothesised that this was likely due to the reduced spacing between boxes on mobile screens. On smaller devices, users were accidentally tapping adjacent boxes, introducing unintended errors.
Iterating the Layout
To address this, I designed a new box layout specifically for mobile devices:
- Vertical Distribution:
I adapted the layout to use the taller aspect ratio of mobile screens, spacing boxes further apart vertically to minimise accidental taps (a short sketch of the idea follows this list).
- Consistency with Original Cognitive Load:
Care was taken to ensure that, although the visual layout changed, the cognitive requirements (e.g., remembering spatial locations and collecting tokens) remained equivalent.
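A minimal sketch of the redistribution idea is shown below. Positions are hypothetical and normalised; the real layout was not generated this way, and this only illustrates the spacing principle.

```python
# Sketch of the revised mobile layout idea: keep each box's horizontal
# position, but spread the vertical centres evenly down the taller portrait
# play area so neighbouring boxes are harder to tap by accident.
# Positions are hypothetical and normalised (0..1).

def respace_vertically(boxes: list[tuple[float, float]],
                       top: float = 0.08, bottom: float = 0.92) -> list[tuple[float, float]]:
    """Evenly redistribute box centres between `top` and `bottom`,
    preserving each box's vertical rank order and horizontal position."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][1])
    step = (bottom - top) / max(len(boxes) - 1, 1)
    respaced = list(boxes)
    for rank, i in enumerate(order):
        x, _ = boxes[i]
        respaced[i] = (x, top + rank * step)
    return respaced

# Four boxes that sit too close together after the naive 90-degree rotation.
cramped = [(0.2, 0.30), (0.7, 0.35), (0.4, 0.55), (0.8, 0.60)]
print(respace_vertically(cramped))
```

Keeping the vertical ordering of the boxes while increasing the distance between them is the essence of the change described above.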
We re-ran the testing on Prolific, comparing the performance between mobile and desktop users once again.
Outcome
The updated mobile layout led to a significant improvement.
- Error rates between mobile and desktop users were now statistically comparable.
- Our R&D team validated the findings and signed off the mobile version for clinical use.
This enabled the cognitive tests to be deployed confidently in decentralised clinical trials, broadening access for participants and future-proofing the company’s core product offering.
Reflection
This project highlighted how even small UX decisions can have profound implications when scientific validity is at stake. It was a valuable reminder that:
- Legacy constraints (like voice prompts) must be respected without compromising user experience.
- Mobile-first design often requires rethinking layouts rather than merely shrinking existing ones.
- Rapid hypothesis testing and iteration — backed by data — can lead to solutions that meet both user needs and scientific requirements.
It was rewarding to see careful UX design help extend the reach of cognitive science into new, decentralised contexts.
