CrowdTesting

Overview

CrowdTesting is a company that provides qualitative and quantitative research on digital services and products worldwide. It performs functional, integration, acceptance and cross-platform testing with the help of the largest community of testers in Europe.
One of CrowdTesting's initiatives was to develop a platform where they could upload the CSV files from all the tests they run and let admin users perform thorough searches with multiple criteria and conditions, returning results from multiple projects at the same time.
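Conceptually, the multi-criteria search across uploaded project CSVs could work along these lines. This is a minimal sketch only: the column names, the shape of the criteria, and the sample data are illustrative assumptions, not CrowdTesting's actual schema.

```python
import csv
import io

def search_projects(project_files, criteria):
    """Return rows from every project CSV that match all given criteria.

    project_files: dict mapping project name -> file-like object of CSV data
    criteria: dict mapping column name -> required value
    (Both shapes are hypothetical, for illustration only.)
    """
    results = {}
    for project, handle in project_files.items():
        reader = csv.DictReader(handle)
        matches = [
            row for row in reader
            if all(row.get(col) == value for col, value in criteria.items())
        ]
        if matches:
            results[project] = matches
    return results

# Two small in-memory "uploads" standing in for real project CSV files.
project_a = io.StringIO("tester,platform,status\nanna,iOS,passed\nboris,Android,failed\n")
project_b = io.StringIO("tester,platform,status\nclara,iOS,failed\ndmitri,iOS,passed\n")

# One query returns matching rows from multiple projects at once.
hits = search_projects(
    {"Project A": project_a, "Project B": project_b},
    {"platform": "iOS", "status": "passed"},
)
print(hits)
```

The key design point is that a single set of criteria is applied across every uploaded project, so admins see side-by-side results instead of filtering one project at a time.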
Admin User Interviews

To build a platform that addressed all the users' needs, we ran a survey asking about their current pain points and the features they would expect from a platform to make their job easier when searching for specific terms across a multitude of projects.
From the nine responses we received, we identified the points we needed to address and, from there, defined the features we would build as an MVP.
Challenges

One of the biggest challenges of this project was communicating effectively with the CrowdTesting team. The company is based in Russia and most of the team speaks only Russian, so I relied on the project manager, who spoke English, to translate the questions and requirements.
The second challenge was designing a platform that could show projects side by side across multiple questions without becoming a table forcing users to scroll through kilometres of data.
Competitor Analysis

To understand how we could build this platform in a way that is simple and easy to use, we ran a competitor analysis to see how other platforms with strong filtering displayed their features.
Through this research we identified multiple ways to approach the display of multiple projects and pinpointed other useful features we could include in the MVP.

Wireframes

With all the information collected, we started on the wireframes to understand how the features of the platform would sit together. This helped us work out the best way to display the elements and gather early feedback from the team, confirming the design worked the way they expected.
High-Fidelity & Hand-off

After refining the wireframes, we put together the high-fidelity screens, focusing on providing a seamless experience across all pages while keeping the components consistent and accessible.
Once the stakeholders approved the designs, we organised a handover session with the development team. Fortunately, both developers understood English, so we could easily walk through the screens, answer questions and check the flow.


Outcomes
By restructuring the admin workflows and simplifying task logic, the platform evolved from a friction-heavy internal tool into a scalable testing engine adopted across multiple product teams. The redesign focused on reducing cognitive load for admins, clarifying configuration patterns, and streamlining the tester onboarding journey, ensuring research could be run faster without compromising quality.
The impact was measurable. Task submission rates increased by 18%, while tester drop-off during onboarding decreased by 31%, indicating a smoother and more intuitive entry experience. Average test completion time improved by 22%, helping teams gather insights more efficiently. Overall satisfaction scores increased from 6.4 to 7.8 out of 10, reflecting stronger confidence in the platform’s usability and reliability.
Operationally, the redesign reduced manual QA coordination effort by 30%, standardised test configuration patterns across squads, and improved data consistency for reporting and insight synthesis. What was once a fragmented process became a scalable, repeatable system, enabling multiple product teams to run structured, high-quality research in parallel.