I've loved user research ever since I first started testing my games. I discovered a passion for figuring out how to improve games through testing, and quickly took my first class on the subject, "User Research and Testing," which is where I learned how to write surveys and conduct tests properly. Now I TA for that same class, and since then I've taken "Data Visualization" and "Fundamentals of Psychological Research" to help bolster my skills. I've had hands-on experience testing games in a team setting, as well as testing mockups and even custom game engines.
My current team project has given me the opportunity to test a multiplayer networked game, so I've included a link to download my most recent write-up, and I've broken down my process below.
Summary of Experience
User Research Teaching Assistant at DigiPen Institute of Technology
User Researcher on a 3D Multiplayer Networked Action-Stealth Game
User Researcher on a 2D Co-Op Platformer (Shortstack)
I'm a very goal-oriented person, so my process always begins with a goal. The goal of testing is to answer questions the team has and to gather data about how to improve the game. The write-ups I do aren't graded, so they're done with the specific purpose of helping the team out, which allows me a lot of freedom in how I work. For context, the game I'm drawing examples from is a 3D multiplayer action-stealth game where thieves try to steal artifacts from a museum and guards try to stop them.
1. Consulting the Team
I start my process by asking the other developers whether they have any questions they want answered about our game. An example from our current game might be "Which character is more fun to play as, thieves or guards?" It's important to write everything down, as sometimes I'll hear questions that would be better answered at a later date. For example, if we're reworking how movement works, we wouldn't want to ask "Does movement feel responsive?", as that question is more useful once the rework is complete.
2. Operational Definitions
Once I have the questions the team wants answered, I have to operationally define my terms. For example, if the question is "Which abilities are the strongest, and which are the weakest?" I have to operationally define "strength." Is it the number of times players use an ability? Is it some correlation between use of the ability and winning? It might be a combination of multiple things, but it's important to look at the data from multiple angles to see if anything needs to be adjusted. For example, if an ability is almost never used, we want to explore why that is, and possibly change it by increasing its effectiveness.
3. Creating the Testing Instrument
Once I have all of my questions and operational definitions, I need to create the testing instrument. My testing instruments usually consist of a simple script, a pre-test survey, and a post-test survey. The goal of the pre-test survey is to collect basic demographic data and try to account for bias, which is especially important because I typically use a convenience sample. The goal of the post-test survey is highly variable, but it always aims to gather data on specific aspects of the experience. I also make a form for myself to fill out during the test to gather observational data, such as the number of deaths per player. Each player's screen is typically recorded, and in the future we may also record players' facial expressions, with their consent.
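To give a concrete feel for the in-test observation form, here is a minimal sketch of it as a data structure. The field names (player ID, role, deaths, free-form notes) are illustrative assumptions, not the actual form.

```python
# A hypothetical in-test observation record: per-player tallies
# plus free-form notes taken while watching the session.
from dataclasses import dataclass, field

@dataclass
class PlayerObservation:
    player_id: str
    role: str                            # e.g. "thief" or "guard"
    deaths: int = 0
    notes: list[str] = field(default_factory=list)

# During the session, the observer updates the record as events happen.
obs = PlayerObservation("P1", "guard")
obs.deaths += 1
obs.notes.append("Struggled with the camera during a chase")
print(obs)
```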
4. Conducting the Test
I normally conduct these tests with an assistant, since our game requires 4 players playing simultaneously. We recruit a convenience sample, as DigiPen unfortunately has nothing close to a human subjects pool. I read from the script and administer the pre-test survey, then explain anything the players need to know about the game. The players sometimes get a few minutes to play around with the controls and explore the map, and then the actual test begins. Players play the game in teams across the room from each other while we observe their play and reactions, and I record anything noteworthy using the form I created in the previous step. Finally, I ask players to fill out the post-test survey, ensure the test didn't cause any serious stress (debriefing), and send them on their way.
5. Results, Analysis, Write-Up
After the testing is done, I analyze the results and create a write-up for my team. Part of this write-up is documentation for our future selves, but the primary focus is providing actionable data. I start by using the data to answer the questions the team asked, then move on to other things we could improve. Here's an example of what I mean by actionable data: "Guard mobility: We might consider giving the guards increased options for mobility, unless the design team wants the contrast in mobility with the thief to be as significant as it currently is. Either way, the guard currently seems to feel clunky, so we might want to work with the camera and character controller to improve the game feel." This is how I usually try to keep things: I might propose a solution, but I leave the final call to the appropriate department. I always discuss individual problems in more detail with the people responsible for solving them, and this is a great example of a problem we're actively working on. Players repeatedly described the guard as "clunky" and "too slow," and since the lack of speed is an intentional design decision, we've been working to make the guard feel large and powerful instead of slow and clunky. Now players feel like a powerful behemoth, so they rarely complain about the lack of mobility.
If you have any questions or criticisms, feel free to contact me; I'm always looking to improve wherever I can!