A couple of weeks ago, we released the latest iteration of the playtest materials. I thought it would be useful to give everyone insight into how we manage feedback and the role your feedback plays in developing the game.
To start with, we're happy to report that we have a large number of players involved in this playtest. It's critical that we have many people taking part so that we can ensure that the feedback we receive represents a broad portion of RPG players and DMs.
Second, the number of players participating in the playtest has steadily increased. This is another important indicator that our results aren't being skewed over time. We don't learn much if people grow happier and more positive over time while lots of people are leaving the playtest. That would just mean we're keeping the happy people and driving off the unhappy ones. It's good to see that's not happening. If you're mad about how we screwed something up, thank you for sticking around.
So, what actually happens behind the scenes with your feedback? Chances are that if you've answered a survey, I've read what you've had to write.
To start with, we work in two-week sprints aimed at producing material for the next package. If you've worked in software development, you might be familiar with the Scrum process. We've been using it since the late summer to drive our efforts, and so far it is working well. Scrum focuses our efforts on delivering features of the game—classes, races, tactical combat, multiclassing—in an iterative manner.
Within that process, I serve as the product owner. Basically, I'm the advocate for the customer. That's where the playtest feedback plays a huge role. I can't speak for you guys if I can't hear what you're saying. So, after collecting each survey's results, I spend a day or two looking at the survey data and reading through the individual comments. From those results, I try to sort every key element of the game into one of three categories.
Red elements are in trouble. With red elements, we have a lot of people expressing frustration or unhappiness, and we need to make a change. I usually look much more closely at anything that more than 10 percent of people have rated a 1 or 2. The specific comments then provide details and advice on why people are unhappy.
Yellow elements are our underachievers. People might not be unhappy, but we don't have a lot of people who are happy. In this case, most people are giving something a 3 on our scale of 1 to 5. Once again, the comments are useful in tracking down what's happening. These elements usually receive some amount of thought, but they have to wait for us to clear out our red issues. Also, there are times when an element is fine at yellow. A rule or subsystem that is aimed more at utility than excitement, say the rules for surprise, can sit in this zone for a bit. If something is at yellow but we receive few comments about it, we might leave it be.
Green elements have passed the test. A significant majority of people rated them at 4 or 5 out of 5. We try not to mess with things that have gone green, and if we make changes we keep a careful eye on how things move in future surveys.
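For the programmers in the audience, the triage described above boils down to a simple decision rule. Here's a minimal sketch in Python: the 10 percent cutoff for red comes straight from the process above, but the 60 percent "significant majority" cutoff for green is an assumed value for illustration, not an official threshold.

```python
from collections import Counter

def classify_element(ratings, red_share=0.10, green_share=0.60):
    """Sort a playtest element into red/yellow/green from its 1-5 survey ratings.

    red_share (10%) matches the threshold described in the article;
    green_share (60%) is an assumed stand-in for "a significant majority."
    """
    counts = Counter(ratings)
    total = len(ratings)
    unhappy = (counts[1] + counts[2]) / total  # share who rated it 1 or 2
    happy = (counts[4] + counts[5]) / total    # share who rated it 4 or 5

    if unhappy > red_share:
        return "red"     # too many frustrated players: needs a change
    if happy >= green_share:
        return "green"   # passed the test: don't mess with it
    return "yellow"      # underachiever: mostly 3s, revisit after red issues
```

Note that the red check runs first: even an element with plenty of fans gets flagged if it frustrates more than a tenth of respondents, which matches the priority of clearing red issues before yellow ones.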
When it comes to fixing things, we have a few options. The easiest red elements to repair have consistent, clear direction based on survey comments. Changing Glancing Blow from a die roll of 10 or higher to an attack result of 10 or higher was easy because playtesters were consistent in telling us that the mechanic didn't make any sense given our attack bonuses and monster ACs.
In other cases, things aren't quite as clear. In the early playtest rounds, we had a lot of people bored with the fighter but a bunch more who were happy with a simple take on the class. Resolving that tension took a lot of brainstorming and experimentation. I believe we're in a similar position with healing and spellcasting, and those are two areas in which we're devoting a lot of time and attention.
As for our overall progress, our next two big areas are multiclassing and high-level play. With those two areas in place, we'll move on to several tasks:
- Refine the key systems and content of the game. Once something is finished, we'll continue to polish it based on feedback as described above.
- Expand content to cover more classes and races, ranging from core D&D elements such as the paladin, half-orc, and cavalier to world-specific stuff such as draconians, warforged, and bladesingers.
- Launch surveys for prestige classes and feats, modeled on our spell survey, which was very useful for determining our design direction.
- Review the entire system to lock down the core of the game, which is the starting point and the simplest expression of D&D Next. This step mainly consists of cutting out as much stuff as possible—rules and character options—to create a true core of D&D.
- With the core in place, we can either complete or kick off the design of a number of rules modules, such as alternative magic systems, tactical combat, mass battles, skirmish battles, planar travel, gritty wounds, and realm management. If it was in the D&D Rules Cyclopedia, it's something that we're likely to cover with a rules module.
In parallel with all this, I'm working closely with our business team to build a product plan for the game. That's probably the most exciting and most challenging part of the process.
So, even if I say this too often, I'm still not saying it enough: Thank you for taking part in the playtest. We literally could not do this well without you.
Mike Mearls is the senior manager for the D&D research and design team. He led the design for 5th Edition D&D. His other credits include the Castle Ravenloft board game, Monster Manual 3 for 4th Edition, and Player’s Handbook 2 for 3rd Edition.