Project Management for eLearning
What does good look like? And who decides that anyway?
When we talk about project management for elearning, the discussion often centres on established development processes like ADDIE, or some variation of it; but what I find lacking in many project management methodologies and elearning development models is the concept of quality management. Who decides what good looks like? And at what point? And where is the learner in this discussion?
To me, these are the most important questions, because it is in the process of “acceptance” that projects get into trouble. In other words, you deliver something to the client/approver (or, in the end, the learner), and they don’t accept it. In their minds, it is not “good” enough; and so begins the discussion on what good looks like, which takes time (of which there is never enough) and resources (ditto).
How did this happen?!
Let’s look at how the problem arises, and then some techniques for preventing it:
The first part of the problem lies in the waterfall approach that most project management methodologies prescribe, including ADDIE. Their founding principle is this: once you produce a deliverable, there is no going back (other than on a superficial level, to correct mistakes, add a bit here, or take out a bit there). This approach is prescribed because it seemingly strengthens the “control” aspect of project management, and control is important.
This would be wonderful, if only it were workable; but consider that in elearning projects, we are often creating something entirely new, either from nothing (new online training) or from material in an entirely different form (an existing classroom course). So while ADDIE correctly allows the client and project teams to incrementally design the new course, there are still quantum leaps from concept (the storyboard) to actual design/development (the completed modules). It is during these leaps that expectations can get out of whack.
Let’s use the leap from storyboard to full development as an example. Client stakeholders review a large document (the storyboard) that contains wonderful text-based descriptions of images and interactivity. Visions of colour, interactivity, and soundtracks dance in their heads, and so it is approved. However, when they see a fully developed module, it may be VERY different from what they thought would come out of the storyboards. They raise this with you, the project manager, and you correctly point out that, no, the module is exactly as described in the storyboard. You are technically right, but the client/approver is also right. Now we have a problem.
The issue ultimately comes down to managing expectations with the main project stakeholders and ensuring that we get informed acceptance on a deliverable. To do this, we need to challenge traditional methodologies like ADDIE, and make them more iterative.
Here are two simple yet highly effective quality management techniques that do just that:
Prototyping
In the traditional ADDIE approach, creative design and development doesn’t happen until all instructional design and storyboarding has been completed and approved. This is a problem: by that point, the project is too far along in terms of budget and schedule, especially since the instructional design and storyboarding phases are usually carried out by a single resource, the instructional designer, and so cannot easily be sped up. By the time the first module is programmed, you can be well past the half-way point of your project, and likely in no position to deal with acceptance problems.
To avoid this, I start a prototype of one module as soon as that module’s storyboard is approved by the client. If I am working with a client who is new to elearning, or one that is new to me and my team, I will also build in an exercise prior to prototyping: we review existing course design samples to give the client a sense of what our design will ultimately look like, and to gain insight into any design ideas or expectations they might have.
This can all be done in parallel with the storyboarding process, so you lose no time on the schedule. Once you and the client have aligned expectations on the design of the modules, the prototype is built, and you are ready to employ another quality management technique that I advocate: usability testing.
Usability Testing
Usability testing is a highly efficient process that can also be done early in the project, in parallel with storyboarding tasks, and it produces tremendous insight into the quality and effectiveness of your design. There are many variations on how to conduct usability tests, but I keep it simple and have only one mandatory criterion: use an observed testing model, in which participants work through the prototype while silent observers track their actions. To set up the usability sessions, I ask the client to arrange for 10 to 15 representatives from the target learner population. These learners are invited to a two-hour session, along with the project team and client group. In the session, users are asked to work through the prototype and provide comments as they go. The project team and clients act as observers, and track and document the users’ running commentary. After all users have been through the prototype, I also hold a facilitated discussion that probes a little deeper into the participants’ comments and feedback, yielding further insight into the design.
The feedback is then collated after the session, and the client and project team meet to discuss what should be changed, removed, refined, and so on. Most often, the changes are minimal, but there are always one or two great suggestions or insights from the users that get incorporated; these are refinements that the client and project team would never have thought of, as we often cannot see the forest for the trees in our own design at this stage.
But here’s the best part about usability testing: up to this point, the question of what is “good” has been a discussion between the project and client teams. Now, you have validated the design with the most important project stakeholder, the learner; and you have done so early on in the project (not even half-way through, oftentimes). This is particularly valuable when the client and project teams disagree on what the design should be (and that never happens, right?). Now it doesn’t matter who thought what was best, because you have a grounded, impartial validation and direction from the learner. That is so powerful, because in the traditional ADDIE approach, you are not getting learner feedback until after the course is fully developed (assuming the client does any evaluation at all, as many don’t).
So why don’t we do these things?
Well, there are many reasons, but they usually all relate to one common cause: traditional project management practice does not like the concept of iterative design. It seems too uncontrollable. But I ask you, what is the point of a product or project that meets deadlines and budgets but fails with its audience?
When planning your next project, build in some quality management techniques, and include the learner in your design and development process early. It may seem like extra work, but in reality it will save you time and money and ensure that your design works.
Created by Tim Birch-Jones, Instructional Designer. Dec. 1, 2013.