ADDIE Instructional Model: Evaluation Phase

CEIT 317 - Instructional Technology and Material Development



Learning Objectives

Upon completion of this component, the learner will be able to:


  • identify the difference between formative and summative evaluation;
  • explain the relationship between analysis and evaluation in the instructional design process;
  • list several reasons for conducting evaluation;
  • define four types (levels) of evaluation that may be used to measure the impact of training;
  • discuss strategies for using Kirkpatrick's four levels of evaluation.

Topic Overview


In the presentation on Development and Usability Testing, you learned how to evaluate your materials so that you can make informed design decisions about what needs to be changed, what needs to be added, and what needs to be deleted from your instructional materials in order to best facilitate learning. This type of evaluation is generally known as "formative" evaluation, because you complete it during the formative stages of the instructional design process, and it helps you to form the content and processes of your instruction.

Another kind of evaluation that is important in the instructional design process is generally called "summative" evaluation. It is conducted after an educational experience has been implemented, in order to determine whether the students achieved the objectives of the instruction, whether their learning transferred to "the real world," and, in the case of corporate training, what impact this transfer of training had on the bottom line and the goals of the organization.

You will notice that, in the cases of both formative and summative evaluation, the emphasis is on evaluating the quality of the instruction, not the quality of the learner. A general assumption in the field of IST is that most learners are capable of learning (unless the learner faces special challenges or disabilities), and that the onus is therefore on the instructional designer to design high-quality instruction that will facilitate the learning of all learners. In other words, evaluation usually says a lot more about the quality of the instruction than it does about the intelligence of the learner.

Another general assumption in the field of IST is that analysis and evaluation are inextricably intertwined, and that one cannot do a good summative evaluation at the end of the ADDIE process unless a thorough analysis was completed at the beginning of the ADDIE process. This assumption rests primarily on the fact that the objectives for the instruction are identified during the analysis phase, and without clear and appropriate objectives, it is impossible for the instructional designer to determine what should be measured during the evaluation phase.


There are three good reasons for conducting summative evaluations:

  1. Data from an evaluation can tell us how to improve future instruction/training programs;
  2. Evaluation data can help us to determine whether a particular instructional experience/training program should be continued or dropped;
  3. In corporate training departments, evaluation data can provide evidence to justify the existence of the training department, just as, in school contexts, evaluation data is used to justify school expenditures.


Donald Kirkpatrick identified four types (levels) of evaluation that may be addressed when evaluating the impact of instructional programs:


  • Level 1: Participant Reaction
    Measures how those who participate in the program react to it; in other words, a measure of learner satisfaction. A positive reaction may not ensure learning, but a negative reaction almost certainly reduces the possibility of learning occurring as a result of the instruction.
  • Level 2: Participant Learning
    The extent to which participants improve knowledge, increase skills, or think differently (change attitudes) as a result of engaging in the learning experience. In order to evaluate learning, specific objectives must be determined and pretesting must be completed. This is the level at which the results of the instructional experience are measured most purely, meaning that the results are entirely dependent on the learning experience and are not influenced by other aspects of the environment (see Levels 3 and 4).
  • Level 3: Transfer of Learning Outside the Classroom / Change in Behavior
    The extent to which there is a change in the learners' behavior that continues after the learners have engaged in the learning experience. This is contingent on elements other than the quality of the instructional experience, such as the learners' motivation and incentives, as well as environmental supports.
  • Level 4: Long-Term Results / Return on Investment
    The return on investment to an organization (in terms of time, cost, and quality) as a result of learners engaging in the learning experience. This, too, is contingent on elements other than the quality of the instructional experience, such as the learners' motivation and incentives, as well as environmental supports; for this reason, education professionals can never take full credit for Level 4 results.


Kirkpatrick's model lists four levels that provide a *sequence* for evaluating instructional experiences. The model is sequential because it would be impossible to complete a Level 4 evaluation without having completed Levels 2 and 3. Each level provides important information in its own right, and each level is a stepping stone to the next, so none of the levels should be bypassed in order to get to a level that is considered more important. Each level provides progressively more valuable information regarding the impact of the learning experience, but at the same time, as one moves from Level 1 up to Level 4, the process of evaluation becomes more difficult, more nebulous, more costly, and more time-consuming, which is why relatively few organizations actually conduct Level 4 evaluations. Yet Level 4 evaluation is the most worthwhile and meaningful type of evaluation, if one has the resources to complete evaluation at that level.


Leshin, Pollock, & Reigeluth (1992). Instructional Design Strategies and Tactics. Unit 5: Evaluate the Instruction.

Review and Discussion Questions


  1. Do you think that evaluating students' reaction to instruction (Level 1) is important? Why or why not?
  2. In your opinion, what level of evaluation is most important to the instructional designer? Give a reason for your opinion.
  3. What are the links between the analysis and evaluation phases of the ADDIE model? Why are analysis and evaluation so closely tied together?
  4. Why is it so difficult to complete Level 3 and Level 4 evaluations?
  5. What's the difference between "formative" and "summative" evaluation?
  6. What's the difference between assessment and evaluation?
  7. Is the purpose of evaluation to evaluate the instruction or the learner? Explain your answer.
Last modified: Monday, 12 September 2011, 5:42 PM