380.611.01 | AY 2012-2013 - 3rd Term | 4 Credit(s)
WF 1:30 PM
  • Contact Information
    Kristin Mmari
  • Course Learning Objectives

    Upon successfully completing this course, students will be able to:

    • Describe the purpose of different types of evaluation
    • Design a conceptual framework that explains program impact, based on program objectives
    • Develop indicators based on the conceptual framework
    • Identify sources of data at the program and population level corresponding to different types of evaluation
    • Describe the purpose of needs assessment and steps in the process
    • Describe the purpose of formative research and identify the most common methods
    • Explain the purpose of pretesting communications and identify the most common methods
    • Outline the advantages and disadvantages of using service statistics for program evaluation
    • Use a computerized MIS to obtain and interpret routine service statistics
    • Describe the elements of experimental and quasi-experimental designs, and explain how they address the threats to validity
    • Outline the characteristics, advantages and limitations of randomized control trials for evaluating impact
    • Design an evaluation plan
  • Course Description
    Familiarizes students with different types of program evaluation, including needs assessment, formative research, process evaluation, monitoring of outputs and outcomes, impact assessment, and cost analysis. Students gain practical experience through a series of exercises involving the design of a conceptual framework, development of indicators, analysis of computerized service statistics, and development of an evaluation plan to measure impact. Covers experimental, quasi-experimental, and non-experimental study designs, including the strengths and limitations of each.

    Additional Faculty Notes:

    The following is the course description updated on 1/14/09:

    Familiarizes students with different types of program evaluation, including formative research, process evaluations, monitoring of outputs and outcomes, impact assessment, and cost analysis. Students gain practical experience through a series of exercises involving the design of a project logic model, development of indicators, design of a process evaluation, and selection of a study design for an impact evaluation. Covers experimental, quasi-experimental, and non-experimental study designs, including the strengths and limitations of each.

  • Intended Audience
    Masters and Doctoral Students

    Additional Faculty Notes:

    This course is designed for students interested in acquiring the knowledge and skills required to design and implement evaluations of program interventions. Program evaluation is an important component of public health because it allows managers to identify ways to improve their programs mid-course. It also allows evaluators (1) to determine whether change occurs consistent with the program objectives, and (2) to establish that the program intervention caused the observed change.

    This course provides students at the master's level with the basic skill set required to conduct monitoring and evaluation (M&E) in the context of public health and social development programs. It provides doctoral students with the same set of skills, which, combined with other coursework in biostatistics, research methods, and data analysis, will allow them to conduct more advanced analyses of program impact.

  • Methods of Assessment
    Student evaluation is based on five exercises.

    Additional Faculty Notes:


    Updated on 1/14/09:

    The student's grade will be based on four exercises. If a student's final grade falls on the borderline between two grades (e.g., an A and a B), class participation will determine whether the student receives the higher grade.

  • Prerequisites

    Additional Faculty Notes:

    There are no prerequisites for this course.

  • Required Text(s)

    Additional Faculty Notes:

    There is no required textbook. The required reading will consist of articles and manuals described on the course syllabus.

    *Note: One classic evaluation textbook is Rossi, P., M. Lipsey, and H. Freeman, Evaluation: A Systematic Approach, 7th ed. (Thousand Oaks, CA: Sage, 2003). It is not required reading for this class. Students intending to specialize in program evaluation may want to purchase this textbook and read the chapters that parallel the syllabus.


  • Course Schedule

    Please see the course Sessions for a full list of dates and items for this course.

  • Files from the Online Library
  • Academic Ethics Code

    Students enrolled in the Bloomberg School of Public Health of The Johns Hopkins University assume an obligation to conduct themselves in a manner appropriate to the University's mission as an institution of higher education. A student is obligated to refrain from acts which he or she knows, or under the circumstances has reason to know, impair the academic integrity of the University. Violations of academic integrity include, but are not limited to: cheating; plagiarism; knowingly furnishing false information to any agent of the University for inclusion in the academic record; violation of the rights and welfare of animal or human subjects in research; and misconduct as a member of either School or University committees or recognized groups or organizations.

  • Welcome Message

    Anne Lilly

    TA Office Hours: Fridays, 12:00-1:00, Wall of Wonder & by appointment

    Laura Hinson

    TA Office Hours: Wednesdays (right after class), Wall of Wonder & by appointment

    Hannah Lantos

    TA Office Hours: Thursdays, 12:15-1:15, Wall of Wonder & by appointment

    Jenita Parekh

    TA Office Hours: Tuesdays, 12:30 - 1:30, Wall of Wonder & by appointment


    Dr. Mmari's office hours: Thursdays, 12:00-1:00 pm; Fridays, 12:15-1:15 pm


  • Contact Information (from old syllabus)

    Kristin Mmari
    Office: E4620
    Tel: 410-502-3112

    Jane Bertrand, PhD, MBA
    Office: 111 Market Place Suite 310
    Tel: 410-659-6300

  • Course Objectives (from old syllabus)

    Objectives for Fundamentals of Program Evaluation (updated on 1/5/2011):

    By the end of this course, students will be able to:

    1. Explain the major concepts in program evaluation:

    • Types of evaluation and their purpose
    • Inputs, processes, outputs, outcomes, and impact
    • Sources of data
    • Study designs, including randomized control trials, and threats to validity

    2. Perform skills required in conducting program evaluation:

    • Design a logic model
    • Develop objectives and indicators
    • Develop an evaluation study design
    • Conduct a focus group

    3. Write an evaluation plan

    Other topics that are important in program evaluation but not covered in this course:  

    • Surveys (design, data collection, data processing, analysis, and interpretation)
    • Surveillance (behavioral, biomarkers)
    • Other qualitative methods (in-depth interviews, observation checklists, and other)
  • Disability Support Services
    If you are a student with a documented disability who requires an academic accommodation, please contact the Office of Student Life Services at 410-955-3034 or via email at