Teaching for Mastery in the MPH

Teaching Excellence in Public Health | March 6th, 2018

For many years I have taught an introductory course in epidemiology at Boston University School of Public Health, a required core course for the Master of Public Health (MPH) degree program. One widely held tenet has been that the core courses for the MPH provide a fundamental body of knowledge and skills that all MPH candidates need to master. School policy has required students to earn a grade of B- or better in each of the core courses, and I generally used a semester average of 80 percent as the minimum to earn a B-. One way of looking at this is that as much as 20 percent of the fundamental knowledge and skills had not been mastered, and even the few students who achieved final exam scores of 90 percent or better may have had some deficits. Nevertheless, students with a semester average of 80 percent or greater proceeded to higher-level courses despite these gaps in understanding.

What is Mastery Learning?

The traditional model of teaching divides the academic year into semesters, with students taking four or five courses per semester. All students in a given class move from one topic to another each week, regardless of whether they have mastered the preceding material. I would assign readings and graded problem sets each week, and students would take two or three exams during the semester. The graded weekly problem sets and the exams always indicated that most students had not fully mastered the material that had been covered, but we would keep marching along and move on to the next topic each week. This traditional model presumes that a fixed amount of time will be devoted to any given subject, regardless of the pre-existing experience, skills, and talents of the students, and this approach inevitably results in some students failing to truly master the fundamentals.

Benjamin Bloom was a proponent of “mastery teaching.” In a tribute to Bloom, one of his former students said:

“The variable that needed to be addressed, as Bloom saw it, was time. It made no pedagogical sense to expect all students to take the same amount of time to achieve the same objectives. There were individual differences among students, and the important thing was to accommodate those differences in order to promote learning rather than to hold time constant and to expect some students to fail. Education was not a race. In addition, students were allowed, indeed encouraged, to help one another. Feedback and correction were immediate. In short, what Ben Bloom was doing was applying in a very rational way the basic assumptions embraced by those who believe the educational process should be geared towards the realization of educational objectives. He believed that such an approach to curriculum, to teaching and to assessment would enable virtually all youngsters to achieve success in school.”1

Thomas Guskey further elaborated on Bloom’s concept of “mastery learning”:

“Bloom believed that all students could be helped to reach a high criterion of learning if both the instructional methods and time were varied to better match students’ individual learning needs. In other words, to reduce variation in the achievement of diverse groups of students and have all students learn well, Bloom argued that educators and teachers must increase variation in instructional approaches and learning time. Bloom labeled the strategy to accomplish this instructional variation and differentiation mastery learning. Research evidence shows that the positive effects of mastery learning are not limited to cognitive or achievement outcomes. The process also yields improvements in students’ confidence in learning situations, school attendance rates, involvement in class sessions, attitudes toward learning, and a variety of other affective measures.”2

Salman Khan (Khan Academy) is also a proponent of mastery learning, and he addressed the issue of the practicality of this method in a TED talk in 2015.3 He said:

“Now, a lot of skeptics might say, well, hey, this is all great, philosophically, this whole idea of mastery-based learning and its connection to mindset, students taking agency over their learning. It makes a lot of sense, but it seems impractical. To actually do it, every student would be on their own track. It would have to be personalized, you’d have to have private tutors and worksheets for every student. And these aren’t new ideas — there were experiments in Winnetka, Illinois, 100 years ago, where they did mastery-based learning and saw great results, but they said it wouldn’t scale because it was logistically difficult. The teacher had to give different worksheets to every student, give on-demand assessments.

But now today, it’s no longer impractical. We have the tools to do it. Students [need to] see an explanation at their own time and pace? There’s on-demand video for that. They need practice? They need feedback? There’s adaptive exercises readily available for students.”

A Mastery Learning Component for a Core Course in Epidemiology

Two observations induced me to begin experimenting with mastery learning for my introductory epidemiology course several years ago. First, within any given class there was a great deal of variability in the students’ comfort with quantitative concepts and skills, and many students were afraid of the core courses in epidemiology and biostatistics. Second, students frequently asked for more practice problems, and they uniformly appreciated immediate feedback on their solutions and answers. I had already created automated online problem sets in Blackboard, each consisting of 15-20 questions or problems. For the problems I generally created “calculated numeric” questions in which the student has to compute the answer, express it in a specific way (e.g., per 100,000 population), round it off as specified, and submit the actual number rather than selecting a multiple-choice answer. For these problems and for many of the other questions I had also included detailed feedback. Students were required to complete the corresponding problem set during the week following each class discussion, and they would immediately receive their score and the associated feedback for each question. Scores were automatically recorded in the Blackboard grade book.
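Blackboard's internal grading logic is not public, but the behavior of a "calculated numeric" item as described above can be sketched in a few lines of Python. Everything here (the function name, the example rate calculation, the feedback string) is illustrative, not taken from the course materials:

```python
# Sketch of auto-grading a "calculated numeric" question: the student
# submits a number (e.g., a rate per 100,000), which is compared to the
# answer key after both are rounded to the required precision.

def grade_numeric(submitted: float, key: float, decimals: int = 1,
                  feedback: str = "") -> tuple[bool, str]:
    """Return (correct?, feedback). Rounding both values first means
    a student answer of 24.3 matches a key of 24.342 when decimals=1."""
    correct = round(submitted, decimals) == round(key, decimals)
    return correct, "Correct." if correct else feedback

# Illustrative problem: 37 incident cases in 152,000 person-years,
# expressed per 100,000 person-years and rounded to one decimal place.
key = 37 / 152_000 * 100_000   # about 24.342 per 100,000
ok, msg = grade_numeric(
    24.3, key, decimals=1,
    feedback="Divide cases by person-years, then multiply by 100,000.")
```

On a wrong answer the student immediately sees the hint supplied as `feedback`, which is the "detailed feedback" mechanism described above.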

My plan was to build on this by creating pools of questions for each weekly topic area (e.g., measures of frequency, measures of association, bias, confounding, etc.). A teaching assistant helped solicit and collect relevant questions and answers from past quizzes and tests from my colleagues in the epidemiology department. We selected questions that addressed the core course learning objectives and added feedback, eventually ending up with 40-90 questions in each pool. I then replaced the weekly problem sets in Blackboard with quizzes that randomly drew 10 questions from the pool for each topic. Students were required to complete each post-class quiz at least once within one week after the class discussion. However, the test parameters were set to allow students to retake the quizzes as many times as they wanted throughout the semester, and each time a given quiz was opened, 10 questions were drawn at random from the corresponding pool. I set Blackboard to record the highest score achieved on any given quiz. There were 11 quizzes, each with its own pool, and at the end of the semester the average score on the 11 quizzes made up 25% of the semester grade.

When I announce this scheme on the first day of class, students are stunned at its novelty and apparent generosity. However, I quickly point out that while the quizzes are a component of their final grade, they really have to do well on the midterm and final exams in order to secure a passing grade. I emphasize that, while I cannot prevent them from getting help from others on the problem sets, there is no advantage to doing so, since there is no real risk in taking the quizzes themselves. Moreover, the quizzes help them prepare for the in-class midterm and final exams. In fact, some of the questions and problems on the exams come directly from the weekly quiz pools.


There were 51 students in the course the first semester I used this mastery model. During this initial trial, the quizzes were set to record the last score achieved for any given quiz in the online gradebook. Students took advantage of the mastery model, and almost always took the quizzes over until they had achieved a high score. Table 1 below shows the mean quiz scores recorded in the gradebook (i.e., the score on the last attempt) and also shows the mean scores achieved on the midterm and final exams.

Table 1. Mean recorded quiz scores (last attempt) and mean scores on the midterm and final exams.


My students have been universally enthusiastic about this new “mastery learning” format. However, several confounding variables make it difficult to draw a definitive conclusion about whether the mastery learning format led to improved performance: the increased rigor of the exams, variability in the quality and experience of the students from year to year and from section to section, and differences in the methods of instruction and assessment among sections.

However, in fall 2016 we introduced a totally new core curriculum that includes an integrated course called “Quantitative Methods in Public Health,” which now has uniform content across sections. Each fall this new core course will have roughly 400 students randomly assigned to one of six sections. We plan to create question pools for this new course, and one possibility is to use the mastery model in three of the six sections, with the other three providing a comparison group. The primary endpoints in this study would be a) scores on the midterm and final exams and b) scores on a follow-up exam given one year later in order to compare retention between the two groups.
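The planned design (roughly 400 students randomized across six sections, three mastery and three comparison) amounts to a balanced random assignment, which can be sketched as follows. The section labels and the choice of which sections form the treatment arm are illustrative assumptions, not details of the actual course:

```python
import random

def assign_sections(n_students: int, n_sections: int = 6,
                    seed: int = 0) -> dict[int, int]:
    """Randomly assign students 0..n-1 to sections 0..n_sections-1,
    keeping section sizes as balanced as possible."""
    rng = random.Random(seed)
    # Cycle through section labels to cover all students, then shuffle
    # so that assignment (not section size) is random.
    labels = [s % n_sections for s in range(n_students)]
    rng.shuffle(labels)
    return dict(enumerate(labels))

# Hypothetical treatment arm: sections 0-2 use the mastery model,
# sections 3-5 serve as the comparison group.
MASTERY_SECTIONS = {0, 1, 2}

assignment = assign_sections(400)
mastery_arm = [s for s, sec in assignment.items()
               if sec in MASTERY_SECTIONS]
```

With balanced assignment the two arms end up nearly equal in size, which supports the planned comparison of exam scores and one-year retention.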
