Featured Research

The Class Size Debate: What the Evidence Means for Education Policy

“The days in which lawmakers support schools that are somehow good enough for someone else’s children, but not for their own – those days must be over.” - Arne Duncan, U.S. Education Secretary, January 12, 2015

Why class sizes?

The more students a teacher is responsible for, the harder it is to teach. That’s hard to argue with. Yet the research and policy around this idea are surprisingly controversial. We can say with confidence that smaller class sizes improve test scores for younger learners. But the answers to deeper questions are less clear.

The issue of class sizes is a live policy debate. Teachers unions from Oakland, California, to Auckland, New Zealand, are advocating for smaller class sizes. At recent protests, the Los Angeles teachers union ranked smaller class sizes second only to higher salaries among its demands. The president of the National Education Association, Lily Eskelsen García, cites fighting for smaller class sizes as her motivation for becoming an education activist.

The idea of smaller class sizes is strongly supported by teachers and the general public. A national survey of 50,000 Americans found that reducing class sizes was perceived to be the best way to reform schools. And yet when public budgets are tight, class sizes are quick to grow. California’s class size cap increased substantially following the Great Recession.

While there is evidence that smaller class sizes improve student learning, the magnitude of the effect must be weighed against that of other reforms. For example, would paying higher salaries to retain a high-quality teacher workforce be money better spent?

The question of which students would benefit is also important: would smaller classes improve high achievers’ learning more than, say, students who are still catching up? Or would the gains in learning be spread out broadly among the children in the class? 

Jessica Tyson, a history and English teacher at Oakland Technical High School, said that the issue is more than just the right size of an individual classroom.  It’s also about making room for better lesson preparation. 

“Teaching fewer hours and having that extra hour in the day to collaborate, plan, grade and reflect on what students are doing would be very helpful to me,” Tyson said. 

“U.S. teachers spend more time in front of students by a significant margin than in other developed countries,” she said. “I was struck by how wonderful [it] would be … if I could just teach slightly less, how much more I could do.”

The Evidence on Class Sizes

To answer the question of how much difference smaller classrooms make for student learning, we need good data and strong analytic methods that can isolate the effect of class sizes. Researchers do not want to accidentally pick up other effects, such as differences in school resources, students’ characteristics or parental involvement.

Two of the best studies we have on this issue – Project STAR and a 1999 analysis of Israeli schools by economists Joshua Angrist and Victor Lavy – address these problems. Both show a positive effect of smaller class sizes in the first few years of school. I discuss this research in greater detail below.

Placing children in random class sizes: Lessons from Project STAR

Much of what we know about class sizes comes from an experiment called Project STAR (also known as the Tennessee Study). From 1985 to 1989, 11,600 Tennessee students from kindergarten through third grade were randomly assigned to one of three class types: small classes of 13–17 students, regular classes of 22–25 students, and regular classes of 22–25 students with a full-time teacher’s aide.

The results were strong. The average student assigned to the smallest classes had a reading score nearly 8 percent higher than students in the regular-sized classes, and smaller-class students, on average, achieved 9 percent higher math scores.

Students in smaller classes who completed high school were more likely to take college-entrance exams than students assigned to the regular-sized classes. The effects were even stronger for minority and less affluent students.

Education economists Alan Krueger and Diane Schanzenbach calculate that, based on Project STAR’s results, reducing class sizes from 22 to 15 students yields an annual return of roughly 5.5 percent, taking into account students’ increased lifetime earnings.

Project STAR generated the best data we have on class sizes, but its findings should be interpreted carefully. First, the experiment was conducted in larger urban schools, so results may not be as strong for smaller suburban schools. Second, it is hard to believe that assignment was perfectly random; it is easy to imagine some motivated parents lobbying to move their children into the smaller classes. Finally, Tennessee ranks well below average in education, which means Project STAR students may have benefited more from the class-size reductions than students in higher-performing states would.

This study also looks only at the effects on students who were in small classes in their first few years of school. Stanford economist Eric A. Hanushek and his colleague Steven Rivkin downplay the conclusion that many researchers draw from Project STAR – that smaller classes are a cost-effective policy to improve learning. In a 2006 paper emphasizing teacher quality over class sizes, they are skeptical. “In only 40 out of 79 schools,” they write, “did the kindergarten performance in the small classroom exceed that in the regular classroom.” This is better than random (which would be an improvement in about 26 out of the 79 schools). Nevertheless, they argue, the results show too weak an effect to justify wholesale change to class sizes.
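Hanushek and Rivkin’s 40-of-79 figure can be checked against chance directly. A minimal sketch in Python, taking as given the one-in-three chance rate implied by the 26-of-79 baseline above:

```python
from math import comb

n, k = 79, 40        # schools compared; schools where the small class came out ahead
p = 26 / 79          # chance rate implied by the "about 26 out of 79" baseline above

# Exact binomial tail probability: how often luck alone would produce
# at least k small-class "wins" across n schools
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n} schools by chance) = {p_value:.4f}")
```

The tail probability comes out well under 1 percent, which is why 40 of 79 reads as a real, if modest, small-class advantage rather than noise – the dispute is over whether an advantage of that size justifies the cost.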

Despite Hanushek and Rivkin’s reservations, Project STAR indicates room, on average, to improve test scores for children ages 5 to 8 in classes larger than 17 students. If real-world results replicate those found in Project STAR, we can expect higher student achievement from moving toward smaller classes.

Maimonides’ Rule: What a 12th-century rabbinic scholar can tell us about class sizes

Because the random allocation of students found in Project STAR is so rare, education researchers have found creative ways to mimic a randomized experiment. MIT professor Joshua Angrist and University of Warwick professor Victor Lavy use the unique education policy of modern Israel to study class sizes. The researchers harness Israel’s rule that public school classrooms must not have more than 40 students, inspired by the 12th-century rabbinic scholar Maimonides, who advocated class sizes smaller than 40 students.

Once a 41st student enrolls, the class splits into two, and this continues at each multiple of 40. Class size is therefore driven by the near-random chance of how close a school’s enrollment is to a multiple of 40, and less by confounding variables. The socioeconomic characteristics of schools are controlled for.
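The rule can be written as a simple formula: a cohort of e students is divided into the fewest classes of at most 40 each, so predicted class size is e divided by ⌊(e − 1)/40⌋ + 1. A short sketch of the resulting function (helper name ours):

```python
def predicted_class_size(enrollment: int) -> float:
    """Average class size under Maimonides' rule: a grade cohort is split
    into the fewest classes that keep each class at 40 students or fewer."""
    n_classes = (enrollment - 1) // 40 + 1
    return enrollment / n_classes

# The sawtooth around each multiple of 40 mimics random assignment:
for e in (39, 40, 41, 80, 81, 120, 121):
    print(e, predicted_class_size(e))   # 39.0, 40.0, 20.5, 40.0, 27.0, 40.0, 30.25
```

Enrollment just above a multiple of 40 therefore produces sharply smaller classes than enrollment just below it, and it is this discontinuity – not family or school characteristics – that identifies the effect.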

For 5th-grade students, Angrist and Lavy find that a 10-student increase in class size explains a 6.5 percent drop in the average student’s reading comprehension score and a 4.5 percent drop in math scores. They do not find statistically significant effects of class size for 3rd graders, possibly due to test-score training that year. The 5th-grade effects of class size on achievement are strong, giving further evidence that class sizes matter; the non-effects and weaker effects for 3rd and 4th graders, however, highlight some uncertainty in the literature and data.

The heart of the academic debate on class sizes  

Significant and insignificant results pepper the class-size literature. Karen Akerhielm, a graduate of Yale’s economics PhD program, finds that a 10-student reduction in class sizes explains approximately a 5 percent improvement in science and history test scores. Stanford economist Caroline Hoxby, in a 2000 paper, finds no relationship between school districts’ test scores and estimated class sizes, nor any evidence that the effect is stronger for schools with a higher proportion of low-income or African American students. This is in contrast to the strong effects found in Project STAR.

Conflicting results such as these – and differing opinions on how to weight the findings of each paper in meta-analyses – have led two of the most prominent researchers in the field, Hanushek and Krueger, to draw different conclusions. In the 2002 book The Class Size Debate, Hanushek writes, “despite the political popularity of overall class size reduction, the scientific support of such policies is weak to nonexistent.” In the same book, Krueger writes that “the strongest available evidence suggests a connection” and that reducing class sizes from 22 to 15 students per teacher has a positive benefit-cost ratio at a real discount rate lower than 6 percent.
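Krueger’s benefit-cost claim is, at bottom, a discounting exercise: the costs of smaller classes are paid up front, while the earnings benefits arrive decades later, so the verdict hinges on the discount rate. A stylized sketch with invented numbers (the costs, earnings gain, and timing below are our assumptions for illustration, not Krueger’s figures):

```python
def present_value(cashflows, r):
    """Discount (year, amount) pairs back to year 0 at annual rate r."""
    return sum(amount / (1 + r) ** year for year, amount in cashflows)

# Illustrative assumptions only: four years of extra per-pupil cost (K-3)
# for a 22-to-15 reduction, then a modest annual earnings gain from
# roughly age 22 to 65.
costs = [(t, -3000) for t in range(0, 4)]
benefits = [(t, 2000) for t in range(17, 61)]

for r in (0.04, 0.06, 0.08):
    net = present_value(costs, r) + present_value(benefits, r)
    print(f"discount rate {r:.0%}: net present value per pupil = {net:+,.0f}")
```

Under these made-up numbers the investment looks worthwhile at low discount rates and not at high ones, which is the structure of the disagreement: Krueger’s actual figures put the break-even real discount rate at roughly 6 percent.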

While this academic debate over the methods and results of experiments on class size reduction continues, the issue has lost much of its momentum in education policy circles. Much of this goes back to the related debate about what kind of reforms produce the biggest impact for the money spent. 

Eric A. Hanushek, for example, argues that focusing on teacher quality rather than class sizes yields a better return on investment. In an influential 2005 paper on the factors that shape student achievement, Hanushek compares reducing a class by ten students with replacing the teacher with a moderately better one.

“The effects of a costly ten-student reduction in class size are smaller,” Hanushek says.

Hanushek is not claiming that smaller classes are ineffective at improving achievement, just that they may not be the right priority. The jury is still out on whether an increase in ‘teacher quality’ is an appropriate comparison in terms of cost and amenability to policy change. When we talk about improving teacher quality rather than decreasing class sizes, are we talking about higher salaries to attract and retain better teachers? Higher credential requirements for new teachers (which have been shown to be ineffective)? More on-the-job training? Teacher quality means different things to different people, which suggests caution is warranted when comparing it with class sizes.

In short…

Despite strong arguments on both sides of the debate, there are reasons to take small class sizes seriously. The methodologically strongest experiment, Project STAR, shows strong results. Positive results are common in the literature. Private schools reveal parents’ and teachers’ preferences for smaller student-teacher ratios. And class sizes are large in many U.S. school districts – the problem may be even worse than the official statistics suggest.

“Most of the class sizes I’ve had have been around 28 people,” Oakland teacher James Malamut said. “When I worked for the School District of Philadelphia it was 31 on paper, but there would be times when it was closer to 35. They’d sneak a few kids in there unofficially.”

When teachers repeatedly raise an issue, policymakers should listen. Teachers close to the actual practice of teaching strongly believe in reducing the student-to-teacher ratio. A 2007 survey found that 81 percent of American teachers would prefer smaller class sizes over higher salaries.

“I think it would have a really significant impact,” Malamut said. “I’d be able to give more individualized attention in the class. I’d have more time for preparation. And when grading I’d be able to give more individualized feedback.”

The evidence on the relationship between class size and achievement is not conclusive, but there is good reason to believe that smaller classes could benefit students in the early years of primary school, and especially minorities and low-achieving students. Whether it follows that class-size reductions are poor value for money is less certain. Reducing class sizes, particularly for 6- to 14-year-olds where the evidence is strongest, should be considered – alongside teacher training, salaries, accreditation, and collaboration initiatives – as a serious, albeit expensive, policy option for improving educational outcomes and reducing disparities in them. Does class size matter? It certainly does.

__________

Darian Woods is a 2016 Master of Public Policy candidate at the Goldman School of Public Policy at the University of California, Berkeley.

This article was originally posted on PolicyMatters Journal, a publication put forth by the graduate students of GSPP.

Journalism for Social Change

An analysis of the first Massive Open Online Course dedicated to the practice of solution-based journalism.

INTRODUCTION

In March 2015, Journalism for Social Change transitioned from an on-the-ground class – teaching graduate students how to use solution-based journalism to positively impact the United States child welfare system – into a MOOC, or Massive Open Online Course.

Results after two offerings of the class suggest that the Journalism for Social Change (J4SC) MOOC was successful in delivering journalism instruction; informing a disparate range of students about U.S. domestic policy focused on vulnerable children; and in leading students to produce high-quality, solution-focused journalism.

They also point to barriers inherent in education delivered in a massive open online format; shortcomings in the design of learning elements; and issues in retaining engagement over the seven weeks that each online class was offered.

But, before launching into a discussion of the outcomes, it is important to discuss J4SC’s history as a brick-and-mortar class, and how its learning goals were translated for the eventual MOOC.

HISTORY

In the spring of 2012, Fostering Media Connections, a journalism non-profit based in San Francisco, launched a class called Journalism for Social Change [1] at the University of California, Berkeley.

The course, which is currently offered at the Goldman School of Public Policy, attracted graduate students from Goldman, the School of Journalism and the School of Social Welfare. Students were trained on what solution-based journalism is and how it can be applied to the field of child welfare, and were then tasked with producing solution-based news stories, in the medium of their choice, for publication in The Chronicle of Social Change, an online news site published by Fostering Media Connections.

Every week, students heard lectures by top experts in journalism, public policy and child welfare research. The course was immediately followed by a paid summer fellowship for top students from the respective schools, who spent 10 weeks covering the foster care system at the state and federal levels.

In spring 2013, Journalism for Social Change returned with a broader and more ambitious focus. The course and subsequent fellowship examined child maltreatment as a public health issue and explored ways journalism can be used to hold those charged with protecting children accountable.

In addition to new subject matter, the program expanded to the University of Southern California’s Sol Price School of Public Policy with graduate students who hailed from Price, the Annenberg School for Journalism and the School of Social Work.

In fall 2013, a variant of the program was taught to undergraduates in San Francisco State University’s Department of Journalism.

By 2014, J4SC had trained nearly 150 students in the classroom, and its parent organization, Fostering Media Connections, had hosted more than a dozen summer fellows.

Those students and fellows produced hundreds of stories that appeared in The Chronicle of Social Change and were then picked up by top-tier media outlets. This sustained coverage has repeatedly influenced public policy.

Student coverage that appeared in The Chronicle of Social Change sparked follow-up stories and segments in The National Journal, The Associated Press, Roll Call, The San Francisco Chronicle, The Contra Costa Times, San Jose Mercury News, KPIX TV, KGO Radio News, KFBK Radio, The Sacramento Press, EdSource and Witness LA, among others.

In California, student coverage has been instrumental in preserving educational provisions for foster youth in the state budget and in strengthening services available to transition-aged youth as California rolled out the extension of foster care to age 21. On the federal level, J4SC fellows detailed the unintended barriers to the educational achievement of foster youth created by the Family Educational Rights and Privacy Act (FERPA), coverage that coincided with the introduction of legislation to amend the law. In 2013, President Barack Obama signed that legislation into law.

And most recently, student coverage of the application of predictive risk modeling to the assessment of child abuse risk has garnered follow-up coverage in far-flung publications, including Forbes.com and Bloomberg News. This coverage has helped drive a high-level policy conversation about the use of “Big Data” in child abuse prevention.

In the winter of 2015, the designers of J4SC launched an online variant of the course – quite possibly the first MOOC dedicated to solution-based journalism.

J4SC BECOMES A MOOC

In June 2013, U.C. Berkeley’s provost sent out a request for proposals for a new project, initially bankrolled by the school and Google, called MOOCLab.

Berkeley had signed a contract with a consortium of elite universities partnering to provide courses on the edX [2] online learning platform, and was intent on attracting a handful of classes to compete for MOOCLab grants to build online variants of brick-and-mortar courses. Journalism for Social Change was one of three classes awarded a grant to become a MOOC.

In January 2014, the J4SC team began working alongside the Berkeley Resource Center for Online Education (BRCOE) and the California Social Work Education Center (CalSWEC) [3] to build learning modules and map out a plan to evaluate student engagement in the course.

The learning goals of the course ranged from comprehension of what solution-based journalism is, to instruction on how to report a news story, to mastery of the major themes in child welfare.

THE CHALLENGE: TRANSLATING LEARNING GOALS TO THE ONLINE WORLD

MOOCs, or “Massive Open Online Courses,” became popular and available en masse circa 2012, with a strong initial focus on STEM (science, technology, engineering and math) courses. This was not coincidental; any course with thousands of students enrolled cannot be heavy in instructor-student interaction.[4] In fact, much of the initial attention among the inner circle of those teaching or providing MOOCs was on automating teaching and learning [5] in STEM courses. As a greater array of subjects began to appear as MOOC offerings, questions surrounding how to teach non-STEM courses began to seed discussions about effective teaching methods for humanities courses at scale. [6]

Enter J4SC, a course seeking to inspire civic engagement in its students and built on an interdisciplinary humanities foundation. While some best practices for teaching skills such as writing had emerged, there had not yet been a MOOC asking students to apply journalism skills to highlight and impel policy solutions to endemic problems facing American children. Additionally, no other MOOC had offered students the opportunity and incentive of having their final project – in J4SC’s case, a solution-based article – published.

These factors make the J4SC MOOC an interesting case because they address both student motivation and the notion of service learning, grounding the often-lofty study of humanities in an applied, pragmatic and progressive experience. For the burgeoning field of MOOC research, the question of motivation in J4SC’s design was the most important. Student retention and engagement are the two most frequently cited problems in MOOCs, so the question of whether, or to what extent, the course structure acted to intrinsically or extrinsically motivate students to persist through the final project demanded further study.

Converting the J4SC face-to-face course into an online course open to the masses was shaped by several factors: funding constraints, the goal of serving potentially thousands of students, and the technical limitations of the required platform.

The course conversion process was underwritten by a grant, which pre-assigned funds for course development. As video production is costly, and the instructional content of MOOCs is weighted heavily in favor of audio-visual media, there was little funding remaining for other non-text instructional media (e.g., interactive infographics or simulations).

Second, the course was offered via the edX consortium,[7] which has developed its own learning management system (LMS) of the same name. The structure of the edX LMS in part determined the structure of the online J4SC course. The LMS is built so that concepts are explored laterally, with different types of content (e.g., videos, quizzes, discussions, assignments) added in click-through frames for each specific topic. The LMS also shaped the ways students could, or could not, be assessed or interact as peers. The instructor thus went through a development process of adjusting his materials to the communication medium.

Finally, research on student persistence in MOOCs has shown that the shorter the course duration, the more likely students are to complete it. [8] This finding, along with advice from course developers, resulted in a compression of the total amount of content in, and the duration of, the course. It also reduced the amount of instructor interaction, both automated and manual. The course was organized into seven units, corresponding to seven weeks, with each week beginning with lecturettes, readings, assignments, quizzes, or guest-speaker videos. Because MOOCs can have thousands of students enrolled, the instructor also had to choose teaching methods that leaned heavily on peer-to-peer interaction and reduced didactic feedback (e.g., grading written assignments or offering personalized critiques).

Given a different platform and a narrower intended audience, other types of rich instructional media and online teaching pedagogies could bolster the content.

EVALUATION METHODS

J4SC was first offered as a small, private online course (SPOC) on the edX Edge platform. This more-intimate setting allowed the instructor to test student reactions to the course design and work out kinks with the technology. The MOOC began just after the SPOC had closed. Multiple methods were employed to measure the impact of the course on students in both iterations of the course.

Pre- and post-course surveys: For the SPOC, 77 students completed the pre-course survey and 12 completed the post-course survey. Both pre- and post-course surveys contained several attitudinal items that measured respondents’ self-efficacy, personal motivation and comfort with online learning (see Figure 1). With such a small sample size, it was not possible to check for significant demographic differences between those who did and did not complete the post-course survey.

In the MOOC, however, numbers were larger; 1,244 students completed the pre-course survey and 72 completed the post-course survey. Those who completed the post-course survey had significantly higher ratings of self-motivation and personal responsibility at the start of the course (see Figure 2).
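The completer/non-completer comparison behind Figure 2 is a standard two-sample test. A minimal sketch with synthetic ratings standing in for the real survey responses (the group distributions here are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 1-to-5 ratings on a pre-course attitudinal item; a real analysis
# would use the actual survey responses in place of these draws
completers = rng.integers(3, 6, size=72)         # the 72 who finished the post-survey
non_completers = rng.integers(2, 6, size=1172)   # the remainder of the 1,244

# Welch's t-test compares the group means without assuming equal variances
t_stat, p_val = stats.ttest_ind(completers, non_completers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```

A significantly positive t statistic on items like self-motivation would correspond to the pattern reported in Figure 2: students who went on to complete the post-course survey already rated themselves higher at the start.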

Qualitative coding: The course assistant assessed all assigned discussion posts for quality on a three-point scale. This scale matched the rubric students were given for self-evaluations (see Figure 3).

Student reflections: The final assignment in the course was to post to the discussion board with reflections on J4SC. The course assistant read each of these, looking for common themes in the most substantive posts.

Limitations: Additional usage data for respondents was not easily available from edX. In the future, it would be useful to have these measures (e.g., course completion or time spent on online materials) to develop measures of engagement.

EVALUATION        

SPOC survey results

Students in the small private online course (SPOC) were recruited through Listservs, Twitter and Facebook, focusing largely on audiences connected to social work. It is thus not surprising that two-thirds began the course with a background in child welfare. Nearly equal shares of respondents rated themselves beginners (35%), intermediate (35%) and expert (30%) in the field of child welfare at the start of the course. Smaller shares reported backgrounds in public policy (40%) and journalism (35%); more than half assessed themselves beginners in these fields. These self-perceptions did not change in the post-course survey.

Three-quarters had at least a bachelor’s degree, and the majority were relatively early in their careers (ages 23 to 35). Of those with a bachelor’s degree, one-quarter also had a master’s or doctorate. Eighty-six percent were women. “Personal enrichment” was the main motivation for 74% of respondents, followed by a desire to “improve their occupational skills” (39%).

MOOC survey results

Students for the MOOC were recruited much more broadly, including the SPOC channels but also via public listing on edX.org. It is not surprising, then, that they had less previous experience with the topics. Prior to the course, more than 50% of students rated their expertise as novice or beginner in child welfare, public policy and journalism. 

MOOC participants had more formal education: a majority held a university degree, with 36% reporting a bachelor’s degree and 33% a master’s. In terms of age, 16% were college-aged (18 to 22) and 65% were under age 35, a larger share of young adults than in the SPOC. A smaller share of MOOC students (64%) than SPOC students were women. A share similar to SPOC participants, 69%, took the course for “personal enrichment.” About half were taking the course to “improve their occupational skills” and 8% for college credit (an option not available in the SPOC).

Post-course, more MOOC participants rated their expertise in child welfare, public policy and journalism as intermediate or expert than had pre-course. As a result of J4SC, students were more comfortable contacting leaders and conducting interviews (see Figure 1). Most importantly, they were more interested in changing public policy and more likely to believe that they could do so.

Student perspectives

The course was designed with three modules in each unit: journalism, child welfare and public policy. In final reflections, students recognized changes in their attitudes in each of these areas.

Journalism. Many cited activities practicing interviewing skills and evaluating data for stories as useful for their own work as journalists and as readers of news. This is reinforced by survey data (see Figures 5 and 6). At least three students started blogs as a result of participation in the course, and more than a dozen stories that began in the course were published online (see Figure 4).

Some students said the course gave them confidence in their potential in journalism: “This course has encouraged me to go ahead in my career with the purpose of addressing journalism toward social change,”[9] one wrote. Another said that “interviewing people for the main story ... made me realize that I can be a reporter.”[10]

Others made specific mention of the skills they gained, connecting them to a career goal: “I plan to take [these] new skills and practice by writing more on children of prisoners.”[11] “I intend to write for newspapers and magazines about the ill treatment of children .... Interviewing children was a wonderful experience for me.”[12]

Several reflected on their changed views of the role of journalists in society:

It has been an eye-opening course ... [I am now] thinking about the power of journalism to make a difference. (MOOC student discussion post)

Moving forward I will read the news in a more analytic way. Even though the subject matter was very focused, I believe we learned skills to apply to anything. (MOOC student discussion post)

This course has renewed my faith in journalism as a profession. (Yay!) [SPOC student discussion post]

I have learn[ed] what it takes to become a journalist. ... I have more respect for the work they do and what they do to put a story out there on paper for the public. (MOOC student discussion post)

Child welfare. In general, MOOC students ended up learning more about child welfare than they expected. Many reported that the topic had not originally interested them much, but their views changed as they engaged with debates in the field. Several reported an intention to work with specific child-welfare organizations in their communities as a result of the course.

My own views of maltreatment of kids [have] been increased to say the least. I am more sensitive to this subject and whenever I get a chance to talk about the matter, I will without reservation. (MOOC student discussion post)

At first I was kind of reluctant to deal only with the single topic. As a French [student] I felt I could be lost. ... But then I realized that in order to produce ... solution-based journalism, you need to be expert in your field. (MOOC student discussion post)

Public policy. Finally, students gained an understanding of public policy mechanics that they planned to apply in advocacy. They valued having the policy process demystified and tended to see these skills as transferable to other policy areas. This echoes the finding that MOOC students gained confidence in their ability to conduct interviews, contact leaders and change public policy.

Earlier, I had an impression that journalism was meant to mirror exactly what is happening in the society, giving people a reflection of what they’re doing. This course has made me realise that there are always solutions ... and writing about such stories can make journalism a tool for social change. (MOOC student discussion post)

I have gained a wealth of knowledge from both the legislative and journalistic perspectives and I intend to use the knowledge to voice various challenges that should be addressed by the government. (MOOC student discussion post)

I intend to continue to speak out against child abuse .... I believe that this new knowledge can help me in addressing policy makers. (MOOC student discussion post)

I started the course with very little interest in doing a deep dive on the subject of California’s child welfare system. Yet I was reminded ... any policy topic has multiple layers and complexities. The layers offer the opportunity for many different kinds of stories, but also demand writers who can also move from specific to the larger story. (MOOC student discussion post)

I think I have become a more critical thinker and ... observer. (MOOC student discussion post)

Students’ active engagement with the material and one another helped them practice their critical thinking, writing and reading skills while they updated their beliefs about social policy.

LESSONS LEARNED

While both the SPOC and MOOC were well-received by students, and their performance showed promise, the course instructor and assistant learned several valuable lessons through this experience that are likely to be useful to humanities or advocacy-focused online courses.

1. Learn from your students, and engage them personally.

The instructor and course assistant made several changes to the structure and pacing of the MOOC in response to SPOC students’ experiences. These changes anticipated the challenge of running a course with hundreds more students, many from countries outside the United States.

Students in the SPOC had to submit three pitches; MOOC students submitted one. Fewer pitch assignments made it possible for the instructor and assistant to give feedback to each student, something SPOC students had said they valued. Even short or boilerplate replies from staff in the MOOC appeared to be valuable to students.

Course material in the MOOC was released one week at a time rather than all at once, as it had been in the SPOC. Putting students on similar timelines encouraged them to interact with one another in real time. Both SPOC and MOOC students reported forming relationships and collaborating with other students via the discussion board:

Networking and interacting with other students is also a positive experience that I will keep with me. The mood was really good among students and I think there is a great potential to strengthen the bonds created so far and make new ones in the future (MOOC student discussion post).

The course assistant and instructor also engaged students by replying to discussion posts with further questions. This initially took a significant amount of time, so for the MOOC, the instructor and assistant developed boilerplate responses. Students did not seem to notice, or at least did not complain about, these somewhat generic responses; instead they reported feeling more engaged by the staff than in other MOOCs they had experienced.

2. MOOCs can inspire significant changes in attitudes and beliefs.

Although the response rate for the MOOC was low, respondents showed significant changes from pre- to post-course, indicating that J4SC had a positive impact on their attitudes and beliefs. A follow-up survey could be sent to respondents to get a better sense of what they are applying from the course a few months out.

As shown in the student perspectives section above, even the process of acquiring skills can shift attitudes about a topic or an entire field. While some students were frustrated by the U.S. focus of the child welfare topics, others saw it as a training ground for understanding how much a solution-based journalist needs to know about the issue they would like to impact. The variety of the final stories’ topics and settings highlighted just how broad the subject could be in practice.

3. Some students are motivated by publicity.

The courses each culminated with an 800-word story assignment on a child welfare topic. Students were promised a shot at publication in The Chronicle of Social Change, a leading outlet in that field, if they submitted their stories by the deadline. While a small share of total enrollees completed the story and had it published, the ones who did were significantly more engaged in the course throughout. In turn, their visible participation online was a resource to other students in the course.

At the end of the SPOC, several students who had published stories participated in a live, 30-minute video chat online. MOOC students were invited to watch as five SPOC writers discussed their subjects, from the juvenile justice system to a profile of a foster youth mother. This recording has been viewed more than 1,265 times in three months.

Three students from the MOOC participated in a live video chat at the end of that course. They included a 19-year-old home-schooled student living in Malaysia who wrote about child marriage in Malawi; an Italian freelance journalist working for a human rights organization who examined “rehoming” in Arkansas; and a college student in New Delhi, interested in the child welfare focus, who wrote about child abuse in India. The recording had been viewed 439 times just over two months later.

Two students who appeared in the video chat have continued to write for The Chronicle of Social Change, an indication of their continued interest in the subject matter and the practice of solution-based journalism.

4. Set expectations early and often.

Some students felt overloaded with or intimidated by the course work. Many assignments involved reading that took longer than estimated, especially for those who were not as fluent in English. Those with limited background knowledge about child welfare or the American context generally reported that they were reluctant to join discussions with their peers. However, among those who mentioned this as a barrier, all also said they gained confidence as the course progressed.

In each of these cases, setting clear expectations up front can encourage students to participate at a level they are comfortable with. Framing self-directed learning as a benefit rather than a restriction seemed to improve attitudes among students who were frustrated by the pace.

5. Ensure your platform can provide the data you need.

The absence of data from edX limited the evaluation. Data specific to students’ actual engagement in the course (e.g., time spent or completion) were unavailable. Such data could be linked with students’ attitudinal data and would better capture the changes occurring as a result of the course. The data currently gathered speak to respondents’ reactions before and immediately after the course. A follow-up survey several months later, addressing the application of learning and changes in behavior, might be an informative next step.

CONCLUSION

Taking an in-person class and converting it into a successful MOOC is a difficult task. While certain aspects of the Journalism for Social Change class were degraded in the conversion, others were augmented.

For example, the semester was condensed into seven weeks. While this meant a reduced workload, it also required the instructor to synthesize lessons into a much shorter format, which, as evidenced by the survey, seems to have proven successful. This unexpected result will help inform future iterations of the Journalism for Social Change on-the-ground class.

But the through-line is that there is a global community of people interested in solution-based journalism who are willing to apply the skills J4SC teaches to help solve endemic world problems. If the program can be expanded to other subject areas, and the class itself streamlined, there is the potential to train an army of solution-based journalists empowered to produce stories that change the course of public policy.

Such potential warrants further exploration and study.

__________

Figures

FIGURE 1.  Changes from pre- to post-course survey of MOOC students.

Means are presented on a 5-point scale from 1 (Strongly Disagree) to 5 (Strongly Agree). Higher means indicate greater agreement. (n = 72)

FIGURE 2. Significant differences on the pre-course survey between those who did and did not complete the MOOC post-course survey

Means are presented on a 5-point scale from 1 (Strongly Disagree) to 5 (Strongly Agree). Higher means indicate greater agreement.

FIGURE 3. Journalism for Social Change Discussion Rubric

FIGURE 4. J4SC MOOC and SPOC Student Story Headlines (Partial List)


  • Arkansas Becomes Fifth State to Regulate Re-Homing In Wake Of High-Profile Case [13]
  • Prosecuting Youth as Adults Fails to Address Trauma [14]
  • Legalization Without Citizenship for Stateless Children in Sabah, Malaysia [15]
  • Child Safety Versus Family Preservation In Sierra Leone [16]
  • Bill to Expand Foster Care Benefits for Probation-Involved Youth Clears Key California Senate Committee [17]
  • The Crime of Punishment [18]
  • Seven Years Later: States Working Toward Promise of a Medical Home for Every Foster Youth [19]
  • Discontinuing Solitary Confinement in Juvenile Facilities [20]
  • The Unanswered Question [21]
  • UK Report Says British Government Fails Homeless Youth [22]
  • Journalism Can Incite Social Change: Here’s How Media Coverage of One Awareness Campaign Made A Difference! [23]
  • The Vaccine for Pollyanna Attitudes Toward Public Health and Religious Beliefs [24]

FIGURE 5

FIGURE 6

__________

Sources

[1] CBS SF Bay Area (Jan. 2012), UC Berkeley Class Promotes Journalism for Social Change, Link: http://sanfrancisco.cbslocal.com/video/6661720-uc-berkeley-class-promotes-journalism-for-social-change/

[2] edX is a consortium of elite universities partnering to provide MOOCs. Harvard and MIT developed the edX learning management system as the ‘open source’ vehicle through which to offer the massive courses. Any person offering an “edX” course is required to have that course built and hosted inside the edX LMS. Berkeley is the third partner in the edX consortium and, instead of directly funding the consortium via membership fees, contributes in-kind services such as engineers who work on code-development projects.

[3] CalSWEC provides curricula and training to 21 graduate and undergraduate schools of social work throughout the state. They are a key partner, not only in regard to the research, but also in helping with the overall expansion of the J4SC program.

[4] Ho, Andrew Dean and Chuang, Isaac and Reich, Justin and Coleman, Cody Austun and Whitehill, Jacob and Northcutt, Curtis G and Williams, Joseph Jay and Hansen, John D and Lopez, Glenn and Petersen, Rebecca, HarvardX and MITx: Two Years of Open Online Courses Fall 2012-Summer 2014 (March 30, 2015). Available at SSRN: http://ssrn.com/abstract=2586847

[5] Yudelson, M., Hosseini, R., Vihavainen, A., & Brusilovsky, P. (2013). “Investigating Automated Student Modeling in a Java MOOC.” In the 7th International Conference on Educational Data Mining. Veletsianos, G., & Miller, C. (2008). “Conversing with Pedagogical Agents: A Phenomenological Exploration of Interacting with Digital Entities.” British Journal of Educational Technology, 39(6), 969-986.

[6] Reichard, Cara. (2013). “MOOCs face challenges in teaching humanities.” The Stanford Daily.

[7] Harvard, MIT and UC Berkeley are founding members.

[8] Ho, et al.

[9] MOOC student discussion post

[10] MOOC student discussion post

[11] MOOC student discussion post

[12] MOOC student discussion post

[13] https://chronicleofsocialchange.org/news/arkansas-becomes-fifth-state-to-regulate-re-homing-in-wake-of-high-profile-case/9901

[14] https://chronicleofsocialchange.org/analysis/prosecuting-youth-as-adults-fails-to-address-trauma/9525

[15] https://chronicleofsocialchange.org/news/legalization-without-citizenship-for-stateless-children-in-sabah-malaysia/9881

[16] https://benjaminstephensmedia.wordpress.com/2015/04/11/child-safety-verses-family-preservation-in-sierra-leone-the-failings-of-a-single-solution/

[17] https://chronicleofsocialchange.org/news/bill-to-extend-foster-care-benefits-for-probation-involved-youth-clears-key-california-senate-committee/9924

[18] https://chronicleofsocialchange.org/opinion/the-crime-of-punishment/9559

[19] https://chronicleofsocialchange.org/analysis/seven-years-later-states-working-toward-promise-of-a-medical-home-for-every-foster-youth/9551

[20] https://chronicleofsocialchange.org/news/discontinuing-solitary-confinement-in-juvenile-facilities/9544

[21] https://chronicleofsocialchange.org/opinion/the-unanswered-question/9548

[22] https://chronicleofsocialchange.org/research/uk-report-says-british-government-fails-homeless-youth/9982

[23] https://fishershannon.wordpress.com/2015/04/04/journalism-can-incite-social-change-national-sexual-assault-awareness-and-prevention-month-unite-against-rape-shannon-fisher

[24] https://verdict.justia.com/2015/02/12/vaccine-pollyanna-attitudes-toward-public-health-religious-beliefs

Lower Premiums for States With Stronger Health Insurance Rate Review Authority

MINNEAPOLIS/ST. PAUL – A new study published today found that state-level prior approval authority over individual-market health insurance rates, in effect from 2010 to 2013, was associated with a 10-percentage-point lower rate of increase in premiums.
 
The research was published in the August issue of Health Affairs, and was conducted at the University of Minnesota School of Public Health in collaboration with the University of California, Berkeley School of Public Health.
 
“This study provides the first evaluation of state rate review authority in the individual market in the years immediately following the implementation of the Affordable Care Act,” said Pinar Karaca-Mandic, Ph.D., lead author and associate professor in the University of Minnesota School of Public Health. “We found significant variations in rate review authority across states and these were highly associated with differences in adjusted premiums.”
 
Premiums were adjusted for insurance carrier, insurance market, provider market, political, and population differences across states.
 
The Affordable Care Act (ACA) requires carriers in certain categories of health insurance to provide public justification for rate increases of 10 percent or more. During the 2010-13 period, 44 states upgraded their health insurance rate review programs by hiring or contracting actuarial services, upgrading information systems, and enhancing insurance rate transparency.
 
Study authors collected data on rate review authority and anticipated loss ratio requirements from each state and the District of Columbia by examining statutes, regulations, and bulletins, and by distributing a questionnaire to each.
 
“Our findings suggest that rate review by states with prior approval authority may be a viable option for moderating the growth in health insurance premiums,” said Richard Scheffler, Ph.D., principal investigator and distinguished professor of health economics and public policy at UC Berkeley. “Given the massive rate increases in premiums being proposed this year, our results could not be timelier.”
 
During the 2010-13 time period, the study also found:
 

·  States with prior approval authority and a loss ratio requirement in the individual market experienced lower health insurance premiums.

·  States with prior approval authority in the individual market experienced lower growth in health insurance premiums.

 
"Many states bolstered their rate review programs after the passage of the Affordable Care Act using federal grants, but will now need state funding to maintain their programs," said Brent Fulton, Ph.D., M.B.A., assistant adjunct professor of health economics and policy at UC Berkeley.
 
The authors say that, given the expansions in individual-market coverage, it will be important to further evaluate state rate review authority and activity to determine whether the study’s findings hold over a longer time period and are generalizable throughout the expanded individual insurance market.

The authors are grateful for funding provided by the Robert Wood Johnson Foundation through its Changes in Health Care Financing and Organization Program (Grant No. 69906). Pinar Karaca-Mandic also acknowledges funding from the National Institute on Aging (Grant No. K01AG036740).

For more information, please contact katiphillipsucb@berkeley.edu or visit Petris.org.

Money Does Matter After All

This is a response to “Money Matters After All?” by Eric Hanushek, published July 17, 2015 on the Ed Next blog, which was a response to “Boosting Educational Attainment and Adult Earnings,” by C. Kirabo Jackson, Rucker C. Johnson and Claudia Persico, published in the Fall 2015 issue of Education Next.  Eric Hanushek has responded to this piece in a blog entry published on July 20, 2015 on the Ed Next blog. 

We would like to thank Eric Hanushek for his comments and interest in our work. We appreciate the opportunity to offer a brief response. Hanushek provides an accurate description of our study and is correct that the methodological details matter. His critique, however, is not an objection to any of our methodological choices; he instead disputes our results. He states “while these [questions about measurement and how spending reactions to court decisions are measured…] are important methodological issues, it is more useful to focus on the substance of their findings.” We take this as clear evidence that Hanushek finds our methodology sound. When the methods are sound, the results must be taken seriously, and we appreciate that Hanushek has done so in this case. His single important critique of our key results is the “time trend” argument. Following the summary of our findings below, we present the “time trend” argument and highlight its flaws. We then discuss how we overcome the problems of the previous studies on which Hanushek bases his opinions. Finally, we discuss how our results differ from the previous literature because (a) existing studies suffered from biases, and (b) the spending increases analyzed in our study went to more productive inputs than those examined in other studies.

Overview of our findings:

In most states, prior to the 1970s, most resources spent on K–12 schooling were raised at the local level, through local property taxes (Howell and Miller 1997; Hoxby 1996). Because the local property tax base is typically higher in areas with higher home values, and there are persistently high levels of residential segregation by socioeconomic status, heavy reliance on local financing contributed to affluent districts’ ability to spend more per student. In response to large within-state differences in per-pupil spending across wealthy/high-income and poor districts, state supreme courts overturned school finance systems in 28 states between 1971 and 2010, and many states implemented legislative reforms that spawned important changes in public education funding. The goal of these school finance reforms (SFRs) was to increase spending levels in low-spending districts, and in many cases to reduce the differences in per-pupil school-spending levels across districts. By design, some districts experienced increases in per-pupil spending while others may have experienced decreases (Murray, Evans, and Schwab 1998; Card and Payne 2002; Hoxby 2001). Our key finding is that increased per-pupil spending, induced by court-ordered SFRs, increased high school graduation rates, educational attainment, earnings, and family incomes for children who attended school after these reforms were implemented in affected districts. We find larger effects for low-income children, such that these reforms narrowed adult socioeconomic attainment differences between those raised in low- vs. high-income families.

What we do not find:

There are two misunderstandings about our findings that critics appear to make. As such, we feel it is helpful to outline what we do not conclude from our study.

1. We do not find that merely increasing spending will improve student outcomes irrespective of how it is spent. Though Hanushek’s critique may lead readers to think otherwise, at no point in our paper do we make claims suggesting that “policy makers…only have to concern themselves with how much money was provided to schools and not with how money was used.” We are very careful to highlight that how money is spent matters. We find that increased spending that leads to reductions in class sizes, increased teacher salaries and more instructional school days in a year improved outcomes. As such, one of our key conclusions is that, while how much money one spends does clearly matter, how it is spent is very important. The final lines of our full paper read, “Money alone may not be sufficient, but our findings indicate that provision of adequate funding may be a necessary condition. Importantly, we find that how the money is spent may be important. As such, to be most effective it is likely that spending increases should be coupled with systems that help ensure spending is allocated toward the most productive uses.”

2. We do not find that increasing spending by 22.7 percent will eliminate all differences in outcomes by socioeconomic status. This is a common misunderstanding of our findings, and one that Hanushek also makes. We find that a 22.7 percent spending increase is large enough to eliminate the average outcome differences between the poor (those with family incomes below twice the poverty line) and the non-poor (those with family incomes above twice the poverty line). Because there are large differences by socioeconomic status among those in each income group (e.g., the wealthy tend to have better outcomes than the average non-poor person, and the very poor tend to have worse outcomes than those just above the poverty line), eliminating the average difference in outcomes across the two broad groups does not eliminate all differences by socioeconomic status within each group. Simply put, the fact that a 22.7 percent spending increase is large enough to eliminate the average outcome difference between the poor and non-poor does not mean that it is large enough to eliminate the difference in outcomes between the very poor and the very wealthy, or differences across other measures of socioeconomic status. Nor do we speculate that this spending increase would eliminate differences in outcomes by other categories such as race and gender. To illustrate this logic, consider the following simple mathematical example.

Illustrative example: There are 4 people in a society of different income levels. Persons are ranked by income level so that Person 1 is the richest and Person 4 is the poorest. Richer individuals tend to have better outcomes, such that Person 1 has 20 years of education, Person 2 has 18 years, Person 3 has 18 years and Person 4 has 16 years of education. The average educational attainment for the two richest persons is 19 years and the average educational attainment for the two poorest persons is 17. The average gap between the high-income group and the low-income group is 2 years. However, the gap between the richest and poorest person is 4 years. If one could increase the level of education for both lower-income persons (Persons 3 and 4) by 2 years, the average gap across the two groups would be eliminated. However, the richest person would still have 2 more years of education than the poorest person. This simple example illustrates that eliminating the average difference across the two groups will only remove all differences by socioeconomic status if there are no differences in outcomes by socioeconomic status within the broad income groups. Given that there are large differences in outcomes by socioeconomic status within broad income groups in the United States, this condition clearly does not hold in reality.
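For readers who prefer to verify the arithmetic mechanically, the example reduces to a few lines:

```python
# Years of education in the four-person example above
years = {"Person 1": 20, "Person 2": 18, "Person 3": 18, "Person 4": 16}

rich_avg = (years["Person 1"] + years["Person 2"]) / 2
poor_avg = (years["Person 3"] + years["Person 4"]) / 2
print(rich_avg - poor_avg)                      # average group gap: 2.0
print(years["Person 1"] - years["Person 4"])    # richest-poorest gap: 4

# Give the two lowest-income people two extra years of education each
for person in ("Person 3", "Person 4"):
    years[person] += 2

poor_avg = (years["Person 3"] + years["Person 4"]) / 2
print(rich_avg - poor_avg)                      # average group gap: 0.0
print(years["Person 1"] - years["Person 4"])    # richest-poorest gap: 2
```

The average gap between the groups closes completely while a 2-year gap between the richest and poorest person remains.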

The Problem with Hanushek’s “Time Trend” Critique:

Now that the reader has a clear sense of our paper and its implications, we can describe Hanushek’s “time trend” argument. Hanushek points out that school spending in the United States has increased substantially between 1970 and the present day. As such, he argues that if our results are correct and school spending really does improve student outcomes (with larger effects for low-income children), outcomes should have improved over time and achievement gaps by income should have been eliminated over this period. He then argues that any improvements between 1970 and today have been small, so it is unlikely that our conclusion that school spending improves student outcomes is correct.

While this “time trend” argument is intuitive, it is flawed for two reasons. The first is that it relies on the same misreading of our results outlined above (i.e., that eliminating differences across two broad income groups implies eliminating all differences by socioeconomic status). The second is that it rests on fuzzy, albeit intuitive, logic. We highlight the problems with that logic below.

To see the problems with Hanushek’s logic, consider the following true statistics: between 1960 and 2000, the rate of cigarette smoking among females decreased by more than 30 percent, while the rate of deaths from lung cancer increased by more than 50 percent over the same period.[1] An analysis of these time trends might lead one to infer that smoking reduces lung cancer. Most informed readers can point out numerous flaws in looking at this time-trend evidence and concluding that “if smoking causes lung cancer, then there should have been a large corresponding reduction in cancer rates, so there can be no link between smoking and lung cancer.” Yet this is exactly the facile logic invoked by Hanushek regarding the effect of school spending on student achievement.

While there are several problems with this simplistic argument, to avoid going too deeply into the weeds we focus on the most important flaw in this “time trend” argument. Simply put, the “time series” argument will hold only if nothing else has changed between 1970 and present day. It is important to bear in mind that these spending increases occurred against the backdrop of countervailing influences, such as the rise in single-parent families, more highly concentrated poverty, deterioration of neighborhood conditions for low-income families, the exodus of the middle class to the suburbs, mass incarceration, the crack epidemic, changes in migration patterns, and others. Consider just one countervailing factor: the significant rise in segregation by income between neighborhoods over the past four decades. This increased residential segregation was driven mostly by families with school-age children (Owens 2015), a simple reflection that quality of local schooling options is a key driver of segregation. This significant increase in residential sorting by income among families with school-age children would have likely led to far greater disparities in school resources by community socioeconomic status had SFRs not been an effective leveling tool.

In short, comparing 1970 to 2010 is not an “apples-to-apples” comparison, so there is no reason to expect that the correlation between aggregate spending and aggregate outcomes over such a long time span will yield anything resembling a “causal” relationship. In fact, the observation that simple correlations over time are unlikely to reveal the true “causal” relationship is exactly what motivated our different methodological approach, which allows for an “apples-to-apples” comparison and lets us disentangle the effects of school spending from those of all these other countervailing forces. Though Hanushek has chosen not to discuss the methodological advances in our work, they are important, and methods matter.

How We Overcome These Problems to Facilitate “Apples-to-Apples” Comparisons:

We make several decisions to facilitate more of an apples-to-apples comparison. First, we use fine-grained data on individual students rather than comparing the entire United States in 1970 to the entire United States in 2010. With these finer-grained data, we are able to account for a variety of other factors that may have changed over time, such as family structure, childhood poverty, and neighborhood conditions. Our main approach is then to compare the outcomes of individuals with similar background characteristics who were born in the same school district but attended public schools in different years (when per-pupil spending levels may have differed): an apples-to-apples comparison. However, this is not all that we do to ensure that our results reflect real causal relationships.
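
In regression form, this within-district, across-cohort comparison can be sketched as below. This is a stylized illustration under assumed variable names and a hypothetical data file, not the authors’ actual specification:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per person, with the school district
    # they grew up in, their school-age cohort, the per-pupil spending
    # they were exposed to, a family-background control, and an adult
    # outcome such as earnings.
    df = pd.read_csv("cohort_data.csv")  # hypothetical file

    # District fixed effects absorb fixed differences across places;
    # cohort fixed effects absorb nationwide changes over time. The
    # spending coefficient is then identified from within-district
    # changes in spending across cohorts.
    model = smf.ols(
        "adult_outcome ~ log_spending + family_controls"
        " + C(district) + C(cohort)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
    print(model.params["log_spending"])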

In our paper, we point out that even if one can carefully account for several observable factors (as we do), correlating all actual changes in school spending with changes in student outcomes is unlikely to yield causal relationships. Some spending changes are unrelated to other factors that may obscure the real effect on outcomes (i.e., clean spending changes), while other kinds of spending changes would clearly yield erroneous results (i.e., confounded spending changes), and many of the spending changes analyzed in previous studies may have been of the confounded variety. Consider one example of a confounded spending change. The federal Elementary and Secondary Education Act allocates additional funding to school districts with a high percentage of low-income students, who are more likely to have poor educational outcomes for reasons unrelated to school spending. As such, school districts serving declining neighborhoods are also those most likely to receive additional per-pupil spending over time. Such compensatory policies generate a negative relationship between changes in school spending and student outcomes that obscures the true relationship between the two. We avoid this kind of problem by focusing only on clean spending changes. Specifically, we focus on the relationship between external “shocks” to school spending and long-run adult outcomes. The “shocks” we use are the sudden, unanticipated increases in school spending experienced by predominantly low-spending districts soon after the passage of a court-mandated SFR.

As discussed above, by design, very soon after a court-ordered SFR in a state, some districts experienced sudden, unanticipated increases in per-pupil spending (i.e., shocks) while others may have experienced decreases. Our analytic approach compares the outcomes of individuals who attended school before these spending shocks to those of similar individuals from the same school district after the shocks. The validity of our design relies on the idea that districts that experienced sudden increases in school spending right after the passage of a court-ordered SFR were not already improving in other ways in exactly those same years. For this reason, we spend much time in our work showing that the timing of these spending shocks has nothing to do with underlying neighborhood changes or changes in family characteristics, so that changes in outcomes due to these shocks are likely to reflect a causal relationship. We encourage interested readers to consult the full paper for further detail.
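
Continuing the stylized sketch above (again with hypothetical names and data, not the authors’ actual code), the shock design amounts to comparing cohorts on either side of the reform date within a district, plus a check that pre-reform cohorts show no trend:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("cohort_data.csv")  # hypothetical file

    # Event time: a cohort's position relative to the court-ordered
    # reform in its state (negative = finished school before the shock).
    df["event_time"] = df["cohort"] - df["reform_year"]
    df["post_reform"] = (df["event_time"] >= 0).astype(int)

    # Within-district comparison of pre- and post-shock cohorts.
    model = smf.ols(
        "adult_outcome ~ post_reform + C(district) + C(cohort)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
    print(model.params["post_reform"])

    # A check in the spirit of the paper's timing tests: among
    # pre-reform cohorts only, outcomes should show no trend in
    # event time leading up to the reform.
    pre = df[df["event_time"] < 0]
    placebo = smf.ols(
        "adult_outcome ~ event_time + C(district)", data=pre
    ).fit(cov_type="cluster", cov_kwds={"groups": pre["district"]})
    print(placebo.params["event_time"])  # expect ~0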

Reconciling Our Results with the Older Literature:

Even though we outline the faulty assumptions in Hanushek’s “time trend” argument, in the interest of good social science it is helpful for us to try to reconcile our findings with the simple time-series evidence. As we explain above, our results do not imply that a 22.7 percent increase will eliminate all differences by parental socioeconomic status. They do, however, suggest the much more realistic prediction that one might observe some convergence across groups over time as school spending has increased. Indeed, this has been the case. For example, Krueger (1998) uses data from the NAEP to document test score increases over time, with large improvements for disadvantaged children from poor urban areas; the Current Population Survey shows declining dropout rates since 1975 for those from the lowest income quartile (Digest of Education Statistics, NCES 2012); Murnane (2013) finds that high school completion rates have been increasing since 1970, with larger increases for black and Hispanic students; and Baum, Ma, and Payea (2013) find that postsecondary enrollment rates have been increasing since the 1980s, particularly for those from poor families. Contrary to Hanushek’s assertions, outcomes have improved. Importantly, these improvements are consistent with increases in school spending playing a key role.

Finally, Hanushek proposes three reasons why our estimates (if true) may not track the national time trends very well. His ideas are not novel; we considered, tested, and addressed them ourselves in the paper and herein. First, he says there may be diminishing marginal returns to school spending. Indeed, we find that this is the case in our study: areas with the lowest initial spending levels were also those for which increased spending had the most pronounced positive effect. Second, he suggests that spending induced by the courts might have large effects while spending unrelated to judicial rulings has small effects. We find evidence of this as well. Specifically, spending increases associated with court-mandated reform are much more strongly related to improvements in measured school inputs (e.g., student-to-teacher ratios, length of the school year) than ordinary spending increases; there are a few explanations for this that we explore in our study. Finally, he proposes that our estimates are simply wrong. We propose an alternative: the time-series evidence Hanushek relies on does not reflect a causal relationship. Indeed, in our larger study we show that simple correlations are obscured by a variety of other factors that also influence student outcomes, and we present numerous pieces of analysis that support a causal interpretation of our results.

To be clear, we do not think that our study is the final word on the question of whether increasing school spending will improve student outcomes in all contexts. As Hanushek himself concedes, “none of this discussion suggests that money never matters. Or that money cannot matter.” Here we will make a similar concession: none of what we show suggests that money always matters. We show that money did matter, and that it mattered quite a lot. What our study does is dispel the notion that school spending does not matter, so that one need look only at how it is spent. We find that money does matter and that how it is spent matters. Contrary to Hanushek’s claims, our findings do not let policymakers off the hook. They suggest that it is extremely important that money be allocated effectively, and also that it be allocated equitably, so that all schools have the resources necessary to help all children succeed.

— Rucker C. Johnson, C. Kirabo Jackson and Claudia Persico

Rucker C. Johnson is associate professor of public policy at University of California, Berkeley. C. Kirabo Jackson is associate professor of human development and social policy at Northwestern University. Claudia Persico is a doctoral candidate in human development and social policy at Northwestern University. This article was originally posted on EducationNext.

___________________________________

[1] Sources: http://www.geocities.ws/microecon03/sectionII.html; http://scienceblog.cancerresearchuk.org/2014/11/09/its-lung-cancer-awareness-month/; http://www.cancer.org/research/cancerfactsstatistics/cancerfactsfigures2013/cancer-statistics-2013-slide-presentation.pdf

NOTE: Lung cancer rates for males have been declining since 2000 and have been relatively stable for females between 2000 and 2009.