Consortium for Policy Research in Education

The Consortium for Policy Research in Education (CPRE) is a community of researchers from renowned research institutions and organizations committed to advancing educational policy and practice through evidence-based research.

Search results

Now showing 1 - 10 of 90
  • Publication
    Recruiting and Retaining Teachers: Keys to Improving the Philadelphia Public Schools
    (2001-05-01) Watson, Susan
    In 1996, the Consortium for Policy Research in Education (CPRE) at the University of Pennsylvania and its partner, Research for Action (RFA), were charged by the Children Achieving Challenge with the evaluation of Children Achieving. Between the 1995-1996 and 2000-2001 school years, CPRE and RFA researchers interviewed hundreds of teachers, principals, parents, students, District officials, and civic leaders; sat in on meetings where the plan was designed, debated, and revised; observed its implementation in classrooms and schools; conducted two system-wide surveys of teachers; and carried out independent analyses of the District’s test results and other indicators of system performance. An outline of the research methods used by CPRE and RFA is included in this report.
  • Publication
    Graduating From High School: New Standards in the States
    (1989-04-01) University of Pennsylvania
    This brief examines states' attempts, as of 1989, to improve public education by increasing high school course requirements. According to a report published by the Center for Policy Research in Education, these attempts have had mixed results. As a result of the reforms, low- and middle-achieving students are taking more courses in science and math, but there are serious questions about the quality of the courses themselves. This issue of CPRE Policy Briefs is based on the report, which was written with assistance from Paula White and Janice Patterson.
  • Publication
    The Limits and Contradictions of Systemic Reform: The Philadelphia Story
    (2002-11-01) Corcoran, Thomas B.; Christman, Jolley Bruce
    In Philadelphia, the Annenberg Challenge was known as Children Achieving and was a districtwide systemic reform initiative designed and led by a small core group of District officials and external partners. This report examines the Children Achieving Challenge and the strategies the designers employed to improve teaching and learning in the public schools. Among the conditions associated with the Annenberg Challenge were requirements that two matching dollars be raised for each one received from the Annenberg Foundation and that an independent management structure be created to provide program, fiscal, and evaluation oversight of the grant. In Philadelphia, a business organization, Greater Philadelphia First, assumed these responsibilities, and with them, the challenge of working with the School District to build and sustain civic support for the improvement of the public schools.
  • Publication
    In Search of Leading Indicators in Education
    (2012-07-10) Supovitz, Jonathan A; Foley, Ellen; Mishook, Jacob
    Data have long been considered a key factor in organizational decision-making (Simon, 1955; Lindblom & Cohen, 1979). Data offer perspective, guidance, and insights that inform policy and practice (Newell & Simon, 1972; Kennedy, 1984). Recently, education policymakers have invested in the use of data for organizational improvement in states and districts with such initiatives as Race to the Top (United States Department of Education, 2010) and the development of statewide longitudinal data systems (Institute of Education Sciences, 2010). These and other initiatives focus attention on how data can be used to foster learning and improvement. In other fields, including economics and business, much work has been done to identify leading indicators that predict organizational outcomes. In this paper, we conceptualize how leading indicators might be used in education, using examples from a small sample of school districts with reputations as strong users of data. We define leading indicators as systematically collected data on an activity or condition that is related to a subsequent and valued outcome, as well as the processes surrounding the investigation of those data and the associated responses. Identifying leading indicators often prompts improvements in a district’s system of supports. To develop this concept, we describe four examples of how districts identified and used key indicators to anticipate learning problems and improve student outcomes. We also describe the infrastructure and other supports that districts need to sustain this ambitious form of data use. We conclude by discussing how leading indicators can bring about more intelligent use of data in education. (A minimal, hypothetical sketch of the leading-indicator idea appears after this list of results.)
  • Publication
    A Randomized Evaluation of Ohio's Personalized Assessment Reporting System (PARS)
    (2007-12-01) May, Henry; Robinson, Marian A
    In the 2006–07 school year, the Ohio Department of Education (ODE) launched a pilot of its Personalized Assessment Reporting System (PARS) for the Ohio Graduation Tests (OGT). The PARS program included several new OGT test score reports for teachers, administrators, students, and parents along with two new websites for educators and students. The new PARS test score reports and associated websites are designed to provide teachers, administrators, students, and parents with more detailed information about student performance as well as numerous suggestions and resources for improving performance. One of the primary goals of PARS is to increase student motivation to pass the OGT and graduate from high school. ODE hopes that by providing clear and detailed information to each student about his or her performance relative to the state standards, along with resources for improving performance and planning for the future, PARS may lead to improvements in student attitudes and behaviors that are fundamental to success in high school and beyond. Research suggests that grades or scores in the absence of constructive feedback can have a detrimental effect on student achievement (Butler, 1987, 1988). The PARS reports are designed to provide this kind of detailed constructive feedback. Furthermore, by providing clear and detailed information to teachers and administrators about student performance, along with tools for making sense of the data and resources for improving and targeting instruction, PARS has the potential to inform numerous aspects of instruction. This research report presents program evaluation findings from the first-year pilot of PARS. The primary goals for the evaluation were to (a) document the implementation of the program and (b) provide scientifically based evidence of potential impacts on instruction and student learning. The evaluation involved a district random assignment design and a mixed-methods approach to measuring program implementation and impacts. A total of 100 high schools in 60 school districts participated in this research, with 51 schools in 30 districts randomly assigned to participate in the PARS pilot during the 2006–07 school year. A subsample of 5 schools agreed to site visits during which researchers conducted interviews with teachers and students to learn more about PARS. (A hypothetical sketch of district-level random assignment appears after this list of results.)
  • Publication
    Science Instruction in Newark Public Schools
    (2011-09-01) Corcoran, Thomas B.; Gerry, Gail B
    The Consortium for Policy Research in Education (CPRE) has prepared this report on the Newark Public Schools (NPS) for the Merck Institute for Science Education (MISE) to assist the Institute in developing a strategic plan for improving science education in the district. The data used in the report have been gathered and analyzed through the collaborative efforts of CPRE, MISE, and Horizon Research, Inc. (HRI). MISE and CPRE collaborated on two rounds of school site visits; CPRE conducted interviews with district officials; MISE staff analyzed Newark’s curriculum documents and administered a survey of Newark teachers and administrators; and HRI reviewed a sample of interim science assessments developed by the NPS staff.
  • Publication
    TASK: A Measure of Learning Trajectory-Oriented Formative Assessment
    (2013-06-01) Supovitz, Jonathan A; Ebby, Caroline Brayer; Sirinides, Philip M
    This interactive electronic report provides an overview of an innovative instrument developed by CPRE researchers to authentically measure teachers’ formative assessment practices in mathematics. The Teacher Analysis of Student Knowledge, or TASK, instrument assesses mathematics teachers’ knowledge of formative assessment and learning trajectories, important components of the instructional knowledge necessary to teach to the high expectations of the Common Core State Standards (CCSS). Researchers found that the majority of teachers of mathematics in grades K-10 in urban and urban fringe districts focused on their students' procedural skills rather than their conceptual understandings, indicating that there is significant room for growth in teacher capacity to identify, interpret, and respond to students' conceptual understanding.
  • Publication
    Learning About Assessment: An Evaluation of a Ten-State Effort to Build Assessment Capacity in High Schools
    (2009-02-01) Weinbaum, Elliot H
    In 2006, the State of Delaware and the Council of Chief State School Officers (CCSSO) partnered with the Consortium for Policy Research in Education (CPRE) to conduct an evaluation of a ten-state initiative that sought to enhance assessment practices at the high school level. This effort aimed to help states, districts, and schools build familiarity with instruction that uses assessment as part of the learning process, a practice known as assessment for learning. This report focuses primarily on the third goal of this project, the creation and function of teacher learning teams focused on assessment for learning.
  • Publication
    An Analysis of the Effects of Children Achieving on Student Achievement in Philadelphia Elementary Schools
    (2002-02-01) Tighe, Erin; Wang, Aubrey; Foley, Ellen
    Philadelphia’s Children Achieving was a sweeping systemic reform initiative. Systemic reform eschews a school-by-school approach to reform and relies on coherent policy, improved coordination of resources and services, content and performance standards, decentralization of decision-making, and accountability mechanisms to transform entire school systems. Led by a dynamic superintendent and central office personnel, Children Achieving was the first attempt by an urban district to test systemic reform in practice. In 1996, the Consortium for Policy Research in Education (CPRE) at the University of Pennsylvania and its partner, Research for Action (RFA), were charged by the Children Achieving Challenge with the evaluation of Children Achieving. Between the 1995-1996 and 2000-2001 school years, CPRE and RFA researchers interviewed hundreds of teachers, principals, parents, students, District officials, and civic leaders; sat in on meetings where the plan was designed, debated, and revised; observed its implementation in classrooms and schools; conducted two systemwide surveys of teachers; and carried out independent analyses of the District’s test results and other indicators of system performance. An outline of the research methods used by CPRE and RFA is included in this report.
  • Publication
    State Education Agencies and the Implementation of New Teacher Evaluation Systems
    (2015-10-01) McGuinn, Patrick
    Three years after Race to the Top grant-winning states piloted new teacher evaluation systems, many of them have made considerable progress. Yet according to media coverage and a Government Accountability Office report published in April 2015, struggles remain, and most grantees have asked to extend the timetables for completing this work. Given the enormous importance and complexity of these reforms, and the fact that states vary widely in the timing, approach, and success of their implementation work, this is an excellent opportunity to assess the progress that has been made and identify where challenges persist. It is imperative that states learn from one another during this implementation stage, and this brief from Patrick McGuinn (Drew University) serves to facilitate the discussion by highlighting what is and is not working in the Race to the Top states.
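
The Supovitz, Foley, and Mishook brief above defines a leading indicator as systematically collected data on an activity or condition that is related to a subsequent, valued outcome. The Python sketch below is only a minimal, hypothetical illustration of that idea; it is not drawn from the report, and the measures, data values, and names (grade9_attendance_rate, grade9_courses_failed) are invented for the example.

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical cohort records: (grade 9 attendance rate, grade 9 courses failed,
# graduated on time). Every value is invented for illustration only.
records = [
    (0.95, 0, 1),
    (0.91, 1, 1),
    (0.88, 0, 1),
    (0.72, 3, 0),
    (0.80, 2, 0),
    (0.97, 0, 1),
    (0.65, 4, 0),
    (0.85, 1, 1),
]

graduated = [r[2] for r in records]  # the later, valued outcome
candidate_indicators = {             # earlier-collected candidate measures
    "grade9_attendance_rate": [r[0] for r in records],
    "grade9_courses_failed": [r[1] for r in records],
}

# Screen candidates by the strength of their association with the outcome.
for name, values in candidate_indicators.items():
    print(f"{name}: r = {correlation(values, graduated):+.2f}")

The brief stresses that leading indicators also involve the processes surrounding the investigation of those data and the responses to them; a screening calculation like the one above would be only the first, mechanical step.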
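
The PARS evaluation described above used a district random assignment design, in which whole districts, rather than individual high schools, were randomized, so every school shared its district's condition. The following sketch is a hypothetical illustration of that basic logic, not the evaluators' actual procedure; the district and school names, the seed, and the function assign_by_district are invented for the example.

import random

# Hypothetical roster mapping districts to their participating high schools.
districts = {
    "District A": ["High School 1", "High School 2"],
    "District B": ["High School 3"],
    "District C": ["High School 4", "High School 5"],
    "District D": ["High School 6"],
}

def assign_by_district(roster, n_pilot, seed=2006):
    """Randomly choose n_pilot districts for the pilot; the rest are comparison."""
    rng = random.Random(seed)  # fixed seed so the assignment can be reproduced
    pilot = set(rng.sample(sorted(roster), n_pilot))
    assignment = {}
    for district, schools in roster.items():
        condition = "pilot" if district in pilot else "comparison"
        for school in schools:
            # Every school inherits the condition of its district.
            assignment[school] = condition
    return assignment

for school, condition in sorted(assign_by_district(districts, n_pilot=2).items()):
    print(f"{school}: {condition}")

One common rationale for assigning at the district level is to limit spillover between pilot and comparison schools within the same district, at the cost of having fewer independent units than school-level assignment would provide.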