Large-scale assessments: opportunities and challenges for South Africa

Education systems around the world have a noble responsibility to provide quality education to learners through the schools they attend. Meeting this responsibility often demands complex processes, and the measurement and evaluation of these processes at various levels of the system. At the classroom level, measurement and evaluation often take the form of formative assessment, where teachers assess their learners regularly to determine whether they have mastered the curriculum taught at a particular point in time. At the national and system levels, summative assessments involving large numbers of schools (large-scale assessments) are used to assess learners and profile schools, in order to understand how these schools provide opportunities for their learners to succeed. An example of such an assessment is the Annual National Assessment (ANA) conducted by the Department of Basic Education. In recent years, large-scale assessments have often been used to serve multiple functions. This development presents a number of challenges, with implications for education systems such as South Africa’s in their practice of undertaking large-scale assessments. We reflect on these opportunities and challenges from a theoretical point of view, as well as by looking at South Africa’s experience in administering the ANA.

Multiple functions of large-scale assessments

Using a framework conceptualized by Nagy[1] and by Klinger, DeLuca and Miller[2], the different purposes of large-scale assessment can be identified. These include:

  • Accountability – where learners’ achievement relative to their exposure to an implemented curriculum is used to assess the “health” of an education system and, most importantly, to hold the different levels of the system accountable for decisions made to improve teaching and learning.
  • Gatekeeping – where the assessment is used to determine the graduation, admission, and grade promotion of learners.
  • Instructional diagnosis – where learners’ results are compared to a set of criteria, expectations or learning outcomes to determine their strengths and weaknesses, and the diagnosis is used to support and guide instruction.
  • Monitoring performance and identifying trends – where changes in learner achievement are monitored over time to establish trends and isolate important determinants of success and failure.

The framework demonstrates that a large-scale assessment can serve more than one function. Bennett[3] argues that well-designed tests can have a primary and a secondary purpose. Summative tests, for example, could fulfill their primary purpose of assessing learning as well as a secondary purpose of identifying learners’ needs and providing formative feedback. By including comprehensive surveys of educators and compiling an extensive video library of classes in a few participating education systems, the Trends in International Mathematics and Science Study (TIMSS) was able to link summative assessments to classroom instructional practices[4].

Other researchers have expressed reservations about using large-scale assessments for purposes beyond those the tests were originally designed to measure. Volante[5] argues that using large-scale assessments for gatekeeping raises the stakes of the assessment and is therefore wholly inappropriate. He contends that classroom assessment, which covers the broadest range of subject matter over an extended period of time, is the most appropriate basis for graduation decisions. Shepard[6] views large-scale assessments as broad, touching only lightly on the many curricular topics and skills that are covered over a longer period of time. Because of this broad nature, Shepard argues, these tests are not designed to diagnose individual learning needs, but can instead provide diagnostic information at a broad programme level.

The challenges in using South Africa’s Annual National Assessment to serve multiple functions

In South Africa, the attempt to use a large-scale assessment, the Annual National Assessment (ANA), to serve multiple functions has not been very successful. The ANA is a critical component of South African education policy as outlined in the document “Action Plan 2014: Towards Schooling 2025”. The ANA process involves a large-scale assessment expected to generate a national benchmark of learner competencies, improve teacher assessment practices, create context-specific best-practice models and foster active participation from all stakeholders. An important aspect of the ANA is the use of test results to generate diagnostic reports to inform teaching and learning in the classroom.

In a policy brief, we outlined the challenges of using the ANA to serve multiple functions. These include the potential misuse of the ANA to:

  • inform policies that have not been well scrutinized and are based on measures and analysis with limited credibility;
  • control and limit educational innovations and professional autonomy of educators;
  • hold teachers responsible for results that they have limited control over; and
  • narrow curriculum coverage by encouraging “teaching to the test” techniques, which take valuable time away from non-tested subjects, particularly when high stakes are attached to results.

To address these challenges, professional development programmes linked to the ANA processes were proposed. We argue that giving teachers access to courses designed for such a programme would help them develop a culture and understanding of formative assessment literacy.

Concluding remarks

Large-scale assessments such as the ANA can be developed to serve multiple functions, providing valuable information about South Africa’s education system and highlighting areas which require intervention. However, the use of these assessments is not without challenges. The limitations identified in the use of the ANA are common to other national and international large-scale assessments in which South Africa participates, such as TIMSS and the Progress in International Reading Literacy Study (PIRLS). This piece has argued for the need for caution in the implementation of these assessments, as well as in the analysis and interpretation of their results, in order to avoid the potential misuse and unintended consequences which may arise when large-scale assessments are used for multiple purposes.

Author: George Frempong, Chief Research Specialist, Education and Skills Development research programme of the HSRC

[1] Nagy, P. (2000). The three roles of assessment: Gatekeeping, accountability, and instructional diagnosis. Canadian Journal of Education, 25(4): 262–279.
[2] Klinger, D., DeLuca, C., and Miller, T. (2008). The evolving culture of large-scale assessments in Canadian education. Canadian Journal of Educational Administration and Policy, 76: 1–34.
[3] Bennett, R.E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1): 5–25.
[4] Doig, B. (2006). Large-scale mathematics assessment: Looking globally to act locally. Assessment in Education: Principles, Policy & Practice, 13(3): 265–288.
[5] Volante, L. (2007). Educational quality and accountability in Ontario: Past, present, and future. Canadian Journal of Educational Administration and Policy, 58: 1–21.
[6] Shepard, L. (2003). Reconsidering large-scale assessment to heighten its relevance to learning. In Everyday assessment in the science classroom, 121–146.