The Politics and Governance of Basic Education: A Tale of Two South African Provinces

Brian Levy, Robert Cameron, Ursula Hoadley, and Vinothan Naidoo

Print publication date: 2018

Print ISBN-13: 9780198824053

Published to Oxford Scholarship Online: November 2018

DOI: 10.1093/oso/9780198824053.001.0001


Explaining the Western Cape Performance Paradox

An Econometric Analysis

Chapter:
6 Explaining the Western Cape Performance Paradox
Source:
The Politics and Governance of Basic Education
Author(s):

Gabrielle Wills

Debra Shepherd

Janeli Kotzé

Publisher:
Oxford University Press
DOI:10.1093/oso/9780198824053.003.0006

Abstract and Keywords

In this chapter we consider how well primary school students perform in the Western Cape when compared with their peers in other provinces and countries across Southern and Eastern Africa. We find that while the Western Cape is a relatively efficient education system within South Africa, particularly in serving the poorest students, a less-resourced country such as Kenya produces higher Grade 6 learning outcomes at every level of student socio-economic status. The system performance differentials are not explained away by differences in resourcing, teacher or school inputs, or indicators of hierarchical governance. The results point to the limits of strong Weberian bureaucratic capabilities for raising learning outcomes.

Keywords:   learning outcomes, education systems, SACMEQ, South Africa, Western Cape

6.1 Introduction

What accounts for variations across locales in educational outcomes? One common explanation focuses on the availability of resources. However, the correlation in Figure 6.1 confounds any simplistic explanation along these lines.


Figure 6.1. Average country and region test performance against spending per primary school learner

Source: Spending data for countries retrieved from UNESCO Institute for Statistics. The expenditure data for South African provinces is obtained from the Provincial Budgets and Expenditure Review 2005/06–2011/12. Average test scores are calculated using SACMEQ III (2007). Expenditure only includes public expenditures and is expressed in Purchasing Power Parity (PPP) dollars. Private expenditures may vary notably across countries but this is not shown here.

As the figure shows, Kenya and Tanzania spend less than one-third of the amount per learner spent by the three South African provinces (and also Botswana). Even so, their outcomes as measured by SACMEQ mathematics test scores for Grade 6 students are as good as those of the Western Cape and Gauteng, and better than the South African average. This is not to say that resources are not important at all, but ‘the main message is still not one of broad, resource-based policy initiatives’ (Hanushek and Woessmann, 2007: 67).

A rich global literature has explored the impact of a wide variety of causal factors on educational outcomes in developing countries. The overall focus of the book of which this chapter is a part is on the influence on educational progress of some key aspects of how education is governed. As Chapter 1 detailed, one set of questions concerns the benefits of having in place a high-quality bureaucracy and, related, the possible limitations of an exclusive pre-occupation with strengthening bureaucratic capability as a way of improving the governance of education. A second set of questions concerns the potential and limitations of ‘horizontal’ governance—enhanced governance authority and flexibility at the school-level—as a means of improving educational outcomes. Is good horizontal governance potentially a complement to hierarchy, improving outcomes even where bureaucracy is highly capable? Might it also be a substitute source of institutional capability, a way of improving outcomes even in settings where hierarchical institutions are weak?

In most of this book the above questions are explored using case study analyses. This chapter complements these qualitative approaches with statistical analysis. We build especially on the analysis of the Western Cape education system in Chapter 4. In that chapter, the Western Cape Education Department emerges as an unusually high-quality bureaucracy.

Using the Western Cape’s educational outcomes as a benchmark enables us to explore econometrically a number of ‘paradoxes’ suggested by Figure 6.1: is the Figure 6.1 pattern, in which the Western Cape achieves better educational outcomes than other South African provinces (notwithstanding similar resources, and a similar policy and regulatory framework), empirically robust once other non-governance-related causal influences are taken into account? To what extent can the Western Cape’s performance be accounted for by stronger bureaucratic capabilities? Insofar as the Western Cape bureaucracy is indeed unusually strong, why do Tanzania and, especially, Kenya achieve similar outcomes, notwithstanding their substantially lower levels of resources? We address these questions using a statistical methodology which enables us to identify a distinctive ‘Western Cape effect’—the unexplained variation in education outcomes that remains even after controlling for the influence of a comprehensive set of factors which potentially might drive performance.

Our analysis proceeds as follows. Section 6.2 drills down into the ‘dependent variable’; it uses a variety of data sources to benchmark educational outcomes in the Western Cape relative to other locales. Section 6.3 reports the results of our multivariate statistical estimation, and assesses the sign and magnitude of a ‘Western Cape effect’, once other sources of variations in educational outcomes are controlled for. Section 6.4 drills down into some micro-level details of interactions between socio-economic status, bureaucratic capability, and educational outcomes. Section 6.5 concludes.

6.2 Benchmarking the Western Cape Outcomes

The econometric analysis in this chapter uses the SACMEQ III data series, a school survey representative of Grade 6 students, as the principal data source. SACMEQ test scores are used as the dependent variable, and a variety of SACMEQ descriptive statistics as our independent variables. In addition to SACMEQ, other measures are also available for benchmarking the performance of the Western Cape education system relative to other locales. This section first provides some background on the SACMEQ data and then places its performance measures in perspective by contrasting the SACMEQ patterns with some other comparative benchmarks.

The Southern African Consortium for Monitoring Educational Quality (SACMEQ) is a group of education ministries, policy-makers, and researchers which, in conjunction with UNESCO’s International Institute for Educational Planning (IIEP), has administered four cross-national surveys of Grade 6 learning across fifteen SACMEQ ministries of education since its inception. As of the time of the writing of this study, the most recent SACMEQ IV survey of 2013 had not yet been released in the public domain.1 We therefore rely on older SACMEQ III data collected during the last quarter of 2007 from 61,396 pupils, 8,026 teachers and 2,779 schools. Each sample was explicitly stratified by region, so that the data are representative of Grade 6 students not only at the country level but also at the regional or provincial level.

Students were tested in three subject areas—literacy, mathematics and health—although we only use the first two performance measures in this chapter. Each SACMEQ outcome measure is obtained using a Rasch scaling approach,2 and is set to have a mean of 500 and standard deviation of 100. In the econometric analysis, we convert the scale to z-scores with a mean of 0 and standard deviation of 1 for ease of interpretation. Table 6.1 summarizes the SACMEQ Grade 6 mathematics outcomes for the Western Cape, for two other South African provinces (Eastern Cape and Gauteng), and some other African states. According to these data, the Western Cape is among the highest-performing South African provinces but performs substantially below Mauritius and parts of Kenya, whether compared at the mean or at points along the performance distribution.
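The conversion to z-scores is a simple standardization; a minimal sketch in Python, assuming a pupil-level file with hypothetical file and column names:

```python
import pandas as pd

# Load SACMEQ III pupil records (hypothetical file and column names).
pupils = pd.read_csv("sacmeq3_pupils.csv")

# SACMEQ Rasch scores are scaled to mean 500 and SD 100; converting to
# z-scores (mean 0, SD 1) lets regression coefficients be read directly
# in standard-deviation units of the test.
for col in ["maths_score", "reading_score"]:
    pupils[col + "_z"] = (pupils[col] - pupils[col].mean()) / pupils[col].std()
```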

Table 6.1. SACMEQ III: Western Cape Mathematics Scores Relative to Other African Countries

| | Mean | 25th percentile | 50th percentile | 75th percentile | N schools in sample |
| --- | --- | --- | --- | --- | --- |
| South Africa all provinces | 495 | 424 | 483 | 548 | 392 |
| South Africa—Western Cape | 566 | 496 | 560 | 635 | 40 |
| South Africa—Gauteng | 545 | 483 | 548 | 610 | 43 |
| Mauritius all regions | 623 | 522 | 610 | 718 | 152 |
| Kenya all regions | 557 | 509 | 548 | 610 | 193 |
| Kenya—Central | 574 | 509 | 560 | 636 | 24 |
| Kenya—Nairobi | 610 | 535 | 585 | 662 | 16 |
| Tanzania—Central | 549 | 500 | 553 | 604 | 16 |
| Botswana all regions | 521 | 468 | 522 | 573 | 160 |
| Botswana—Gaborone | 569 | 509 | 560 | 623 | 20 |

Source: SACMEQ III, 2007 data (own calculations) and Hungi et al. (2010).

Along with SACMEQ, the Trends in International Mathematics and Science Study (TIMSS) is another multi-country benchmarking effort in which South Africa participates; comparative global results confirm that South Africa is a startlingly low performer relative to other countries. More interestingly for present purposes, TIMSS generates province-by-province information on education outcomes over time (Reddy et al., 2016; DBE, 2016). Table 6.2 reports average mathematics performance at the Grade 9 level for all South African provinces for 2003, 2011 and 2015. The Western Cape’s performance exceeded that of other provinces in 2003 and 2011, but declined in 2015, with Gauteng ranking as the best-performing province in the most recent 2015 results. Statistically significant improvements between 2003 and 2015 were observed in all eight other provinces; Gauteng and the Eastern Cape, for example, show improvements of over two years of learning. As Table 6.2 shows, the Western Cape is the only province whose performance declined over the same period.

The Western Cape’s declines in Grade 9 TIMSS performance must, however, be qualified with respect to two factors. First, its results come off a considerably higher base level than those of other provinces. Second, there have been reported changes in the compositional characteristics of students in the province, possibly due to in-migration from poorer provinces (Reddy et al., 2016). The data point to a more disadvantaged composition of Western Cape students attempting the 2015 test, compared with earlier years.3 Note, though, that in the newly introduced TIMSS Numeracy at the Grade 5 level, the Western Cape continues to outperform all other provinces.

Table 6.2. TIMSS Performance—Provincial Comparisons Across Years

| | TIMSS 2003 (Gr 9): average math score | TIMSS 2011 (Gr 9): average math score | TIMSS 2015 (Gr 9): average math score | TIMSS 2015 (Gr 9): standard error | TIMSS Numeracy 2015 (Gr 5): average math score | TIMSS Numeracy 2015 (Gr 5): standard error |
| --- | --- | --- | --- | --- | --- | --- |
| Western Cape | 414 | 404 | 391^ | 11 | 441 | 10.7 |
| Gauteng | 303 | 389* | 408^ | 11.4 | 420 | 10.6 |
| Eastern Cape | 250 | 316* | 346^ | 14.4 | 343 | 7.4 |
| Free State | 291 | 359* | 367^ | 7.8 | 373 | 9.8 |
| KwaZulu-Natal | 278 | 337* | 369^ | 11.8 | 367 | 7.8 |
| Limpopo | 244 | 322* | 361^ | 13.4 | 344 | 10.1 |
| Mpumalanga | 287 | 344* | 370^ | 7.8 | 384 | 11.3 |
| Northern Cape | 341 | 366 | 364^ | 7.2 | 373 | 12.7 |
| North West | 280 | 350* | 354^ | 7.9 | 355 | 5.8 |
| South Africa | 285 | 348 | 368 | 4.9 | 376 | 3.4 |

Notes: DBE (2016) ran a difference-in-means test for each province comparing the 2003 and 2011 Grade 8/9 TIMSS results; the 2003–2011 differences are statistically significant (marked *) for seven provinces, the exceptions being the Northern Cape and the Western Cape. Reddy et al. (2016) indicate whether the 2003 and 2015 scores differ significantly; this is marked ^. All scores are on a scale of 0–1000, with an international mean of 500 and a standard deviation of 100. TIMSS Numeracy scores at the Grade 5 level cannot be compared with the Grade 8/9 results.

Source: Reddy et al. (2012), Reddy et al. (2016), DBE (2016).

A third measure for comparing learning outcomes is South Africa’s National Senior Certificate (NSC) or matriculation outcomes—the NSC is the critical credential that gives students an added advantage in accessing further tuition and higher earnings in the labour market. Although the proportion of those who sit the matriculation examination and pass is high in the Western Cape, other provinces such as Gauteng, KwaZulu-Natal, or even Limpopo and Mpumalanga are more effective at producing larger proportions of their provincial population of youths with an NSC pass, as seen in Table 6.3. An analysis of General Household Survey data for the period 2012–14 reveals that Gauteng has the highest percentage of youths aged 20–28 with a completed Grade 12.4 (Note, though, that as Chapter 4 details, as of 2015 the Western Cape was the top provincial performer when the more stringent measure of a ‘university pass’ is used.) The Western Cape, though a relatively good performer at the Grade 12 level, might have been expected to perform a lot better at that level, considering that standardized test results show it to be the top-performing province in the earlier grades (DBE, 2013: 3).

Table 6.3. Matriculants (NSC passes) by Province Relative to the Youth Population According to the General Household Survey

| | % of youths who obtain Matric: average 2009–2011 | % of youths who obtain Matric: average 2012–2014 | Difference |
| --- | --- | --- | --- |
| Eastern Cape | 39.4 | 43.2 | 3.8 |
| Free State | 48.2 | 51.2 | 3.1 |
| Gauteng | 58.5 | 64.5 | 5.9 |
| KwaZulu-Natal | 49 | 56 | 6.9 |
| Limpopo | 38 | 45.2 | 7.2 |
| Mpumalanga | 45.4 | 50.6 | 5.2 |
| Northern Cape | 41.2 | 48.9 | 7.7 |
| North West | 41.3 | 48.1 | 6.8 |
| Western Cape | 50.9 | 48.8 | −2 |
| All South Africa | 48 | 51.5 | 3.5 |

Source: DBE (2016). Youths are individuals aged 20–28.

Furthermore, the recent improvements in student matric results that are evident in other provinces are not necessarily observed in the Western Cape. In examining growth rates in the number of high-level mathematics and physical science passes in the matriculation examination between 2008 and 2015, the Western Cape shows some of the lowest rates of growth in the percentage of students achieving 60 per cent or more in mathematics or physical science (DBE, 2016).

Gustafsson, in the Department of Basic Education’s 2013 and 2016 education sector reports, compares student achievement across the nine provinces using different grade and subject test results from the Annual National Assessments (2011–14), SACMEQ III mathematics performance, prePIRLS 2011, TIMSS 2011, and different measures of Grade 12 performance in the matric examinations (DBE, 2013; DBE, 2016). In total, he identifies thirteen different outcome measures. The Western Cape performs best on nine of the thirteen measures—but not on all thirteen (DBE, 2016).

In sum, the evidence summarized in this section suggests that while the Western Cape is a high-performing provincial department with respect to observed educational outcomes in South Africa, this strength has not translated into its schools consistently being the best performers in the country. Depending on which educational outcome is considered, the grade level, and the position along the student socio-economic profile at which performance is measured, Gauteng or other provinces at times fare better.

6.3 Multivariate Estimation

We turn now to an econometric analysis of the consequences (both strengths and limitations) for learning outcomes of the Western Cape Education Department’s (WCED) strong bureaucratic capability.

6.3.1 The Model and Variables

Our point of departure is the classic education production function, which at the most general level can be specified as:

$$Y_{is} = f(x_1, x_2, \ldots, x_n) \tag{1a}$$

where $Y_{is}$ is the test score of student $i$ in school $s$, and $x_1, \ldots, x_n$ are the many variables which influence educational outcomes. As numerous studies the world over (including in South Africa) have shown, variables which have a significant impact on learning outcomes include: the socio-economic status and other personal/home characteristics of learners; the quality of teachers; the presence of school resources that support learning (infrastructure, other classroom resources and particularly textbooks); and the quality of a variety of hierarchical and horizontal governance, management and accountability relationships (Hanushek, 2007; Gustafsson and Taylor, 2016; Kingdon et al., 2014; McEwan, 2015; Van der Berg et al., 2016; Evans and Popova, 2015; Crouch and Mabogoane, 2001; Fiske and Ladd, 2004).

The specific education production function which we estimate, using the SACMEQ data, has the following specification:

$$Y_{is} = WC + \delta I_s + \beta H_{is} + \alpha\, SES_{is} + \gamma R_{is} + \varepsilon_{is} \tag{1b}$$

where:

  • $WC$, the Western Cape effect, is the variable of most immediate interest for the purposes of the present study. We discuss our estimation strategy, interpretation, and results below.

  • $I_s$ is a vector of school/institutional factors; some are directly relevant to our core questions, others function more as control variables.

  • $H_{is}$ is a vector of student and home background factors, which serve as control variables in the analyses.

  • $SES_{is}$ is our measure of the socio-economic status of learners; our measurement and estimation approaches are discussed in detail in Section 6.4.

  • $R_{is}$ is a vector of classroom and teacher resources—some of these measures are discussed further below.

Appendix A6.1 provides comprehensive detail on all the independent variables used from the SACMEQ data series.

Evidently, there are significant differences in the country and regional student samples which need to be controlled for when the Western Cape is compared with other locales. Our econometric analysis does this using a regression framework combined with a propensity score matching approach. Propensity score weights ensure that the estimated WC coefficient is computed using the most suitably comparable groups of students across two country/regional settings.5 This technique requires that only regions or countries that are sufficiently comparable to the Western Cape in terms of student socio-economic and home background factors are considered; for this reason, in the econometric analyses, only the Nairobi and Central regions of Kenya are selected for comparison. In Section 6.4 we consider performance for the entire Kenyan student sample in a descriptive analysis.6
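One common way to implement this reweighting, sketched under simplifying assumptions (a pooled pupil-level DataFrame `pooled` with a `wc` indicator; the covariate names are hypothetical stand-ins for the full set in Appendix A6.1):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical subset of the home background / SES covariates.
BACKGROUND = ["log_pc_consumption", "mother_edu", "books_home", "speaks_english"]

def propensity_weights(pooled: pd.DataFrame) -> pd.Series:
    """Weights from a logit of Western Cape membership on pupil background.

    Western Cape pupils keep weight 1; comparator pupils are weighted by
    p/(1 - p), which matches their covariate profile to the Western Cape
    group (the usual 'weighting toward the treated' construction).
    """
    X = sm.add_constant(pooled[BACKGROUND])
    p = sm.Logit(pooled["wc"], X).fit(disp=0).predict(X)
    return pooled["wc"] + (1 - pooled["wc"]) * p / (1 - p)
```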

In reporting our results, we focus not on the coefficients of the full set of variables (where our results are by and large consistent with the patterns obtained in many analyses of education production functions), but on the Western Cape effect. This is captured by ‘WC’—a fixed effect that takes a value of 1 if the student is taught in a Western Cape school, and 0 otherwise. It measures the difference in expected performance between Western Cape students and another country’s or region’s student group, controlling for contextual poverty, home background, and school resourcing variables. We estimate the WC effect by sequentially pooling the data for the Western Cape and each of our comparator locales.
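Continuing the sketch, the WC effect for one pairwise pool is then the coefficient on the dummy in a propensity-weighted regression with school-clustered standard errors (variable names remain hypothetical):

```python
import statsmodels.api as sm

pooled["weight"] = propensity_weights(pooled)

# Specification 2 of Tables 6.4/6.5: the WC dummy plus home background and
# SES controls; richer specifications simply extend the regressor list.
X = sm.add_constant(pooled[["wc"] + BACKGROUND])
fit = sm.WLS(pooled["maths_score_z"], X, weights=pooled["weight"]).fit(
    cov_type="cluster", cov_kwds={"groups": pooled["school_id"]}
)
print(fit.params["wc"], fit.bse["wc"])  # the estimated 'Western Cape effect'
```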

6.3.2 Econometric Results—Some Overall Patterns

Tables 6.4 and 6.5 report the sign, magnitude, and significance of the WC effect on mathematics and literacy outcomes for each of these pairwise pools—using, as per the lower part of the tables, a range of specifications differing in terms of which control variables are included. Formally, the coefficients on WC measure the direction and magnitude of systematic unexplained differences in performance between the Western Cape and other locales. Insofar as the set of controls is comprehensive—and the control variables do not include measures of the hierarchical and horizontal governance factors which are of interest—we can interpret the WC effect as an approximation of these (unmeasured) dimensions of the Western Cape’s institutional arrangements for education.7 From this perspective, a positive coefficient can be interpreted as indicating that the Western Cape’s institutions are stronger than those of the comparator locale; a negative coefficient signals relative institutional weakness for the Western Cape.

The first specification in Tables 6.4 and 6.5 controls for a Western Cape fixed effect only (column 1). The subsequent specifications discussed in this subsection expand the range of variables to include socio-economic, home background, classroom and school factors. This process aims to separate that part of the performance gap that might be explained by resourcing and home background factors from the part that may be linked to institutional factors.
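This sequential expansion can be mimicked by refitting the same weighted regression while cumulatively adding blocks of controls; a sketch continuing the code above (the block contents are illustrative placeholders for the variable groups named in the tables):

```python
# Control blocks added cumulatively, mirroring columns 1-9 of Tables 6.4/6.5.
BLOCKS = [
    [],                                    # 1: WC dummy only
    BACKGROUND,                            # 2: + home background and SES
    ["teacher_test_score"],                # 3: + teacher content knowledge
    ["class_size", "textbooks_per_pupil"], # 4: + teacher/classroom characteristics
    ["inspector_visits"],                  # 5: + 'governance' indicators
    ["parents_building_materials"],        # 6: + parental contributions (building/materials)
    ["parents_salaries"],                  # 7: + parental contributions (salaries)
    ["parents_extracurricular"],           # 8: + parental contributions (extra-curricular)
    ["teacher_strike_days"],               # 9: + strike days (South African pools only)
]

controls = []
for spec, block in enumerate(BLOCKS, start=1):
    controls = controls + block
    X = sm.add_constant(pooled[["wc"] + controls])
    fit = sm.WLS(pooled["maths_score_z"], X, weights=pooled["weight"]).fit(
        cov_type="cluster", cov_kwds={"groups": pooled["school_id"]}
    )
    print(spec, round(fit.params["wc"], 3), round(fit.rsquared, 3))
```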

Table 6.4, an estimation of mathematics scores, shows the changes in the Western Cape coefficient as the set of controls is progressively expanded. The SACMEQ patterns in Table 6.1 turn out to be robust even once the controls are added. With home background and socio-economic status of students controlled for, the Western Cape effect in mathematics is positive and significantly different from zero in comparison to Botswana, the Eastern Cape, and Gauteng. The Western Cape’s relative strength in governance is one plausible explanation for this robust pattern. However, the opposite effect holds in relation to Kenya and Mauritius—suggesting (following similar logic) that, overall, the latter two countries might have a better-governed education sector than the Western Cape, with, given the strong hierarchical management of the Western Cape, other ‘soft’ governance factors plausibly playing a decisive role.8

Table 6.4. Multivariate regression of SACMEQ III Grade 6 mathematics z-scores using propensity score weights

| Comparison country/region | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Coefficient on Western Cape dummy (standard errors in parentheses)* | | | | | | | | | |
| Kenya (Nairobi & Central) | −0.605*** (0.16) | −0.375*** (0.18) | −0.344*** (0.17) | −0.373** (0.17) | −0.376** (0.16) | −0.434** (0.16) | −0.355* (0.19) | −0.359* (0.19) | — |
| R-squared | 0.085 | 0.398 | 0.412 | 0.453 | 0.462 | 0.471 | 0.473 | 0.484 | — |
| Botswana (a) | 0.320*** (0.14) | 0.282*** (0.06) | 0.222*** (0.06) | 0.254*** (0.07) | 0.289*** (0.07) | 0.413*** (0.07) | 0.337*** (0.07) | 0.421*** (0.07) | — |
| R-squared | 0.024 | 0.454 | 0.458 | 0.493 | 0.499 | 0.511 | 0.514 | 0.517 | — |
| Mauritius (a, b) | −0.604*** (0.15) | −0.259*** (0.09) | — | −0.232** (0.11) | −0.300** (0.15) | −0.251 (0.17) | −0.211 (0.17) | −0.236 (0.18) | — |
| R-squared | 0.052 | 0.399 | — | 0.424 | 0.436 | 0.45 | 0.454 | 0.454 | — |
| Eastern Cape | 0.438*** (0.18) | 0.490*** (0.13) | 0.430*** (0.13) | 0.488*** (0.11) | 0.492*** (0.11) | 0.890*** (0.16) | 1.023*** (0.19) | 0.861*** (0.19) | 0.759*** (0.20) |
| R-squared | 0.043 | 0.284 | 0.291 | 0.384 | 0.409 | 0.457 | 0.478 | 0.491 | 0.493 |
| Gauteng | 0.257 (0.19) | 0.320*** (0.08) | 0.278*** (0.08) | 0.296*** (0.07) | 0.354*** (0.08) | 0.408*** (0.07) | 0.454*** (0.07) | 0.402*** (0.07) | 0.405*** (0.07) |
| R-squared | 0.013 | 0.5 | 0.507 | 0.555 | 0.562 | 0.59 | 0.591 | 0.595 | 0.595 |
| *Other controls:* | | | | | | | | | |
| Home background | | X | X | X | X | X | X | X | X |
| Socio-economic status | | X | X | X | X | X | X | X | X |
| Teacher test scores (c) | | | X | X | X | X | X | X | X |
| Teacher/classroom characteristics | | | | X | X | X | X | X | X |
| ‘Governance’ indicators | | | | | X | X | X | X | X |
| Parents contribute to school building & teaching materials | | | | | | X | X | X | X |
| Parents contribute to salaries | | | | | | | X | X | X |
| Parents contribute to extra-curricular & teaching activities | | | | | | | | X | X |
| Teacher days lost to strike activity | | | | | | | | | X |
| *Observations:* | | | | | | | | | |
| Western Cape | 900 | 900 | 900 | 876 | 876 | 876 | 876 | 876 | 876 |
| Kenya | 920 | 920 | 920 | 899 | 899 | 899 | 899 | 899 | — |
| Botswana | 3868 | 3868 | 3868 | 3868 | 3868 | 3868 | 3868 | 3868 | — |
| Mauritius | 3524 | 3524 | 3524 | 3524 | 3524 | 3524 | 3524 | 3524 | — |
| Eastern Cape | 1066 | 1066 | 1066 | 981 | 981 | 981 | 981 | 981 | 981 |
| Gauteng | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 |

Notes: Teacher and classroom characteristics include teacher education, teacher age, teacher experience, weekly teaching time (hours), textbook availability, class size, pupil–teacher ratio (PTR), frequency and discussion of homework, and frequency of classroom assessment. Due to lack of overlap in teacher characteristics between the Western Cape and Kenya, only textbook availability, class size and frequency of assessment are controlled for. Standard errors are clustered at the school level and shown in parentheses.

(***) significance at 1% level; (**) significance at 5% level; (*) significance at 10% level.

(a) In the case of Botswana and Mauritius, SES is measured using a context-specific asset index, as household consumption data were not publicly available for these countries. In all other analyses, log per capita consumption is used.

(b) No teacher test scores are available for Mauritius.

(c) Test scores are missing for approximately 20% of the South African Grade 6 mathematics teachers sampled. A dummy variable equal to 1 for a missing test score and 0 otherwise is included in the analysis so as not to exclude students taught by these teachers from the sample. Students taught by mathematics teachers with missing test scores had significantly higher mathematics test scores; excluding these students from the analysis would therefore likely bias the estimated coefficients. Teacher strike days vary only in the South African provinces, not in the other comparator countries.

Table 6.5. Multivariate regression of SACMEQ III Grade 6 literacy z-scores using propensity score weights

| Comparison country/region | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Coefficient on Western Cape dummy (standard errors in parentheses)* | | | | | | | | | |
| Kenya (Nairobi & Central) | −0.424** (0.17) | −0.269** (0.12) | −0.208* (0.11) | −0.261* (0.14) | −0.133 (0.14) | −0.277* (0.15) | −0.238 (0.18) | −0.244 (0.18) | — |
| R-squared | 0.04 | 0.346 | 0.37 | 0.422 | 0.454 | 0.466 | 0.467 | 0.472 | — |
| Botswana (a) | 0.282*** (0.13) | 0.255*** (0.06) | 0.190*** (0.07) | 0.229*** (0.07) | 0.265*** (0.07) | 0.401*** (0.07) | 0.395*** (0.09) | 0.393*** (0.09) | — |
| R-squared | 0.018 | 0.455 | 0.46 | 0.496 | 0.501 | 0.508 | 0.508 | 0.51 | — |
| Mauritius (a, b) | 0.092 (0.12) | 0.376*** (0.07) | — | 0.345*** (0.10) | 0.299*** (0.11) | 0.380*** (0.13) | 0.397*** (0.14) | 0.427*** (0.15) | — |
| R-squared | 0.002 | 0.39 | — | 0.414 | 0.424 | 0.437 | 0.441 | 0.442 | — |
| Eastern Cape | 0.662*** (0.19) | 0.704*** (0.12) | 0.593** (0.13) | 0.619*** (0.11) | 0.631*** (0.16) | 1.175*** (0.20) | 1.153*** (0.18) | 1.215*** (0.18) | 1.038*** (0.20) |
| R-squared | 0.093 | 0.376 | 0.398 | 0.444 | 0.465 | 0.514 | 0.532 | 0.539 | 0.542 |
| Gauteng | 0.136 (0.18) | 0.180** (0.08) | 0.137* (0.08) | 0.101 (0.07) | 0.045 (0.08) | 0.043 (0.08) | 0.086 (0.08) | 0.08 (0.09) | 0.072 (0.08) |
| R-squared | — | 0.505 | 0.513 | 0.559 | 0.568 | 0.582 | 0.586 | 0.587 | 0.59 |
| *Other controls:* | | | | | | | | | |
| Home background | | X | X | X | X | X | X | X | X |
| Socio-economic status | | X | X | X | X | X | X | X | X |
| Teacher test scores (c) | | | X | X | X | X | X | X | X |
| Teacher/classroom characteristics | | | | X | X | X | X | X | X |
| ‘Governance’ indicators | | | | | X | X | X | X | X |
| Parents contribute to school building & teaching materials | | | | | | X | X | X | X |
| Parents contribute to salaries | | | | | | | X | X | X |
| Parents contribute to extra-curricular & teaching activities | | | | | | | | X | X |
| Teacher days lost to strike activity | | | | | | | | | X |
| *Observations:* | | | | | | | | | |
| Western Cape | 907 | 907 | 907 | 883 | 883 | 883 | 883 | 883 | 883 |
| Kenya | 922 | 922 | 922 | 901 | 901 | 901 | 901 | 901 | — |
| Botswana | 3868 | 3868 | 3868 | 3868 | 3868 | 3868 | 3868 | 3868 | — |
| Mauritius | 3524 | 3524 | 3524 | 3524 | 3524 | 3524 | 3524 | 3524 | — |
| Tanzania | 4194 | 4194 | 4194 | 4171 | 4171 | 4171 | 4171 | 4171 | — |
| Eastern Cape | 1068 | 1068 | 1068 | 982 | 982 | 982 | 982 | 982 | 982 |
| Gauteng | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 | 1020 |

Notes: Teacher and classroom characteristics include teacher education, teacher age, teacher experience, weekly teaching time (hours), textbook availability, class size, pupil–teacher ratio (PTR), frequency and discussion of homework, and frequency of classroom assessment. Due to the lack of overlap in teacher characteristics between the Western Cape and Kenya, only textbook availability, frequency of assessment and class size are controlled for. Standard errors are clustered at the school level and shown in parentheses.

(***) significance at 1% level; (**) significance at 5% level; (*) significance at 10% level.

(a) In the case of Botswana and Mauritius, socio-economic status is measured using the context-specific asset index. In all other analyses, log per capita consumption is used.

(b) No teacher test scores are available for Mauritius.

(c) Test scores are missing for approximately 17% of the South African Grade 6 literacy teachers sampled. A dummy variable equal to 1 for a missing test score and 0 otherwise is included in the analysis so as not to exclude students taught by these teachers from the sample. Students taught by literacy teachers with missing test scores had significantly higher test scores; excluding these students from the analysis would therefore likely bias the estimated coefficients. Teacher strike days vary only in the South African sample, not in the other comparator countries.

Strikingly, the addition of the teacher content knowledge variable does not affect the results. The further inclusion of teacher and classroom characteristics does not have any significant effect on the size and significance of the Western Cape coefficient in any of the country or region comparisons (an illustrative sample of those variables is provided in Table 6.6; the full set of variables is in Appendix A6.1). Even when one accounts for observed differences in the instructional core—the place where the student and teacher interact around content—the Western Cape effect remains in the initial direction.9

Table 6.6. Selected SACMEQ measures of mathematics teacher and classroom characteristics

| | Western Cape | Kenya | Botswana | Mauritius* | Gauteng | Eastern Cape |
| --- | --- | --- | --- | --- | --- | --- |
| Math teacher test score | 852.8 | 898.3 | 781.0 | — | 790.6 | 726.7 |
| Teacher qualification: secondary education or less | 0.19 | 0.67 | 0.59 | 0.33 | 0.20 | 0.39 |
| Teacher qualification: post-secondary/degree | 0.79 | 0.33 | 0.41 | 0.67 | 0.80 | 0.53 |
| Teaching experience: 0–5 years | 0.07 | 0.24 | 0.19 | 0.06 | 0.12 | 0.16 |
| Teaching experience: 6–10 years | 0.11 | 0.18 | 0.20 | 0.13 | 0.12 | 0.07 |
| Teaching experience: 11–20 years | 0.60 | 0.36 | 0.45 | 0.34 | 0.51 | 0.48 |
| Teaching experience: 20+ years | 0.20 | 0.22 | 0.15 | 0.46 | 0.25 | 0.22 |

Notes: * Teachers in Mauritius did not write the test. Except for test scores, statistics are expressed as proportions.

Source: SACMEQ, see Appendix A6.1.

Table 6.5 reports the econometric results when literacy scores are used as the dependent variable. The broad effects of controlling for teacher test scores, teacher characteristics, and governance indicators on the Western Cape coefficient are largely similar to the results reported in Table 6.4. However, the direction of the effects occasionally differs when literacy rather than mathematics is the outcome variable. Part of these differences may reflect differential exposure to English in these countries, which may not be fully captured by indicators of the frequency of speaking English at home. In literacy, Western Cape students outperform rather than underperform their peers in Mauritius, but continue to underperform relative to Kenya (although this difference becomes insignificant from regression 7 onwards). They also continue to significantly outperform Botswanan students in literacy, but there is little evidence of an advantage over Gauteng students, at least after accounting for classroom factors. Meanwhile, the Western Cape’s learning advantage over the Eastern Cape is more pronounced in literacy than in mathematics.

6.3.3 The Influence of School-Level Governance

We turn now to our econometric analysis of the influence on performance of school-level governance. The SACMEQ data include a variety of measures of school-level governance. Table 6.7 shows some of these, distinguishing conceptually between outcomes of governance and indicators which underlie the hierarchical or horizontal governance construct. Appendix A6.1 lists the full set.

Table 6.7. Some SACMEQ measures of school-level governance

| | Western Cape | Kenya | Botswana | Mauritius | Gauteng | Eastern Cape |
| --- | --- | --- | --- | --- | --- | --- |
| Outcome: teacher absenteeism sometimes a problem | 0.37 | 0.60 | 0.55 | 0.50 | 0.55 | 0.65 |
| Outcome: teacher absenteeism often a problem | 0.11 | 0.09 | 0.08 | 0.05 | 0.05 | 0.12 |
| Hierarchical: number of school visits by inspector in past 2 years | 3.32 | 5.58 | 1.36 | 26.02 | 3.17 | 3.15 |
| Horizontal: parents assist with building facilities | 0.13 | 0.55 | 0.19 | 0.11 | 0.21 | 0.57 |
| Horizontal: parents assist with maintaining facilities | 0.18 | 0.42 | 0.14 | 0.32 | 0.47 | 0.65 |
| Horizontal: parents contribute to teacher salaries | 0.48 | 0.50 | 0.03 | 0.01 | 0.63 | 0.08 |

Note: statistics are expressed as proportions, except for the number of inspector visits.

Source: SACMEQ, see Appendix A6.1.

As per Table 6.7 and Appendix A6.1, while significant parent contributions in the Eastern Cape relate largely to the building and maintenance of school facilities and the purchase of equipment and furniture, parent contributions in the Western Cape (and Gauteng) are characterized more by financial provision for learning materials and staff salaries. This is not surprising given wealth differences across these provinces, and the historical infrastructure backlogs in more disadvantaged (typically non-fee-paying) Eastern Cape schools. Parents of students in wealthier Western Cape schools, which are more likely to be fee-paying, likely channel their contributions, through fees, towards hiring additional school-governing-body-paid teachers over and above state-paid teacher allocations.

The regression specifications in Tables 6.4 and 6.5 distinguish between the ‘parental contribution’ or horizontal governance controls, and the other outcome and hierarchical SACMEQ governance indicators. As the tables show, incorporating the outcome and hierarchical governance measures has very little effect on the results. By contrast, incorporating the parental contribution measures (notably contributions to school building and teaching materials) has a major effect. This is evident in some especially striking changes in the Western Cape coefficient in estimations of both literacy and mathematics in regressions 6 to 8.

When the parental contribution variables are included in the pooled Western Cape–Mauritius regression, the significance of the Western Cape disadvantage (the negative WC effect) in mathematics disappears. The pattern is reversed when the controls for parent and community involvement are included in the Western Cape–Eastern Cape comparison. Both the Mauritius and the Eastern Cape results are consistent with the propositions that parent and community involvement adds value to educational performance, and that this ‘hands-on’ type of participation is relatively low in the Western Cape:10

  • For the Eastern Cape comparison: relatively high parent and community involvement can be interpreted as a positive governance influence—offsetting to some extent the weaknesses in that province’s education bureaucracy which are detailed in Chapter 5 of this book. Once this involvement is controlled for, the impact of these weaknesses on education performance emerges even more starkly—as evident in an increase in the Western Cape coefficient by almost a factor of two in both mathematics and literacy. (A similar pattern is evident vis-à-vis Botswana, though the differences in both the extent of participation and the quality of the bureaucracy are not as stark as for the Eastern Cape.)

  • For the comparison with Mauritius: the seemingly statistically significant weakness of Western Cape educational institutions in econometric results where participation is not controlled for could reflect differences in horizontal governance between the two systems.

While the results point to interesting dynamics between horizontal governance in schools and learning, it is impossible in this analysis to disentangle how parental involvement indicators influence learning outcomes separately from how they may also reveal wealth differences (wealth typically being the strongest determinant of learning outcomes in models of academic achievement).

6.4 Unbundling by Socio-Economic Status

Empirical estimation of education production functions consistently shows that learners’ socio-economic status (SES) has a powerful impact on learning outcomes. This is, of course, why we include SES as one of the control variables in our econometric analysis. However, as this section explores in depth, there are some special challenges associated with incorporating SES. There also are some distinctive patterns—beyond the sign and significance of its coefficient in an aggregate education production function—in the way in which SES and bureaucratic capability interact.

In South Africa, the relationship between educational outcomes and the socio-economic status of students (and particularly of the school) is extremely strong and convex by international standards. This partly reflects the stark disparities of the apartheid era, which resulted in substantial inequalities in the provision of quality education along lines of race (which unfortunately remains closely predictive of socio-economic status). Using a number of different international tests of student achievement, Stephen Taylor and Derek Yu (2009) highlighted how a considerable degree of the variance in reading and mathematics scores among South African students can be attributed to a student’s SES, and particularly to overall school SES. Furthermore, this relationship is considerably stronger in South Africa than in other international contexts; in fact, it is stronger than in all other countries that participated in PIRLS 2006 (Taylor and Yu, 2009: 23; Taylor, 2010).11

Effectively adjusting for home advantage requires a regionally and nationally comparable measure of student SES. How SES measures are constructed can significantly influence the performance rankings of countries over the student socio-economic profile (Kotze and Van der Berg, 2015; Harttgen and Vollmer, 2011). Following Kotze and Van der Berg (2015), we rely on an internationally calibrated measure of SES to compare test scores across equally poor students in different systems.

This is achieved by constructing a log per capita consumption SES measure, the outcome of linking an index of students’ reported asset ownership in SACMEQ to national income distributions in household survey data. This wealth indicator enables the comparison of equally poor students in different education systems. For example, the literacy level of a child living on less than $1.25 or $2 a day in the Western Cape can be compared with the literacy level of a child who is equally poor in the Eastern Cape or in Kenya. In cases where we were not able to access household data for a country,12 we use a standard asset index of student SES following Filmer and Pritchett (2001). To further increase the accuracy of a comparable SES measure, our analyses (specifically the depictions of social learning gradients) are adjusted to account for out-of-school children.13 Some countries may perform better than others if only the strongest candidates are enrolled in the school system. Effective access to education must account for both enrolment patterns and what students learn in school (Taylor and Spaull, 2015). SES scale construction and our methodology for accounting for enrolment patterns are explained in more detail in a related working paper (Wills, Shepherd, and Kotze, 2016).
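A sketch of the two steps under simplifying assumptions (binary asset indicators, a single country, hypothetical file and column names): first a Filmer–Pritchett style asset index as the first principal component of the asset indicators, then a percentile-to-percentile mapping onto a national household survey’s log per capita consumption distribution:

```python
import numpy as np
import pandas as pd

# National household survey (hypothetical file/columns), e.g. NIDS for
# South Africa, giving the national log per capita consumption distribution.
hh_log_pc_cons = pd.read_csv("household_survey.csv")["log_pc_consumption"].to_numpy()

# Step 1: asset index = first principal component of standardized 0/1 asset
# ownership indicators (Filmer and Pritchett, 2001); the sign is arbitrary.
assets = pupils[["electricity", "tv", "fridge", "bicycle", "car"]]  # hypothetical
Z = ((assets - assets.mean()) / assets.std()).to_numpy()
_, _, vt = np.linalg.svd(Z, full_matrices=False)
pupils["asset_index"] = Z @ vt[0]

# Step 2: map each pupil's percentile on the asset index to the same
# percentile of the national consumption distribution (done per country in
# the pooled data; a single country is shown here).
pct = pupils["asset_index"].rank(pct=True)
pupils["log_pc_consumption"] = np.quantile(hh_log_pc_cons, pct)
```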

Using the log of per capita consumption SES measure we can compare how Western Cape Grade 6 students fare at different points along the socio-economic student profile, relative to their peers in other South African provinces. The descriptive statistical patterns (i.e. not controlling for other variables) are graphically depicted in Figures 6.2 and 6.3 using social learning gradients, which are best-fit lines through the available data points to show the typical performance of a province (or country) at a specific level of student wealth.14
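Continuing the sketch, such gradients can be drawn with any non-parametric smoother, for example lowess:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# One smoothed line of mathematics z-scores against log per capita
# consumption per system.
for name, grp in pupils.groupby("province"):
    smoothed = lowess(grp["maths_score_z"], grp["log_pc_consumption"], frac=0.6)
    plt.plot(smoothed[:, 0], smoothed[:, 1], label=name)

# Vertical markers at the $1.25/day and $2/day poverty lines, transformed
# onto the log consumption scale (assuming monthly PPP-dollar consumption;
# the conversion depends on the survey's units).
for daily in (1.25, 2.0):
    plt.axvline(np.log(daily * 30), linestyle="--", color="grey")
plt.xlabel("log per capita consumption")
plt.ylabel("mathematics z-score")
plt.legend()
plt.show()
```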

The social learning gradients in Figure 6.2 reveal that amongst the poorest Grade 6 students, more learning is taking place in the Western Cape than in any other South African province. Specifically, the performance of students living on $1.25 or $2 a day, as transformed onto the log per capita consumption scale and reflected by the two vertical lines, is statistically significantly better in the Western Cape than in other provinces, in both mathematics and literacy. This suggests that the bureaucratic efficiency of the Western Cape Education Department does not just benefit the wealthy; on the contrary, the figure suggests that the benefits are disproportionately large for the poorest students in the system. As shown immediately below, this is also confirmed in the econometric results.


Figure 6.2. Mathematics and literacy scores for Grade 6 students by student socio-economic status (measured in log of consumption per capita) for all South African provinces

Notes: The vertical lines reflect the point at which students live at the poverty line as reflected by $1.25 per day or $2 per day. The grey areas reflect 95% confidence intervals about the non-parametric regression lines for key provinces of interest.

Source: SACMEQ III and NIDS household dataset.


Figure 6.3. Literacy and mathematics scores for Grade 6 students by student socio-economic status (measured in log of consumption per capita) for the Western Cape, Gauteng, and Kenya

Source: SACMEQ III and survey household datasets. See notes to Figure 6.2.

Figure 6.3 redraws the social gradients for the Western Cape and Gauteng provinces—but now including Kenya as a comparator (using the full range of SACMEQ observations for the country, not only the Nairobi and Central regions). As the figure shows, the Kenyan line is consistently well above those of the two South African provinces—underscoring that Kenyan Grade 6 students outperform South Africa’s two best-performing provinces at all levels of socio-economic status, particularly in mathematics.

We turn now to the econometric analysis of whether the inferences suggested by Figures 6.2 and 6.3 are robust. Using the estimation approach described in Section 6.3, and incorporating the full range of control variables, Table 6.8 reports the ‘WC effect’ for sub-samples of students attending schools that have a relatively similar student wealth composition within the specific country or region. This is achieved by interacting the WC dummy with indicators of four school wealth groupings; that is, each school is assigned to a country- or region-specific school wealth quartile constructed using the average of students’ log per capita consumption in each school. All models control for propensity reweighting, home background factors, teacher and classroom factors, and governance and parent/community indicators.15 The WC effect is interpreted relative to students taught in schools falling within the first school wealth quartile in the comparator system (whose coefficient is set to zero).
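A sketch of this interaction specification using the formula interface, continuing the earlier code (the control set is abbreviated to a single term, and quartiles are cut over pupils rather than schools for brevity):

```python
import pandas as pd
import statsmodels.formula.api as smf

# School wealth: mean pupil log per capita consumption per school; quartiles
# are cut within each system (wc == 1 and wc == 0 separately).
pooled["school_wealth"] = (
    pooled.groupby("school_id")["log_pc_consumption"].transform("mean")
)
pooled["wealth_q"] = pooled.groupby("wc")["school_wealth"].transform(
    lambda s: pd.qcut(s, 4, labels=False) + 1
)

# The WC dummy interacted with quartile indicators; quartile 1 of the
# comparator is the omitted base, as in Table 6.8.
fit = smf.wls(
    "maths_score_z ~ C(wealth_q) + wc:C(wealth_q) + log_pc_consumption",
    data=pooled,
    weights=pooled["weight"],
).fit(cov_type="cluster", cov_kwds={"groups": pooled["school_id"]})
print(fit.params.filter(like="wc:"))
```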

No significant literacy performance advantage is observed for the two Kenyan regions over the Western Cape at any school wealth quartile once we account for differences in teacher and school inputs. However, students attending the wealthiest schools in Kenya perform significantly better in mathematics than students attending the wealthiest schools in the Western Cape. With wealth measured in log per capita consumption, Panel B of the table describes average student wealth by school wealth quartile in each system. Clearly, relative to the comparator regions in Kenya, schools in the Western Cape and Gauteng are wealthier, while schools in the Eastern Cape are of similar wealth. In fact, the wealthiest quartile of schools in the Eastern Cape and in Kenya (Central and Nairobi) is far more comparable to the second quartile of Western Cape schools. This reinforces how much better Kenyan schools are performing in mathematics relative to the Western Cape, considering the differences in the relative wealth of students.

Students in wealthier Gauteng schools perform similarly to their Western Cape counterparts; the primary differences occur in the poorer parts of each system, where Western Cape students perform significantly better in mathematics and literacy. This is consistent with Figure 6.2—although the regression analysis confirms that this finding holds even after controlling for a wide range of other variables. In literacy, we observe similar findings in the Eastern and Western Cape comparison, except that the difference is significant even at the top end of the distribution, and its size is much larger. The gap in mathematics performance between Western Cape and Eastern Cape schools is significant and very large for schools in quartiles 2 to 4, but not in quartile 1. If we instead use absolute wealth quartiles as an alternative measure (not shown here16), a significant advantage to the Western Cape in the poorest quartile 1 schools emerges.

Table 6.8. ‘Western Cape’ effect when comparing students attending schools with similar relative values of school socio-economic status

PANEL A

| | Western Cape | Kenya | Diff. | Western Cape | Gauteng | Diff. | Western Cape | Eastern Cape | Diff. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Mathematics test scores* | | | | | | | | | |
| Quartile 1 | −0.606 | 0.000 | −0.606* | 0.312 | 0.000 | 0.312** | −0.415 | 0.000 | −0.415 |
| Quartile 2 | −0.228 | 0.063 | −0.291 | 0.661 | 0.319 | 0.342** | 0.713 | −0.670 | 1.383*** |
| Quartile 3 | −0.175 | 0.178 | −0.353 | 0.792 | 0.278 | 0.314*** | 0.658 | −0.621 | 1.279*** |
| Quartile 4 | −0.087 | 0.633 | −0.710** | 0.963 | 0.838 | 0.125 | 1.231 | 0.333 | 0.898** |
| R-squared | 0.471 | | | 0.584 | | | 0.551 | | |
| *Literacy test scores* | | | | | | | | | |
| Quartile 1 | −0.405 | 0.000 | −0.405 | 0.326 | 0.000 | 0.326** | 0.628 | 0.000 | 0.628*** |
| Quartile 2 | −0.285 | 0.126 | −0.411 | 0.616 | 0.334 | 0.282* | 1.001 | 0.364 | 0.637** |
| Quartile 3 | −0.083 | 0.136 | −0.219 | 0.580 | 0.520 | 0.060 | 0.810 | 0.035 | 0.775*** |
| Quartile 4 | 0.043 | 0.387 | −0.344 | 0.620 | 0.736 | −0.116 | 0.909 | 0.478 | 0.431 |
| R-squared | 0.460 | | | 0.614 | | | 0.593 | | |

PANEL B: Average student log per capita consumption by school log per capita consumption quartile

| | Quartile 1 | Quartile 2 | Quartile 3 | Quartile 4 |
| --- | --- | --- | --- | --- |
| Kenya (Central and Nairobi) | 6.398 | 6.752 | 7.149 | 7.933 |
| Eastern Cape | 6.217 | 6.643 | 7.035 | 7.606 |
| Gauteng | 6.995 | 7.513 | 8.544 | 9.297 |
| Western Cape | 7.187 | 7.734 | 8.384 | 9.292 |

Notes: Regression models control for home background and student characteristics, teacher and classroom characteristics, governance indicators and parent/community involvement indices (see the notes to Tables 6.4 and 6.5 for further information). Due to lack of overlap in teacher characteristics between the Western Cape and Kenya, only textbook availability, frequency of assessment and class size are controlled for at the teacher/classroom level. Standard errors are clustered at the school level. Statistically significant at the *** 1% level, ** 5% level, * 10% level. The difference in coefficients (Diff.) is calculated as the Western Cape coefficient less the comparator locale’s coefficient.

Source: SACMEQ 2007.

6.5 Discussion

This chapter has used the Western Cape as a benchmark for comparative econometric analysis of education outcomes. The goal has been to assess the influence on outcomes of some key aspects of how education is governed—the influence on outcomes of bureaucratic capability, and of school-level governance. Three sets of conclusions emerge from the analysis.

A first set of conclusions concerns the extent to which variations across South Africa’s provinces in their bureaucratic capabilities help account for divergent educational outcomes. Descriptive analysis of student learning outcomes across South African provinces suggests that, when multiple performance indicators are considered, the Western Cape is a top-performing bureaucracy, but this has not consistently led to its schools being the top performers. Econometric analysis (using 2007 SACMEQ data) shows a more consistently positive Western Cape effect at the primary school level. Both the econometric and descriptive analyses of the SACMEQ data suggest that Western Cape Grade 6 students perform better at the lower end of the socio-economic distribution than students in other provinces. Considered in tandem with the evidence in Chapter 4 that the Western Cape Education Department (WCED) is a well-managed bureaucracy, this suggests that the benefits of a functional WCED extend to the poorest students in the system. The WCED’s bureaucratic efficiency and its approach to managing the school terrain provide some hope that improving the quality of education institutions can make a difference for the poorest South Africans, thereby tackling inherent learning inequalities in the system.

It is important to qualify, however, that the 2007 SACMEQ data are now a decade old. Performance changes have taken place across provinces, as evidenced for example in Grade 9 TIMSS 2015 scores. It will be useful to repeat the analysis of this chapter using SACMEQ 2013 data when they become available. Furthermore, the results cannot be interpreted as causal; they are approximations of a Western Cape effect. To measure accurately the impact of an administration on student learning, one would need to take a school and its surrounding community in a weakly functioning provincial or national education system and assess the level of improvement when it is reassigned to a better provincial or national education administration (Van der Berg et al., 2016). While this seems an impossibility, the re-demarcation of some of South Africa’s provincial boundaries created the ideal ‘natural experiment’. Gustafsson and Taylor (2016) explored statistically the impact of these boundary shifts on educational outcomes for the affected schools.17 They found that:

by 2013, schools moving to better provinces had seen an improvement, over and above that which may have existed in other schools, equivalent to around one year of progress in a rapidly improving country. The conclusion that paying attention to a province’s administration is a worthwhile policy priority seems supported.

(Gustafsson and Taylor, 2016: 26)

A second set of conclusions from our analysis concerns the performance of the Western Cape relative to some other African countries (notably Kenya). Despite the success of the WCED in providing quality education within the South African context, when the WCED’s performance is considered relative to other Southern and East African systems, especially Kenya (and its Central and Nairobi regions) and to a lesser extent Mauritius, there is indeed a puzzling result of lower mathematics performance. This is evident both in the comparative descriptive statistics and in the econometric estimations which incorporate a range of control variables. Especially noteworthy is that teacher content knowledge, as measured by teacher scores on the SACMEQ mathematics test, did not account for performance gaps in favour of Kenya. Of course, one cannot rule out that teacher test scores fail to capture other teacher pedagogical skills, unobserved abilities, and motivations that may be important for learning. As discussed elsewhere in this book, the Western Cape bureaucracy almost surely is more capable (in a ‘Weberian’ sense) than its Kenyan counterpart. The pairwise comparison between the Western Cape and Kenya raises the possibility that other, ‘soft’ governance characteristics might also play an important role in shaping educational outcomes.

This brings us to the third set of conclusions, which concern the impact on performance of school-level (horizontal) governance. Econometric analysis revealed that the Western Cape effect is very sensitive to the inclusion of controls for parent involvement in schools and parents’ contributions to the school institution. For example, once controls for parental involvement are included, the significant Mauritian advantage over the Western Cape in mathematics falls away; conversely, the Western Cape advantage relative to the Eastern Cape almost doubles in both mathematics and literacy. This occurs even after accounting for school resourcing (including pupil–teacher ratios), student home background, and teacher factors. With non-linearity in the relationship between patterns of parental involvement and student performance in some contexts, and very different relationships between parental involvement and student or school wealth, we cannot disentangle here the different pathways by which parent involvement affects learning. But this does point to potentially interesting dynamics between horizontal governance in schools and learning that are worthy of further exploration—with the school-level case studies in Chapters 8 and 9 illustrative of how such work might proceed.


Table A6.1. Control Variables for SACMEQ Countries and South African Provinces

| Variable | Western Cape | Kenya | Botswana | Mauritius | Gauteng | Eastern Cape |
| --- | --- | --- | --- | --- | --- | --- |
| Home background characteristics: | | | | | | |
| Female | 0.52 | 0.48** | 0.51 | 0.49* | 0.52 | 0.51 |
| Age (in months) | 150.6 | 165.5*** | 153.5*** | 136.5*** | 150.2 | 159.7*** |
| Live with parents | 0.83 | 0.80** | 0.73*** | 0.94*** | 0.83 | 0.62*** |
| Learner has used a computer | 0.92 | 0.13*** | 0.36*** | 0.99*** | 0.83*** | 0.18*** |
| Days absent from school in last month | 0.92 | 1.22*** | 0.27*** | 1.81*** | 0.75** | 1.71*** |
| Mother's education: | | | | | | |
| Senior secondary | 0.19 | 0.29*** | 0.11*** | 0.15*** | 0.20 | 0.20*** |
| Tertiary | 0.28 | 0.05*** | 0.22*** | 0.12*** | 0.36*** | 0.04*** |
| Father's education: | | | | | | |
| Senior secondary | 0.17 | 0.39*** | 0.10*** | 0.16 | 0.19 | 0.21*** |
| Tertiary | 0.29 | 0.08*** | 0.24*** | 0.15*** | 0.34*** | 0.05*** |
| Lots of books present in the home | 0.51 | 0.22*** | 0.39*** | 0.72*** | 0.50 | 0.20*** |
| Attended pre-school: | | | | | | |
| for a year or less | 0.33 | 0.50*** | 0.17*** | 0.08*** | 0.25*** | 0.42*** |
| for 2+ years | 0.49 | 0.42*** | 0.24*** | 0.90*** | 0.60*** | 0.30*** |
| Speaks English at home: | | | | | | |
| Sometimes | 0.58 | 0.75*** | 0.69*** | 0.66*** | 0.63** | 0.64*** |
| Most of the time | 0.13 | 0.11 | 0.08*** | 0.03*** | 0.15 | 0.04*** |
| All the time | 0.12 | 0.05*** | 0.03*** | 0.01*** | 0.12 | 0.03*** |
| Siblings: | | | | | | |
| 1 to 3 | 0.75 | 0.30*** | 0.55*** | 0.81*** | 0.69*** | 0.44*** |
| 4 to 5 | 0.13 | 0.26*** | 0.22*** | 0.06*** | 0.15 | 0.27*** |
| More than 5 | 0.04 | 0.41*** | 0.14*** | 0.03*** | 0.06* | 0.21*** |
| Meals: | | | | | | |
| Has meal at school | 0.77 | 0.21*** | 0.92*** | 0.72*** | 0.65*** | 0.75 |
| Eats breakfast often | 0.84 | 0.79*** | 0.82* | 0.89 | 0.78*** | 0.83 |
| Receives help with homework: | | | | | | |
| Never | 0.05 | 0.13*** | 0.05 | 0.13*** | 0.04 | 0.05 |
| Sometimes | 0.65 | 0.62** | 0.53*** | 0.69** | 0.63 | 0.60** |
| Mostly | 0.27 | 0.21*** | 0.41*** | 0.17*** | 0.29 | 0.28 |
| Repeated: | | | | | | |
| Once | 0.18 | 0.34*** | 0.26*** | 0.20 | 0.16 | 0.28*** |
| Twice | 0.03 | 0.09*** | 0.04 | 0.02** | 0.02 | 0.05*** |
| 3+ times | 0.03 | 0.03 | 0.02*** | 0.01*** | 0.01*** | 0.08*** |
| Mathematics teacher and classroom characteristics: | | | | | | |
| Math teacher test score | 852.8 | 898.3*** | 781.0*** | – | 790.58*** | 726.7*** |
| Textbook availability: | | | | | | |
| Only for teacher | 0.18 | 0.02*** | 0.04*** | 0.03*** | 0.22** | 0.15 |
| Shared between 2+ | 0.04 | 0.56*** | 0.12*** | 0.04 | 0.07*** | 0.17*** |
| Shared between 2 | 0.27 | 0.22*** | 0.21*** | 0.04*** | 0.28 | 0.19*** |
| Textbook per learner | 0.47 | 0.18*** | 0.62*** | 0.88*** | 0.34*** | 0.34*** |
| Teacher age: | | | | | | |
| Younger than 30 years | 0.03 | 0.19*** | 0.14*** | 0.07*** | 0.06*** | 0.00*** |
| 30–40 years old | 0.43 | 0.36*** | 0.47** | 0.33*** | 0.35*** | 0.39 |
| 41–50 years old | 0.39 | 0.29*** | 0.32*** | 0.27*** | 0.37 | 0.41 |
| Older than 50 years | 0.13 | 0.16*** | 0.06*** | 0.33*** | 0.23*** | 0.12 |
| Teacher qualifications: | | | | | | |
| Less than secondary education | 0.17 | 0.04*** | 0.25*** | 0.02*** | 0.16 | 0.28*** |
| Senior secondary education | 0.01 | 0.63*** | 0.33*** | 0.32*** | 0.04*** | 0.11*** |
| Post-secondary education | 0.17 | 0.29*** | 0.18 | 0.59*** | 0.21** | 0.06*** |
| Degree or higher | 0.62 | 0.04*** | 0.23*** | 0.08*** | 0.59 | 0.47*** |
| Teaching experience: | | | | | | |
| 0–5 years of experience | 0.07 | 0.24*** | 0.19*** | 0.06 | 0.12*** | 0.16*** |
| 6–10 years of experience | 0.11 | 0.18*** | 0.20*** | 0.13* | 0.12 | 0.07*** |
| 11–20 years of experience | 0.60 | 0.36*** | 0.45*** | 0.34*** | 0.51*** | 0.48*** |
| 20+ years of experience | 0.20 | 0.22** | 0.15*** | 0.46*** | 0.25*** | 0.22 |
| Weekly teaching time: | | | | | | |
| 10–14 hours per week | 0.05 | 0.08*** | 0.12*** | 0.03*** | 0.10*** | 0.07** |
| 15–19 hours per week | 0.19 | 0.31*** | 0.10*** | 0.15*** | 0.08*** | 0.10*** |
| 20–25 hours per week | 0.56 | 0.58 | 0.59* | 0.64*** | 0.56 | 0.32*** |
| 25+ hours per week | 0.15 | 0.01*** | 0.02*** | 0.10*** | 0.19** | 0.22*** |
| Class assessments: | | | | | | |
| 1 test per term | 0.07 | 0.00*** | 0.01*** | 0.17*** | 0.05*** | 0.02*** |
| 2–3 tests per term | 0.49 | 0.40*** | 0.43*** | 0.42*** | 0.49 | 0.54** |
| 2–3 tests per month | 0.30 | 0.35*** | 0.36*** | 0.24*** | 0.25*** | 0.23*** |
| Weekly tests | 0.11 | 0.24*** | 0.20*** | 0.17*** | 0.22*** | 0.13 |
| Literacy teacher and classroom characteristics: | | | | | | |
| Literacy teacher test score | 813.3 | 791.0*** | 770.1*** | – | 776.5*** | 717.9*** |
| Textbook availability: | | | | | | |
| Only for teacher | 0.05 | 0.02*** | 0.03*** | 0.03*** | 0.07** | 0.08*** |
| Shared between 2+ | 0.05 | 0.54*** | 0.12*** | 0.04 | 0.12*** | 0.17*** |
| Shared between 2 | 0.18 | 0.23*** | 0.21** | 0.04*** | 0.34*** | 0.26*** |
| Textbook per learner | 0.69 | 0.21*** | 0.64*** | 0.87*** | 0.44*** | 0.43*** |
| Teacher age: | | | | | | |
| Younger than 30 years | 0.04 | 0.26*** | 0.14*** | 0.07*** | 0.10*** | 0.00*** |
| 30–40 years old | 0.37 | 0.34* | 0.50*** | 0.33** | 0.33* | 0.31*** |
| 41–50 years old | 0.34 | 0.31 | 0.31* | 0.27*** | 0.34 | 0.42*** |
| Older than 50 years | 0.23 | 0.09*** | 0.05*** | 0.33*** | 0.24 | 0.19** |
| Teacher qualifications: | | | | | | |
| Less than secondary education | 0.17 | 0.02*** | 0.26*** | 0.02*** | 0.27*** | 0.26*** |
| Senior secondary education | 0.04 | 0.65*** | 0.33*** | 0.32*** | 0.09*** | 0.10*** |
| Post-secondary education | 0.16 | 0.28*** | 0.19*** | 0.59*** | 0.12*** | 0.14 |
| Degree or higher | 0.60 | 0.06*** | 0.22** | 0.08*** | 0.52*** | 0.42*** |
| Teaching experience: | | | | | | |
| 0–5 years of experience | 0.09 | 0.33*** | 0.21*** | 0.06*** | 0.19*** | 0.11 |
| 6–10 years of experience | 0.14 | 0.12 | 0.19*** | 0.13 | 0.09*** | 0.02*** |
| 11–20 years of experience | 0.42 | 0.37*** | 0.45 | 0.34*** | 0.33*** | 0.56*** |
| 20+ years of experience | 0.33 | 0.18** | 0.15*** | 0.46*** | 0.40*** | 0.23*** |
| Weekly teaching time: | | | | | | |
| 10–14 hours per week | 0.02 | 0.09*** | 0.11*** | 0.03 | 0.09*** | 0.10*** |
| 15–19 hours per week | 0.18 | 0.32*** | 0.10*** | 0.15** | 0.20 | 0.19 |
| 20–25 hours per week | 0.52 | 0.56** | 0.61*** | 0.64*** | 0.50 | 0.16*** |
| 25+ hours per week | 0.21 | 0.01*** | 0.01*** | 0.10*** | 0.09*** | 0.27*** |
| Class assessments: | | | | | | |
| 1 test per term | 0.14 | 0.01*** | 0.01*** | 0.17** | 0.25*** | 0.02*** |
| 2–3 tests per term | 0.52 | 0.47*** | 0.42*** | 0.42*** | 0.43*** | 0.47** |
| 2–3 tests per month | 0.25 | 0.26 | 0.37*** | 0.24 | 0.14*** | 0.28 |
| Weekly tests | 0.06 | 0.26*** | 0.20*** | 0.17*** | 0.18*** | 0.14*** |
| Other classroom characteristics: | | | | | | |
| Average class size | 38.00 | 44.00*** | 29.00*** | 34.00*** | 41.00*** | 44.00*** |
| School pupil–teacher ratio | 33.80 | 43.80*** | 28.10*** | 22.00*** | 32.30*** | 36.30*** |
| 'Governance' and parent/community involvement: | | | | | | |
| Teacher absenteeism: | | | | | | |
| Sometimes a problem | 0.37 | 0.60*** | 0.55*** | 0.50*** | 0.55*** | 0.65*** |
| Often a problem | 0.11 | 0.09* | 0.08** | 0.05*** | 0.05*** | 0.12 |
| Teachers skipping class: | | | | | | |
| Sometimes a problem | 0.10 | 0.52*** | 0.25*** | 0.10 | 0.32*** | 0.25*** |
| Often a problem | 0.03 | 0.11*** | 0.04** | 0.00*** | 0.03 | 0.12*** |
| District support: | | | | | | |
| School has never been fully inspected | 0.12 | 0.06*** | 0.04*** | 0.41*** | 0.25*** | 0.27*** |
| Number of school visits by an inspector in the past 2 years | 3.32 | 5.58*** | 1.36*** | 26.02*** | 3.17 | 3.15 |
| Parental involvement (parents/community): | | | | | | |
| assist with building facilities | 0.13 | 0.55*** | 0.19*** | 0.11* | 0.21*** | 0.57*** |
| assist with maintaining facilities | 0.18 | 0.42*** | 0.14** | 0.32*** | 0.47*** | 0.65*** |
| purchase furniture and equipment | 0.17 | 0.38*** | 0.12*** | 0.41*** | 0.36*** | 0.43*** |
| purchase textbooks | 0.23 | 0.11*** | 0.11*** | 0.23 | 0.36*** | 0.17*** |
| purchase stationery | 0.42 | 0.09*** | 0.22*** | 0.35*** | 0.52*** | 0.25*** |
| contribute to exam fees | 0.00 | 0.83*** | 0.06*** | 0.09*** | 0.11*** | 0.16*** |
| contribute to teacher salaries | 0.48 | 0.50 | 0.03*** | 0.01*** | 0.63*** | 0.08*** |
| contribute to staff salaries | 0.36 | 0.15*** | 0.06*** | 0.01*** | 0.52*** | 0.27*** |
| assist with extra-curricular | 0.71 | 0.57*** | 0.94*** | 0.86*** | 0.75* | 0.85*** |
| assist with teaching | 0.37 | 0.13*** | 0.29*** | 0.03*** | 0.23*** | 0.27*** |
| assist with school meals | 0.32 | 0.26*** | 0.15*** | 0.13*** | 0.31 | 0.25*** |
| Political disruptions: days absent due to strike action | | | | | | |
| maths teachers | 5.11 | – | – | – | 5.83** | 13.74*** |
| reading teachers | 5.06 | – | – | – | 6.46*** | 13.90*** |

Note: Asterisks indicate a statistically significant difference from the Western Cape mean, with more asterisks denoting stronger significance. A dash (–) marks values not available for that system: teacher test scores are not available for Mauritius, and strike-related absence was reported only for the South African provinces.

(p.177) References


Abadie, A. and Imbens, G. (2011) ‘Bias-corrected matching estimators for average treatment effects’, Journal of Business & Economic Statistics 29(1), 1–11.

Crouch, L. and Mabogoane, T. (2001) ‘No magic bullets, just tracer bullets: The role of learning resources, social advantage, and education management in improving the performance of South African schools’, Social Dynamics 27(1), 60–78.

DBE (2013) Report on Progress in the Schooling Sector Against Key Indicators. Report prepared by M. Gustafsson. Pretoria: Department of Basic Education, http://www.education.gov.za/Portals/0/Documents/Reports/Report%20on%20Progress%20in%20the%20Schooling%20Sector%20Against%20Key%20Indicators.pdf?ver=2013-11-11-111554-463.

DBE (2016) Report on Progress in the Schooling Sector Against Key Learner Performance and Attainment Indicators. Report prepared by M. Gustafsson. Pretoria: Department of Basic Education, http://www.education.gov.za/Portals/0/Documents/Reports/Education%20Sector%20review%202015%20-%202016.pdf.

Evans, D. and Popova, A. (2015) ‘What Really Works to Improve Learning in Developing Countries? An Analysis of Divergent Findings in Systematic Reviews’. Washington DC: World Bank Policy Research Working Paper No. 7203.

Filmer, D. and Pritchett, L. H. (2001) ‘Estimating the Wealth Effects without Expenditure Data or Tears: An Application to Educational Enrollments in States of India’, Demography 38(1), 115–32.

Fiske, E. B. and Ladd, H. F. (2004) Elusive equity: Education reform in post-apartheid South Africa. Washington DC: Brookings Institution Press.

Gustafsson, M. and Taylor, S. (2016) ‘Treating schools to a new administration: Evidence from South Africa of the impact of better practices in the system-level administration of schools’. Stellenbosch Economic Working Paper Series WP No. 05/16, Stellenbosch University, Department of Economics.

Hanushek, E. (2007) ‘Education Production Functions’, Palgrave Encyclopaedia, https://hanushek.stanford.edu/sites/default/files/publications/Hanushek%202008%20PalgraveDict.pdf.

Hanushek, E. and Woesmann, L. (2007) ‘The Role of School Improvement in Economic Development’. NBER Working Paper Series No. 12832, Cambridge, MA: National Bureau of Economic Research, http://www.nber.org/papers/w12832.

Harttgen, K. and Vollmer, S. (2011) ‘Inequality decomposition without income or expenditure data: Using an asset index to simulate household income’, Human Development Research Paper 2011/13. Human Development Reports. United Nations Development Programme.

Hill, J. and Reiter, J. P. (2006) ‘Interval estimation for treatment effects using propensity score matching’, Statistics in Medicine 25(13), 2230–56.

Ho, D., Imai, K., King, G., and Stuart, E. (2007) ‘Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference’, Political Analysis 15(3), 199–236.

Hungi, N., Makuwa, D., Ross, K., Saito, M., Dolata, S., van Cappelle, F., Paviot, L., and Vellien, J. (2010) ‘SACMEQ III Project Results: Achievement levels in reading and mathematics’. Working Document No. 1.

(p.178) Kang, J. D. Y. and Schafer, J. L. (2007) ‘Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data’, Statistical Science 22(4), 523–39.

Kingdon, G. G., Little, A., Aslam, M., Rawal, S., Moe, T., Patrinos, H., Beteille, T., Banerji, R., Parton, B., and Sharma, S. K. (2014) ‘A rigorous review of the political economy of education systems in developing countries’. Final Report. Education Rigorous Literature Review, Department for International Development.

Kotze, J. and Van der Berg, S. (2015) ‘Investigating cognitive performance differentials by socio-economic status: Comparing Sub-Saharan Africa and Latin America’, Third Lisbon Research Workshop on Economics, Statistics and Econometrics of Education, 23–24 January, Lisbon.

Li, F., Morgan, K. L., and Zaslavsky, A. M. (2014) ‘Balancing Covariates via Propensity Score Weighting’, http://www2.stat.duke.edu/~fl35/papers/psweight_14.pdf.

McEwan, P. J. (2015) ‘Improving learning in primary schools of developing countries: A meta-analysis of randomized experiments’, Review of Educational Research 85(3), 353–94.

Reddy, V., Prinsloo, C., Arends, F., and Visser, M. (2012) ‘Highlights from TIMSS 2011: The South African perspective’, http://www.hsrc.ac.za/en/research-data/ktree-doc/12417.

Reddy, V., Visser, M., Winnaar, L., Arends, F., Juan, A., Prinsloo, C., and Isdale, K. (2016) ‘TIMSS 2015 Highlights of Mathematics and Science Achievement of Grade 9 South African Learners’. Pretoria: Human Sciences Research Council, http://www.timss-sa.org.za/timss-2015/.

Rubin, D. and Thomas, N. (2000) ‘Combining propensity score matching with additional adjustments for prognostic covariates’, Journal of the American Statistical Association 95(450), 573–85.

Stuart, E. (2010) ‘Matching methods for causal inference: A review and a look forward’, Statistical Science: A Review Journal of the Institute of Mathematical Statistics 25(1), 1.

Taylor, S. (2010) ‘The Performance of South African Schools: Implications for Economic Development’, PhD thesis, University of Stellenbosch.

Taylor, S. and Spaull, N. (2015) ‘Measuring access to learning over a period of increased access to schooling: The case of Southern and Eastern Africa since 2000’, International Journal of Educational Development 41, March, 47–59.

Taylor, S. and Yu, D. (2009) ‘The importance of socio-economic status in determining educational achievement in South Africa’. Stellenbosch Economic Working Paper Series No. 01/09, University of Stellenbosch, https://www.ekon.sun.ac.za/wpapers/2009/wp012009/wp-01-2009.pdf.

Van der Berg, S., Spaull, N., Wills, G., Gustafsson, M., and Kotze, J. (2016) ‘Identifying Binding Constraints to Educational Improvement. Synthesis Report for the Programme to Support Pro-poor Policy Development (PSPPD)’. Stellenbosch University: Research on Socio-Economic Policy.

Wills, G., Shepherd, D., and Kotze, J. (2016) ‘Interrogating a Paradox of Performance in the WCED: A Provincial and Regional Comparison of Student Learning’. Stellenbosch Economic Working Papers No. 14/16, Stellenbosch, http://www.ekon.sun.ac.za/wpapers/2016/wp142016/wp-14-2016.pdf.

Notes:

(1) More recent datasets on learning such as PIRLS and pre-PIRLS 2011 are not sampled to be representative at the provincial level and include very few comparator African states. TIMSS (Trends in International Mathematics and Science Study) 2015 has been released, but includes only one other comparator African state at the Grade 8 or 9 level, namely Botswana. TIMSS 2015 testing also takes place at a later Grade 9 level when drop-out is likely to complicate the interpretation of results. Although South Africa participated in TIMSS Numeracy for the first time in 2015, a test of mathematics and science skills at the Grade 5 level, Botswana did not participate at this level.

(2) Rasch scaling, which accounts for the difficulty level of each test item, was used to generate the literacy and mathematics scores. Different test levels can be used to ascertain mathematics and literacy competencies, providing a concrete analysis of what pupils and teachers can do (Hungi et al., 2010).
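For intuition, a minimal sketch of the Rasch item response function in Python follows; the values are illustrative only, and this is not the SACMEQ scaling code.

```python
import numpy as np

def rasch_prob(theta, b):
    # Rasch model: probability that a pupil of ability theta answers an
    # item of difficulty b correctly,
    # P = exp(theta - b) / (1 + exp(theta - b))
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical difficulties for three items of increasing difficulty
items = np.array([-1.0, 0.0, 1.5])
for theta in (-0.5, 0.5, 1.5):
    print(f"ability {theta:+.1f}:", np.round(rasch_prob(theta, items), 2))
```

Because only the gap between ability and item difficulty enters the model, pupils and items can be placed on one common scale regardless of which test level was administered.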

(3) For example, Reddy et al. (2016) report significant declines across 2003, 2011, and 2015 in the proportions of students in the Western Cape TIMSS sample who have more than twenty-five books at home, whose parents or guardians have an education above Grade 12, and who speak the language of the test at home.

(4) Part of this Gauteng advantage in the NSC could relate to in-migration of educated youth into the province in search of job opportunities. (Gauteng has increasingly become a hub of economic activity in South Africa.)

(5) Following Li, Morgan, and Zaslavsky (2014), students in the Western Cape are assigned propensity score weights equal to 1 − e(x) and learners in the comparison country/region are assigned weights equal to e(x), where e(x) is the propensity score of being a Grade 6 student in the Western Cape, estimated from a probit model in which student and home background characteristics are regressed on the WC indicator. This weighting places greater emphasis on units with propensity scores close to 0.5, where overlap between the two student groups is greater. The final model controls for the propensity weights as well as the same student and home background variables used in estimating the propensity score on which the weights are computed. This regression adjustment attempts to resolve any observed imbalances that may remain between groups (Hill and Reiter, 2006; Ho, Imai, King, and Stuart, 2007; Stuart, 2010), increases the precision and efficiency of the estimation, and reduces bias (Abadie and Imbens, 2011; Kang and Schafer, 2007; Rubin and Thomas, 2000).
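A minimal sketch of this estimator in Python, with hypothetical column names ('wc', 'score') standing in for the chapter's actual variables:

```python
import numpy as np
import statsmodels.api as sm

# df is assumed to be a pooled pandas DataFrame of Grade 6 records with
# an indicator 'wc' (1 = Western Cape, 0 = comparison system), a SACMEQ
# test score 'score', and student/home background covariates; all
# column names here are illustrative, not the chapter's actual ones.
def overlap_weighted_effect(df, covariates):
    X = sm.add_constant(df[covariates])
    # Probit propensity of being a Western Cape student, e(x)
    e = sm.Probit(df["wc"], X).fit(disp=0).predict(X)
    # Overlap weights (Li, Morgan, and Zaslavsky, 2014): Western Cape
    # students get 1 - e(x); comparison students get e(x), emphasizing
    # units with e(x) near 0.5 where the two groups overlap
    w = np.where(df["wc"] == 1, 1 - e, e)
    # Regression adjustment: weighted regression of scores on the WC
    # indicator plus the same background covariates
    Xr = sm.add_constant(df[["wc"] + covariates])
    return sm.WLS(df["score"], Xr, weights=w).fit()
```

The weights down-weight students who look unambiguously like one group or the other, so the estimated Western Cape effect rests on students the two systems actually have in common.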

(6) Tanzania is included in the analysis of the working paper version of this chapter, but has been intentionally excluded here due to low levels of overlap in the control variables with the Western Cape. The two samples of students are so different that it is not possible to adequately control for their observed differences in a regression (as this requires some common overlap in their characteristics) or to improve overlap with propensity score matching.

(7) We note, though, that some unmeasured determinants of outcomes may be unrelated to governance. For example, the Western Cape’s long and difficult social history (which is distinctive from other parts of South Africa) may have resulted in a variety of (unmeasured) social and family deficits—as illustrated by, say, the province’s unusually high levels of alcoholism and foetal alcohol syndrome.

(8) The advantage to Mauritius is also augmented by the fact that its Grade 6 students are on average nearly a year younger than Grade 6 students in the Western Cape. Owing to overlap problems, we could not adequately control for age differences; the resulting bias, however, would lead us to underestimate the Mauritian advantage over the Western Cape.

(9) We do qualify, however, that the model comparing the Western Cape to the two Kenyan regions does not effectively control for teacher characteristics (for reasons of lack of common support). Therefore, it is possible that part of the negative (positive) Western Cape (Kenya) effect is accounted for by teacher factors we are unable to observe.

(10) As an interesting aside, a one-way analysis of variance (ANOVA) is used to determine whether expected parental involvement in a region differs significantly according to measures of school governance, such as teacher absenteeism. Only in the case of Kenya and the Gauteng and Western Cape provinces of South Africa is teacher absenteeism found to be significantly negatively related to parent involvement; that is, higher levels of parent involvement are associated with fewer reported teacher absenteeism problems. Parent involvement is estimated to be significantly and negatively related to teaching days lost due to strike activity in Gauteng only. This suggests that parent involvement and governance play different roles in different regions, and the estimated Western Cape effect may be masked by non-linear relationships between school governance and performance.
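A minimal sketch of such a check in Python, with hypothetical column names standing in for the SACMEQ school variables:

```python
import pandas as pd
from scipy.stats import f_oneway

# 'schools' is assumed to be a pandas DataFrame of school records with a
# parental-involvement index ('parent_involvement') and a teacher-
# absenteeism category ('absenteeism', e.g. 'never'/'sometimes'/'often');
# both names are illustrative, not the SACMEQ variable names.
def involvement_anova(schools: pd.DataFrame):
    groups = [g["parent_involvement"].dropna().to_numpy()
              for _, g in schools.groupby("absenteeism")]
    # One-way ANOVA: do mean levels of parental involvement differ
    # across the teacher-absenteeism categories?
    return f_oneway(*groups)
```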

(11) As an example, Taylor and Yu (2009: 23) note that in ‘South Africa a student with a given SES has more than twice the chance of achieving a reading score approximately equal to the reading score predicted by the SES gradient, than would be the case in the USA.’

(12) Mauritian and Botswanan datasets are not open source. An additional barrier to using datasets collected by Central Statistics in Mauritius is that by law they require a representative to collect the data in person.

(13) We calculate the percentage of eleven- to fifteen-year-old children who are currently not in school at each percentile and assume that these students would have performed at the same level as the lowest-performing fifth percentile had they written the SACMEQ tests. For a more detailed discussion of the SES scale construction the reader is referred to Kotze and Van der Berg (2015).
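A minimal sketch of this adjustment for a single SES percentile, assuming the out-of-school share comes from household survey data:

```python
import numpy as np

def adjusted_mean_score(scores, out_of_school_share):
    """Recompute a mean score that counts out-of-school children.

    scores: SACMEQ scores of enrolled pupils at a given SES percentile.
    out_of_school_share: fraction of 11- to 15-year-olds at that
        percentile who are not in school.
    Out-of-school children are assigned the score of the lowest-
    performing fifth percentile, as the note assumes.
    """
    floor = np.percentile(scores, 5)
    n = len(scores)
    # If share s of the cohort is out of school and n pupils are
    # enrolled, the implied out-of-school count is n * s / (1 - s)
    n_out = int(round(n * out_of_school_share / (1 - out_of_school_share)))
    return np.mean(np.concatenate([scores, np.full(n_out, floor)]))
```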

(14) This best-fit line, referred to as a locally weighted polynomial regression, is similar to a two-variable ordinary least squares regression line, except that fewer restrictions are placed on the model, allowing for non-linear relationships to be seen by fitting simple models to localized subsets of data.
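A brief sketch of such a smoother, using the lowess implementation in statsmodels on simulated data standing in for (SES, score) pairs:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data standing in for (SES, test score) pairs
rng = np.random.default_rng(0)
ses = rng.uniform(-2, 2, 500)
score = 500 + 40 * ses + 15 * ses**2 + rng.normal(0, 30, 500)

# Locally weighted regression: fits simple models to localized subsets
# of the data, tracing a non-linear best-fit line through the cloud
smoothed = sm.nonparametric.lowess(score, ses, frac=0.5)
# smoothed[:, 0] holds sorted SES values; smoothed[:, 1] the fitted scores
```

The frac parameter controls how local each fit is: smaller values track the data more closely, larger values give a smoother line.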

(15) As with the analysis of Tables 6.4 and 6.5, in the case of Kenya only textbooks, class size, and assessments are used as regression controls, because all the other teacher/classroom variables have very little overlap with the Western Cape or are homogeneous in Kenya.

(16) See Wills, Shepherd, and Kotze (2016) for a detailed discussion of wealth quartiles as an alternative SES measure.

(17) Between 2005 and 2007, South Africa’s provincial boundaries were adjusted to ensure that no municipality straddled two provinces. This ‘quasi-experiment’ allowed for the identification of a causal relationship between provincial administrations of education and learning outcomes in schools (Gustafsson and Taylor, 2016). The redrawing of provincial boundaries affected seven of nine provinces (though not the Western Cape). The changes in matriculation outcomes of province-switching schools were consistent with the direction of perceived functionality of different provinces. In particular, schools that shifted from the North West to the Gauteng provincial administration experienced improvements in their matriculation examination outcomes.