Applications for Managing the Criminal Justice System
Abstract and Keywords
This chapter explains how the theory and models in this volume have been used to make predictions and forecasts of significant importance in managing the Criminal Justice System. In the first example, it is shown that, using the theory together with demographics and sentencing policy (custody rates), the prison population over a period of 30 years is predicted to within 3 per cent, and that without the “Prison Works” mantra of the early 1990s the prison population would have continued to decline. The second example presents the results of a similar model of court workloads. The third shows how forecasts, made in the year 2000, of the size of the DNA database panned out over the subsequent four years.
In Chapter 2, we proposed the theory that there are three categories of offenders: high-risk/high-rate, high-risk/low-rate, and low-risk/low-rate. In Chapter 6, we showed that these categories differed in their psychological characteristics. In this chapter, we use the theory to make forecasts of the prison population and of the number of offenders in the DNA database.
In 1996 the Operational Research Unit of the Home Office, in which two of us (John MacLeod and Peter Grove) then worked, was asked to develop a new long-term methodology to forecast the prison population in England and Wales. The requirement was to be able to predict the average population (disaggregated by age, gender, and type of offence) in any year up to five years in advance. The purpose of these long-term forecasts was to inform the Prison Agency’s programme of estate management (eg how many new prisons to build, how many to refurbish, how many to close). The methods of projection existing at the time were essentially based on regression and time series models. Although effective for short-term forecasts or in stable conditions, these models cannot cope with radical change.
In 1993, following 25 years of relative stability, there was a sudden and dramatic increase (10 to 15 per cent), year on year, in the use of custodial sentences by the courts. It is difficult to see how a time series or regression approach could be helpful in such circumstances. As a result, it was decided to build a model which could be used to test the consequences of various policy scenarios (eg reduce the use of custody by 10 per cent for non-violent crime, ‘three strikes and you’re out’, etc). The model was intended to be easily used by policy makers to test the results of various policy options.
Some years prior to the 1996 request, a casual conversation led to a preliminary exploration of newly generated cohort data extracted from the Offenders Index. That exploration sparked the ideas which led to an embryonic theory of age and crime. The prison population forecasting project provided the impetus to develop and expand the theory which is now described in this book. The theory, and mathematical models implementing it, provided one element of the prison population forecasting system.
The other element is a flow model which keeps track of prisoner numbers over time and is also capable of reflecting actual and potential changes in demographics and penal policy. The theory provides the required understanding of the behaviour of offenders when confronted with the criminal justice system. We will see that together the ‘flow model’ and the theory of offending/conviction make accurate predictions of the prison population. The forecasting model described here was in regular use for over a decade1 and with some development can be expected to provide useful forecasts well into the future.
The Flow Model
One of the reasons for using a flow model is the inherent stability of the prison population when considered over a period of a few years. This is partly due to the contributions to the population of those with long sentences. We have good information about the current prison population and in particular about long sentence prisoners. These make a large contribution to the total (generally, one ‘two-year’ sentence has the same contribution as two ‘one-year’ sentences), and thus make up an important, slowly changing, and in principle easily predicted, part of the future prison population. The other contribution to stability is the high rate of recidivism. The recidivism probability for custodial sentences, ie the proportion of offenders who return to prison after release, approaches 70 per cent for those who have been in prison at least twice (see Table 4.3). As we know a good deal about those offenders currently in prison, we should be able to predict, on the basis of past data, when these 70 per cent are going to return. This leaves those offenders who will arrive in prison on their first custodial sentence as the major uncertainty.
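The remark that one ‘two-year’ sentence contributes as much to the total as two ‘one-year’ sentences can be made concrete with a line of arithmetic. The sketch below is illustrative only (it is not the authors’ code): in steady state, a stream of receptions occupies beds in proportion to receptions per year multiplied by sentence length.

```python
# Illustrative sketch: steady-state bed occupancy from a stream of
# receptions is receptions_per_year * average sentence length (years).

def standing_population(receptions_per_year: float, sentence_years: float) -> float:
    """Steady-state number of prison places occupied by this stream."""
    return receptions_per_year * sentence_years

# One offender per year on a two-year sentence...
long_stream = standing_population(1, 2.0)
# ...occupies as many places, on average, as two per year on one-year sentences.
short_stream = standing_population(2, 1.0)
assert long_stream == short_stream == 2.0
```

This is why the long-sentence component of the population changes slowly and is, in principle, easy to predict.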
We can make a very rough estimate of the importance of first time custody cases using the model shown in Figure 7.1. We assume that the prison population is in equilibrium (ie releases balance receptions), which was approximately the case (within 10 per cent) from 1970 until 1993. Then, using the model, we can estimate the number arriving in prison for the first time as follows. In the model: the prison population is N; the average rate of release χ is the weighted average of the reciprocals of the sentence lengths si for each offence type i = 1, 2, …; and recidivism (or more accurately the probability of re-imprisonment) is p. Thus the number leaving prison each year is N*χ = ∑i Ni/si, and the number returning each year is p*N*χ.
As the system is in equilibrium, the number returning to prison (p*N*χ) plus the number starting their first custodial sentences (n) must equal the number leaving (N*χ). Therefore the number of first time custodies is given by n = N*χ*(1−p) (ie the number entering the system is the same as the number of reformed prisoners who will not return to prison). Using order of magnitude data from the 1992 Prison Statistics, we have very approximately χ = 0.5 per year.
A large proportion of first custodial offenders will have had previous non-custodial convictions so we also need estimates of the rate of reconviction and the age profile of first convictions. The obvious approach is to use historical empirical distributions. Such distributions would be smoothed to provide idealized distributions to be used in the model. An alternative approach is to construct a model of the behaviour of offenders which can reproduce the essential features of the empirical data. In the previous chapters we have described an appropriate theory and developed mathematical models which enable us to calculate the annual number of first offenders and recidivists.
The forecasting methodology described here was built on the two-category simplified model of Chapter 4. The two offending categories are separately parameterized for those offences leading to imprisonment before 1993 and again for all standard list offences (the probability of being imprisoned for any significant length of time, more than a few weeks, for a non-standard list offence is very small). In the prison forecasting methodology the high-recidivism, rapidly offending category is described as the ‘high’ population and the low-recidivism, slowly offending category as the ‘low’ population. The parameter estimates for the simplified two-category model are listed in Tables 4.4 and 4.5 in Chapter 4.
Predicting the Prison Population
With the offending model we have the means to deal with the two ‘difficult’ parts of modelling the prison population. We can, from the numbers born in each year over the previous 70 years, predict the number and age profile of offenders at first, second, third, etc convictions, up to ten years into the future.2 Knowing the first-custody rate at each conviction number, we can calculate the number of offenders (n) entering prison for the first time. Similarly we can predict the proportion of those released from prison who will reoffend, be convicted and receive another custodial sentence, and the timescale for that reincarceration. The rest is accounting, although somewhat complicated by the disaggregation by offence type, conviction number and gender.
In a little more detail, the model makes the following calculations. Knowing the current prison population size, the future population, for successive quarters, can be calculated by: adding the new intake, consisting of recidivists and those receiving prison sentences for the first time; ascribing the new intake sentence lengths based on current sentencing distributions; and subtracting the number released, which can be calculated from the sentence lengths ascribed to previous intakes. The custody rate information for each offence, together with remission and sentencing policy over the time of the forecast, form a ‘Scenario’. Scenarios are generated by a graphical scenario editor and then fed into the model. Although simple in principle, the calculation is rather complicated and was encoded in a long C++ program.
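The quarterly accounting step can be sketched as follows. This is a deliberately collapsed illustration, not the production system: the real program was a long C++ implementation disaggregated by offence type, conviction number and gender, and the intake and sentence figures below are invented.

```python
# Minimal sketch of one quarter of the flow-model accounting: age the
# existing stock, release those whose sentences finish, add the new intake.
from collections import Counter

def step_quarter(population: Counter, intake: dict) -> Counter:
    """population maps remaining-quarters -> prisoner count; intake maps
    sentence length in quarters -> number received this quarter."""
    aged = Counter()
    for quarters_left, count in population.items():
        if quarters_left > 1:              # still serving next quarter
            aged[quarters_left - 1] += count
        # quarters_left == 1: released this quarter, so drops out
    for length, count in intake.items():   # new receptions
        aged[length] += count
    return aged

pop = Counter({4: 100, 8: 50})             # current stock (invented figures)
pop = step_quarter(pop, {2: 30, 4: 20})    # one quarter's (invented) intake
print(sum(pop.values()))                   # 200: no releases fell due yet
```

Iterating this step quarter by quarter, with intakes supplied by the offending model and the Scenario’s custody rates, yields the population forecast.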
Initially only a test version of the computer implementation of the model was available. This did not make use of current prison information. Instead the model was run from 1950 with an initial condition of empty prisons. The prison population was then built up entirely on the basis of the model of offending. If the model is an accurate reflection of reality, the results should be comparable to the recorded prison population.
Figure 7.2 is a graph of the total prison population based on the first scenario which assumed that, for the whole period prior to 1993, the sentencing policy, and in particular custody rates, were the same as those in 1992; after 1993 the scenario assumed a continuing year on year increase in the custody rate to reflect the known situation up to 1996. The 3 per cent error bars on the modelled line (approximately one standard deviation) indicate the possible effect of the stochastic variation in the total number of offenders born in each year.
Figure 7.3 shows the results of a second scenario in which the actual custody rates for each year from 1975 to 1993 were used in the model. We see now that all the data points, post-1975, are within the one standard deviation error bars. This fit is ‘too good’: the ‘error’ calculation is really an upper bound, and a more realistic error estimate is about one third of that size (ie about 1 per cent). We can conclude that a little less than half of the fluctuation before 1993 (ie that part not due to demographics) was due to changes in custody rates. After 1993 we see that the prison population is well explained by the year on year increase in custody rates. We should emphasize that these projections are not based on any information derived from prisons but only on sentencing information (custody rates and sentence length distributions) obtained from the courts and population estimates derived from census data. The fit is remarkable, indicating that (at least as far as those offences resulting in possible custodial disposals by the courts are concerned) the model of offending is capturing the gross behaviour of the offending population.
For comparison purposes, we modelled a third scenario in which actual custody rates were used for each year in the period 1975–1992 and then held at the 1992 values for the period 1993–1999. This scenario illustrates what might have happened had Home Secretary Michael Howard not made his ‘Prison Works’ speech.3 Figure 7.4 shows that, without the increasing use of custody by the courts, post-1993 demographic trends would have caused the prison population to continue its slow decline, levelling off around the turn of the century.
Using our theory and a simple (in principle at least) flow model, we have been able to construct a remarkably accurate model of the prison population. The model can be used to predict the effects of changes in sentencing policy by the courts. Of course, having an accurate model of what will happen given a particular policy is only half the battle. If we want to know the actual prison population over the next five years, it is necessary to predict sentencing policy over the same period. Unfortunately no analytical method exists which can accurately carry out that task, so there was a need to run the model assuming various different scenarios. However, even with this limitation the model has been a useful aid to policy makers in allocating prison resources.
In 1997, a change in policy (focusing more on drugs offences) resulted in all drugs offences being defined as standard list. Although this did not change the behaviour of offenders, who considered many of these offences as rather trivial, it did increase the number of standard list offences, which was the officially recognized measure of relatively serious crime. As a result the proportions of offenders being convicted for particular offence types, within the standard list offences, changed, and the prison model had to be re-parameterized to reflect the new situation. The change in the total number of standard list convictions was thought to be concentrated on the trivial group of offenders (ie those who would rarely be convicted of offences formerly on the standard list). As the trivial group was not explicitly modelled in the prison model, there was no change in the overall offending parameters. More recent results from the updated model, using actual sentencing practice with and without adjustments to reflect the prison population at the beginning of the projection period, are shown in Figures 7.5 to 7.7.
Given the success of the methodology in predicting long-term changes in the prison population, it was decided to produce a similar model for all court disposals. Accuracy similar to that of the prison model was not expected, for three reasons. First, the averaging over the six months of a typical prison sentence is not applicable to non-custodial sentences. Second, for non-custodial sentences we have to deal with trivial offences, for which we have found an intrinsic 5 per cent year-to-year fluctuation in the number of convictions for males and 10 per cent for females. Finally, the data on sentencing policy are considerably less robust for non-custodial sentences.
The methodology is essentially built on the gamma distribution model (Grove 2003), applied to trivial offenders and discussed in Chapter 4. Some changes were required compared with the prison model, such as a more sophisticated treatment of the temporal parameter (δ). This is modelled by an enhanced rate of offending over the first six months following reconviction, again with the intention of providing an easily implemented approximation to our analysis of Chapters 3 and 4.
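The enhanced early rate can be illustrated with a piecewise-constant hazard: a boosted reoffending rate in the first six months after reconviction, reverting to the base rate thereafter. The particular rate values below are invented for illustration; the fitted parameters come from the gamma-distribution analysis of Chapter 4 and Grove (2003).

```python
# Illustrative piecewise-constant hazard: rate = base_rate * boost for the
# first half-year after reconviction, base_rate afterwards. (Rates invented.)
import math

def survival(t: float, base_rate: float, boost: float) -> float:
    """P(no reconviction by time t, in years) under the piecewise hazard."""
    if t <= 0.5:
        return math.exp(-base_rate * boost * t)
    return math.exp(-base_rate * boost * 0.5 - base_rate * (t - 0.5))

s = survival(1.0, base_rate=0.8, boost=2.0)
# The boosted early rate pulls survival below the constant-rate curve.
assert s < math.exp(-0.8 * 1.0)
```

The attraction of this form is that it is trivial to implement inside the flow accounting while still concentrating reconvictions shortly after release, as the data require.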
A graph of the predictions of the enhanced model (compared to actual number of convictions) for the period 1987–2000 is given in Figure 7.8; offence definitions during this period were fairly constant. The predictions and actual male conviction numbers agree to within the ±5 per cent anticipated error.
Despite the success of the model, we understand that its development, in this form, has been discontinued because of a change of view of the requirements for forecasting in the Ministry of Justice.
The DNA Database
With the possibility of ‘matching’ DNA taken at crime scenes against that of known offenders, the development of a database of DNA evidence found at crime scenes and of the DNA ‘fingerprints’ of known offenders was set in train. An important requirement for planning and implementing the database was to know how many new offenders would be entering the database each year and how long it would take to obtain samples from existing offenders who had previously been convicted but had not had their DNA ‘fingerprint’ recorded. Such information is not available from standard statistics but is precisely the kind of calculation which can be made on the basis of our theory.
DNA evidence is not only recorded on conviction but also on cautioning. As our theory does not explicitly describe cautioning (only its impact on early convictions), we constructed three models based on possible interpretations of the interaction of cautioning and conviction. Each of these models used the full three category models of offending and a fixed annual birth rate of 330,000 males and about 330,000 females. In the year 2000, when the initial forecast was made, there were just under a million records already on the database and this was taken as the starting point.
In the first of the three models, the ‘core’ model, only first convictions, and estimated reconvictions of active (previously convicted) offenders, were assumed to add to the database. These assumptions represented the slowest possible build-up of DNA profiles on the database and provided a lower bound for our estimates. The second of our three models, the ‘total’ model, was based on an estimate of the total number of active offenders, excluding all those who would be dealt with informally on their arrest, and calculating when they would be convicted. To this was added the annual number of first cautions. The problem with this model is the lack of understanding of the informal sanctions. It was thought that this model would overestimate the rate of build-up. The final ‘intermediate’ model was based on taking the cautions element from the ‘total’ model and the reconvictions element from the core model. This was believed to underestimate the build-up, but not as badly as the core model. Figure 7.9 presents the build-up forecasts from the three models. The lower line is the core model estimate, the top line is the total model forecast (the error bars on the line represent the uncertainty due to the variation in the number of cautions year to year), and the central line is the intermediate model forecast.
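The way the three forecasts combine their components can be shown schematically. Only the structure below (core as lower bound, total as upper bound, intermediate between them) follows the text; the starting stock of one million records is from the text, but the annual flow figures are placeholders, not the Home Office estimates.

```python
# Schematic build-up of the three DNA-database forecasts. Annual flows are
# placeholders; only the core/total/intermediate structure follows the text.

def build_up(start: float, annual_additions: float, years: int) -> list:
    """Cumulative database size, assuming a constant annual flow of records."""
    return [start + annual_additions * y for y in range(years + 1)]

START = 1.0e6                 # records on the database in 2000 (from the text)
first_convictions = 0.10e6    # placeholder annual flows, not real estimates
reconvictions_active = 0.15e6
first_cautions = 0.12e6
total_active = 0.40e6

core = build_up(START, first_convictions + reconvictions_active, 4)
total = build_up(START, total_active + first_cautions, 4)
intermediate = build_up(START, first_convictions + reconvictions_active
                        + first_cautions, 4)
# Core is the lower bound, total the upper bound, intermediate in between.
assert all(c <= i <= t for c, i, t in zip(core, intermediate, total))
```

The forecasts differ only in which annual flows are allowed to add records, which is why they bracket the plausible range in Figure 7.9.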
At the time the forecast was made, our expectation was that the size and growth of the database would lie between the intermediate and total model forecasts. By mid-2004 the actual database was on course to reach a total size of 2.4 million offender records by the end of that year: faster than the intermediate model forecast and a little below the total model forecast, as expected. A comparison between the actual size of the database and the prediction is given in Figure 7.10.
There are two measures of the size of the actual database. The police measure consists of police estimates of the numbers sent to the database, excluding certain special routes; the custodian’s measure represents all those offender records believed to have been added to the database.
June 2000 represented the beginning of a drive to put all known offenders on the database by sampling all those cautioned and convicted. As can be seen, the initial rise was slower than expected, probably due to resource constraints in police forces, leading to a steady rise rather than the sudden increase that would otherwise have been expected. However, the size of the database in mid-2003 was much as predicted.
In this chapter we have shown that, quite apart from the value of having a theory of criminal careers from a criminological point of view, having a quantitative theory has allowed us to make forecasts of the prison population, court workloads and the growth of the DNA database. The prison population forecasting system, which has been the major subject of this chapter, was used to make annual projections of the long-term prison population for over a decade. These projections were an essential element in the management and planning of the prison building and maintenance programmes for England and Wales. An understanding of possible future scenarios is also a necessary element in many other management issues like the recruitment and training of staff. This particular application of our theory was thus an important factor in the allocation of at least £2,000,000,000 per year.