The biological productivity of livestock herds is determined by the three fundamental processes of reproduction, growth and development, and death (Konandreas & Anderson, 1982; Baptist, 1992a; Upton, 1993). Insufficient reproductive performance and excessive mortality have been found to impose severe restrictions on goat production in semi-arid regions of Africa (Hinch et al., 1985; Traoré and Wilson, 1988; Ba et al., 1996; Ndlovu & Simela, 1996). This is because these two components are themselves determinants of herd dynamics over time and, hence, of sustainable rates of offtake and selective culling per time unit (Nugent III & Jenkins, 1993). Moreover, they have a major influence on the (stable) structure of goat flocks in terms of the proportion of male and female kids, surplus stock, replacements, and breeding animals, and thus on the type and quality of offtake that can be obtained.
Environmental factors such as climate and season of the year have a strong influence on reproductive performance and survival in pastoral goat herds. It has often been argued that the effects of fluctuating nutritional levels on both components could be reduced by manipulating the reproduction process in such a way as to balance nutrient requirements of the herd with seasonal pasture forage production (Smith et al., 1982; Field et al., 1984; Lebbie et al., 1996). The previous chapter has shown that, under the environmental conditions prevailing in northern Kenya, improvements in reproductive performance achieved by a restricted breeding management can easily be nullified by excessive youngstock mortality during the preweaning period. Similarly, Upton (1985) and Wilson et al. (1985) observed that the effect of improving survival on productivity in small ruminant flocks is probably at least as great as that expected from increases in reproductive or productive performance. Adverse environmental conditions may increase mortality either directly, through starvation or hypothermia, or indirectly, by favouring the incidence of certain types of diseases (Sherman, 1987). The present chapter investigates survival in Small East African (SEA) goat flocks subjected to a management intervention which limits breeding females to one parturition per year by imposing a single short breeding period. The aim is to identify the period within the year during which breeding optimises survival of youngstock, immature surplus and replacement stock, and breeding females. Given that developmental processes cause individual animals to differ in their susceptibility to death, it is necessary to further differentiate these broad herd categories, for instance by age, body weight, and parity of dam. Data are provided which show how mortality changes as animals move through these successive life-cycle stages.
Ordinary statistical techniques such as analysis of variance or multiple regression are not well suited to the analysis of event-history data, such as the survival of animals over a given follow-up period. The follow-up periods of interest here extend, for does, from mating until rebreeding and, for youngstock, from birth until the age of disposal or first breeding. Two typical features of time-to-event data, censoring and time-varying explanatory variables, create major difficulties for standard statistical procedures (Allison, 1995). Censoring occurs when the event of interest (death) has not been observed for a number of individuals, or when individuals have been "lost to follow-up" with no information about their survival status at the time of the analysis. Explanatory variables that change in value over the observation period often relate to environmental factors, such as pasture forage production. In this study, an approach using logistic regression techniques and polynomial spline functions is applied to parametrically estimate hazard rates and survival curves from censored data. The approach is particularly useful for accommodating both time-dependent covariates and nonproportional hazards, that is, situations where the effect of covariates on the probability of death changes over time.
Data for this study pertain to the results of an experiment conducted between January 1984 and January 1988 at the Ngare Ndare Research Station of the University of Nairobi in Isiolo District, northern Kenya. The climate is semi-arid, rainfall (long-term annual average at Isiolo township: 615 mm) being distributed over two distinct rainy seasons, a long rainy season from March to May, and a short rainy season from October to November. The vegetation can be characterised as a semi-arid thornbush savannah dominated by various Acacia species with a sparse groundcover of annual grasses, herbs, and soft dwarf shrubs.
The experimental design and herd management were described in detail in Chapter 1. Briefly, 145 does of the SEA type were maintained under simulated pastoral management conditions and used for a total of 381 exposures, which were distributed among 18 consecutive breeding groups consisting of approximately 18 does each. The only interventions with the pastoral management were vaccination for Contagious Caprine Pleuro-Pneumonia and the strategic use of an anthelmintic. A buck was introduced into each of the 18 breeding groups for a period of two months and was thereafter transferred to the next group, so as to achieve year-round mating, kidding, and weaning. Weaning occurred at 16 weeks of age. Three complete production cycles, ranging from mating until the time at which youngstock had reached an age of two years, were obtained for five of the six consecutive two-month breeding periods per year generated by the experiment. The sixth period had only two complete cycles, because the last breeding group of the experiment, which was set up in December 1986, had to be discarded due to incomplete records. The experimental treatment thus consisted of six mating periods or seasons, the first taking place from February to March (labelled as mating season 1) and the sixth from December to January (mating season 6). Mating seasons 4 and 5 (August to October, and October to December) had to be assumed to have taken place over a period of three months due to a delay of one month which occurred in setting up the first breeding group of mating season 4 in 1984.
Measurements of liveweight of all animals and milk production of does were taken at two-weekly intervals. All events such as abortion, birth, and death were recorded continuously. A total of 8547 recordings were obtained on survival, liveweight, and milk production of does; 9837 observations were available on survival and liveweight development of youngstock. Whenever possible, dead animals were subjected to a post mortem examination to attempt to establish the cause of death. The initial classification of causes of death included: unexplained loss, predation, doubtful diagnosis, miscellaneous causes, pneumonia, emaciation, cestodes, and strongyles. Pasture condition was judged every two weeks using a subjective phenological pasture condition score ranging from 1 to 4 based on greenness and abundance of the herblayer (range condition score [I]), including grasses, herbs, and small dwarf shrubs. The condition score for the herblayer was upgraded to a maximum score value of 5 to integrate the contribution of bushes and trees with regard to browse availability and the production of high quality litter such as leaves, flowers and fruits (range condition score [II]).
Survival of kids was studied at 2-week intervals from birth to 104 weeks of age, for a total of 57 follow-up time intervals. The method of statistical analysis used for estimating survival rates required creating a data set in which each animal had a separate record for each follow-up time interval it was observed in the study (see below). The dichotomous dependent variable was coded as zero when the animal survived or was censored (i.e., withdrawn from the study for some reason other than death) in a given time interval, and one when it died. All abortion records were omitted from the analysis of kid survival, whereas stillbirth events were considered to represent valid death events occurring at the beginning of the first time interval of follow-up. Emergency culls were treated as censored observations.
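The record expansion described above can be sketched as follows; the triplet layout (animal id, last interval observed, death indicator) is an illustrative assumption rather than the actual file format used in the study:

```python
def expand_to_person_period(records):
    """Expand (id, t, w) survival records into one row per follow-up
    interval: w=1 means death in interval t, w=0 means censored at t."""
    rows = []
    for ident, t, w in records:
        for j in range(1, t + 1):
            died = 1 if (w == 1 and j == t) else 0
            rows.append((ident, j, died))
    return rows

# A kid that died in its third interval contributes three rows,
# with the dichotomous response equal to one only in the last.
print(expand_to_person_period([("k1", 3, 1), ("k2", 2, 0)]))
# → [('k1', 1, 0), ('k1', 2, 0), ('k1', 3, 1), ('k2', 1, 0), ('k2', 2, 0)]
```

Censored animals (such as emergency culls) contribute rows with the response coded zero throughout, exactly as described above.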
The data on doe survival were also converted into a format with one record per animal per time interval on study. Doe survival was investigated over a period of 70 weeks from mating. As before, a biweekly time step was chosen, resulting in a total of 35 follow-up time intervals. Does which were allocated to a new breeding group before the 70 week period elapsed were considered to have been lost to follow-up at the time of rebreeding. Likewise, all forced cull events were labelled as censored observations. Note that follow-up time was not synchronized to parturition date. However, average time from the start of the mating period until parturition for fertile does that did not abort was 22.3 weeks (±2.5) and did not differ significantly between mating season groups. Therefore, the distribution of time at weaning was centred around 38 weeks from the origin, i.e. the onset of the respective mating period.
Statistical analysis
For the purpose of describing the approach adopted to estimate survival rates, suppose that a set (t_i, w_i, X_i), i=1,...,n, of independent observations is obtained from n individuals, where t_i is the time an individual is known to have survived before the event, w_i=1 if the individual died at t_i, w_i=0 if the individual was censored at t_i, and X_i denotes a vector of known explanatory variables. Let T represent the random variable specifying time until death. Then the probability S(t, X_i) that an individual with covariate vector X_i survives beyond time t is defined as
S(t, X_i) = Pr(T > t | X_i) = 1 - F(t, X_i),
where F(t, X_i) is the cumulative distribution function for t given X_i. The function S(t, X_i) is called the survival function, and is related to the hazard function, h(t, X_i), by the equation
h(t, X_i) = f(t, X_i)/S(t, X_i),
where f(t, X_i) is the probability density associated with F(t, X_i). The hazard function is also called the force of mortality at t, since it represents the instantaneous risk of death at t, given that the individual has survived to time t (Laird & Oliver, 1981). The hazard function can be expressed as a function of both time and the explanatory variables, and allows the investigation of the effects of these variables on survival.
Allison (1984), Efron (1988), and Gillespie et al. (1994) proposed using standard logistic regression techniques to estimate hazard rates and survival curves from censored data. The general form of the model for an individual i is given by (Efron, 1988)
λ_i = ln[p(X_i)/(1 - p(X_i))] = X_iβ,
where p(X_i) and λ_i are the probability and log-odds, respectively, of an individual with covariate vector X_i dying in a given time interval. The probability p(X_i) corresponds to the discrete hazard as a function of the covariate vector X_i. Given estimates, β̂, of the parameters, the hazard rate p(X_i) can be estimated from the inverse logit function p̂(X_i) = exp(X_iβ̂)/[1 + exp(X_iβ̂)].
In order to use logistic regression with survival data, the set (t_i, w_i, X_i), i=1,...,n, of observations on n individuals must be expanded so that each individual has a separate record for each time interval it was observed in the study (Allison, 1984). Each individual thus contributes a number of observations to the analysis equal to the number of time intervals until it died or was censored. The response variable is dichotomous, indicating whether or not the individual died in the interval. To simplify notation, the index i for individuals is omitted in what follows. Let j, j=1,...,N, be the index for a follow-up time interval of unit length, where N is the last time interval considered in the analysis, and let h_j represent the discrete hazard rate of an individual dying in the jth time interval given that it has survived until the beginning of that interval. For each value of j, let X_j be a known 1×p covariate vector. The logistic regression model then is
λ_j = ln[h_j/(1 - h_j)] = X_jβ, j=1,...,N,
with β being a p×1 vector of unknown parameters. Cubic spline functions in time can be used to estimate the hazard function of an individual dying in a given time interval. Polynomial splines are piecewise polynomials satisfying continuity constraints at points joining the pieces, which are called knots (see Smith (1979) for further details). When a cubic spline function in time with a knot at time t, without restrictions on smoothness of the join, is to be fitted to the hazard rate, the covariate vector X_j can be written as
X_j = [1, t_j, t_j², t_j³, I(t_j > t), (t_j - t)_+, (t_j - t)_+², (t_j - t)_+³],
where I(·) is an indicator function and the "+function" (t_j - t)_+ equals t_j - t when t_j > t and zero otherwise.
Fitting a logistic regression based on the above specification yields a completely separate cubic function on either side of the knot at t. Including indicators for treatment and classification effects in the design matrix is straightforward. Upon multiplying each time and/or "+function" term by an indicator, different response functions over time are obtained for each level of the respective treatment (i.e., mating season) or classification effect. Determining the order of the polynomial segments as well as the continuity restrictions and covariate interactions required to achieve a satisfactory fit can be done using standard multiple regression hypothesis testing methods (Smith, 1979). Logistic regression models can be fitted to survival data using the generalized linear model approach (McCullagh & Nelder, 1983).
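A design matrix of this kind, for a single knot with no smoothness restriction on the join, might be constructed as follows; the exact basis used in the analysis is not documented line by line, so the term layout is an assumption consistent with the description above:

```python
import numpy as np

def spline_design(t, knot):
    """Cubic spline basis in time with one knot and no continuity
    restriction: a full cubic, plus an indicator and '+function' terms,
    giving a completely separate cubic on either side of the knot."""
    t = np.asarray(t, dtype=float)
    d = np.clip(t - knot, 0.0, None)        # (t - knot)_+
    return np.column_stack([
        np.ones_like(t), t, t**2, t**3,     # base cubic
        (t > knot).astype(float),           # jump at the knot
        d, d**2, d**3,                      # change in each coefficient
    ])

# One row per 2-week interval from birth to 104 weeks, knot at 20 weeks
X = spline_design(range(0, 105, 2), knot=20)
print(X.shape)  # → (53, 8)
```

Multiplying each time column by a treatment indicator, as described above, would yield a separate response profile per mating season group.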
Maximum likelihood estimates of the survival curve, Ĝ_j, are obtained from the estimated hazard rates, ĥ_j, as Ĝ_j = (1 - ĥ_1)(1 - ĥ_2)···(1 - ĥ_j).
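This running product over the estimated hazards can be sketched as:

```python
import numpy as np

def survival_from_hazards(h):
    """Discrete-time survival curve: G_j is the product of (1 - h_k)
    over all intervals k up to and including j."""
    return np.cumprod(1.0 - np.asarray(h, dtype=float))

# A constant 5 percent biweekly hazard over five intervals
G = survival_from_hazards([0.05] * 5)
print(G[-1])  # survival to the end of the fifth interval, i.e. 0.95**5
```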
The procedure GENMOD (SAS Release 6.12, 1996) allows the fitting of binary outcomes to a mixture of continuous and categorical explanatory variables, and was used in this study to estimate hazard rate curves. As a first step, the general form of the cubic spline function in time was investigated by plotting the life-table hazard-rate estimates, ĥ_j = s_j/n_j, where s_j and n_j are the total number of deaths and the total number of individuals present at time j, respectively, against follow-up time.
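The crude life-table estimates used for this graphical inspection can be computed directly from the one-record-per-interval data; the tuple layout (id, interval, death indicator) is assumed for illustration:

```python
from collections import defaultdict

def life_table_hazards(rows):
    """Crude life-table hazard per interval: deaths s_j divided by the
    number at risk n_j, from one-record-per-interval data (id, j, died)."""
    deaths, at_risk = defaultdict(int), defaultdict(int)
    for _ident, j, died in rows:
        at_risk[j] += 1
        deaths[j] += died
    return {j: deaths[j] / at_risk[j] for j in sorted(at_risk)}

# Animal "c" dies in interval 1 and so is no longer at risk in interval 2
rows = [("a", 1, 0), ("a", 2, 1), ("b", 1, 0), ("b", 2, 0), ("c", 1, 1)]
h = life_table_hazards(rows)
print(h)  # → {1: 0.3333333333333333, 2: 0.5}
```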
In general, deciding on the number and position of the knots, and on the order of the polynomial in each segment, is not simple (Montgomery & Peck, 1982). No more than three knots were used in estimating preliminary model versions, because the great flexibility of spline functions makes it very easy to overfit the data. The knots were selected such that extreme points were centred in each segment and the inflexion points were located near the knots. Final knot positioning was carried out by fitting a series of models with different combinations of knot positions in the proximity of the selected points, and identifying the model which produced the smallest deviance. The polynomial degree within each segment was determined based on the hypothesis testing procedure for partially ordered spline models given by Smith (1979). Selection of the covariates given in Table 3.1 to be retained in final models was based on backward selection from a model including all possible interaction terms up to the third degree, with terms sequentially removed if the reduction in deviance, adjusted for all other terms in the model, was not significant at the 15 percent level.
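The deviance-based comparison underlying knot placement and term selection can be sketched with a plain Newton (IRLS) logistic fit; this is a stand-in for GENMOD, not a reproduction of it, and the simulated person-period data below are purely illustrative:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton/IRLS fit of a logistic regression; returns the
    coefficient estimates and the residual deviance."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: beta += (X'WX)^-1 X'(y - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    eps = 1e-12
    dev = -2.0 * np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return beta, dev

# Simulated person-period data with a hazard rising over follow-up time t
rng = np.random.default_rng(0)
t = np.tile(np.arange(1.0, 11.0), 100)
X = np.column_stack([np.ones_like(t), t])
y = (rng.random(t.size) < 1.0 / (1.0 + np.exp(2.0 - 0.2 * t))).astype(float)
beta, dev = fit_logit(X, y)
# Candidate models (e.g. alternative knot positions) would be refitted
# in the same way, keeping the design with the smallest deviance.
```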
The bootstrap technique (Efron, 1981; Rosenberg, 1995) was used to estimate non-parametric confidence intervals around survival curves. To apply the bootstrap, the spline model of interest was fitted to 1500 random samples taken with replacement from the observed sample data. Bootstrap confidence intervals (90 and 95 percent) for survival curves were generated from the bootstrap distribution of survival rate estimates with the bias-corrected percentile method (see Efron, 1987).
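A minimal sketch of the resampling step follows, using the simple percentile method (the bias-corrected adjustment of Efron (1987) is omitted) and a one-number stand-in estimator in place of the full spline refit:

```python
import numpy as np

def bootstrap_ci(data, estimator, n_boot=1500, level=0.95, seed=1):
    """Percentile bootstrap confidence interval for an estimator applied
    to resampled-with-replacement copies of the observed data."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([estimator(data[rng.integers(0, len(data), len(data))])
                      for _ in range(n_boot)])
    alpha = (1.0 - level) / 2.0
    return np.quantile(stats, alpha), np.quantile(stats, 1.0 - alpha)

# Toy example: CI for a single survival proportion (1 = alive at some age)
alive = np.array([1] * 80 + [0] * 20)
lo, hi = bootstrap_ci(alive, np.mean)
```

In the analysis proper, the estimator would refit the spline model to each resample and return the whole survival curve, with quantiles taken pointwise per follow-up interval.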
Table 3.2 shows the causes of death by age category. Despite the fact that postmortem examinations were carried out on almost all animals that died during the experiment, in most cases (38 percent) the pathological findings on the carcasses were inadequate for a specific diagnosis. This was particularly true for animals dying within two weeks of birth. Another 15 percent of all death events were unexplained losses, but the majority of these can probably be attributed to animals that strayed from the herd and were ultimately eaten by predators. Taken together with the number of death events known to be due to predation, this would account for 28 percent of all deaths. Miscellaneous causes included injuries, poisoning, worm infections, metabolic disorders such as ruminal tympany, and deaths that occurred during a heavy storm in November 1987 (9 events). The incidence of emaciation was largely confined to the age category from 2 weeks until weaning.
Graphical inspection of observed death rates in kids supported the conclusion that a polynomial spline function in time with two knots might provide sufficient flexibility to approximate mortality rates in young stock over the follow-up period of 104 weeks. The first knot was set at 20 weeks, and the second at 40 weeks of age. The final polynomial spline consisted of a third degree base function, a segment joining at t=20 containing a quadratic and cubic term, and a segment consisting of a quadratic term only, joining at t=40.
Dam parity (p>0.3), litter size (p>0.5), and sex of kid (p>0.5) did not significantly influence death rates. Birth weight, total milk yield until weaning, and lagged median range condition scores [I] and [II] were confounded with the mating season treatment effect. Therefore, two types of models were fitted: one including mating season and production cycle (model a) in Table 3.3), and the other including birth weight, total milk yield of the dam until weaning, and the two range condition indices as main effects. In the latter model, production cycle did not significantly affect hazard rates.
|Figure 3.1. Estimated hazard rate curves by mating season (MS) for the data on kid survival.|
However, a problem occurred in simultaneously estimating the effects of birth weight, milk yield, and range condition scores. The assessment of range condition was discontinued in November 1987, before kids born in early 1986 and thereafter had reached the age of two years. In order to avoid obtaining biased estimates due to the absence of range condition measurements for these observations, hazard rate profiles for the birth weight and milk yield effects were estimated without adjusting for the effects of range condition (model b) in Table 3.3). Finally, separate models were fitted to obtain hazard estimates for both range condition scores (models c) and d) in Table 3.3).
|Figure 3.2. Estimated hazard rate curves for the kid survival data, according to a) birth weight, and b) total milk yield of mother until weaning.|
The highly significant interaction terms between time factors and mating season are clear evidence of non-proportionality of death risks across mating seasons. A graphical display of incidence rates by mating seasons is shown in Figure 3.1. The response profiles show relatively high perinatal death rates in all mating season groups. Newborns were exposed to the highest risk of death when born just after the long rains (mating season 1), at the end of the long dry season (mating season 3), or during the short dry season (mating season 4). Significant differences in incidence rates within two weeks from birth were found between mating season groups 1 and 5 (p <0.05), with an estimated odds-ratio of 4.5. In the latter group, kids were born during the long rains.
A noticeable rise in hazard rates was observed around the time of weaning at 16 weeks of age in mating season groups 1 and 6. In both cases, weaning took place during the long dry season, under unfavourable forage conditions. Kids in mating seasons 1 and 6 were, on average, exposed to 15.5 and 9.2 times higher risk of death, respectively, at 16 weeks of age than kids in all other groups (p <0.01). Inadequate nutrition continued to cause higher death rates in the former two groups well beyond the weaning period. This is particularly true for mating season group 6, in which mortality rates remained above 4.5 percent per time period, and differed significantly (p <0.05) from those in all other groups, until 28 weeks of age. The following rise in incidence rates at about 34 weeks in the latter group, as well as the conspicuous peak at 48 weeks in the hazard curve of mating season 4, were both due to the joint occurrence of an extended dry season period with poor range conditions until the end of October in 1987, and of a heavy rain storm in the following month, during which 4 out of the 8 kids present at that time in group 6, and 6 out of the 21 kids exposed in group 4 died. It should be emphasized that the hazard rate estimates for mating season 6 might have been biased upwards due to the smaller sample size. Nevertheless, the effects of inadequate nutrition on incidence rates appeared to be much less pronounced in mating seasons 2 to 5.
With respect to the production cycle main effect, the overall risk of death was approximately 2.7 and 1.8 times higher in the first than in the second and third cycle, respectively (p <0.05). This was caused by a prolonged dry period which prevailed from July to November in 1984. Overall hazard rates in the second were similar to those in the third production cycle, with an estimated odds ratio of 0.7.
Although significant differences in hazard rates between total milk yield levels until weaning were detected, they seemed to have a much less detrimental effect on kid survival than mating season (Figure 3.2). The same holds for the effect of kid birth weight. Milk yield levels had, however, a marked influence on perinatal survival rates. Insufficient milk production (<32.5 kg until weaning) led to very high kid mortality during the first two weeks of life (Figure 3.2b). Predicted hazard until 2 weeks of age was as high as 17.9 and 12.5 percent for kids born to does with a total milk yield until weaning of less than 22.5 kg, and of 22.5 to 32.5 kg, respectively. These figures differed significantly at the 5 percent level from those obtained for higher yield levels. Hazard rates reached a peak just after weaning, and were again found to be much higher in kids receiving very low quantities of milk (<22.5 kg) during the suckling period than in those receiving at least 22.5 to 32.5 kg milk (p <0.05). This difference remained significant until 28 weeks of age.
Kids born to dams producing 52.5 kg or more milk during the suckling period were exposed to the lowest overall risk of death. By comparison with the lowest milk yield level (<22.5 kg), the chances of dying within 2 weeks of birth and at weaning were, respectively, 7.2 and 4.3 times lower. Due to the interaction of the milk yield main effect with the linear time term, differences between milk yield levels disappeared, and some of the hazard curves began to cross, after week 34 of the follow-up time period. This indicates that beyond this time point, other factors became more important in determining kid survival than milk yields.
The effect of birth weight on incidence rates did not change with age due to the absence of interactions with time factors in the fitted model. The relative differences between the three hazard rate curves shown in Figure 3.2a thus remained constant throughout the observation period. Kids born with less than 2 kg were exposed to a much higher risk than kids weighing 2 to 2.5 kg (p <0.01) and more than 2.5 kg at birth (p <0.01). The corresponding odds of death were estimated to be 2.1 and 2.4 times higher in the lowest birth weight class.
The hazard ratios (or odds ratios) for the levels of both range condition scores remained constant throughout the observation period, as indicated by the lack of interactions between these risk factors and time covariates in models c) and d) in Table 3.3. Note that RC [I] and [II] were time-dependent risk factors that changed in value over time (i.e., with increasing age) for an individual observation. Therefore, proportionality of the odds for any two values of each of the range condition scores implied that the effect of median forage quantity and quality on offer over a 6-week period on the probability of death in kids was independent of kid age. The odds of death associated with each level of the two range condition scores are reported in Table 3.4, along with the corresponding approximate relative risks between all score level pairs. For both range condition indices, the odds of death increased markedly with decreasing condition score. The differences in the likelihood of death were more pronounced among the levels of the browse-adjusted index RC [II] than among those of the herblayer index. Kids exposed to pasture conditions characterized by lagged median RC [I] and [II] scores of 1, which typically were observed at the peak of the long dry season, were, respectively, 3.9 and 7.3 times as likely to die at any age as when exposed to conditions characterized by RC [I] and [II] scores of 4 (p <0.01). The protective effect of improving pasture conditions levelled off at higher score levels, as indicated by the relative risks of 1.2 and 1.0 between RC [I] scores of 3 and 4, and between RC [II] scores of 4 and 5, respectively. Estimated survival probabilities for each level of the mating season, milk yield until weaning, and birth weight main effects are given in Table 3.5. With respect to mating season, the lowest kid survival rate until 2 and 16 weeks of age was observed in group 1. Although not significantly different, predicted survival rate until two years of age in mating season group 6 fell below even the 48.6 percent achieved in group 1, down to a very low level of 38.1 percent.
Perinatal survival rates increased when kids were born under favourable forage conditions, as tended to be the case for groups 4, 5, and 6. The timing of the pre-weaning period in relation to environmental conditions was a decisive factor in determining kid survival until weaning age. Almost 44 percent of the kids born at the onset of the long dry season (group 1) died before 16 weeks of age, a significantly larger figure than the approximately 11 and 7 percent observed in groups 4 and 5, in which weaning took place during and towards the end of the long rainy season, respectively. In the latter group, survival declined only slightly until the yearling stage. In contrast, conditional survival probabilities calculated from the figures in Table 3.5 revealed that 46.6 percent of the kids alive at weaning in group 6 were estimated to die before one year of age. During the same period, substantial losses of 23.9 percent were also observed in mating season group 4. The best performance until one year of age was achieved in mating season group 5, with 82.6 percent of the kids born surviving, followed closely by those born in groups 3 and 2 (81.5 and 77 percent, respectively). Mortality was less than 10 percent in all groups between one and two years of age.
Significant differences occurred between the three levels of the birth weight effect from 16 weeks of age (Table 3.5). Almost 12 percent of the kids born with less than 2 kg of body weight died within 2 weeks, whereas mortality rates in kids weighing more than 2 kg at birth were at most 6 percent. Substantial differences between the first versus the second and third birth weight levels were also apparent at subsequent ages. After one year from birth, approximately 47 percent of all kids born with less than 2 kg of body weight had died, compared to about 27 and 23 percent in the two other birth weight classes. Only minor differences were observed throughout the follow-up time period between levels 2 and 3.
The hazard rate profiles for milk yield levels until weaning depicted in Figure 3.2b translated into very low probabilities of survival for kids which received less than 22.5 kg milk, and maximum survival probabilities for kids whose dams produced more than 52.5 kg milk, although still about 21 percent of the kids in the latter group died within one year from birth. Overall, a slightly curvilinear trend in survival rates over the five milk yield classes was observed. Significant differences (p <0.05) were found between the first and the third to fifth yield classes, and between the second and fifth yield class until 16 weeks of age. The latter difference vanished after weaning, whereas the former persisted until the end of the observation period. The effect of milk yield levels upon pre-weaning mortality in kids was substantial, ranging from 9.2 percent in level 5 to 41.3 percent in level 1. As may also be seen from the profiles in Figure 3.2b, differential effects of milk yield on survival began to weaken after 28 weeks.
Of the 287 does exposed, 28 or roughly 9.8 percent died during the course of the experiment. In most of the cases (36 percent), postmortem examinations were inconclusive with regard to the cause of death. Taken together, unexplained losses and predation accounted for another 35 percent of all death events. Only three does died from pneumonia, three from starvation and two from miscellaneous causes.
The hazard function of an individual doe dying in a given follow-up time interval was approximated through a polynomial spline function in time consisting of a second degree base function, a segment joining at 28 weeks containing a quadratic and cubic term, and a segment consisting of a quadratic and cubic term joining at 38 weeks (Table 3.6), at which time most of the kids were weaned. Body weight at breeding, litter size, lagged live weight at each time point, as well as lagged median range condition scores [I] and [II] did not exert any significant influence on death rates. Doe survival was finally evaluated in terms of mating season, production cycle, parity, and reproductive status (fertile or empty).
The relatively small number of death events and their unequal distribution among mating season groups led to estimation problems. For instance, it was not possible to obtain separate hazard rate estimates for mating seasons 4 and 5, because only two incidences had occurred in each of these treatment groups, at the lower and upper ends of the follow-up time scale. Observations from both groups were therefore pooled before fitting logit models to the doe survival data. Likewise, too few animals in parity stage four and greater survived beyond 26 weeks to obtain reliable hazard rate estimates, and does in this class were therefore grouped with those having three prior kiddings. Significant interactions were found between the time factors in the model and the main effects of mating season and reproductive status, indicating non-proportionality over time of death risks across the levels of these variables. In contrast, the influence of parity number on hazard rates remained constant over time.
Estimated hazard functions for pregnant and barren does revealed that mortality was highest in does that failed to conceive, reaching a peak value of 0.01 during the interval 16 to 18 weeks from the onset of mating and declining gradually thereafter (Figure 3.3). For fertile does, the risk of death increased towards the end of the gestation period and attained its maximum value just prior to or just after parturition (20-22 weeks). Incidence rates declined slightly during the first weeks of lactation, but peaked again before weaning at week 34 to 36 of follow-up. Although not statistically significant, death rates in lactating does exceeded those observed in barren females from 30 weeks onwards. Significant differences (p <0.05) in death rates between fertile and barren does were detected between the second and twentieth week. Over a reproductive cycle of one year duration, the average relative risk of death was estimated to be 2.3 times higher in barren than in fertile does.
|Figure 3.3. Estimated hazard rate curves according to reproductive status for the data on doe survival.|
|Figure 3.4. Estimated hazard rate curves according to mating season (MS) for the data on doe survival.|
The shape of estimated hazard rate curves differed markedly across mating season groups (Figure 3.4). Relatively high death rates after the onset of mating and during the short dry season were observed in mating season group 6. Slightly elevated incidence rates during the same stage also occurred in groups 4 and 5, which were joined during and at the end of the long dry season, respectively. However, does in the latter two groups were exposed to the lowest overall risk of death, and this also held true for mating season group 6 after 22 weeks of follow-up. Only one animal died during the lactation stage in group 4, and none in groups 5 and 6.
A very different pattern in the distribution of death events emerged from the estimated survival time distribution for mating season groups 1 through 3 (Figure 3.4). In all three groups, mating occurred under favourable forage conditions, and deaths were concentrated either just prior to or immediately after parturition (group 3), at weaning (group 1), or during both of these stages (group 2). These peaks in hazard rates were related to the coincidence of production stages with high nutritional demands and the poor forage conditions prevailing during the dry seasons. For instance, in mating season group 1, parturition occurred at the onset, and weaning toward the end, of the long dry season. In the case of group 2, in which does had been joined two months later, parturition took place at the middle of the long dry season in August/September, whereas kids were weaned during the short dry season. The large increase in incidence rates around parturition time in mating season group 3 coincided with the end of the long dry season. However, unlike in the first two groups, none of the breeding females exposed died during the lactation stage, as is apparent from the sharp decline in the hazard rate curve after 22 weeks of follow-up. Significant differences in hazard rates were observed between mating season group 3 and the pooled groups 4 and 5 from 16 up to 22 weeks. Between 30 and 50 weeks, hazard rate estimates in groups 1 and 2 were also found to be higher than in the pooled groups (p <0.05).
First breeding females were exposed to the lowest risk of death. Although hazard rates increased linearly with the number of kiddings (p <0.05), only weak evidence was obtained for a difference in response between nulliparous does on the one hand and second and third breeding does on the other (p <0.15). All other differences between parity levels were non-significant. Relative to first breeding females, the risk of death was estimated to be 2.6, 3.9 and 4.8 times higher in parity one, two, and three or greater does, respectively. As in the analysis of kid survival, the highest mortality rates were observed during the first production cycle and remained of similar magnitude during the second and third years of the experiment.
Estimated survival probabilities of does until selected time points, as influenced by mating season, reproductive status, and parity, are given in Table 3.7. As anticipated from the fact that only a few significant differences in hazard rates were detected among the levels of these factors, the bootstrap confidence limits failed to reveal any significant differences in survival probabilities. Given the relatively small number and sparse distribution of death events observed throughout the experiment, a much greater sample size would be required to isolate significant differences, if any, in doe survival across factor level means.
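Bootstrap confidence limits of the kind referred to above can be obtained by resampling the binary survival outcomes with replacement and taking percentiles of the resampled survival proportions. The following is a minimal sketch of this percentile method; the function name and data are illustrative, not taken from the study.

```python
import random

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence limits for a survival proportion.

    outcomes: list of 0/1 indicators (1 = doe survived to the time point).
    Returns the point estimate and the lower/upper (1 - alpha) limits.
    """
    rng = random.Random(seed)
    n = len(outcomes)
    # Survival proportion in each of n_boot resamples drawn with replacement.
    stats = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(n_boot))
    lower = stats[int(n_boot * alpha / 2)]
    upper = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(outcomes) / n, lower, upper

# Hypothetical data: 52-week survival indicators for 40 does, 4 deaths.
est, lo, hi = bootstrap_ci([1] * 36 + [0] * 4)
```

Because the resampling distribution is built directly from the observed 0/1 outcomes, the resulting limits automatically respect the [0; 1] range of a probability, which is one reason the method suits sparse survival data.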
Survivability until 22 and 38 weeks in does that failed to conceive was estimated to be approximately 4.4 and 4 percent lower, respectively, than in those that conceived. Thereafter, mortality until completion of the reproductive cycle in both groups was predicted to drop to a low value of about one percent. The overall adjusted effect of reproductive status on doe survival was thus small compared to that of the mating season treatment.
The lowest survivability until the end of the reproductive cycle was exhibited by does mated in group 2, in which only 89 percent of the does exposed were expected to survive. The highest survival rate was achieved in groups 4 and 5, with an expected mortality rate of 2.6 percent until 52 weeks of follow-up. Survivability in mating season group 6 was severely depressed by the elevated incidence rates prior to 22 weeks; about 10.8 percent of the breeding females exposed in this group died before completing the reproductive cycle. The most severe reduction in survival rates between 22 and 52 weeks was experienced in mating season group 1, with an expected mortality rate over this time interval of 8.6 percent. The overall mortality rate in group 3 was predicted to be 8.8 percent, and thus was only slightly lower than in group 1.
Survivability over the reproductive cycle according to parity number, adjusted for the effects of mating season and reproductive status, ranged from 98.7 percent for first breeding does to 93.7 percent for third or greater parity does. Estimated survival rates in does with one or two prior kiddings levelled off at approximately 96.6 and 94.9 percent beyond week 38.
High rates of preweaning mortality in goats have been reported to be a major constraint on improving productivity in traditional goat husbandry systems (Devendra & Burns, 1983; Sherman, 1987; Wilson et al., 1985). Indeed, the analysis of reproductive performance (Chapter 2) has shown that the advantage of high fecundity may be lost completely through correspondingly high postnatal death rates in youngstock. High postnatal mortality incidence rates have been associated with, among other factors, sex of kid, multiple births, low birth weights and suboptimal feeding levels during gestation, parity and age of dam, low milk production, and season of birth (Adu et al., 1979; Gatongi et al., 1997; Gebrelul et al., 1994; Osuagwuh, 1991; Rattner et al., 1994; Wilson, 1984; Wilson et al., 1985). Unfortunately, all of these factors tend to be confounded to some extent, and they therefore cannot be assessed simultaneously by entering them into the same statistical model. The main problem arising from such an analysis is that the effect of each variable cannot be interpreted as being free of the effects of other collinear variables (Gillespie et al., 1994). For instance, the analyses performed with the present data indicated that dam parity affected litter size, milk yield level, and birth weight, while birth weight and milk yield level were, in turn, found to decrease with increasing litter size. The picture was further complicated by the fact that litter size, birth weight, and milk yield level of the dam varied with mating season (and, consequently, season of birth), which is an indicator variable that carries information on the environmental and nutritional conditions prevailing at various stages of the reproductive cycle. It is thus difficult to separate the individual contributions of these risk factors to kid morbidity and mortality. The strategy followed here was to assess the effect of those variables that were or could have been manipulated by experimental design, i.e. mating season and parity of dam, separately from that of observational variables such as litter size, birth weight, milk yield of dam until weaning, and range condition scores.
Parity of dam was not found to have a significant effect on mortality incidence rates. This contrasts with the findings of other studies conducted under comparable environmental conditions, which identified parity of dam as a significant source of variation in kid mortality. For instance, Traoré and Wilson (1989), working with West African Sahel goats in Mali, found that kid mortality until 150 days of age decreased from 47.9 percent in first-born young to 26.1 percent at the fifth or later kidding. Similar differences between first and greater parity kids were also observed by Sacker and Trail (1966), Wilson et al. (1984), and Gatongi et al. (1997), who studied mortality in SEA goat kids in Uganda and Kenya, respectively. The latter authors tested the effectiveness of two anthelmintic treatments, applied either prior to or just after the onset of the rainy seasons, in reducing kid mortality until nine months of age. Based on the mortality rates reported by Gatongi et al. (1997), one can calculate that the anthelmintic treatments reduced the relative risk of death of first versus later parity kids from as much as 12.0 in the control group to 2.3 and 5.2 in the two treatment groups. This effect was apparently mediated through an overall improvement in body condition of dams and birth weights of kids. In the present study, a deworming treatment was regularly applied to all animals prior to the rains, and, in light of the evidence provided by Gatongi et al., this may have contributed to the observed lack of association between parity of dam and kid survival. Furthermore, first breeding was delayed until does weighed at least 20 kg, leading to a mean liveweight at first conception of 26±4.7 kg and a mean age at first kidding of 90.1±12.97 weeks.
The age of roughly 21 months at first parturition was markedly higher than that reported for other goat breeds in semi-arid Africa, e.g., 16 months for West African Sahel goats (Wilson & Traoré, 1988) and 14.3 months for Red Sokoto goats (Adu et al., 1979). It was also much higher than the 14.7 and 15.5 months observed by Gatongi et al. (1997) in the two anthelmintic treatment groups noted above. Correspondingly, the liveweights at first conception of 21.9 and 20.9 kg in that study were considerably lower than those reported in the present study.
Both weight and age at first conception are generally considered to be important maternal factors influencing kid survival. Sherman (1987) ascribes the protective effect of doe age on kid survival to the positive correlation between age and liveweight of dam, which itself appears to be positively correlated with birth weights, as well as to the ability of older does to produce broader colostral immunity for kids. Supportive evidence for improved survival of first-born kids due to delayed age and increased weight at first conception is provided by Wilson (1984). In SEA goat flocks owned by Maasai herders, he found that first parity deaths were only slightly greater than the overall mean and attributed this finding to the management practices of the Maasai which are aimed at delaying age at first parturition.
The effect of litter size on kid survival failed to reach statistical significance after adjustment was made for the effects of birth weight and milk yield until weaning. Based on the present data, this finding was not unexpected, since litter size was shown to have a significant impact on birth weight, growth rate, and milk yield until weaning; both birth weight and milk yield were negatively affected in twin litters. As indicated previously, many workers have found a significant effect of litter size on kid survival. However, since the effect of litter size is often assessed simultaneously with that of birth weight and parity of dam, interpretational difficulties arise because of the inevitable correlation between these factors (Hinch et al., 1985). The present results suggest that the primary risk factors are birth weight and the amount of milk supplied by the dam, rather than litter size per se. The risk of death for kids born under 2 kg was more than twice as high as that for kids weighing more than 2 kg at birth, and it is worth noting that this effect did not disappear with increasing age. Rattner et al. (1994) observed a similar constancy of the effect of low birth weight on survival in a study of kid mortality under semi-extensive management in Israel. Based on the mortality figures reported by Rattner et al. (1994) for three consecutive time periods, from birth to 48 hours, from 2 to 70 days, and from 70 days to six months of age, the risk of death for kids weighing up to 1.5 kg can be calculated to be 7.6, 2.2, and 2.0 times higher, respectively, than that for kids weighing more than 1.5 kg at birth.
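A period-specific relative risk of this kind is simply the ratio of the period mortality proportions in the two birth-weight classes. As a sketch (the proportions below are hypothetical, chosen only so that the ratio matches the 7.6 quoted for the first period; they are not the published figures):

```python
def relative_risk(p_low_bw, p_high_bw):
    """Ratio of period mortality risks between two birth-weight classes."""
    return p_low_bw / p_high_bw

# Hypothetical period mortality proportions for kids <=1.5 kg vs >1.5 kg
# at birth over the first 48 hours of life.
rr = relative_risk(0.38, 0.05)
```

A relative risk near 1 would indicate that birth-weight class and death risk are close to independent in that period, which is how the later decline of such ratios with age is to be read.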
Low birth weight has consistently been identified by most workers as a major contributing factor to mortality in kids, especially during the early stages of life (Devendra & Burns, 1983; Sherman, 1987). The most likely cause of the increased vulnerability of kids with low birth weight is their reduced fitness and inability to ingest adequate amounts of colostrum and milk, which further lowers their resistance to environmental stresses and infectious agents and may lead to chronic unthriftiness and slow growth. Typically, deaths in these animals cannot be attributed to any specific cause, as was also the case in the present study: for 38 percent of all observed kid losses, no clear diagnosis could be established. Often, these events are thought to be related to the so-called starvation-mismothering-exposure (SME) complex which, according to Haughey (1991), can be considered the most important cause of nonspecific deaths in the first two weeks of life. The present data revealed that about 33 percent of all recorded death events occurred within this period.
The estimated hazard curves by milk yield class clearly demonstrated the importance of early nutrition for kid survival. Kids born to does producing less than 22.5 kg of milk until weaning were exposed to a particularly high risk of death during the first two weeks of life. In contrast to birth weight, however, the effect exerted by the amount of milk available to kids diminished with increasing age. For instance, the predicted risk of death in the lowest milk yield category was 6.1 times as high as that in the highest yield category (>52.5 kg of milk) over the first two weeks of life, but this estimate declined to 4.2 around the time of weaning, and to 1.2 at one year of age, i.e., a relative risk close to unity. A possible explanation for this pattern is that kids born to low milking does were able to compensate for the restriction in milk availability by starting to graze on pasture forage at an early stage. Nevertheless, in absolute terms the difference in predicted survival probability between the lowest and highest milk yield class was substantial. While in the latter case losses of only 2.9 and 9.2 percent were expected to occur over the first two weeks of life and until weaning, respectively, about 17.9 and as much as 41.3 percent of the kids nursed by low milking does were estimated to die over these two periods. This underlines the strong protective effect of an adequate milk supply under extensive management conditions. Well nourished kids are not only more resistant to pathogens, but are also less likely to be lost through wandering and predation. Unexplained losses and predation, which together accounted for 28 percent of all losses, can be expected to be greater in kids weakened by malnutrition and starvation.
Although, as demonstrated by the analysis of lactation performance, some does were intrinsically low yielders, by far the most important source of variation in milk yields under semi-arid conditions is undoubtedly the plane of nutrition of dam during late pregnancy and lactation. Similarly, birth weight of kids is influenced by nutritional status of the dam at breeding and throughout the gestation period. Nutritional conditions around breeding time may also exert an indirect influence on birth weights by affecting ovulation rate, and thus litter size. Consequently, though in the present study birth weight and milk yield until weaning appeared to be strong predictors of death risk of kids, ultimately, these factors were themselves related to seasonal fluctuations in availability and quality of forage. This relationship was expressed through the observed differences in birth weight and milk yield until weaning across mating season groups. The resulting patterns of death hazard in youngstock suggest that synchronizing nutrient demands of the herd with forage production from pastures by manipulating mating season and, hence, season of birth, is possibly an effective strategy for reducing youngstock mortality in the current production system.
In accordance with the results reported by Wilson et al. (1985) from the study on Maasai SEA goat flocks referred to previously, maximum kid losses until weaning occurred when the entire period from birth to weaning coincided with the long dry season (mating season 1). The authors estimated pre-weaning mortality in kids born during the dry season at 40.6 percent, which is almost identical to the 43.8 percent estimated for mating season 1. The hazard function for mating season 1 revealed that these losses were caused by very high incidence rates during the first two weeks of life and around the time of weaning. Taking into account the fact that 75 percent of all death events occurred during periods of low quality and quantity of forage on offer (i.e., scores of 1 and 2 of RC [I]), it can be inferred that the nutritional stress imposed on dams and their progeny in this group was to a large extent responsible for these excessive mortality rates. For the same reason, high perinatal death rates were observed in all groups in which dams delivered during the long (groups 2 and 3) or short dry season (group 4), but declined noticeably when kids were born at the onset or in the middle of the long rainy season (groups 5 and 6). Does freshening during the latter period had higher levels of nutrition and were able to sustain a higher level of milk production than does kidding in the dry season. Nevertheless, one would expect the prevalence of infections with gastro-intestinal parasites and bacterial diseases to increase sharply during the rainy season, with a concomitant increase in the number of youngstock succumbing to infectious diseases. However, death incidences due to worm infections and pneumonia, in particular, were very low throughout the experiment. This was also reflected in the fact that the predicted death risk decreased dramatically with increasing range condition score, irrespective of kid age.
Therefore, it seems reasonable to conclude that the health care programme was very effective in controlling diseases which are generally a major source of kid losses during the rainy season, such as CCPP and helminthiasis. With respect to helminthiasis, this conjecture accords with the results of the study by Gatongi et al. (1997) noted above, which showed that the anthelmintic treatment administered before the rains reduced the kid mortality rate in the first nine months of life to less than half the rate observed in the control group.
As in mating season 1, weaning was also found to be a critical stage in mating season 6. The sharp increase in death hazard during August/September, at the peak of the long dry season, was probably caused by declining body condition of dams, decreasing levels of milk production, and an absolute lack of available feed for the young themselves. In contrast, the stress of weaning did not appear to be associated with major losses when kids were weaned between the months of December and May, as was the case in mating seasons 2, 3, and 4. Higher pre-weaning levels of nutrition of dams and kids, coupled with the protective effect of the health care programme, seem to have considerably decreased the susceptibility of youngstock to disease and mortality. In much the same way, these factors apparently kept incidence rates at a low level even when kids were weaned at the onset of the long dry season (mating season 5). Similar findings were previously reported by Traoré and Wilson (1988) for agro-pastoral systems in Mali, and by Wilson et al. (1985) for Maasai pastoral goat flocks in Kenya. In the latter study, pre-weaning mortalities for kids born during the short rainy, short dry, and long rainy season were estimated at 34.4, 25.5, and 17.5 percent, respectively. By comparison, the rates of 17.5, 11.0, and 7.2 percent estimated in the current experiment for mating season groups 3, 4, and 5, respectively, were consistently lower. Again, the health care programme carried out in the present study might partly account for this difference in pre-weaning mortality levels.
In conclusion, confining mating in the current goat production system to the period between June and November is likely to confer a distinct advantage in terms of youngstock survival. Such a strategy can be expected to yield maximum probabilities of survival to one year of age of about 80 percent. The present data showed that considerable inter-year variability may nevertheless occur, and that exceptional climatic events, such as the heavy rainstorm observed during the short rainy season in 1987, may also cause substantial losses in youngstock. Though joining does during the short dry season is likely to produce the largest kid crop, it should nevertheless be avoided, since subsequent mortality rates of about 50 percent until the yearling stage represent a serious reduction in biological efficiency: a very large proportion of the resources invested in dams to initiate and maintain pregnancy would be wasted (Lebbie et al., 1996). The trade-off between maximum reproductive performance and youngstock survival induced by seasonal fluctuations in forage supply should be carefully considered in evaluating management interventions designed to enhance overall goat flock productivity under semi-arid conditions.
As already indicated by the analysis of kid survival, death incidence rates dropped dramatically after one year of age, with mortality rates over the second year of life ranging between 1 and 8 percent. This overall trend was maintained over subsequent life-cycle stages, but patterns in adult female mortality rates differed considerably in shape according to reproductive status, parity, and mating season.
The ranking of mating season groups with respect to survival was similar to that observed in kids, although the differences were much smaller and non-significant. Predicted annual mortality rates ranged between 2.6 and 11.2 percent and were somewhat lower than the 13 percent estimated by Traoré and Wilson (1988) for West African Sahel goats over one year of age in Central Mali. Ba et al. (1996) reported mortality rates over a one year study period in adult West African Djallonké goats of 4.1 percent in vaccinated and dewormed animals, whereas a rate of 11.5 percent was observed in goats that received a vaccination treatment only. From a comparison of these figures with the mortality rate of 16.2 percent observed in the control group, the authors concluded that the deworming treatment was particularly effective in reducing mortality in adult goats. Based on this evidence, it can be concluded that the anthelmintic treatment administered before the rainy seasons in the current experiment may have helped to achieve low levels of mortality, particularly when reproductive stages with high nutrient demands, such as late pregnancy and early lactation, coincided with periods of increased endoparasitic challenge. Good forage conditions during the latter reproductive stages, coupled with the protective effect of the deworming treatment, may thus explain the exceptionally low level of mortality of 2.6 percent predicted for mating season groups 4 and 5. Inspection of the corresponding hazard curves does not reveal any period of increased susceptibility of breeding females. In contrast, goats giving birth and lactating during the long dry season, as in mating season groups 1 and 2, were exposed to a markedly increased hazard. Typically, no specific cause could be attributed to a large number of the death events, but the majority of goats probably succumbed directly or indirectly to nutritional stress and malnutrition.
The hazard curve estimated for mating season group 3 underpins the overriding contribution of the plane of nutrition to the risk of death of breeding females. In this group, incidence rates peaked around parturition time, which coincided with the end of the long dry season. Metabolic and nutritional disorders, resulting from an imbalance between nutrient availability and/or intake and nutrient requirements, may have been a major source of losses during late pregnancy and the first weeks of lactation. The last weeks before parturition are characterised by an exponential growth in the weight of, and nutrient requirements for, the products of conception. Over the same period, however, voluntary intake of energy relative to liveweight of dam is known to decrease (Sauvant et al., 1991), and, during the long dry season, this is further aggravated by an absolute shortage of nutrients supplied from pastures. The discrepancy between energy intake and requirements leads to increased lipid mobilization and creates favourable conditions for intense ketogenesis and, consequently, for the incidence of pregnancy toxemia. Similarly, negative energy balances and massive losses of body weight due to lipomobilization also occur during the first days of lactation and, coupled with severe energy deprivation, may induce lactation ketosis (Sauvant et al., 1991).
An interesting feature emerging from the analysis was the higher susceptibility of nonpregnant as opposed to pregnant does. Generally, the converse could be expected to be true, since barren does did not endure the increased levels of physiological and nutritional stress associated with gestation and could therefore be assumed to be less sensitive to all kinds of hazards. Certainly, the estimated incidence rates in barren females may have been biased upwards by the small sample size in this category (6 death events out of 21 does at risk). Nevertheless, all events occurred within a few weeks from the onset of mating, thus suggesting that the failure to conceive and subsequent death were causally related.
Mortality rates over the reproductive cycle were shown to increase linearly with the number of parturitions, and hence with age. Corroborative evidence for this finding could not be traced in the literature. Compared with the literature on kid mortality under extensive management conditions, there is little information on death rates in adult goats. This is either due to the short duration of many studies, which precludes a full characterization of incidence patterns, or because the focus is exclusively on investigating mortality in youngstock, which is generally regarded as the primary obstacle to increased productivity in goat herds. However, as has been pointed out by Devendra and Burns (1983), adult mortality plays an important role in determining overall herd productivity. Low mortality rates in breeding females lead to low replacement rates and shift stage abundances in the breeding herd toward higher parities, such that a maximum number of animals will reach the most productive life-cycle stages. Moreover, low replacement rates provide maximum flexibility for actively manipulating herd structure through selective culling of less productive animals. With respect to the present data, the finding that mortality increases with parity, while reproductive performance increases up to the third kidding and declines thereafter, has consequences for the design of a breeding stock culling policy that is meant to achieve optimum herd productivity. For instance, from a purely herd dynamics perspective, the inverse relationship between dam survival and reproductive performance with increasing parity can be expected to imply that, conditional upon sufficient availability of replacements, it will generally not be efficient to keep breeding females in the herd beyond their third kidding.
Of course, an economic evaluation of the herding enterprise, which would be based on other criteria than maximizing herd growth potential, could well result in more differentiated optimal culling policies for breeding females.
The following discussion addresses some issues relating to the appropriateness of the type of statistical model conventionally used for analysing time-to-event data, as opposed to the logistic regression technique employed in the present work. The ‘classical’ approach (Muenchow, 1986) to analysing survival rates is to subject the proportion or percentage of individuals that died during a fixed observation period to a conventional analysis of variance (ANOVA) or regression analysis, in order to examine the functional relationship between this variable and a set of independent variables (or risk factors) that may be either discrete or continuous. The analysis is usually carried out separately for a number of consecutive time end-points within the observation period, so that, implicitly, a cross-sectional perspective is adopted. This type of analysis corresponds to the so-called linear probability model (Agresti, 1990), which appears to be the standard technique employed in livestock-related research. Applications relating to the analysis of mortality in goat flocks can be found in the studies by Gebrelul et al. (1994), Osuagwuh (1991), Traoré and Wilson (1988), Wilson and Light (1986), and Wilson et al. (1985), to cite just a few.
Several problems are associated with the use of the linear probability model. Firstly, ANOVA and regression analysis are intended for analysing means of numeric, normally distributed response measures. Error terms are assumed to be normally distributed with zero mean and a constant variance that is independent of the value of the mean of the response. Clearly, this is not a reasonable assumption for proportions based on binomial responses such as mortality or survival rates. By definition, a dichotomous response variable y_i having possible outcomes ‘alive’ or ‘dead’ (coded, for example, as 0 and 1) is distributed as a Bernoulli random variable, whose expected value E(y_i), the probability of dying, determines its variance var(y_i) = E(y_i)[1 − E(y_i)]. It follows that a model with an additive error component with fixed variance is inappropriate for analysing survival rates, since it cannot accommodate the dependence between mean and variance. The problem becomes particularly acute as E(y_i) moves toward 0 or 1, in which case the variance moves toward zero. Ordinary least squares estimators are then no longer minimum variance in the class of linear unbiased estimators, and their normal sampling distribution does not apply (Agresti, 1990; Neter et al., 1996; Searle et al., 1992). Moreover, given that the response function represents probabilities, the mean responses should be constrained to fall within the interval [0; 1]; linear response functions do not obey this constraint. Although the problems associated with non-normal error terms and nonconstant error variance could be handled by applying a suitable transformation (e.g., the angular transform) to the response values and/or by using weighted least squares estimation, there is no remedy for the structural defect that a linear normal-theory model does not constrain the estimates to fall between 0 and 1 (Agresti, 1990).
The ordinary least squares estimator of E(y_i) can result in estimates outside this interval, thus precluding a meaningful interpretation.
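This defect is easy to demonstrate numerically. In the sketch below (all data simulated, not from the present study), binary deaths are generated from a logistic risk curve declining with birth weight, and an ordinary least squares fit of the 0/1 outcomes produces fitted "probabilities" that fall below zero at the upper end of the weight range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 kids with birth weights of 1.0-3.5 kg; the true
# death risk follows a logistic curve declining with weight.
birth_wt = rng.uniform(1.0, 3.5, 200)
true_risk = 1 / (1 + np.exp(3.0 * (birth_wt - 1.8)))
died = rng.binomial(1, true_risk)

# Linear probability model: OLS regression of the 0/1 outcome on weight.
X = np.column_stack([np.ones_like(birth_wt), birth_wt])
beta, *_ = np.linalg.lstsq(X, died, rcond=None)
fitted = X @ beta  # unconstrained; can leave the interval [0, 1]
```

Here the fitted straight line, forced through responses that are nearly all 0 at heavy weights and nearly all 1 at light weights, inevitably crosses the probability bounds within or just beyond the observed weight range.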
The inappropriateness of the linear probability model is easily seen when a closer look is taken at some mortality or survival estimates and their standard errors reported in the literature. As noted above, the linear probability model cannot accommodate the fact that the variance moves toward 0 for mortality or survival estimates near 0 or 1. Gebrelul et al. (1994), for example, report percentage mortality rates (and standard errors thereof) from 15 days to weaning in Alpine, Nubian, and Nubian×Alpine kids of 1.8 (±2), 4.5 (±4), and 5.3 (±3), respectively. Approximate 95 percent large-sample confidence limits for these mean responses are, respectively: [-2.1; 5.7], [-3.3; 12.3], and [-0.6; 11.2]. Clearly, the negative lower interval limits do not allow a meaningful interpretation of these results, and the overestimated standard errors are likely to invalidate all significance tests based on the model derived by Gebrelul et al. (1994). A similar problem is apparent with a polynomial regression model reported by Osuagwuh (1991), who studied perinatal reproductive wastage in West African Dwarf goats. The author regressed the percentage of neonatal deaths in goat kids (transformed to the logarithm) on linear and quadratic terms of dam age. Based on the reported regression coefficients and their standard errors, the point estimate of kid mortality for 12 year old dams (33 animals of that age were observed in the sample) can be calculated at 92.7 percent, with approximate 95 percent confidence limits of [74.1; 115.8]. Again, the upper confidence limit of more than 100 percent illustrates the kind of spurious results that may be obtained upon fitting linear normal-theory models to mortality data. In light of these difficulties, the latter class of models should not be used for analysing a discrete dependent variable (proportions or rates arising from dichotomous dependent variables).
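The confidence limits quoted above follow directly from the usual large-sample formula, estimate ± 1.96 × SE. A quick check, using Python simply as a calculator, reproduces them from the published means and standard errors:

```python
# Percentage mortality rates and standard errors from Gebrelul et al. (1994)
# for Alpine, Nubian, and Nubian×Alpine kids, respectively.
rates = [(1.8, 2.0), (4.5, 4.0), (5.3, 3.0)]

# Approximate 95 percent large-sample limits: mean +/- 1.96 * SE,
# rounded to one decimal place.
limits = [(round(m - 1.96 * se, 1), round(m + 1.96 * se, 1)) for m, se in rates]
# limits -> [(-2.1, 5.7), (-3.3, 12.3), (-0.6, 11.2)]
```

The negative lower limits arise because the symmetric normal approximation takes no account of the [0; 100] range of a percentage, which is precisely the structural defect at issue.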
Statistical procedures that explicitly assume binomially distributed errors with an appropriate link function of mean to variance, such as logistic regression, should be preferred instead. Logistic regression, which is a special case of generalized linear models, can be used in situations analogous to the use of regression or analysis of variance or covariance when analysing normally distributed continuous dependent variables (see McCullagh & Nelder, 1983).
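As a minimal sketch of such a procedure, the following fits a binomial GLM with logit link by iteratively reweighted least squares (IRLS), the algorithm underlying standard logistic regression software. The grouped mortality data are hypothetical and serve only to show that fitted probabilities are automatically confined to (0, 1).

```python
import numpy as np

# Hypothetical grouped mortality data: deaths out of n at risk,
# indexed by a single covariate (age in weeks).
age_weeks = np.array([1.0, 3.0, 5.0, 8.0, 12.0])
deaths    = np.array([12,  8,   5,   3,   2])
n_at_risk = np.array([100, 95,  90,  88,  86])

X = np.column_stack([np.ones_like(age_weeks), age_weeks])
beta = np.zeros(X.shape[1])
for _ in range(25):                       # IRLS / Newton iterations
    p = 1 / (1 + np.exp(-(X @ beta)))     # inverse logit
    W = n_at_risk * p * (1 - p)           # binomial working weights
    score = X.T @ (deaths - n_at_risk * p)
    info = X.T @ (W[:, None] * X)         # Fisher information
    beta = beta + np.linalg.solve(info, score)

p_hat = 1 / (1 + np.exp(-(X @ beta)))
print("fitted mortality probabilities:", np.round(p_hat, 4))
```

Because the logit link maps the linear predictor into (0, 1), no fitted value or confidence limit computed on the logit scale can stray outside the admissible range.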
A further problem associated with the use of standard ANOVA and regression techniques in analysing time-to-event data is caused by the repeated-observations structure of this type of data. The cross-sectional view adopted by the classical approach ignores the fact that data on the timing of events (e.g., death) are obtained by repeatedly observing individual subjects over time. Survival data, being a special kind of time-to-event data in which the event of interest can only be observed once for each individual under study, are therefore intrinsically longitudinal in nature, and give rise to features which are difficult to handle with conventional statistical methods: censoring, changes in the risk set (i.e., the number of subjects at risk of death at any given point in time), and time-dependent covariables. The classical approach is wasteful of information because the event history is totally neglected and, at the same time, it is not clear how it could account for censored or incomplete observations. Possible solutions to the censored data problem consist in omitting all observations that were censored before the time endpoint of interest, or in assigning the maximum length of observation time as the survival time of all censored cases. However, both approaches have been shown to lead to large biases in estimates of survival rates, while the former additionally discards data (Allison, 1983; Fox, 1993).
In spite of these facts, in livestock-related research most authors do not state whether censoring occurred in their data, and if so, how it was accounted for in their statistical analysis. Censoring is likely to become a major problem particularly in experiments carried out under field conditions. For instance, every animal slaughtered for sale or consumption before the time endpoint of interest represents a censored observation. A further complication frequently arises due to animals that enter the herd studied during the observation period. In this case, the population at risk is constantly changing, causing difficulties in estimating the denominator used to calculate mortality or survival rates. Simply averaging the population size measured at various intervals during the study period, as proposed by Putt et al. (1987), does not seem to be a satisfactory solution to this problem. In studying adult mortality in West African Dwarf goat flocks, Ba et al. (1996) used a different method to account for fluctuations in flock size over time. They adjusted data on each animal for the proportion of the observation period that it had spent in the flock by using the total number of animal-days observed on study as denominator. This yields an estimate of incidence rate per day, which can be converted to an annual survival rate by subtracting it from 1 and raising the resulting daily survival rate to the power of 365. This method was also recommended by PAN Livestock Services (1991).
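The animal-days conversion just described can be written out in a few lines. The counts below are hypothetical and chosen only to make the arithmetic of the Ba et al. (1996) method concrete.

```python
# Hypothetical field data: 12 deaths over 36,500 animal-days on study
# (e.g., roughly 100 animals observed for about a year each).
deaths = 12
animal_days = 36_500

daily_mortality = deaths / animal_days      # incidence rate per day
daily_survival = 1 - daily_mortality
annual_survival = daily_survival ** 365     # compounded over a year
print(f"annual survival rate: {annual_survival:.3f}")
```

Note that the exponentiation implicitly assumes a constant daily force of mortality throughout the year, the very assumption questioned in the next paragraph.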
Table 3.8 illustrates the effect of different methods of estimation on survival rate estimates when the risk set changes over time and measurement intervals are of unequal length. In addition to the methods of Putt et al. (1987) and Ba et al. (1996), which were denoted as methods 2 and 4, respectively, a life table estimate (method 1) and an estimate using the average herd size at the start and end of the observation period as denominator (method 3) were also included. Obviously, the choice of method has a dramatic effect on survival rate estimates. Methods 2 and 3 are the ones leading to the largest biases, since changes in the risk set over time are smoothed out (method 2) or not taken into account at all (method 3). A defect common to methods 2, 3 and 4 is to assume a constant force of mortality throughout the observation period, which clearly is not the case, as may be seen from the column reporting hazard rates. This assumption is particularly untenable when there is a strong seasonal variation in risk factors such as nutrient availability or the prevalence of diseases. The bias introduced by failure to account for nonconstant hazards increases with increasing fluctuations in the risk set and incidence rates over individual time intervals. This effect is shown in Table 3.8 after reducing population size at the third census to 80. While the corresponding increase in the risk of death from 0.192 to 0.242 between third and fourth census is fully reflected in the life-table survival estimate at the end of follow-up, this is only true in part for methods 2 and 4.
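The contrast between a life-table estimate and a constant-rate estimate can be sketched as follows. The interval counts are hypothetical (Table 3.8 uses the actual herd censuses); the third interval is given a sharply elevated hazard to mimic the seasonal peak discussed above.

```python
# Hypothetical grouped follow-up data:
# (animals entering interval, deaths, withdrawals/censored)
intervals = [
    (100, 5, 2),
    (93, 10, 3),
    (80, 19, 1),   # sharp rise in the hazard in this interval
    (60, 4, 0),
]

# Actuarial life-table estimate: product of interval-specific
# conditional survival probabilities.
surv = 1.0
for n, d, w in intervals:
    at_risk = n - w / 2        # standard actuarial withdrawal adjustment
    hazard = d / at_risk       # conditional risk of death in interval
    surv *= 1 - hazard
print(f"life-table survival at end of follow-up: {surv:.3f}")

# Naive estimate using average population size as denominator,
# which smooths out changes in the risk set (cf. method 2).
total_d = sum(d for _, d, _ in intervals)
avg_n = sum(n for n, _, _ in intervals) / len(intervals)
naive = 1 - total_d / avg_n
print(f"average-denominator estimate: {naive:.3f}")
```

With the hazard concentrated in one interval, the two estimates diverge markedly, in line with the biases reported for methods 2 and 3 in Table 3.8.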
In conclusion, if censoring and/or fluctuations in the risk set are present, the life-table estimate is to be preferred over the other estimation methods. Since it is based on hazard estimates for each time interval, both changes in the risk set and censored observations are readily taken into account. The life-table method is particularly well suited for grouped data such as those given in Table 3.8, where event times are not known exactly but are known to have occurred during some interval of follow-up (Allison, 1995). Note that this method of estimating hazard rates is closely related to the logistic regression approach employed in this study. The only difference is that the life-table method is non-parametric, whereas the logistic regression approach used herein models the baseline (logit) hazard function over time parametrically with polynomial splines. This leads to hazard estimates with much smaller variance (Efron, 1988; Gillespie et al., 1994).
The results presented in this study demonstrate the additional structure that can be revealed by modelling hazard functions parametrically over time. This approach provides much more insight into patterns of mortality than the classical method of analysing survival data. Logistic regression is particularly flexible in implementing this approach, and allows comparison among any number of risk groups simultaneously at any time point. As demonstrated in the present study, non-proportionality in death hazard among risk groups can easily be tested or modelled by including interaction terms between time variables and the predictor variables representing these risk groups. Gillespie et al. (1994) and Gray (1992) point out that polynomials or polynomial splines fitted to the logit of the hazard rate form a family of possible distributions which is much richer than the usual parametric alternatives of Weibull, log-normal, log-logistic or gamma. With an increasing number of knots, polynomial splines create very flexible families of models that permit explicit estimation of the hazard function, rather than treating it as a nuisance parameter as, for instance, in the Cox proportional hazards model. An attractive feature of the logistic regression approach is that it is easily implemented using any standard logistic regression program. Because time (or age, as used in the analysis of youngstock survival) is just another variable in the regression model, the dependence of the hazard on time can take on many different functional forms, which are not limited to polynomials or polynomial splines as used in this study (Allison, 1995). For instance, situations may arise in which a more parsimonious representation may be obtained by letting the hazard depend on the logarithm of time.
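The practical prerequisite for this approach is the person-period ("long") data layout: each animal contributes one record per interval it survives, with a binary outcome marking the interval of death, after which any logistic regression program can estimate the discrete-time hazard. The sketch below constructs such a layout from hypothetical animal records.

```python
# Hypothetical follow-up records: (animal id, last interval observed,
# event indicator: 1 = died in that interval, 0 = censored).
animals = [
    ("a1", 3, 1),   # died in interval 3
    ("a2", 5, 0),   # censored at interval 5 (e.g. slaughtered for sale)
    ("a3", 2, 1),   # died in interval 2
]

# Expand to person-period format: one row per animal-interval at risk.
# Regressing `death` on polynomial terms in `interval` (plus risk-group
# covariates and their interactions with time) then yields a parametric
# estimate of the baseline logit hazard.
person_periods = []
for aid, last, died in animals:
    for t in range(1, last + 1):
        event = 1 if (died and t == last) else 0
        person_periods.append({"id": aid, "interval": t, "death": event})

for row in person_periods:
    print(row)
```

Censored animals simply stop contributing rows after their last interval, so censoring and a shrinking risk set are handled by the data layout itself rather than by ad hoc corrections.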
The possibility of investigating the entire baseline hazard function and its interaction with various risk factors may be considered an important advantage over the classical approach to analysing survival data, in which all information on the effect of time on incidence rates is discarded. In livestock-related research this information is of great value, since it allows the identification of critical stages in the production cycle, and provides details on how the associated risks of death differ in magnitude and timing across various risk groups. In general, many factors are involved in a complex way in determining mortality in livestock herds, and it can be expected that a full characterization of patterns of incidence rates will be beneficial in developing management interventions aimed at reducing such mortality.