# Imputation (statistics)


In statistics, imputation is the process of replacing missing data with substituted values. When an entire data point is substituted, this is known as "unit imputation"; when a component of a data point is substituted, it is known as "item imputation". Missing data cause three main problems: they can introduce substantial bias, make handling and analysis of the data more arduous, and reduce efficiency.[1]

Because missing data can create problems for analyzing data, imputation is seen as a way to avoid the pitfalls of listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding the entire case, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with estimated values based on other available information. Once all missing values have been imputed, the data set can be analysed using standard techniques for complete data.[2]

Imputation theory is constantly developing and thus requires consistent attention to new research on the subject. Many approaches have been proposed to account for missing data, but the majority of them introduce bias. A few of the well-known attempts to deal with missing data include: hot-deck and cold-deck imputation; listwise and pairwise deletion; mean imputation; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.

## Complete case / listwise deletion

By far the most common means of dealing with missing data is listwise deletion (also known as complete-case analysis), in which all cases with a missing value are deleted. If the data are missing completely at random, then listwise deletion does not add any bias, but it does decrease the power of the analysis by decreasing the effective sample size. For example, if 1000 cases are collected but 80 have missing values, the effective sample size after listwise deletion is 920. If the cases are not missing completely at random, then listwise deletion will introduce bias, because the sub-sample of complete cases is not representative of the original sample (and if the original sample was itself a representative sample of a population, the complete cases are not representative of that population either). While listwise deletion is unbiased when the missing data are missing completely at random, that assumption is rarely met in practice.[3]
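The mechanics of listwise deletion can be sketched in a few lines of plain Python; the records below are hypothetical, with `None` marking a missing value:

```python
# Listwise (complete-case) deletion: keep only fully observed records.
# Hypothetical data; None marks a missing value.
records = [
    {"age": 34, "income": 51000},
    {"age": None, "income": 62000},   # missing age -> dropped
    {"age": 29, "income": 48000},
    {"age": 41, "income": None},      # missing income -> dropped
]

def listwise_delete(rows):
    """Keep only rows in which every field is observed."""
    return [r for r in rows if all(v is not None for v in r.values())]

complete = listwise_delete(records)
print(len(complete))  # effective sample size shrinks from 4 to 2
```

Note how the effective sample size falls with every partially missing record, which is the loss of power described above.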

Pairwise deletion (or "available case analysis") involves deleting a case only from analyses that require a variable the case is missing, while including that case in analyses for which all required variables are present. When pairwise deletion is used, the total N for the analysis will not be consistent across parameter estimations. Because each parameter is estimated on a different subset of cases, pairwise deletion can produce mathematically impossible results, such as correlation estimates greater than 1.[4]
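The inconsistent-N problem is easy to see in a minimal sketch: each pair of variables is analysed on the cases where both are observed, so the effective sample size differs from pair to pair (hypothetical data; `None` marks a missing value):

```python
# Pairwise deletion: each pair of variables uses a different subset of cases.
# Hypothetical data; None marks a missing value.
data = {
    "x": [1.0, 2.0, None, 4.0, 5.0],
    "y": [2.0, None, 6.0, 8.0, 10.0],
    "z": [1.0, 1.0, 2.0, 2.0, 3.0],
}

def pairwise_n(a, b):
    """Number of cases in which both variables are observed."""
    return sum(1 for u, v in zip(a, b) if u is not None and v is not None)

print(pairwise_n(data["x"], data["y"]))  # 3
print(pairwise_n(data["x"], data["z"]))  # 4
print(pairwise_n(data["y"], data["z"]))  # 4
```

Because the x-y covariance, x-z covariance, and y-z covariance would each be estimated on different cases, the resulting covariance matrix need not be internally consistent, which is how impossible correlation estimates arise.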

The one advantage complete-case analysis has over other methods is that it is straightforward and easy to implement. This is a large part of why complete-case analysis remains the most popular method of handling missing data despite its many disadvantages.

## Imputation techniques

### Single imputation

A once-common method of imputation was hot-deck imputation where a missing value was imputed from a randomly selected similar record. The term "hot deck" dates back to the storage of data on punched cards, and indicates that the information donors come from the same dataset as the recipients. The stack of cards was "hot" because it was currently being processed.

#### Hot-deck

One form of hot-deck imputation is called "last observation carried forward" (or LOCF for short), which involves sorting a dataset according to any of a number of variables, thus creating an ordered dataset. The technique then finds the first missing value and uses the cell value immediately prior to the missing datum to impute the missing value. The process is repeated for the next cell with a missing value until all missing values have been imputed. In the common scenario in which the cases are repeated measurements of a variable for a person or other entity, this represents the belief that if a measurement is missing, the best guess is that it has not changed since the last time it was measured. This method is known to increase the risk of bias and of potentially false conclusions, and so LOCF is not recommended for use.[5]
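On one subject's time-ordered measurements, LOCF can be sketched as follows (hypothetical values; `None` marks a missing measurement):

```python
# Last observation carried forward (LOCF) on an ordered series.
# Hypothetical repeated measurements; None marks a missing value.
def locf(series):
    """Replace each missing value with the most recent observed value."""
    filled, last = [], None
    for v in series:
        if v is None:
            v = last          # carry the previous observation forward
        else:
            last = v          # remember the latest observed value
        filled.append(v)
    return filled

print(locf([7.1, None, None, 6.8, None]))  # [7.1, 7.1, 7.1, 6.8, 6.8]
```

Note that a value missing before any observation exists (a leading `None`) cannot be filled by LOCF at all, one more reason the method is fragile.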

#### Cold-deck

Cold-deck imputation, by contrast, selects donors from another dataset. Due to advances in computer power, more sophisticated methods of imputation have generally superseded the original random and sorted hot deck imputation techniques.

#### Mean substitution

Another imputation technique involves replacing any missing value with the mean of that variable for all other cases, which has the benefit of not changing the sample mean for that variable. However, mean imputation attenuates any correlations involving the variable(s) that are imputed, because in the imputed cases there is, by construction, no relationship between the imputed variable and any other measured variables. Thus, mean imputation has some attractive properties for univariate analysis but becomes problematic for multivariate analysis.
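A minimal sketch makes both properties visible: the sample mean is preserved, while the variance (and hence any covariance) is attenuated (hypothetical data; `None` marks a missing value):

```python
import statistics

# Mean substitution: fill each missing value with the observed mean.
# Hypothetical data; None marks a missing value.
def mean_impute(values):
    observed = [v for v in values if v is not None]
    m = statistics.mean(observed)
    return [m if v is None else v for v in values]

raw = [10.0, 12.0, None, 14.0, None]
filled = mean_impute(raw)
print(filled)                   # [10.0, 12.0, 12.0, 14.0, 12.0]
print(statistics.mean(filled))  # 12.0 -- same as the observed mean

# Variance is attenuated: the filled series varies less than the observed one.
print(statistics.pvariance(filled) < statistics.pvariance([10.0, 12.0, 14.0]))  # True
```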

#### Regression imputation

Regression imputation has the opposite problem of mean imputation. A regression model is estimated to predict observed values of a variable based on other variables, and that model is then used to impute values in cases where that variable is missing. In other words, information available for complete and incomplete cases is used to predict the value of the incomplete variable, and fitted values from the regression model are then used to impute the missing values. The problem is that the imputed data do not have an error term included in their estimation, so the estimates fit perfectly along the regression line without any residual variance. This causes relationships to be overestimated and suggests greater precision in the imputed values than is warranted. The regression model predicts the most likely value of the missing data but does not convey uncertainty about that value.
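A minimal sketch with a simple linear regression of y on x shows the deterministic nature of the imputation: the imputed value lands exactly on the fitted line (hypothetical data; `None` marks a missing value):

```python
# Regression imputation: fit y ~ a + b*x on complete pairs, then fill
# each missing y with its fitted value. Hypothetical data; None = missing.
def ols_fit(xs, ys):
    """Ordinary least squares intercept and slope on complete pairs."""
    pairs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    return my - b * mx, b

def regression_impute(xs, ys):
    a, b = ols_fit(xs, ys)
    return [a + b * x if y is None else y for x, y in zip(xs, ys)]

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, None, 8.0]
print(regression_impute(xs, ys))  # the filled value sits exactly on the line
```

Because the filled value has zero residual, repeating this over many cases inflates the apparent strength of the x-y relationship, exactly the over-precision described above.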

Stochastic regression was a fairly successful attempt to correct the lack of an error term in regression imputation: a random residual, drawn using the average regression variance, is added to each regression imputation to introduce error. Stochastic regression shows much less bias than the above-mentioned techniques, but it still misses one thing: if data are imputed, then intuitively more noise should be introduced to the problem than the simple residual variance supplies.[4]
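The correction itself is a one-line change to regression imputation: add a random draw from the estimated error distribution to the fitted value (the fitted value and residual standard deviation below are hypothetical):

```python
import random

# Stochastic regression imputation: fitted value plus a random residual,
# so imputed points no longer lie exactly on the regression line.
random.seed(0)  # reproducible draw, for illustration only

def stochastic_impute(fitted_value, residual_sd):
    """Fitted value plus a draw from the estimated error distribution."""
    return fitted_value + random.gauss(0.0, residual_sd)

# Hypothetical fitted value 6.0 and residual standard deviation 0.5.
imputed = stochastic_impute(6.0, residual_sd=0.5)
print(imputed)  # close to 6.0, but off the line
```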

Although single imputation has been widely used, one shortcoming is that it does not reflect the full uncertainty created by missing data. This problem motivates "multiple imputation" as a method to give a full representation of the uncertainty that arises when data that were expected from an experimental situation are not observed.

### Multiple imputation

To deal with the problem of increased noise due to imputation, Rubin (1987) developed a method for averaging the outcomes across multiple imputed data sets. All multiple imputation methods follow three steps.

1. Imputation - Similar to single imputation, missing values are imputed. However, the imputed values are drawn m times from a distribution rather than just once. At the end of this step, there should be m completed datasets.
2. Analysis - Each of the m datasets is analyzed. At the end of this step there should be m analyses.
3. Pooling - The m results are consolidated into one result by calculating the mean, variance, and confidence interval of the variable of concern.[6]
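The pooling step can be sketched using Rubin's combining rules: the pooled estimate is the mean of the m estimates, and the total variance combines the average within-imputation variance with the between-imputation variance (the m = 3 analysis results below are hypothetical):

```python
import statistics

# Pooling step (Rubin's rules): combine m point estimates and their
# within-imputation variances. Hypothetical results from m = 3 datasets.
estimates = [2.1, 2.4, 2.2]        # parameter estimate from each dataset
within_vars = [0.10, 0.12, 0.11]   # squared standard error from each

def pool(est, wvar):
    m = len(est)
    q_bar = statistics.mean(est)       # pooled point estimate
    w_bar = statistics.mean(wvar)      # average within-imputation variance
    b = statistics.variance(est)       # between-imputation variance
    total = w_bar + (1 + 1 / m) * b    # total variance (Rubin, 1987)
    return q_bar, total

q, t = pool(estimates, within_vars)
print(round(q, 4), round(t, 4))
```

The between-imputation term is what single imputation discards: it grows with the disagreement among the m imputed datasets, widening the pooled confidence interval accordingly.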

Just as there are multiple methods of single imputation, there are multiple methods of multiple imputation as well. One advantage that multiple imputation has over single imputation and complete-case methods is its flexibility: it can be used in a wide variety of scenarios. Multiple imputation can be used when the data are missing completely at random, missing at random, and even when the data are missing not at random. The primary method of multiple imputation, however, is multiple imputation by chained equations (MICE), also known as "fully conditional specification" and "sequential regression multiple imputation".[7] An important caveat is that MICE should be implemented only when the missing data follow the missing-at-random mechanism.

As alluded to in the previous section, single imputation does not take into account the uncertainty in the imputations: after imputation, the imputed values are treated as if they were the actual observed values. Neglecting this uncertainty can lead to overly precise results and to errors in any conclusions drawn.[8] By imputing multiple times, multiple imputation accounts for the uncertainty and the range of values that the true value could have taken.

Additionally, while single imputation and complete-case analysis are easier to implement, multiple imputation is not very difficult to implement either. A wide range of statistical packages in different statistical software readily allow someone to perform multiple imputation. For example, the mice package allows users in R to perform multiple imputation using the MICE method.[9]

## References

1. Barnard, J.; Meng, X. L. (1999-03-01). "Applications of multiple imputation in medical studies: from AIDS to NHANES". Statistical Methods in Medical Research. 8 (1): 17–36. ISSN 0962-2802. PMID 10347858.
2. Gelman, Andrew, and Jennifer Hill. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press, 2006. Ch.25
3. Kenward, Michael G (2013-02-26). "The handling of missing data in clinical trials". Clinical Investigation. 3 (3): 241–250. doi:10.4155/cli.13.7. ISSN 2041-6792.
4. Enders, C.K. (2010). Applied missing data analysis. New York: Guilford Press.
5. Molnar, Frank J.; Hutton, Brian; Fergusson, Dean (2008-10-07). "Does analysis using "last observation carried forward" introduce bias in dementia research?". Canadian Medical Association Journal. 179 (8): 751–753. doi:10.1503/cmaj.080820. ISSN 0820-3946. PMID 18838445.
6. van Buuren, Stef (2012-03-29). Flexible Imputation of Missing Data. Chapman & Hall/CRC Interdisciplinary Statistics Series. Chapman and Hall/CRC. doi:10.1201/b11826. ISBN 9781439868249.
7. Azur, Melissa J.; Stuart, Elizabeth A.; Frangakis, Constantine; Leaf, Philip J. (2011-03-01). "Multiple imputation by chained equations: what is it and how does it work?". International Journal of Methods in Psychiatric Research. 20 (1): 40–49. doi:10.1002/mpr.329. ISSN 1557-0657. PMID 21499542.
8. Graham, John W. (2009-01-01). "Missing data analysis: making it work in the real world". Annual Review of Psychology. 60: 549–576. doi:10.1146/annurev.psych.58.110405.085530. ISSN 0066-4308. PMID 18652544.
9. Horton, Nicholas J.; Kleinman, Ken P. (2007-02-01). "Much ado about nothing: A comparison of missing data methods and software to fit incomplete data regression models". The American Statistician. 61 (1): 79–90. doi:10.1198/000313007X172556. ISSN 0003-1305. PMID 17401454.