Reducing the bias from the inverse probability of treatment weighting (IPTW) – Healthcare Economist



When using observational data, assignment to a treatment group is non-random and causal inference may be difficult. One common approach to addressing this is propensity score weighting, where the propensity score is the probability that an individual is assigned to the treatment arm given their observable characteristics. This propensity is often estimated using a logistic regression of a binary variable for whether the individual received the treatment on the individual's characteristics. Propensity scores are then often used to obtain treatment effects that adjust for known confounders by applying inverse probability of treatment weighting (IPTW) estimators.
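As an illustration (not code from the paper), here is a minimal sketch of estimating propensity scores with a logistic regression, implemented in plain numpy via Newton-Raphson. The data are simulated, and all variable names and data-generating values are hypothetical:

```python
import numpy as np

def fit_logistic(X, z, iters=25):
    """Newton-Raphson MLE for a logistic regression of treatment z on X."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))            # fitted probabilities
        grad = X.T @ (z - p)                       # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated observational data: treatment assignment depends on
# observable characteristics, so it is non-random.
rng = np.random.default_rng(0)
N = 2000
age = rng.normal(50, 10, N)
severity = rng.normal(0, 1, N)
X = np.column_stack([np.ones(N), (age - 50) / 10, severity])
z = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.5 * X[:, 1] + 0.8 * X[:, 2]))))

beta_hat = fit_logistic(X, z)
propensity = 1 / (1 + np.exp(-X @ beta_hat))  # estimated Pr(z = 1 | X)
```

In practice one would use an off-the-shelf routine (e.g., a statistics package's logistic regression) rather than hand-rolling the fit; the point is only that the fitted probabilities serve as the propensity scores.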

A paper by Xu et al. (2010) shows that using the IPTW approach may lead to an overestimate of the pseudo-sample size and increase the chance of a type I error (i.e., rejecting the null hypothesis when it is actually true). The authors note that robust variance estimators can address this problem but only work well with large sample sizes. Instead, Xu and co-authors proposed using standardized weights in the IPTW as a simple and easy-to-implement method. Here is how this works.

The IPTW approach simply examines the difference between the treated and untreated groups after applying the IPTW weights. Let the frequency with which someone is treated be:

p = n1 / N

where n1 is the number of individuals treated and N is the total sample size. Let z=1 if the individual is treated in the data and z=0 if the individual is not treated. Assume that each person has a vector of patient characteristics, X, that affects the probability of receiving treatment. Then one can calculate the probability of treatment (the propensity score) as:

e(X) = Pr(z = 1 | X)

Under standard IPTW, the weights used would be:

w = z / e(X) + (1 - z) / (1 - e(X))
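As a minimal sketch (assuming propensity scores e have already been estimated; the values below are simulated, not taken from the paper), the standard IPTW weights illustrate the pseudo-sample-size inflation the authors describe:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
e = rng.uniform(0.1, 0.9, N)   # assumed (already estimated) propensity scores
z = rng.binomial(1, e)         # treatment indicator

# Standard IPTW weights: 1/e for the treated, 1/(1-e) for the untreated.
w = z / e + (1 - z) / (1 - e)

# The weights sum to roughly 2N, not N: each arm is reweighted to
# resemble the full sample, so the pseudo-sample size is inflated.
print(w.sum(), N)
```

Treating that inflated weight total as if it were the real sample size is what understates the variance and drives up the type I error rate.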

Xu and co-authors create a simulation to show that the type I error rate is too high, often 15% to 40%. To correct this, one can use standardized weights (SW) as follows:

SW = p / e(X)

SW = (1 - p) / (1 - e(X))

The former is used for the treated population (i.e., z=1) and the latter is used for the untreated population (z=0). The authors show that under the standardized weights, the rate of type I errors is roughly 5%, as intended. In fact, the authors also show that standardized weighting often outperforms robust variance estimators for estimating main effects.
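A matching sketch under the same simulated setup (again, simulated propensity scores, not the paper's data) shows how the standardized weights restore a pseudo-sample size close to the actual N:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
e = rng.uniform(0.1, 0.9, N)   # assumed (already estimated) propensity scores
z = rng.binomial(1, e)         # treatment indicator
p = z.mean()                   # observed treatment frequency, n1 / N

# Standardized weights: p/e for the treated, (1-p)/(1-e) for the untreated.
sw = np.where(z == 1, p / e, (1 - p) / (1 - e))

# Unlike the standard IPTW weights, the standardized weights sum to
# roughly N, so the pseudo-sample size matches the actual sample size.
print(sw.sum(), N)
```

Because the weight total matches the true sample size, the usual variance calculations are no longer based on an inflated number of observations, which is why the type I error rate returns to roughly the nominal 5%.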

For more information, you can read the full article here.

