BACKGROUND
We reviewed the results of the Observational Medical Outcomes Partnership (OMOP) 2010 Experiment in hopes of finding examples in which apparently well-designed drug studies repeatedly produce anomalous findings. OMOP had applied thousands of designs and design parameters to 53 drug-outcome pairs across 10 electronic data resources. Our intent was to use this repository to elucidate some sources of error in observational studies.
METHOD
From the 2010 OMOP Experiment, we sought drug-outcome-method combinations (DOMCs) that met consensus design criteria yet repeatedly produced results contrary to expectation. We set aside DOMCs for which we could not agree on the suitability of the designs, then selected for in-depth scrutiny one drug-outcome pair, analyzed by a seemingly plausible methodological approach, whose results consistently disagreed with the a priori expectation.
RESULTS
The OMOP "all-by-all" assessment of possible DOMCs yielded many combinations that would not be chosen by researchers as actual study options. Among those that passed a first level of scrutiny, two of seven drug-outcome pairs for which there were plausible research designs had anomalous results. The use of benzodiazepines was unexpectedly associated with acute renal failure and upper gastrointestinal bleeding. We chose the latter as an example for in-depth study. The factitious appearance of a bleeding risk may have been partly driven by an excess of procedures on the first day of treatment. A risk window definition that excluded the first day largely removed the spurious association.
CONCLUSION
One cause of reproducible "error" may be repeated failure to tie design choices closely enough to the research question at hand.