These words describe the way a medical study is designed and implemented. A group of investigators somewhere wants to answer a scientific question, so they design an experiment, carry it out, and review the resulting data. The value of the study's results is greatest if the study is as described in the title of this piece: prospective, not retrospective; double-blinded, not single-blinded or unblinded; and so on.
Prospective. This word, which literally means looking forward, means that the data analyzed in the study were collected for that study. It is also possible to study old data from experiments already done for other purposes, a so-called retrospective (looking back) study. But this is a flawed methodology and can produce erroneous conclusions.
A famous illustration of that surfaced recently and caused an abrupt reversal of the position of the American College of Obstetricians and Gynecologists on the use of estrogens in postmenopausal women. These are women who are no longer ovulating, either due to natural menopause or due to surgical menopause, usually as part of a hysterectomy.
The older, retrospective data suggested that estrogens were cardioprotective - or heart healthy - in these women. It was noticed that there was a positive correlation between postmenopausal estrogen use and a lowered risk of heart attack. That is, women who were taking supplemental estrogen died older and had fewer heart problems along the way than the postmenopausal women who didn't receive it. This, of course, suggested that all menopausal women would benefit from, and should receive, supplemental estrogen.
But that conclusion was gleaned from old, retrospectively studied data originally collected for another purpose. Eventually, decades later, a prospective study using new data from new patients studied specifically for that purpose was performed. It showed that women started on supplemental estrogens two or more years after menopause actually had *more* heart disease and died younger because of it - the exact opposite of the retrospective study's conclusion. Take a moment, if you like, to think about how this could be: why the data might tell us that estrogens help the hearts of postmenopausal women when examined retrospectively, but tell us the opposite when a prospective analysis is performed.
The answer is that the women who were receiving estrogen were also receiving other regular medical care, and they lived longer for that reason. The untreated postmenopausal group was mostly women who didn't see doctors regularly; had they been seeing doctors, they would have been treated with estrogen, since that was the standard of care at the time. Seeing doctors regularly also suggests that the treated women were financially better off, able to afford more health care, and probably better educated as well.
All of those differences improved their life expectancy, not the estrogens. The estrogens actually erased part of the gains from education, money, and regular medical care - but not enough to offset those advantages, so the treated women did better *despite* the estrogens. The two groups being compared differed in more ways than just the one of interest: estrogen use or not. The estrogen takers were also healthier (and wealthier and better educated) to start with, and that, not the estrogens, was their advantage.
Bottom line: if the study is not prospective, beware.
Randomized. In a nutshell, this means that there is a control group of comparable patients that does not receive the treatment of interest. The control group is like the study group in every other measurable way that might be relevant: average age, coexisting diagnoses, average weight, fraction who smoke, and so on.
The study discussed above failed to meet this criterion, since the control group (non-estrogen users), as already stated, differed from the study group (estrogen takers) not just in the way of interest, but also in mean income and frequency of doctor visits.
When the estrogen study was repeated using prospective data, the patients were randomized into two identical groups. One group got treatment, the other did not. In all other ways, they were identical. Now, only a very unlikely statistical fluke could produce a wrong conclusion.
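The mechanics of randomization are simple enough to sketch in a few lines. This is a minimal illustration using Python's standard library, with made-up patient IDs; real trials use more elaborate schemes (stratification, block randomization) designed by statisticians:

```python
import random

def randomize(patient_ids, seed=None):
    """Shuffle the patients and split them into two equal arms."""
    rng = random.Random(seed)           # seeded so the split is reproducible/auditable
    shuffled = list(patient_ids)        # copy; leave the caller's list untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (treatment arm, control arm)

# hypothetical example: 100 patients, IDs 1..100
treatment, control = randomize(range(1, 101), seed=7)
```

Because chance alone decides who lands in which arm, income, education, doctor-visit habits, and every other lurking difference get spread evenly across both groups on average.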
Placebo controlled. This means that the control group - the untreated group - received a sham treatment, or placebo, so that its members could not tell that they were receiving no treatment. This allows us to keep the patients in the dark, or "blinded". If they knew, it might affect their responses.
The science of placebos can be quite challenging when the treatment being studied is bypass surgery and you awaken from "surgery" with no chest scar. Or perhaps we're studying the effects of pregnancy history on breast cancer. Pregnancies are thought to reduce its risk. But it's pretty hard to have a study group with kids and a control group with none, with neither group knowing whether or not they have kids. The importance of that will be seen next.
Double blind[ed]. This means that neither the patients *nor* the clinicians studying them know who is getting treatment and who is getting placebo. Both patients and clinicians are blinded.
To accomplish this, not only must the control group be given a placebo, but the clinicians who examine and assess the patients also cannot know who has received it. This requires that someone behind the scenes prepare daily packages of medicine and placebo labeled for specific patients, and hand them over to the experimenters, who distribute them without themselves knowing whether they have passed out medicine or placebo in any particular case. When even the clinicians studying the patients are in the dark (blinded) about who is getting the treatment and who is not, the study is called double blinded.
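That behind-the-scenes bookkeeping can itself be sketched in code. In this hypothetical version, the allocation key is built once and then set aside; the clinicians see only patient-ID labels on identical-looking kits, and the key is consulted only when the study ends (unblinding):

```python
import random

def prepare_kits(patient_ids, seed=None):
    """Assign each patient drug or placebo and record it in a sealed key.
    The kit labels handed to clinicians carry only patient IDs."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    sealed_key = {pid: "drug" for pid in ids[:half]}
    sealed_key.update({pid: "placebo" for pid in ids[half:]})
    kit_labels = sorted(sealed_key)     # all the clinicians ever see
    return kit_labels, sealed_key       # key stays locked away until unblinding
```

The design point is separation of knowledge: the person who knows the assignments never meets the patients, and the people who meet the patients never see the key.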
This is important to keep both patients and clinicians from subconsciously skewing their judgments in favor of or against the treatment by suggesting a false benefit or side effect. That is, you might feel better just from knowing that you are receiving treatment, and worse knowing that you are not. Or you might not want to disappoint your doctor if you knew that you were being treated, and report your status more optimistically than is warranted. Or one might report less severe or no side effects if one knew one was given a placebo. We want to measure *only* the drug's physiological effect, not its psychological effect.
CONCLUSION. If all of this can be accomplished, and if the study is sufficiently powered - that is, if a sufficient number of patients in each group are observed for long enough (and this is decided in advance by statisticians) - one can say, with a specified degree of certainty, that there was or was not a difference between the two groups due to the treatment. And that is the purpose of any study.
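To make "sufficiently powered" concrete, here is a common textbook approximation for the number of patients needed per arm when comparing two event rates. The rates are hypothetical, and this simplified normal-approximation formula is an illustration, not a substitute for a statistician's calculation:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients needed per group to detect a difference
    between event rates p1 and p2 (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for significance level
    z_beta = z.inv_cdf(power)            # critical value for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# hypothetical: detecting a drop in heart-attack rate from 10% to 5%
n = n_per_arm(0.10, 0.05)
```

Note how quickly the numbers grow: halving the effect you hope to detect roughly quadruples the patients required, which is why power must be worked out before the study begins, not after.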