Tag Archives: Statistics

Obama’s JAMA article is a must read for all professionals

There is a very important article in this week’s JAMA, written by Barack Obama.

It highlights the effects of the ACA/Obamacare.  It is free on-line.

United States Health Care Reform: Progress to Date and Next Steps


If you are short on time, then the following link to just the figures provides many of the key results.


To me the highlights of the article are that it documents:

The decline in the uninsured (no surprise, but well presented), now down to 9.1 percent from over 16 percent
Declines in teen smoking from 19.5% to 10.8% due to the Tobacco Control Act of 2009 (Wow)
Much slower rates of decline in the uninsured in states that refused the Medicaid expansion (no surprise)
The decline in the underinsured among the privately insured, as measured by the near disappearance of unlimited exposure (new to me)
Lower rates of individual debt sent to a collection agency (great to see)
Negative rates of real cost growth in Medicare and Medicaid since 2010, with drastically lower growth among the privately insured
Constant share of out of pocket spending as a fraction of total spending among the employer based insurance
(new to me, he cites increases in deductibles offset by decreases in copays and coinsurance.)
Forecast Medicare spending for 2019 is now 20% LOWER than the forecast made when he took office.
Decline in Medicare 30 day, all hospital readmission rates as well as improvements in other measures.
This information is important to understand in order to counter the repeated claims that Obamacare is a failure, has increased health care spending, or is bankrupting the government; the evidence presented here shows all of these claims to be false.

Here is the link again.


Congratulations to BU’s Class of 2016 Economics graduates!

Please celebrate the students who earned 498 Boston University degrees in Economics at Commencement this May.

This year the program mentions:

22 Ph.D. recipients

203 Master’s degree recipients (MA, MAPE, MAEP, MAGDE, MA/MBA, BA/MA)

273 BA recipients (including BA/MA)

This total of 498 degrees is up from 482 in 2015.

These numbers undercount the total for the year since they exclude students who graduated in January 2016 and chose not to appear at Commencement.

The number of graduate degree recipients (225) is way up from last year’s 177, with most of the growth in MAs.

In 2015 there were 22 PhDs, 155 Master’s degree recipients, and 305 BA recipients.

In 2014 there were 17 PhDs, 207 Master’s degree recipients, and 256 BA recipients.

Altogether 24 Ph.D. students obtained jobs this year (versus 19 last year).

To see the Ph.D. placements visit the web site linked here.


The department’s website now lists 38 regular faculty (assistant, associate, or full professors), down two from last year and two below the 2012 count.



Congratulations to all!

Important Reposting on Placebo surgery from TIE

I am forwarding this excellent TIE post since every health researcher and indeed every consumer should realize how serious the lack of evidence is on many common surgical procedures. Here are some quotes organized in a succinct way.

“2002… arthroscopic surgery for osteoarthritis of the knee … Those who had the actual procedures did no better than those who had the sham surgery. ” (We still spend $3 billion a year on this procedure)
“2005… percutaneous laser myocardial revascularization, …  didn’t improve angina better than a placebo”
“2003, 2009, 2009… vertebroplasty — treating back pain by injecting bone cement into fractured vertebrae … worked no better than faking the procedure.”
“2013 … arthroscopic procedures for tears of the meniscus cartilage in the knee… performed no better than sham surgery” (We do about 700,000 of them with direct costs of about $4 billion.)
“[2014] … systematic review of migraine prophylaxis [prevention], while 22 percent of patients had a positive response to placebo medications and 38 percent had a positive response to placebo acupuncture, 58 percent had a positive response to placebo surgery.”
“2014… 53 randomized controlled trials that included placebo surgery as one option. In more than half of them … the effect of sham surgery was equivalent to that of the actual procedure.”

If you are getting surgery done, do your own research on it and ask questions!


——– Original Message ——–

Subject: “The Placebo Effect Doesn’t Apply Just to Pills” plus 1 more
Date: Thu, 9 Oct 2014 11:13:06 +0000
From: The Incidental Economist <tie@theincidentaleconomist.com>
To: <ellisrp@bu.edu>

“The Placebo Effect Doesn’t Apply Just to Pills” plus 1 more

The Placebo Effect Doesn’t Apply Just to Pills

Posted: 09 Oct 2014 04:00 AM PDT

The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

For a drug to be approved by the Food and Drug Administration, it must prove itself better than a placebo, or fake drug. This is because of the “placebo effect,” in which patients often improve just because they think they are being treated with something. If we can’t compare a new drug with a placebo, we can’t be sure that the benefit seen from it is anything more than wishful thinking.

But when it comes to medical devices and surgery, the requirements aren’t the same. Placebos aren’t required. That is probably a mistake.

At the turn of this century, arthroscopic surgery for osteoarthritis of the knee was common. Basically, surgeons would clean out the knee using arthroscopic devices. Another common procedure was lavage, in which a needle would inject saline into the knee to irrigate it. The thought was that these procedures would remove fragments of cartilage and calcium phosphate crystals that were causing inflammation. A number of studies had shown that people who had these procedures improved more than people who did not.

However, a growing number of people were concerned that this was really no more than a placebo effect. And in 2002, a study was published that proved it.

A total of 180 patients who had osteoarthritis of the knee were randomly assigned (with their consent) to one of three groups. The first had a standard arthroscopic procedure, and the second had lavage. The third, however, had sham surgery. They had an incision, and a procedure was faked so that they didn’t know that they actually had nothing done. Then the incision was closed.

The results were stunning. Those who had the actual procedures did no better than those who had the sham surgery. They all improved the same amount. The results were all in people’s heads.

Many who heard about the results were angry that this study occurred. They thought it was unethical that people received an incision, and most likely a scar, for no benefit. But, of course, the same was actually true for people who had arthroscopy or lavage: They received no benefit either. Moreover, the results did not make the procedure scarce. Years later, more than a half-million Americans still underwent arthroscopic surgery for osteoarthritis of the knee. They or their insurers spent about $3 billion that year on a procedure that was no better than a placebo.

Sham procedures for research aren’t new. As far back as 1959, the medical literature was reporting on small studies that showed that procedures like internal mammary artery ligation, a surgical procedure used to treat angina, were no better than a fake incision.

In 2005, a study was published in the Journal of the American College of Cardiology proving that percutaneous laser myocardial revascularization, in which a laser is threaded through blood vessels to cut tiny channels in the heart muscle, didn’t improve angina better than a placebo either. We continue to work backward and use placebo-controlled research to try to persuade people not to do procedures, rather than use it to prove conclusively that they work in the first place.

A study published in 2003, without a sham placebo control, showed that vertebroplasty — treating back pain by injecting bone cement into fractured vertebrae — worked better than no procedure at all. From 2001 through 2005, the number of Medicare beneficiaries who underwent vertebroplasty each year almost doubled, from 45 to 87 per 100,000. Some of them had the procedure performed more than once because they failed to achieve relief. In 2009, not one but two placebo-controlled studies were published proving that vertebroplasty for osteoporotic vertebral fractures worked no better than faking the procedure.

Over time, after the 2002 study showing that arthroscopic surgery didn’t work for osteoarthritis of the knee, the number of arthroscopic procedures performed for this condition did begin to go down. But at the same time, the number of arthroscopic procedures for tears of the meniscus cartilage in the knee began to go up fast. Soon, about 700,000 of them were being performed each year, with direct costs of about $4 billion. Less than a year ago, many were shocked when arthroscopic surgery for meniscal tears was shown to perform no better than sham surgery. This procedure was the most common orthopedic procedure performed in the United States.

The ethical issues aren’t easily dismissed. Theoretically, a sugar pill carries no risk, and a sham procedure does. This is especially true if the procedure requires anesthesia. The surgeon must go out of his or her way to fool the patient. Many would have difficulty doing that.

But we continue to ignore the real potential that many of our surgical procedures and medical devices aren’t doing much good — and might even be doing harm, since real surgery has been shown to pose more risks than sham surgery.

Rita Redberg, in a recent New England Journal of Medicine Perspectives article on sham controls in medical device trials, noted that in a recent systematic review of migraine prophylaxis, while 22 percent of patients had a positive response to placebo medications and 38 percent had a positive response to placebo acupuncture, 58 percent had a positive response to placebo surgery. The placebo effect of procedures is not to be ignored.

Earlier this year, researchers published a systematic review of placebo controls in surgery. They searched the medical literature from its inception all the way through 2013. In all that time, they could find only 53 randomized controlled trials that included placebo surgery as one option. In more than half of them, though, the effect of sham surgery was equivalent to that of the actual procedure. The authors noted, though, that with the exception of the studies on osteoarthritis of the knee and internal mammary artery ligation noted above, “most of the trials did not result in a major change in practice.”

We have known about the dangers of ignoring the need for placebo controls in research on surgical procedures for some time. When the few studies that are performed are published, we ignore the results and their implications. Too often, this is costing us many, many billions of dollars a year, and potentially harming patients, for no apparent gain.



Placebo history

Posted: 09 Oct 2014 03:00 AM PDT

Here are my highlights from “Placebos and placebo effects in medicine: historical overview,” by Anton de Craen and colleagues. All are direct quotes.

  • In 1807 Thomas Jefferson, recording what he called the pious fraud, observed that ‘one of the most successful physicians I have ever known has assured me that he used more bread pills, drops of colored water, and powders of hickory ashes, than of all other medicines put together’. About a hundred years later, Richard Cabot, of Harvard Medical School, described how he ‘was brought up, as I suppose every physician is, to use placebo, bread pills, water subcutaneously, and other devices’.
  • The word placebo (Latin, ‘I shall please’) was first used in the 14th century. In that period, it referred to hired mourners at funerals. These individuals often began their wailings with Placebo Domino in regione vivorum, the ninth verse of psalm cxiv, which in the Latin Vulgate translation means ‘I shall please the Lord in the land of the living’. Here, the word placebo carries the connotation of depreciation and substitution, because professional mourners were often stand-ins for members of the family of the deceased.
  • In 1801, John Haygarth reported the results of what may have been the first placebo-controlled trial. A common remedy for many diseases at that time was to apply metallic rods, known as Perkins tractors, to the body. These rods were supposed to relieve symptoms through the electromagnetic influence of the metal. Haygarth treated five patients with imitation tractors made of wood and found that four gained relief. He used the metal tractors on the same five patients the following day and obtained identical results: four of five subjects reported relief.
  • In the 1785 New Medical Dictionary, placebo is described as ‘a commonplace method or medicine’. In 1811, the revised Quincy’s Lexicon-Medicum defines placebo as ‘an epithet given to any medicine adapted more to please than to benefit the patient’.
  • In the 1930s, several important papers were published with regard to the introduction of placebos in clinical research. [… Two] papers assessed the value of drugs used in the treatment of angina pectoris in cross-over experiments and deceptively administered placebos to the ‘no-treatment’ comparison group. […] In both trials the drugs were judged to exert no specific action that might be useful in the treatment of angina. Gold and colleagues tried to explain why inert interventions might work: their points included ‘confidence aroused in a treatment’, the ‘encouragement afforded by a new procedure’ and ‘a change of medical advisor’.
  • Placebo was a fraud and deception that had the ‘moral effect of a remedy given specially for the disease’, but placebos did not affect the natural course of disease; they were a priori excluded from having such an impact. Placebos were therapeutic duds to manage patients, or, as in the Flint investigation, a camouflage behind which to watch nature take its course.
  • In 1938, the word placebo was first applied in reference to the treatment given to concurrent controls in a trial.
  • The efficacy of cold vaccines was evaluated in several placebo-controlled trials. […] The conclusion [of one] reads ‘one of the most significant aspects of this study is the great reduction in the number of colds which the members of the control groups reported during the experimental period. In fact these results were as good as many of those reported in uncontrolled studies which recommended the use of cold vaccines’. The placebo effect was born.



Congratulations to BU’s Class of 2014 Economics graduates!

Please celebrate the 463 Boston University students who earned degrees in Economics at Commencement over the weekend. This year the program contained:

17 Ph.D. recipients

207 Master’s degree recipients (MA, MAPE, MAEP, MAGDE, MA/MBA, BA/MA)

256 BA recipients (including BA/MA)

This represents a total of 463 degrees!

These numbers undercount the total for the year since they exclude students who graduated in January 2014 and chose not to appear at Commencement.

The number of graduate degree recipients (224) remains close to the number of BA students (256), both of which are down from the previous year, which was itself up 10% over 2012.

Last year (2013) there were 21 PhDs, 257 Master’s degree recipients, and 292 BA recipients.

Altogether 23 Ph.D. students obtained jobs this year. To see their placements visit the web site linked here.


Many MA students did well on the job market and in being accepted to Ph.D. programs. For a partial list see:

The department’s recently redesigned website now lists 38 regular professors, down two from 2012.

Congratulations to all!

Explaining these two graphs should merit a Nobel prize

Reposting from The Incidental Economist Blog

What happened to US life expectancy?

Posted: 07 Jan 2014 03:00 AM PST

Here’s another chart from the JAMA study “The Anatomy of Health Care in the United States”:

life expectancy at birth

Why did the US fall behind the OECD median in the mid-1980s for men and the early 1990s for women? Note, the answer need not point to the health system. But, if it does, it’s not the first chart to show things going awry with it around that time. Before I quote the authors’ answer, here’s a related chart from the paper:


The chart shows years of potential life lost in the US as a multiple of the OECD median, over time. Values greater than 1 are bad (for the US); there are plenty of those. A value of exactly 1 would mean the US is at the OECD median, and a value below 1 would indicate we’re doing better. There are not many of those.
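To make the metric concrete, the “multiple of the OECD median” is just a ratio. A toy calculation in Python (the numbers here are hypothetical illustrations, not values from the JAMA paper):

```python
# Hypothetical years of potential life lost (YPLL) per 100,000 population
us_ypll = 1500.0      # made-up US value for some condition
oecd_median = 1000.0  # made-up OECD median for the same condition

ratio = us_ypll / oecd_median
print(ratio)  # 1.5: above 1, so the US fares worse than the OECD median
```

A ratio of 1.5 would mean the US loses 50% more potential life-years than the median OECD country for that condition.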

It’d be somewhat comforting if the US at least showed improvement over time. But, by and large, it does not. For many conditions, you can see the US pulling away from the OECD countries beginning in/around 1980 or 1990, as was the case for life expectancy shown above. Why?

The authors’ answer:

Possible causes of this departure from international norms were highlighted in a 2013 Institute of Medicine report and have been ascribed to many factors, only some of which are attributed to medical care financing or delivery. These include differences in cultural norms that affect healthy behaviors (gun ownership, unprotected sex, drug use, seat belts), obesity, and risk of trauma. Others are directly or indirectly attributable to differences in care, such as delays in treatment due to lack of insurance and fragmentation of care between different physicians and hospitals. Some have also suggested that unfavorable US performance is explained by higher risk of iatrogenic disease, drug toxicity, hospital-acquired infection, and a cultural preference to “do more,” with a bias toward new technology, for which risks are understated and benefits are unknown. However, the breadth and consistency of the US underperformance across disease categories suggests that the United States pays a penalty for its extreme fragmentation, financial incentives that favor procedures over comprehensive longitudinal care, and absence of organizational strategy at the individual system level. [Link added.]

This is deeply unsatisfying, though it may be the best explanation available. Nevertheless, the sentence in bold is purely speculative. One must admit that it is plausible that fragmentation, incentives for procedures, and lack of organizational strategy could play a role in poor health outcomes in the US — they certainly don’t help — but the authors have also ticked off other factors. Which, if any, dominate? It’s completely unclear.

Apart from the explanation or lack thereof, I also wonder how much welfare has been lost relative to the counterfactual that the US kept pace with the OECD in life expectancy and health spending. It’s got to be enormous unless there are offsetting gains in areas of life other than longevity and physical well-being. For example, if lifestyle is a major contributing factor, perhaps doing and eating what we want (to the extent we’re making choices) is more valuable than lower mortality and morbidity. (I doubt it, but that’s my speculation/opinion.)

(I’ve raised some questions in this post. Feel free to email me with answers, if you have any.)


AHRF/ARF 2012-13 data is available free

AHRF=Area Health Resource File (Formerly ARF)

The 2012-2013 AHRF can now be downloaded at no cost.

The 2012-2013 ARF data files and documentation can now be downloaded. Click the link below to learn how to download ARF documentation and data.


“The Area Health Resources Files (AHRF)—a family of health data resource
products—draw from an extensive county-level database assembled annually from
over 50 sources. The AHRF products include county and state ASCII files, an MS Access
database, an AHRF Mapping Tool and Health Resources Comparison Tools (HRCT). These
products are made available at no cost by HRSA/BHPR/NCHWA to inform health resources
planning, analysis and decision making.”

“The new AHRF Mapping Tool enables users to compare the availability of healthcare providers as well as environmental factors impacting health at the county and state levels.”

Useful reference for serious SAS programmers

I often use bootstrap and simulation methods in my research, and in some background reading I found the following excellent short article on how to use SAS for efficient replication, bootstrapping, and jackknifing.

Paper 183-2007
Don’t Be Loopy: Re-Sampling and Simulation the SAS® Way
David L. Cassell, Design Pathways, Corvallis, OR


Here is an elegant example that shows how to do 1000 bootstrap replications of the kurtosis of x. Note that proc univariate could be replaced with almost any procedure. The paper’s discussion of proc append and its critique of alternative programs are also useful.

(I will note that it starts by creating a dataset 1000 times as large as the original sample, but even so it is very fast given what is being done.)

proc surveyselect data=YourData out=outboot /* 1: draw bootstrap resamples              */
   seed=30459584                            /* 2: fixed seed for reproducibility        */
   method=urs                               /* 3: unrestricted sampling = with replacement */
   samprate=1                               /* 4: each resample has the original size   */
   outhits                                  /* 5: one output row per selection          */
   rep=1000;                                /* 6: 1000 bootstrap replicates             */
run;

proc univariate data=outboot;               /* consider the noprint option here to reduce output */
   var x;
   by Replicate;                            /* 7: one kurtosis estimate per replicate   */
   output out=outall kurtosis=curt;
run;

proc univariate data=outall;
   var curt;
   output out=final pctlpts=2.5, 97.5 pctlpre=ci;  /* 95% percentile interval */
run;
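For readers who don’t use SAS, the same percentile-bootstrap logic can be sketched in plain Python. This is my own illustration, not from the Cassell paper: the sample data, the function names, and the simple moment-based excess-kurtosis formula are all invented for the example.

```python
import random
import statistics

def kurtosis(xs):
    # Simple moment-based excess kurtosis (population formula, for illustration)
    n = len(xs)
    m = statistics.fmean(xs)
    s2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / s2 ** 2 - 3.0

def bootstrap_ci(xs, stat, reps=1000, seed=30459584):
    """Percentile bootstrap: resample with replacement (like method=urs with
    samprate=1), compute the statistic per replicate, take the 2.5th and
    97.5th percentiles of the replicate estimates."""
    rng = random.Random(seed)   # fixed seed, mirroring the SAS seed= option
    n = len(xs)
    estimates = sorted(stat([rng.choice(xs) for _ in range(n)])
                       for _ in range(reps))
    return estimates[int(0.025 * reps)], estimates[int(0.975 * reps)]

data = [1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.4, 0.7, 2.2, 1.6,
        0.5, 1.9, 2.8, 1.0, 1.3]   # made-up sample
lo, hi = bootstrap_ci(data, kurtosis)
print(lo, hi)
```

Like the SAS version, this uses the percentile method (the pctlpts=2.5, 97.5 step); more refined intervals such as BCa exist, but the resampling skeleton is the same.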