Here's a pair of medical terms I have often seen together. One of them I thought I understood moderately well; the other I wasn't really sure about.

As with my previous post in this series, the same comment applies: If a medical doctor happens to read this and notices that I have something wrong, I would be thrilled to get a correction. I'm not a doctor and I'm writing this for other not-doctors; while I'm ok with simplification I don't want to be wrong.

Now, for my pair of terms: mortality is the former; it means how many people die. (Rather appropriate for a Hallowe'en post!) But it's also more than that; it is, specifically, the number of deaths in a given group over a given time period, and both the group and the time period have to be defined. The restrictions make sense, once I stopped to think about it: ultimately, the mortality of humans is 100% - everybody dies of something, at some point. But if you look at the mortality of a disease, or a type of accident, then the group is restricted to the people who have that disease or that injury, the time is restricted to the period of the study, and the mortality is less than 100%. Something else kills the other people in the group at some other time, not covered by the study.

Morbidity is the latter. The short version of what this term means is how many people in a given group contract a particular disease. Not everybody catches influenza when it goes around, although the closer the contact between people, such as at a school, the higher the morbidity.

The two terms seem to be related and not related in interesting ways.

Let's go back to mortality, and back in history to the early days of epidemiology and a paradigm-changing study of the mortality of patients in hospitals, with the famous Florence Nightingale.

She got herself assigned to an army hospital in the middle of the Crimean War, where the mortality of British soldiers (not even wounded soldiers, but all soldiers) in that area was a horrifying 20%. In addition to being a thorn in the side of the military and governmental bureaucrats, she was a dedicated nurse, a skilled statistician, and a sanitarian. (The germ theory of disease had not yet been developed, but the association of sewage and filth with disease was gaining ground.) Once there, she used all of those skills: she cleared the hospital of raw sewage, lice, and blood, opened the windows for some fresh air, and made sure the patients were kept clean and well fed, all while keeping statistics on everything she did and on the death rates and causes.

The graph above shows the two years from the time Britain declared war (end of March 1854; right-hand graph) to the month after the war ended (March 1856; left-hand graph). You can see in the right-hand graph that the first few months saw few deaths, possibly because troops were still moving into position and there weren't that many injured soldiers gathered together yet. Then the number of deaths started to increase, peaking in January 1855, shortly after Nightingale arrived at the hospital. Unfortunately, while the causes of death were separated into three categories, the reproduction is in black and white, so it's hard to tell which zone corresponds to which colour noted in the legend.

No matter what her opponents claimed about her, the numbers show clearly that mortality among the soldiers decreased dramatically once she got the place cleaned up, both in terms of sanitation and in terms of corruption, by making sure that food and medical supplies intended for the wounded soldiers actually got to them.

So that's a study of mortality, with a defined group of soldiers deployed to a particular war.

Morbidity, on the other hand, is how many people catch a particular disease. One thing that wasn't in Nightingale's graph (but which may have been in the full report) was an indication of how many people suffered from fever, which would show how many feverish patients died or didn't. We could then see whether her changes affected the number of soldiers catching fever in the first place, the number of feverish patients dying, or both.

An interesting illustration of morbidity and mortality interacting in that way is in the statistics on measles.

I'm using UK data here because their NHS website has the morbidity, mortality, and vaccination rates all in a couple of convenient tables on their website. This let me play with the numbers in my spreadsheet to see what information fell out. I would have preferred to use Canadian data, since I'm Canadian and all that, but Canada annoyingly has a gap in their reporting right in the middle.

The first graph is raw reported number of cases of measles: morbidity. Not all cases are reported, so the numbers here are probably on the low side.

As you can see, the number of cases per year is very spiky and varies a lot year to year. This is characteristic of acute infectious diseases: to my understanding, it's because you only catch measles once and are then immune, so enough new babies have to be born to support an epidemic before another epidemic can happen. Apparently measles epidemics recur roughly every 2-5 years, and from the early 1950s to the late 1960s they seemed to be on a 2-year oscillation.

The second graph is raw deaths from measles: total mortality. Death certificates generally list a cause of death, so these numbers would be a lot more accurate than the previous set.

This does show cyclical spikiness as well, but it has a downward trend until it flattens out so close to zero it's hard to read on the graph.

However, if you look at the section in the mid-1950s to late 1960s, the mortality rate stops dropping and remains spiky; the ups and downs correspond to the reported cases in the first graph, until 1970 when the morbidity dramatically drops off and the mortality goes very close to zero.

A third graph, no longer raw data from the NHS but a calculation I did in my spreadsheet, smooths those spikes out.

What I'm doing here is about the simplest form of statistical analysis there is: normalization. Very simply, I divided the number of deaths by the number of cases, to display the number of deaths per 100,000 cases of measles, or case mortality. The data is still a little spiky prior to 1950, then shows a remarkably smooth decrease to a stable value between about 20 and 30 deaths per 100,000 cases. This case mortality stays stable right through the drop in morbidity in the 1970s. It goes a little crazy after 1990; I'll discuss that in a bit.
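The spreadsheet calculation described above is simple enough to sketch in a few lines of code. All the numbers below are made up for illustration; they are not the actual NHS figures.

```python
def case_mortality_per_100k(deaths, cases):
    """Normalize raw deaths by raw cases: deaths per 100,000 reported cases."""
    return deaths / cases * 100_000

# Hypothetical year-by-year data, invented purely to show the calculation.
data = {
    1955: (693_803, 176),  # (reported cases, deaths)
    1960: (159_364, 31),
    1965: (502_209, 115),
}

for year, (cases, deaths) in data.items():
    print(f"{year}: {case_mortality_per_100k(deaths, cases):.1f} per 100,000 cases")
```

Note that the normalized figure can stay flat even while the raw case and death counts swing wildly together, which is exactly why the third graph is so much smoother than the first two.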

What these graphs tell me is that there was something that affected case mortality until about 1960 (graph 3) and something different that affected morbidity after 1970 (graph 1), and the two effects together caused the overall long term effect on total mortality (graph 2).

Which is to say, we got better at treating measles and making sure people didn't die from it as often, reducing case mortality, and then later we got a vaccine, which reduced the morbidity.

As you can see, there is a jump in vaccine uptake in the 1970s, followed by a steady climb through the 1980s, until there is near-universal vaccine coverage in the 1990s. If you scroll back up to the first graph, you'll see that right around 1990, the morbidity drops so close to the zero axis that it's hard to read.

And now for my explanation of the craziness in the case mortality graph after 1990: because I'm now dividing by a very small number, much smaller than the "per 100,000 cases" that forms the basis of that graph, the difference between one death and three can cause a spike in the case mortality rate all out of proportion to the actual number of deaths. This is just an artifact of dividing by small numbers.
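The small-denominator artifact is easy to demonstrate with toy numbers (again, these are invented for illustration, not real measles data):

```python
def rate_per_100k(deaths, cases):
    """Deaths per 100,000 reported cases."""
    return deaths / cases * 100_000

# With a large denominator, going from 1 death to 3 barely moves the rate:
big = rate_per_100k(1, 100_000), rate_per_100k(3, 100_000)   # 1.0 vs 3.0

# With only 10 reported cases, the same change of two deaths is enormous:
small = rate_per_100k(1, 10), rate_per_100k(3, 10)           # 10000.0 vs 30000.0

print(big, small)
```

In both comparisons the absolute change is just two deaths, but when the case count is tiny, the normalized rate jumps by tens of thousands per 100,000, which is the spikiness visible in the graph after 1990.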
