Nov 14, 2016
Carolyn:
Welcome to Circulation on the Run, your weekly podcast summary and
backstage pass to the journal and its editors. I'm Dr. Carolyn
Lam, Associate Editor from the National Heart Centre and
Duke-National University of Singapore. In today's podcast interview we
will be discussing the ruling in and ruling out of myocardial
infarction with the European Society of Cardiology 1-hour
algorithm. Stay tuned for a discussion of new data and
controversies on this hot topic. Now, here's a summary of this
week's issue.
The first paper brings us one step closer to the ultimate goal of
cardiac tissue engineering. That is to replicate functional human
myocardium in vitro. In this study, by first author Dr. Ruan and
corresponding authors Drs. Murry and Regnier from the Institute for
Stem Cell and Regenerative Medicine at the University of Washington,
the authors recognized that human-induced pluripotent stem cells, or
iPSC-derived cardiomyocytes, really provide a cell source for
cardiac tissue engineering. However, their immaturity limits their
potential applications. Hence, they sought to study the effect of
mechanical conditioning and electrical pacing on the maturation of
iPSC-derived cardiac tissues.
They found that after two weeks of static stress conditioning, the
engineered myocardium demonstrated increases in contractility,
tensile strength, construct alignment, cell size, and SERCA2
expression. When electrical pacing was combined with static stress
conditioning, the tissue showed an additional increase in force
production and further increases in expression of RyR2 and SERCA2.
These studies really demonstrate that electrical pacing and
mechanical stimulation promote the maturation of the structural,
mechanical, and force generation properties of iPSC-derived cardiac
tissues and constitute a really important contribution to cardiac
tissue engineering.
The next study is the first large-scale, nationwide,
population-based investigation of the association between
congenital heart defects and any placental measure. This study by
Dr. [Matheson 00:02:27] and colleagues from Aarhus University
Hospital in Denmark, included all 924,422 live-born Danish
singletons from 1997 to 2011. Congenital heart defects were present
in 7,569 newborns. The authors compared the mean differences in
placental weight between newborns with and without congenital heart
defects and found that only three specific subgroups of congenital
heart defects were associated with measures of impaired placental
growth. These included Tetralogy of Fallot, double outlet right
ventricle, and major ventricular septal defects. In these
subgroups, the mean deviations from the population means for head
circumference and birth weight were reduced by up to 66% after
adjustment for placental weight. In other words, up to two thirds
of the deviations in fetal growth, including fetal cerebral growth,
may be related to the impaired placental growth. The present work
provides an important contribution to the existing knowledge on the
association between congenital heart defects and placental
anomalies as well as the possible importance for fetal growth in
this population.
The next study provides an up-to-date evaluation of the cost
effectiveness of antibiotic prophylaxis in the prevention of
infective endocarditis. In this study by first author Dr. Franklin,
corresponding author Dr. Thornhill, and colleagues from the
University of Sheffield, the authors assessed the cost effectiveness
of antibiotic prophylaxis, namely single-dose amoxicillin or
clindamycin, in patients at risk of infective endocarditis. They did this using,
firstly, recent estimates of the effect of antibiotic prophylaxis
on infective endocarditis in the English population; secondly,
rates of antibiotic adverse drug reactions; and thirdly, estimates
of the probability of developing infective endocarditis following
dental procedures, derived from French data. All this served as the
foundation for the analysis of costs and health benefits.
A decision-analytic cost effectiveness model was used, based on the
decision model from the National Institute for Health and Care
Excellence, or NICE, that informed the 2008 guidelines.
The authors found that antibiotic prophylaxis was less costly and
more effective than no antibiotic prophylaxis for all patients at
risk for infective endocarditis. In fact, if antibiotic prophylaxis
was reinstated in England for those at moderate or high risk of
infective endocarditis, it could save 5.5 to 8.2 million pounds and
result in health gains of more than 2,600 quality-adjusted life
years. Antibiotic prophylaxis was even more cost effective for
those at high risk of infective endocarditis, being cost effective
even if only 1.44 cases of infective endocarditis were prevented
per year. In summary, these updated findings really support the
cost effectiveness of guidelines recommending antibiotic
prophylaxis use, particularly in high risk individuals.
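To make the cost-effectiveness logic concrete, here is a minimal sketch of the incremental cost-effectiveness calculation that NICE-style decision models are built around. All numbers below are hypothetical placeholders for illustration, not figures from this analysis.

```python
# Minimal sketch of NICE-style incremental cost-effectiveness logic.
# All values below are hypothetical, not taken from the Sheffield model.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient costs (pounds) and QALYs, prophylaxis vs. none;
# the prophylaxis arm is cheaper here because averted endocarditis
# admissions offset the drug cost.
cost_ppx, cost_none = 150.0, 400.0
qaly_ppx, qaly_none = 9.20, 9.15

print(f"ICER: {icer(cost_ppx, cost_none, qaly_ppx, qaly_none):.0f} pounds/QALY")
# A strategy that is both cheaper and more effective (lower cost, higher
# QALYs, hence a negative ICER) is said to "dominate" -- which is what the
# paper reports for antibiotic prophylaxis in at-risk patients.
```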
The last study provides data on long term cardiac mortality among
survivors of cancer diagnosed in teenagers and young adults in the
largest population-based cohort to date. Furthermore, the study
provided, for the first time, risk estimates of cardiac death after
each cancer diagnosed between the ages of 15 to 39 years. For
example, survivors of Hodgkin lymphoma, lung cancer, acute myeloid
leukemia, non-Hodgkin lymphoma, and CNS tumors experienced 1.3 to
3.8 times the population-based mortality rates. This study provides
important insight into the cardiotoxicity of the treatments given
in the past to teenagers and young adults with each individual type
of cancer and importantly, provides an initial basis for developing
evidence-based follow up guidelines.
Those were your summaries. Now for our feature interview.
Our feature paper today discusses the hot and controversial topic
of ruling in and ruling out myocardial infarction with the European
Society of Cardiology 1-hour algorithm. I'm so excited to have with
us the corresponding author of the paper that really represents the
first multi-center external validation of these ESC guidelines for
MI and the first multi-center direct comparison of the
performance of the algorithm with high-sensitivity troponin I and
high-sensitivity troponin T assays. This would be Dr. Martin Than
from Christchurch Hospital in New Zealand. Welcome Martin.
Martin:
Thank you very much. It's a great pleasure for me to be able to
join everybody and talk here.
Carolyn:
It's great to have you. We also have with us the editorialist on
this paper, Dr. Allan Jaffe from Mayo Clinic, Rochester, Minnesota.
Allan, it's so good to hear your voice again.
Allan:
Good to talk to you again too, Carolyn.
Carolyn:
Finally, we have Dr. Deborah Diercks, Associate Editor from UT
Southwestern. Welcome Deb.
Deborah:
Oh, it's good to be here and I'm looking forward to the
conversation and what we're going to learn from these two
gentlemen.
Carolyn:
Absolutely. You know what? I'm going to start with Martin. I love
the way you set up your paper. You very correctly pointed out that
there's a tension in that ED physicians require really high
sensitivity to confidently rule out MI and send patients home,
whereas cardiologists do not want a high proportion of false
positives, because we don't want patients falsely labeled as high
risk to undergo invasive testing. I'd love it if you could start by
telling us how the ESC
1-hour algorithm fits into all this and what you were trying to do
in your study.
Martin:
Deb Diercks, who's a very respected emergency physician in this
area, is on the phone as well, and I think we would both say
that we have a certain bias in our perspective on this, which is of
course we are the people at the end of the day that have to send
people home when they present with chest pain and possible
myocardial infarction. We are also, of course, the people that take
the fall if there are any mistakes made. Historically, people have
not been very kind to emergency physicians who miss such a
diagnosis. It's an extremely common source of medico-legal action in
the United States and, in fact, worldwide. So we're somewhat
paranoid as a speciality about missing cases of myocardial
infarction because at the end of the day, the worst thing that can
possibly happen is for you to send someone home who comes to harm
from the very clinical complaint for which they came to you for
help. We want to avoid that at all costs and that was the basis
behind us trying to put together this paper.
Soon after the ESC guidelines came out and I returned from London,
where they were announced at the conference, to New Zealand, I
received quite a lot of phone calls and correspondence saying,
"Okay, we see these new ESC guidelines are out. When are we going
to start introducing them?" I immediately wanted to say, "Well,
the key thing is to understand how they would work, how they would
be implemented, and whether they'd work in my own setting" because
if we want to implement them in New Zealand or Australasia, we
would want to double-check on that first. That's the basis and the
philosophy behind the manuscript.
Carolyn:
Tell us what you found.
Martin:
As Allan will be the first to point out, I think there are a number
of flaws in the data we had available to us for this analysis, but
our starting point was that, when we've surveyed emergency medicine
physicians, the sensitivity that was wanted was at least 99%, if
not higher. We found that neither of the algorithms
produced that level of sensitivity, although the algorithm based on
hsTnI was very close. I think it's 98.8%, so that was very good.
Reasonably wide confidence intervals on that. The hsTnT algorithm
performed slightly less well with a sensitivity around 97%. I
guess, if I was to start with the a priori question, which is did we
reach a standard of 99%, then our answer was, in one case, not
quite, and in the other case, no, we probably didn't. We said that
if you wanted to use the metric of negative predictive value, which
I know a lot of people do, then there was actually very good
negative predictive value, in the high-99% range, for both
pathways.
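To see why the two metrics can point in different directions, here is a minimal sketch with made-up counts, not the study's data, showing how sensitivity can fall below the 99% target while negative predictive value still sits in the high-99% range.

```python
# Sensitivity vs. NPV on a hypothetical rule-out cohort (made-up counts,
# not data from the study).

def sensitivity(tp, fn):
    # Of all true MIs, what fraction did the algorithm catch?
    return tp / (tp + fn)

def npv(tn, fn):
    # Of all patients the algorithm ruled out, what fraction truly had no MI?
    return tn / (tn + fn)

tp, fn = 294, 6     # 300 true MIs in the cohort, 6 missed by the rule-out arm
tn = 1000           # patients correctly assigned to rule-out

print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # 98.0% -- misses the 99% target
print(f"NPV:         {npv(tn, fn):.1%}")          # 99.4% -- still looks reassuring
```

Because the rule-out group is dominated by true negatives, a handful of missed MIs barely moves the NPV even though it costs a full percentage point of sensitivity.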
Carolyn:
Do you mind if I stretch you a little bit and ask you to describe
exactly what you did in the cohorts? You were saying that there
were some imperfections. Maybe you'd like to tell us a little bit
about that.
Martin:
Absolutely. As always, when you're writing a paper, you look back
and you always feel there are far too many imperfections, but I
guess the principal one, I would say, that's been noted is that we
had samples done on arrival, and the algorithm itself specifies a
[inaudible 00:11:43] one-hour second sample. We didn't have those
specimens, so we had to base our data analysis on samples done
either at 90 minutes or two hours afterwards. It was clearly not
tested exactly as written, although one could argue that slightly
delayed sampling is potentially reflective of real life, where,
one, it's very hard to hit a one-hour mark in a busy emergency
department, and two, the slight delay in getting the samples would
actually allow more time for troponin to rise and therefore give a
chance of better sensitivity.
I think the other, I guess, key flaw is that, of course, people
present to emergency departments at different time frames following
the onset of their symptoms. There's been some valid concern raised
that algorithms may not necessarily perform as well in very early
presenters. In fact, that is something that's being emphasized now
in the ESC guidelines.
Carolyn:
Right. Allan, I loved your editorial. You did mention a couple of
these points. Would you like to maybe clarify your view of
this?
Allan:
I think that there are two or three terribly important issues. We
all would like to have very facile algorithms. Particularly given
we're moving to high-sensitivity assays, the idea would be, gee,
wouldn't it be nice to have something really simple that works
perfectly? If
you look at the validation and the way the algorithm has been put
together, immediately there are some concerns that people ought to
have and that at least we tried to point out, that were important.
One of them Martin has already discussed a little bit, which is
that if one looks at most of the validation studies, there are very
few patients who are evaluated very early after the onset of their
symptoms. That's a potential problem because, since the algorithms
use very low values or very small changes, the overlap that could
exist with people who have real disease is in those very early
presenters. The initial algorithm from the ESC used both a very low
level troponin and a set of change criteria. Actually, when they
published those criteria, they changed that and eliminated the
very-low-value rule-out, at least for patients within the first
three hours of symptoms. If one looks
at Martin's study, it was again, the very early patients who
potentially may have been missed. I think we need more data before
we go ahead and acknowledge that this will work for those early
presenters.
There are two other problems with the population that we need to be
careful about. It's been well known, all the way back to
[Chrisann's 00:14:26] original article in the '90s, that when you
have a negative troponin at six hours, you're pretty safe. The
population that you'd really like to look at is the patient who,
with the two-hour sample in Martin's study, since he took a little
bit longer given the logistics in New Zealand and Australia, came
in at four hours, because by six they're actually meeting that
six-hour criterion. When you have a large number of other such
patients, you simply add noise, and it makes your sensitivity look
better, but it's not necessarily the case that it gives you the
same degree of reassurance that ED physicians would like.
The third population-related issue is that you'd like to do this in
all-comers. The protocol was developed for chest pain patients, but
there are a variety of patients in whom we evaluate for myocardial
infarction who may not qualify for that. The patients who are
critically ill, for example, who may have Type 2 infarctions. The
individuals who may come in who are very elderly, who often don't
have chest pain so we don't identify them necessarily as a rule
out. Interestingly, if you start thinking about those groups, they
tend to have much higher troponin, so they may well skew the
cut-offs that are used and change the algorithm.
In truth, we don't want more than one way of defining myocardial
infarction. We only want one algorithm for ruling in and ruling
out. Having an all-comers study, in my way of thinking, would be
important. In that same regard, let me point out that you can rule
out myocardial infarction because you don't have an acutely
changing pattern of troponin elevations, but do you really rule in
myocardial infarction? You rule in acute cardiac injury. It could
be myocarditis; it could be apical ballooning. There are a whole variety
of other types of disease entities that could be involved and the
arbitrary value of 52 ng/L that was put in the algorithm really, I
think, is much too low for two reasons. One reason, because it
didn't include all-comers. A second reason is because of the way in
which the comparison between troponin T and I was done. I'll talk
about that in just a moment. I would point out that using a
different assay, the troponin I assay, in another set of studies,
another group from Hamburg has suggested that very different
metrics would be much better.
The final thing to say about extrapolation between the assays, and
then I have some suggestions about what would make this better if
you want to go there now or we can wait, is that the metrics for
troponin I really weren't developed by using troponin I as a gold
standard. It was by taking troponin T as the gold standard for the
diagnosis, then thawing samples many years later, running troponin
I, and then extrapolating from the gold standard of troponin T to
troponin I.
Well, there are several problems with that. Number one is that
appropriate comparisons should be done on fresh samples. In
addition, we believe, from the way in which we think about high
sensitivity, which may not be correct, that the troponin I assay
should be more sensitive, and in [inaudible 00:18:05] fact, in the
papers that were done validating this approach or attempting to
describe the approach, troponin I was wildly more sensitive than
was troponin T. So we're extrapolating data in a way that doesn't
really fit the information we have; taken at face value, it would
mean all of the troponin I validation studies are incorrect.
That's where those numbers came from and even more problematic are
the change numbers, which are very low. For the troponin T assay,
they're 3 and 5 ng/L for ruling out and ruling in, which, if you
look at the assay imprecision, is something the assay can't do.
Now you're extrapolating them in a very, very loose manner to
troponin I and making them even lower. Those are not doable sorts
of things. There's a real problem with the way in which the metrics
for troponin I, even though it performed well in this circumstance,
ended up being developed. I think all of those things need to be
taken into account when we look at the results of the study. The
results that Martin and his group got are very similar to the other
validation studies that have been done because they've all done it
pretty much the same way. It's not a surprise that their
validation is similar, but I think unfortunately, we didn't have an
opportunity to unmask, in a data-driven way, the problems that I
just described.
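For readers who want to see the decision logic being debated, here is a minimal sketch of the ESC 0/1-hour triage for hsTnT, using the 52 ng/L rule-in cut-off and the 3 and 5 ng/L change criteria discussed above. The 5 and 12 ng/L rule-out levels are the published ESC figures, quoted here for illustration only; this is not clinical software.

```python
# Sketch of the ESC 0/1-hour hsTnT triage logic. Cut-offs are the
# published ESC values (52 ng/L rule-in; 3 and 5 ng/L change criteria;
# 5 and 12 ng/L rule-out levels); illustration only, not clinical software.

def esc_0_1h_hstnt(t0: float, t1: float) -> str:
    """Classify a patient from the 0h and 1h hsTnT concentrations (ng/L)."""
    delta = abs(t1 - t0)
    # Note: the ESC restricts the very-low-value rule-out in early
    # presenters (<3h from symptom onset), per the discussion above.
    if t0 < 5 or (t0 < 12 and delta < 3):
        return "rule-out"
    if t0 >= 52 or delta >= 5:
        return "rule-in"
    return "observe"  # intermediate "observational" zone

print(esc_0_1h_hstnt(4, 4))     # rule-out: very low baseline value
print(esc_0_1h_hstnt(30, 38))   # rule-in: 8 ng/L rise within the hour
print(esc_0_1h_hstnt(20, 21))   # observe: neither criterion met
```

Allan's point about imprecision is visible here: the rule-out and rule-in deltas are only 2 ng/L apart, within the analytical noise of the assay at these low concentrations.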
Carolyn:
Thank you Allan. Deborah, if you could share your thoughts on
this.
Deborah:
Martin raises some valid issues: if something goes out as an
algorithm, people want to use it. That use needs to be predicated
on whether it works in their patient population, whether it's
feasible in the time frame, whether it can be adopted safely, and
what the indications are. In the emergency department, the value really is
the negative predictive value because we want to be able to safely
send people home. That's where rapidity of an evaluation is very
important.
The other issue raised was exactly what Dr. Jaffe talked about.
Does the algorithm itself reflect what we really need? Can you
validate something that was created in a scientific way but is
really a combination of a lot of information? Are the thresholds
themselves really valid? That's the challenge with it. I think what
you heard here are the two issues we struggle with. We have
a very respectable organization putting out an algorithm that is
scientifically based and we want to adopt early, but there are
questions on both sides of the issue on whether it can be adopted
into real-world clinical practice on a global scale, where the
prevalence of disease differs and the patients it'll be applied to
vary, whether in time of presentation or overall demographics.
Also, on the scientific side, on the assays themselves, are we
using the right cutoffs, especially when we're looking at deltas
and at such a rapid change? It's very nice to hear both of those points
so eloquently described today during the discussion.
Carolyn:
Thanks Deb. I fully agree. Hence, again, the importance of this
paper. Martin, I'd love to hear your responses to Allan's comments
and then also share with us, what's the take-home message for you
as a clinician? How are you applying what you just found?
Martin:
The guidelines are along the right lines; it's just, as I said,
they may not necessarily translate to all other environments. I
guess that's my take-home message to myself: if I were to look at
my own data from my own center, in Christchurch, and the way it's
applied here, and if the ESC guidelines had met the metrics I was
satisfied with, which I guess would be a very high sensitivity for
me in terms of rule-out, then I would actually seriously consider
implementing them in my own center.
They didn't reach that threshold, so now I want to try and refine
or explore further how I could allow the guidelines to do that. For
example, one thing that is in the guidelines, but not necessarily
in the flow chart, is the importance of applying clinical judgment
and clinical findings alongside the results of the algorithm. I
think that's a very important step. For example,
if I was going to apply this in my own center, I'd want to be
setting out clearly for the doctors concerned, how one would
incorporate clinical judgment rather than it being a very
subjective thing, which might vary significantly between a junior
doctor or a far more experienced one.
I guess the take home message for me is this. The ESC guidelines
are a very important piece of work. They've been robustly
developed. For people who want to implement them, I'm not saying
don't use them at all. I'm just saying, you know, think carefully
about how you would use them and check whether you think they're
appropriate for your setting.
Carolyn:
That's great. Allan, what about you? What are your thoughts on how
this may be applied in clinical practice and what more needs to be
done?
Allan:
I think we need to have a real trial where patients are managed
based on the results of these approaches rather than more
observational studies. I would argue that those management trials
should involve an all-comers sort of population, so we are
comprehensive, and should also interrogate whether or not the
protocol itself is adequate or whether or not it requires follow-up
to meet the metrics that have been proposed. I would point out that
in the past, the studies from the group from New Zealand, and
Martin Than particularly, have had very, very good follow-up. One
at least needs to ask the question whether or not the algorithms
that are proposed work perfectly without any follow-up or whether
or not follow-up is an important component. We don't know that
yet.
Carolyn:
Thanks Allan. I'd love to give the final words to Deb. Take home
messages?
Deborah:
You know, I think that we need to look at this as a positive in
that we're looking at time frames that provide a rapid evaluation
and the discussion is around safety. As long as we keep focused on
appropriate evaluations for the patients and applying the right
algorithm to the right patient, we're going to benefit the care of
those we're really concerned about. I appreciate the work that both
Martin and Allan have done on really pointing out how we can do
that in a great manner.
Carolyn:
Thank you, all of you, for joining us today. I mean, it's been such
an enlightening conversation. I'm sure the listeners have enjoyed
it and thank you listeners for tuning in. Don't forget to tune in
again next week.