
Circulation on the Run


Jul 20, 2020

This week’s episode of Circulation on the Run features author Robert Yeh and Associate Editor Brendan Everett as they discuss the article "Use of Administrative Claims to Assess Outcomes and Treatment Effect in Randomized Clinical Trials for Transcatheter Aortic Valve Replacement: Findings from the EXTEND Study."

TRANSCRIPT

Carolyn Lam: Welcome to Circulation on the Run, your weekly podcast summary and backstage pass to the journal and its editors. I'm Dr Carolyn Lam, Associate Editor from the National Heart Center and Duke National University of Singapore.

Greg Hundley: And I'm Greg Hundley, Associate Editor, Director of the Pauley Heart Center at VCU Health in Richmond, Virginia. Well, Carolyn, this week we're going to examine outcomes in patients who have undergone transcatheter aortic valve replacement, or TAVR. I can't wait to get to the results from the EXTEND study. But before we do that, how about we grab a cup of coffee and start in with some of the papers, and maybe I'll go first this time.

My paper involves a validated model for sudden cardiac death risk prediction in pediatric hypertrophic cardiomyopathy, and the corresponding author is Dr Seema Mital from the Hospital for Sick Children. Well, Carolyn, in this study the objective was to develop and validate a sudden cardiac death risk prediction model in pediatric hypertrophic cardiomyopathy to guide sudden cardiac death prevention strategies.

To address this, the authors performed an international multi-center observational cohort analysis. Phenotype-positive patients with isolated hypertrophic cardiomyopathy who were under the age of 18 years at diagnosis were eligible. The primary outcome variable was the time from diagnosis to a composite of sudden cardiac death events at five years of follow-up, which included sudden cardiac death, resuscitated sudden cardiac arrest, and aborted sudden cardiac death, that is, an appropriate shock from a primary prevention ICD.

Carolyn Lam: Nice. What did they find?

Greg Hundley: Well, overall, 572 patients met the eligibility criteria, with 2,855 patient-years of follow-up. The five-year cumulative proportion of sudden cardiac death events was 9%. Risk predictors included age at diagnosis, documented non-sustained ventricular tachycardia, unexplained syncope, septal diameter Z-scores, LV posterior wall diameter Z-scores, LA diameter Z-scores, peak LV outflow tract gradients, and the presence of a pathogenic variant.

Now, unlike in adults, the LV outflow tract gradient had an inverse association with sudden cardiac death, and family history of sudden cardiac death had no association. A model combining the clinical and genetic data was developed to predict five-year freedom from sudden cardiac death.

In conclusion, the authors' study provides a validated sudden cardiac death risk prediction model with over 70% prediction accuracy that incorporates risk factors unique to pediatric hypertrophic cardiomyopathy. These results therefore raise the possibility that an individualized risk prediction model has the potential to improve the application of clinical practice guidelines and shared decision making for these children prior to ICD insertion.
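For listeners who want a concrete feel for how a model like this is built, here is a minimal sketch of fitting a Cox proportional hazards model for the five-year sudden cardiac death composite. It is not the authors' code or data: the column names are hypothetical, the patients are simulated, and the lifelines package is simply one convenient tool for this kind of fit.

```python
# Hypothetical sketch (not the authors' code or data): a Cox proportional
# hazards model for the 5-year sudden cardiac death (SCD) composite in
# pediatric HCM, fit on simulated data with the lifelines package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500  # simulated patients

df = pd.DataFrame({
    "age_at_diagnosis":    rng.uniform(1, 18, n),
    "nsvt":                rng.binomial(1, 0.15, n),   # documented non-sustained VT
    "unexplained_syncope": rng.binomial(1, 0.10, n),
    "septal_z":            rng.normal(5, 3, n),        # septal diameter z-score
    "la_diameter_z":       rng.normal(2, 1.5, n),      # left atrial diameter z-score
    "peak_lvot_gradient":  rng.uniform(0, 80, n),      # mm Hg
    "pathogenic_variant":  rng.binomial(1, 0.5, n),
})

# Simulated event times: higher hazard with NSVT, syncope, larger z-scores,
# a pathogenic variant, and a *lower* LVOT gradient, to echo the inverse
# association described above.
log_hazard = (0.6 * df["nsvt"] + 0.5 * df["unexplained_syncope"]
              + 0.10 * df["septal_z"] + 0.15 * df["la_diameter_z"]
              + 0.4 * df["pathogenic_variant"] - 0.01 * df["peak_lvot_gradient"])
event_time = rng.exponential(1.0 / (0.02 * np.exp(log_hazard)))
df["time"] = np.minimum(event_time, 5.0)              # administrative censoring at 5 years
df["scd_event"] = (event_time <= 5.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="scd_event")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Predicted 5-year freedom from the SCD composite for one patient.
covariates = df.drop(columns=["time", "scd_event"])
print(cph.predict_survival_function(covariates.iloc[[0]], times=[5.0]))
```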

Carolyn Lam: Very interesting. Well, Greg, have you ever wondered about the temporal trends in the burden of comorbidities and the risk of mortality among patients with heart failure with preserved ejection fraction, or HFpEF, and heart failure with reduced ejection fraction, or HFrEF? Well, the next paper comes from Dr Caughey and colleagues from the University of North Carolina and North Carolina State University, who performed an analysis of the community surveillance component of the Atherosclerosis Risk in Communities, or ARIC, study, and they found a significant increase in the burden of comorbidities among hospitalized patients with HFpEF as well as HFrEF across both sexes. A higher number of comorbidities was associated with a higher risk of one-year mortality, with a stronger association noted among patients with HFpEF compared to HFrEF. The one-year mortality risk associated with increasing comorbidity burden also increased over time.

Greg Hundley: Interesting, Carolyn. So more comorbidities in HFpEF versus HFrEF. How do we use this clinically?

Carolyn Lam: This study demonstrated a shift from ischemic-etiology heart failure to multimorbidity heart failure over time, particularly among patients with HFpEF. This really highlights the importance of a holistic approach in targeting multimorbidity burden and guiding the management of patients with heart failure.

Greg Hundley: Very interesting. Well, Carolyn, my next paper comes from Professor Matthias Nahrendorf from Mass General Hospital and involves the relationship between bone marrow endothelial cells and myelopoiesis in those with diabetes.

Carolyn, this study investigated the role of bone marrow endothelial cells in the diabetic regulation of inflammatory myeloid cell production. The authors utilized three mouse models of diabetes: a streptozotocin model, a high-fat diet model, and genetic induction using leptin receptor-deficient mice. They assayed leukocytes, hematopoietic stem and progenitor cells, and endothelial cells in the bone marrow with flow cytometry and expression profiling.

Carolyn Lam:  What did they find?

Greg Hundley: Well, in diabetes, they observed enhanced proliferation of hematopoietic stem cells, leading to augmented circulating myeloid cell numbers. Analysis of bone marrow niche cells revealed that endothelial cells in diabetic mice expressed less CXCL12, a retention factor promoting hematopoietic stem and progenitor cell quiescence. Transcriptome-wide analysis of bone marrow endothelial cells demonstrated enrichment of genes involved in epidermal growth factor receptor (EGFR) signaling in mice with diet-induced diabetes.

In summary, Carolyn, in diabetes, bone marrow endothelial cells participate in the dysregulation of bone marrow hematopoiesis. Specifically, diabetes reduces endothelial production of CXCL12, a quiescence-promoting niche factor that restrains stem cell proliferation. The authors also describe a previously unknown counter-regulatory pathway in which protective endothelial EGFR signaling curbs hematopoietic stem and progenitor cell proliferation as well as myeloid cell production.

Carolyn Lam: Wow, thanks for explaining all of that, Greg. For this next paper, we're going to switch tracks a little. This comes from Dr Drakos and colleagues from the University of Utah in Salt Lake City. They noted that significant improvements in myocardial structure and function have been reported in some advanced heart failure patients, whom they call responders, following left ventricular assist device-induced mechanical unloading.

This therapeutic strategy may alter myocardial energy metabolism in a manner that reverses the deleterious metabolic adaptations of the failing heart. Dr Drakos and colleagues hypothesized that the accumulated glycolytic intermediates are channeled into cardioprotective and repair pathways, which may mediate myocardial recovery in these responders.

To test this hypothesis, they prospectively obtained paired left ventricular apical myocardial tissue from non-failing donor hearts, as well as from responders and non-responders at left ventricular assist device implantation and at transplantation. They conducted protein expression and metabolic profiling and evaluated mitochondrial structure using electron microscopy.

Greg Hundley:   Interesting. What did they find, Carolyn?

Carolyn Lam: The recovering heart appears to direct glycolytic metabolites into the pentose phosphate pathway and one-carbon metabolism, which could contribute to cardioprotection by generating NADPH to enhance biosynthesis and by reducing oxidative stress. This new information could direct future translational efforts to identify novel therapeutic targets for myocardial recovery in patients with chronic heart failure.

Well, Greg, can I tell you a little bit more about what else is in this issue? There's a letter by Dr Wang regarding the article, A Novel Role of Cyclic Nucleotide Phosphodiesterase 10A in Pathological Cardiac Remodeling and Dysfunction, and there's also a response by Dr Yan. In Cardiovascular Case Series, there's a paper by Dr Michelena on the nosology spectrum of the bicuspid aortic valve condition, the complex presentation of valvular aortopathy. That's so interesting.

There's a research letter by Dr Gaudino on the response of cardiac surgery units to COVID-19, an internationally based quantitative survey, as well as another research letter by Dr Salem on cardiovascular toxicities associated with hydroxychloroquine and azithromycin, an analysis of the World Health Organization pharmacovigilance database.

There is a perspective piece by Dr Jacobs entitled The Temporary Emergency Guidance to STEMI Systems of Care During the COVID-19 Pandemic: AHA's Mission: Lifeline.

In cardiology news, Tracy Hampton reviews three papers: one, Video-Based AI for Beat-to-Beat Assessment of Cardiac Function, in Nature, 2020; two, Dynamic Transcriptional Responses to Injury of Regenerative and Non-regenerative Cardiomyocytes Revealed by Single-Nucleus RNA Sequencing, in Developmental Cell, 2020; and three, ATP and Voltage-Dependent Electro-Metabolic Signaling Regulates Blood Flow in the Heart, in the Proceedings of the National Academy of Sciences, 2020.

Greg Hundley: Very nice. Well, I've got an in-depth review from Dr Bin Zhou regarding heart regeneration by endogenous stem cells and cardiomyocyte proliferation: controversy, fallacy, and progress. And then there are three On My Mind pieces. The first is from Dr Rashmee Shah regarding machine learning and artificial intelligence: do we need more data, or do we need the right data? The next one is from Sharon Reimold and discusses the importance of gathering historical information on risk factors when seeing patients with, or suspected of, COVID-19. And finally, Dr Prateeti discusses ethical challenges in cardiology during the COVID-19 pandemic.

Well, Carolyn, what a great review. How about we proceed to that feature discussion?

Carolyn Lam: Let's go.

Greg Hundley: Well, listeners, we are now turning to our feature discussion, and this week we have Dr Robert Yeh, also known as Bobby Yeh, from Beth Israel Deaconess Medical Center, and our own associate editor, Dr Brendan Everett, from Brigham and Women's Hospital. Welcome, gentlemen. Bobby, let's start with you. Can you tell us a little bit about some of the background related to your study, and then what hypothesis were you trying to address?

Robert Yeh: The study that we performed is a substudy of what we're calling the EXTEND study, which is an NIH-funded group of investigations meant to examine the value of real-world data and how it can augment clinical trial evaluations of medical devices and therapies.

We know that randomized clinical trials remain the gold standard for therapeutic evaluation, but they are expensive, difficult to do, and sometimes impractical. Real-world data are cheaper, and it's potentially more efficient to do observational research studies; in fact, the 21st Century Cures Act explicitly asks, among other things, that the FDA explore the use of real-world data for regulatory evaluations.

Of course, people have problems with real-world data; they have their own inherent challenges and are really subject to confounding. What we thought about is that there's probably a middle ground that we and others have proposed: can real-world data somehow supplement or augment randomized clinical evaluations, and in particular, for our question, can real-world data provide the outcomes in place of adjudicated clinical trial outcomes?

What we did is take two large pivotal randomized clinical trials of transcatheter aortic valve replacement, namely the CoreValve high-risk and intermediate-risk trials, the latter otherwise known as the SURTAVI trial. We took the patients in those trials and then linked them with real-world data from administrative claims databases in Medicare.

Our hypothesis concerned what would have happened had the trial outcomes been ascertained from Medicare claims instead of the clinical trial adjudicated outcomes. Our main question was, would we have had the same findings within those trials? Would the primary hypotheses of those trials still have been met with this alternative clinical trial endpoint ascertainment strategy?

Greg Hundley: In your study design, how did you accomplish the comparisons? You've told us a lot about the study population. Was this everyone from those two studies or was this a subgroup of them? Maybe just expand on that a little bit.

Robert Yeh: Good question. This is a US-based comparison, so we have claims for US patients, and most patients in these trials were in the United States. For the CoreValve and SURTAVI trials, we took all of these patients and then tried to find those who we could also identify in Medicare claims.

It turns out that in order to qualify for Medicare, you have to be over 65 years of age, under most circumstances. So it's really limited to those patients over age 65 whom we could then search for in Medicare. And within Medicare, there are two types of insurance, fee-for-service and Medicare Advantage managed care, and we can only find those patients who are in Medicare fee-for-service, which represents somewhere between two thirds and three quarters of patients age 65 or older. So it is a subgroup of patients in these two large pivotal randomized trials.

We compared those who could not be linked versus those who could, and in large part, apart from some of the age differences, which are just inherent in looking at Medicare, there are really not that many differences between those two groups.
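To make the linkage step more concrete, here is a schematic sketch of deterministic matching on indirect identifiers. The matching keys, column names, and data are hypothetical; they are not the actual EXTEND linkage variables or algorithm.

```python
# Schematic sketch (not the EXTEND linkage code) of matching trial
# participants to Medicare fee-for-service beneficiaries on indirect
# identifiers. All identifiers and data here are made up.
import pandas as pd

trial = pd.DataFrame({
    "trial_id":   ["T001", "T002", "T003"],
    "birth_date": ["1940-03-02", "1938-07-19", "1945-11-30"],
    "sex":        ["F", "M", "M"],
    "proc_date":  ["2012-05-14", "2012-06-01", "2012-06-20"],  # index TAVR/SAVR date
    "site_zip":   ["02215", "10032", "55905"],
})

medicare = pd.DataFrame({
    "bene_id":      ["B9", "B7", "B4"],
    "birth_date":   ["1940-03-02", "1938-07-19", "1950-01-05"],
    "sex":          ["F", "M", "F"],
    "proc_date":    ["2012-05-14", "2012-06-01", "2013-02-11"],
    "site_zip":     ["02215", "10032", "94143"],
    "ffs_enrolled": [True, True, True],   # fee-for-service (not Medicare Advantage)
})

keys = ["birth_date", "sex", "proc_date", "site_zip"]
linked = trial.merge(medicare[medicare["ffs_enrolled"]], on=keys, how="inner")

print(linked[["trial_id", "bene_id"]])
print(f"Linked {len(linked) / len(trial):.0%} of trial participants to fee-for-service claims")
```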

Greg Hundley: Bobby, what did you find?

Robert Yeh: We found those patients. So now we have a situation where we can look at the same group of patients through two lenses: one is the clinical trial lens, and the second is the lens of real-world data, for those exact same patients.

We found that whether we ascertained their outcomes via claims or with clinical trial adjudication, the conclusions for the primary hypotheses were essentially identical: in both scenarios, the transcatheter aortic valve was non-inferior to traditional surgical aortic valve replacement, and the effect sizes, the hazard ratios, and the confidence intervals were roughly the same. In fact, for the primary endpoint of the high-risk pivotal study, which was all-cause mortality, it was identical. It turns out that Medicare claims and what we called the denominator file very accurately identify exactly when a patient dies, and they do so equally well compared to rigorous clinical trial adjudication.

For SURTAVI, the primary endpoint was combined death or stroke. In that case, stroke ascertainment is reasonably accurate, and there were a lot of deaths that also drove that combined endpoint. The net result was really very similar, both the effect sizes and the P values for those primary comparisons.

Greg Hundley: Now, how about the secondary analyses?

Robert Yeh: I think the secondary analyses are where you had some variability. There are some types of outcomes in this device-specific trial that are procedure-oriented. Pacemakers are a concern. Aortic valve reintervention is a concern. Those endpoints in billing claims turn out to be quite accurate. You can understand why: when providers and institutions perform a service that requires insertion of a new device, they want to get paid for those devices, and they do that billing accurately.

But there are other claims that I think are a little bit more subjective, diagnoses that are more subjective, like bleeding or cardiogenic shock, and those actually started to look different. In some cases they started to give you different inferences if you used the clinical trial data versus the real-world data. So if I were to summarize it, I would say that mortality looked absolutely pristine, identical between the two approaches; some diagnoses, particularly procedural ones, looked quite good, sufficient, I think, for an accurate estimation of the treatment effect size as well as the magnitude of the risk; but some endpoints, the softer, more subjective endpoints, were slightly different.
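As a rough illustration of the kind of endpoint-agreement check being described, here is a toy sketch that compares claims-derived event flags against adjudicated flags for each endpoint. The endpoint names and data are invented; the real study computed its comparisons on the linked trial cohorts.

```python
# Toy sketch of an endpoint-agreement check: for each linked patient, compare
# a claims-derived event flag against the trial-adjudicated flag, by endpoint.
# Data are fabricated so that "death" and "pacemaker" agree well and the softer
# "bleeding" endpoint does not.
import pandas as pd

events = pd.DataFrame({
    "endpoint":    ["death"] * 6 + ["pacemaker"] * 6 + ["bleeding"] * 6,
    "adjudicated": [1, 1, 0, 0, 1, 0,   1, 0, 1, 0, 0, 1,   1, 1, 0, 0, 1, 0],
    "claims":      [1, 1, 0, 0, 1, 0,   1, 0, 1, 0, 0, 1,   1, 0, 0, 1, 0, 0],
})

def agreement(g: pd.DataFrame) -> pd.Series:
    # Sensitivity and specificity of the claims flag against adjudication.
    tp = ((g.adjudicated == 1) & (g.claims == 1)).sum()
    fn = ((g.adjudicated == 1) & (g.claims == 0)).sum()
    fp = ((g.adjudicated == 0) & (g.claims == 1)).sum()
    tn = ((g.adjudicated == 0) & (g.claims == 0)).sum()
    return pd.Series({
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    })

print(events.groupby("endpoint")[["adjudicated", "claims"]].apply(agreement))
```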

Greg Hundley: Thank you so much, Bobby. Now we're going to turn to our associate editor, Dr Brendan Everett, who has helped work this article through the entire editorial process and is also an expert epidemiologist. Brendan, we have randomized trial data versus real-world data. How do you interpret these results in the context of how we're conducting studies now and how we will conduct them in the future?

Brendan Everett: If you think of observational research and clinical research on a spectrum, with truly just observational studies on one end, where you're trying to look at an exposure and an outcome and adjust for potential confounders, and tightly controlled randomized trials on the other, Bobby's group has managed to create a hybrid, which I think gives us the opportunity not to have to be at one end or the other of that spectrum.

What I mean by that is that there are a couple of key features of trials that are retained in the approach that Bobby and his group used. He mentioned those, but I want to emphasize them. I think the key thing is that there's a randomization step. From the perspective of an epidemiologist, that's key, because it balances, between the people who get your therapy, in this case a TAVR, and those who don't get the new therapy, the confounders that you can measure and the confounders that you can't measure, the unmeasured confounders. So it allows some balance between the two treatment groups so that you can be sure, at least at baseline, that they're similar groups, and what you're measuring after that point is the effect of the intervention.
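A quick simulation can illustrate this point about randomization balancing even the confounders nobody measures; the variable names and numbers below are purely illustrative.

```python
# Illustrative simulation: with random assignment, a confounder that is never
# recorded still ends up balanced between the two arms, so any outcome
# difference between arms reflects the treatment rather than that confounder.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

unmeasured_frailty = rng.normal(0, 1, n)   # a characteristic nobody records
arm = rng.binomial(1, 0.5, n)              # 1 = TAVR, 0 = surgery, assigned at random

print("mean frailty, TAVR arm:   ", round(unmeasured_frailty[arm == 1].mean(), 3))
print("mean frailty, surgery arm:", round(unmeasured_frailty[arm == 0].mean(), 3))
# Both means sit near zero, so the arms are comparable at baseline even on
# this unmeasured characteristic.
```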

The key piece that Bobby replaced is the classical trial-ascertained endpoints, where investigators are asked whether their patient had one of these outcomes, such as a stroke or a death, and then those events are adjudicated independently.

As he pointed out, many of those outcomes, at least in this particular application, are really well collected by billing data. In fact, some might argue that in some cases they're actually better collected; there's a higher sensitivity, if a somewhat lower specificity, for the events of interest.

I think the key question, and you touched on this, Greg, is what about the outcomes that maybe are not collected quite as well by billing data? In particular, remember that any clinical trial is looking at both the efficacy of a novel treatment as well as its safety. You ultimately, at the end of the trial, want to be able to compare efficacy with safety, to make a decision, in this case from the FDA, a regulatory decision about whether to approve the device or the drug.

The question becomes, what safety events are you worried about and how reliably are you going to be able to collect them with claims data? In this case, I think Bobby mentioned that the bleeding data maybe was not quite as good as some of the other safety concerns that are common in TAVR.

I think when you look to apply this approach, which I think is ingenious, to a different research question, you have to ask whether the endpoints you're collecting, both the efficacy endpoints and the safety endpoints, will be captured in a valid, consistent, and sensitive way with claims data as compared with the traditional trial ascertainment process. In this case, I think they were, but that's not always the case. We can all think of examples where you might run into some trouble depending upon what your endpoints are.

Greg Hundley: Well, gentlemen, this has been a really informative study to present and talk about in this feature discussion. I want to ask you both, just briefly, in a minute or so: what do you see as the next step forward in research in this particular area? Maybe Bobby, you first, and then we'll follow with Brendan.

Robert Yeh: I think there are a couple of different areas that really need to be pushed forward. One, and Brendan alluded to this, is that because these studies are really domain-specific, this validation does not tell us that all claims can be used to answer all questions. In this particular question it worked, but in others it might not, so more validation work in different fields and different randomized trials needs to be done.

We're doing some of those, but they need to be done throughout so we can really get a better sense of what types of questions are best answered by this type of linkage approach.

The second is more operational: we were limited to Medicare claims data, so for questions about patients who are younger than 65, this approach just doesn't work, whereas a place like Sweden can use a large national registry, like they did in the TASTE randomized clinical trial, and do this for their entire country.

We do need to develop better systems that have comprehensive real-world data collection. Maybe those involve more consolidated electronic health record data that are available in big integrated health systems, for example, but a better system needs to be developed that can answer questions among more than just Medicare fee-for-service patients.

Greg Hundley:   Very good. And Brendan?

Brendan Everett: Well, I think it's a really promising approach to trying to lower the cost of clinical trials and to do valid research on the effect of some treatments as compared to others. From my standpoint, we have to be careful that we don't try and shortcut the process too much. In particular, I think the randomization step, at least for novel treatments, is of fundamental importance. And of course, to do that, you have to collect a population that is then willing to be randomized to option A or B.

There's a lot of upfront work that is not eliminated by using this technique to look solely at the outcomes: the work to enroll the patients and then randomize them.

I think it's also important, as we saw recently, that the quality and validity of the database be ascertained and well established, both by the investigators and by the providers of the database. We can see that sometimes, if you're not careful, you can come up with outcomes that are not correct. The example, of course, that I'm alluding to is the two papers in the Lancet and the New England Journal that had to be retracted; those were large database studies as well.

The quality of the underlying data remains paramount. That, of course, is where a lot of the elbow grease comes in. It's not just in the ascertainment of the events, but a lot of the stuff that leads up to counting the events at the end of the study.

Greg Hundley:   Well, listeners, this has been just a superb discussion. On behalf of Carolyn and myself, we wish you another great week and look forward to catching you on the run next week. Take care.

This program is copyright the American Heart Association, 2020.