NIMH CMN Machine Learning Talk Series: Christos Davatzikos 2020/02/11


Hello everybody, it's my pleasure to introduce today's speaker, Dr. Christos Davatzikos. He's the Wallace Miller Professor of Radiology at the University of Pennsylvania and the director of the Center for Biomedical Image Computing and Analytics. He is an IEEE Fellow and a fellow of the American Institute for Medical and Biological Engineering. Dr. Davatzikos holds a PhD in electrical and computer engineering from Johns Hopkins University, and after that he joined the faculty there in the radiology and computer science departments, where he founded and directed the Neuroimaging Laboratory. In 2002 he moved to the University of Pennsylvania, where he also founded and currently directs the Section of Biomedical Image Analysis. His interest is mostly in medical image analysis: he oversees diverse research programs ranging from basic problems of image pattern analysis to machine learning in a variety of clinical studies of glioblastoma, aging, Alzheimer's disease, and schizophrenia. Today he's going to talk to us about his own experience using different machine learning techniques to find imaging signatures of early stages of neurodegenerative disease, as well as how to apply those methods to develop precision diagnostics for brain cancer. So please help me welcome Dr. Davatzikos.

Thank you, thanks so much, Javier, for the introduction, and it's great to visit this beautiful campus and the nice group here at NIMH. So I got involved in machine learning back in the early 2000s, in the dark ages, when everybody was showing me the door as I tried to convince them that machine learning and multivariate analysis were the way to go.
At that time, one of the very first papers was a conference paper by Polina Golland and others, and I think Francisco Pereira and Tom Mitchell had one of the very first papers in brain decoding. We had a paper around 2003 on classification of MRI scans of men versus women and older people versus younger people, fairly mundane things, you don't need an MRI scan to do that, but we had to start from the basics. Hopefully we're doing some more interesting stuff now. Then we ran some experiments with fMRI where we showed that you can look at the pattern of brain activity and tell whether someone is telling the truth or lying. This was a mistake, because there was a company that took this idea and claimed they could catch cheating spouses or criminals, so I got out of that quickly, but we continued primarily with structural brain MRI scans. In 2005, in the Archives of General Psychiatry, we showed, I think for the first time, that there is a fairly robust neuroanatomical imaging signature of schizophrenia that can be identified on an individual patient basis. Up to that point there had been a lot of studies doing group analyses comparing patients versus controls, but it wasn't clear whether you could actually see these neuroanatomical differences or peculiarities on an individual person basis. After that, of course, there have been a whole bunch of things that machine learning has been useful for. Now the pendulum has swung the other way and everybody wants to do machine learning, which is also a mistake. I think we have to land somewhere in the middle: it's one of many tools in our toolbox, and as long as we use it right, for the right tasks, I think it's very useful.
So machine learning can do a lot of things. One of the areas where deep learning in particular has completely reshaped what we've been doing is image segmentation. This is one of the U-Net variants that Jimit Doshi in my group has developed. It's replacing everything we were doing for years, from segmentation of brain tumors and skull stripping, to segmentation of small-vessel ischemic disease, to segmentation of the hippocampus and other brain structures. So now we're basically redoing our segmentation pipelines, like everybody else, because for this type of problem deep learning is great. But this is not what I am primarily interested in, actually, because these are things that we can see: we know where the hippocampus is, we know where these lesions are. You have a tool that can do this systematically and reproducibly, and that is great, but it still doesn't augment our knowledge much in terms of pointing the way to something that we don't see. Machine learning, however, can actually tell us a lot about things that we don't see but the machine has seen. In clinical neuroscience, if you look at a structural MRI of a patient with schizophrenia, I think every neuroradiologist will tell you that this is a perfectly normal MRI scan. It doesn't differ in any way from a typical MRI scan of that age range, but this person has a devastating disorder. Something must be different here that we don't see. It turns out, we now know, that computers can see that and can tell us what it is. The same thing with mild cognitive impairment: this is an MRI of a healthy individual of about the same age, and this is basically a normal MRI for that age range, but this person has memory decline and may progress to Alzheimer's disease. We would like to know what's different here. And in oncology, the field of radiomics and radiogenomics has started to completely revolutionize the things we can see in imaging.
These are the different channels that we have: the standard conventional images, T1, T2, T2-FLAIR, a time series from perfusion signals, derivatives from diffusion tensor imaging, and so forth. It turns out that with machine learning, finding the right features in the right combination can actually tell us a lot of things that we never expected, such as whether the tumor has the EGFRvIII mutation or not. We now know that we can actually tell, and I'll show you some examples. Is the MGMT promoter methylated or not? Does it have an IDH1 mutation? What is the expected survival of this patient? How is the tumor infiltrating beyond the visible margins? All of these are things that we can now tell with machine learning, and toward the end of my talk I might have some time to show you a few slides on this. Along these lines, I think what I'm particularly excited about is that machine learning can help us reduce the dimensionality and the complexity of multiparametric images, and of non-imaging data as well, in a way that distills these complex multivariate patterns down to a fairly small number of informative biomarkers. This leads to dimensional neuroimaging. Of course there are a number of dimensionality reduction methods we've all been using, principal component analysis, independent component analysis, non-negative matrix factorization, and so forth, but they try to explain variance, and they reduce the data down to components that we cannot easily interpret. If you look at an ICA component of a structural or functional signal, you can't exactly interpret this reduction of variance. Machine learning, though, can potentially reduce the signals in a different way, one that relates more to a clinical outcome, to a genomic variable, to a genetic variable, and so forth.
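To make the contrast concrete, here is a small synthetic sketch (not the speaker's actual analysis) of why a variance-driven reduction like PCA can miss clinically relevant structure that an outcome-guided reduction picks up. All data and variable names below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "imaging features": the direction of largest variance (feature 0)
# is unrelated to the outcome; a lower-variance direction (feature 1) carries it.
n = 500
X = rng.normal(size=(n, 10))
X[:, 0] *= 10.0                      # dominant but outcome-irrelevant variance
y = (X[:, 1] > 0).astype(float)      # clinical outcome driven by feature 1
X[:, 1] += 0.5 * y                   # weak signal on a low-variance axis

# Unsupervised reduction: first principal component (variance-driven)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

# Outcome-guided reduction: least-squares direction predicting the outcome
w, *_ = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)
score = Xc @ w

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

print(corr(pc1, y), corr(score, y))  # the supervised score tracks the outcome better
```

The point of the sketch is only that a component chosen for explained variance need not relate to the clinical variable, while a reduction trained against that variable does.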
For example, and I'll show you examples of this, we've identified subtypes of schizophrenia in terms of brain structure, and subtypes of mild cognitive impairment and Alzheimer's disease. These can be different dimensions, and they potentially have different prognoses. Structural and functional covariance networks are another way to reduce the data, as are patterns of amyloid deposition. So all of these dimensions can potentially summarize this complex multiparametric data in a simple and fairly interpretable way. And I think we need that, because the complexity of imaging and non-imaging data is becoming unmanageable. The other thing, and I'll come back to this busy slide later on, is that machine learning can actually inform clinical trials, in the sense that if you have personalized predictors you can enroll into a clinical trial people with a much more homogeneous profile, rather than heterogeneous populations, which would need a large sample size to show treatment effects. So a pattern is formed by sampling different parts of the brain and different images, whether that's structural MRI, PET, functional MRI, or other types of images, and you can of course couple it with demographic, genetic, and clinical information. All of these features go into a black box, which remains a black box to a large extent, and the black box finds out which combination of these features gives us the best prediction of the outcome variable that we're interested in. In much of our work we've been calling these SPARE indices, Spatial Pattern of Abnormality for Recognition of something: AD, schizophrenia, and so forth. This is some early work from Alzheimer's disease that shows patterns that produce an index that is positive if you're expressing this neuroanatomical pattern, if you have this type of brain atrophy, and negative if you don't. Same thing for schizophrenia, and I'll show you more specifics of this.
So it turns out that if you were to look at hippocampal volumes or total brain volume or other conventional measurements, you cannot really separate even Alzheimer's patients from controls very well. There are significant group differences, but if you take an individual's hippocampal volume, you cannot tell whether they have AD or not. The SPARE-AD score, on the other hand, can almost perfectly separate healthy controls from patients, this is data from ADNI, and create an ROC that is very close to the accuracy of the clinical categorization into AD and control. The areas that this pattern weights most are of course the hippocampus, the temporal lobe, the medial temporal lobe, some areas you don't see here, some anterior structures, the posterior cingulate, hippocampal white matter, patterns of ventricular expansion, and so forth. All of these go together into a model that says "you have this pattern" or "you don't have this pattern." But of course you don't need an MRI to tell if someone has Alzheimer's disease; it would be too expensive and too uninformative. What you would like these indices to do is predict future outcomes. This is again the SPARE-AD index, looked at in patients with mild cognitive impairment, and these people convert to Alzheimer's disease at a rate of 15 percent or so per year. You would like to know in advance which of the people with MCI are going to convert to Alzheimer's in the future. Just by looking at the SPARE-AD index, which tells you whether or not you have a pattern of atrophy that looks like the one we've seen in Alzheimer's disease, and splitting it into quartiles, you can see that the people who don't have this pattern have a much better prognosis than the people who have the full-blown pattern we've seen in AD patients, with the rest in between.
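The SPARE indices in this work were built with support vector machines on regional volumetric features; the toy sketch below instead uses a plain logistic regression on synthetic "regional volumes," only to illustrate the idea of a signed pattern-expression score and its ROC. Everything here is simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for regional brain volumes: patients show correlated volume
# reductions in a subset of "regions". (The real SPARE models use SVMs on
# many regional features; this is only a sketch of the scoring idea.)
n, d = 200, 20
controls = rng.normal(0.0, 1.0, size=(n, d))
patients = rng.normal(0.0, 1.0, size=(n, d))
patients[:, :5] -= 0.8               # atrophy pattern in 5 regions

X = np.vstack([controls, patients])
y = np.r_[np.zeros(n), np.ones(n)]   # 0 = control, 1 = patient

# Logistic regression by gradient descent as a simple linear classifier
w = np.zeros(d)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

spare = X @ w + b                    # SPARE-like index: > 0 expresses the pattern

# ROC AUC via the rank formulation (probability a patient outscores a control)
auc = (spare[y == 1][:, None] > spare[y == 0][None, :]).mean()
print(round(auc, 3))
```

Note that even though each individual "region" separates the groups poorly, the combined multivariate score does much better, which is the essential point of the multivariate-pattern argument.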
And if you add a couple of simple tests, ADAS-Cog, which is a fairly straightforward cognitive test that can be administered in the clinic, and APOE genotype, which is taken from a blood test, you have a very good prognosis of whether an MCI patient who shows up in the clinic is likely to remain stable for a long time, and this is a five-year period here, or progress very rapidly to Alzheimer's disease, or something in between. What is even more exciting is if you can do that for cognitively normal individuals. This is a paper we had in Brain back in 2009, where we showed, from the Baltimore Longitudinal Study of Aging, a longitudinal study, that if you measure the presence of these AD-like patterns of brain atrophy in cognitively normal individuals, and look at their rate of change, you can predict whether or not they will convert to MCI within an eight-year period or remain stable. Unfortunately everybody had a positive rate of change, none of us is getting younger, but the people who had an increasing prevalence of this AD-like atrophy were almost completely distinguishable from those who didn't, and those are the people who progressed to MCI within an eight-year period. And again, you're talking about potentially two decades before these people become demented that you start to see these signatures, maybe 15 years, because even when they convert to MCI in this prospective study of fairly normal, super-normal, I would say, subjects, it takes several more years before they become demented, at least for some of them. So you're looking at a window of opportunity of at least a decade, perhaps two, before people become demented, and if there is an effective treatment, that's the time to administer it. So these very early markers can be detected, and this is in contrast to what many people believe, that structural change comes last.
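The rate-of-change idea can be sketched as a per-subject least-squares slope over visits; the trajectory below is entirely hypothetical, just to show the computation.

```python
import numpy as np

# Hypothetical longitudinal SPARE-AD-like scores for one subject: one score
# per visit over several years; the slope (rate of change) is the marker.
years = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
scores = np.array([-1.2, -1.0, -0.7, -0.1, 0.5])   # made-up trajectory

# Ordinary least-squares fit: slope is the rate of change per year
A = np.c_[years, np.ones_like(years)]
(slope, intercept), *_ = np.linalg.lstsq(A, scores, rcond=None)
print(round(slope, 3))
```

A positive slope means increasing expression of the AD-like pattern; comparing slopes across subjects is what separates future converters from stable individuals in the analysis described above.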
Of course, if you're measuring brain volume and hippocampal volume you're going to see changes very late, but if you zoom into these more complex patterns you can potentially detect them much earlier. So back in 2005 we had the study in schizophrenia where, as I said earlier, we showed that you can detect a fronto-temporal pattern of, I'm not going to call it atrophy, because atrophy implies longitudinal change, of reduced cortical volumes in a number of areas, which gave pretty decent ROCs back then in a fairly small sample of about 150 people. We have recently expanded this into a consortium, and I'll show you results later on, but that was the first indication that we can pick up these patterns of reduced brain volumes in patients with schizophrenia on an individual patient basis. So machine learning has made a lot of progress, but there are a lot of unsolved challenges. I actually wrote a commentary, an editorial, one or two years back when I stepped down from the NeuroImage editorial board, about many of these challenges, but two of them we're dealing with pretty actively in the lab. One is reproducibility and overtraining on a single dataset: unfortunately a lot of machine learning studies, especially older ones, were shown in single cohorts of maybe one or two hundred individuals, and the findings are not very robust. Sometimes you have a lot of degrees of freedom, you can overtrain, you can overfit the data, and you might be sensitive to specific characteristics of the acquisition protocol that was used in one study versus another, so reproducibility has been a main issue.
The second issue is heterogeneity of biological patterns: if you assume there is a single pattern of disease you're looking at, you might be missing the fact that there are perhaps multiple. And we know there is disease heterogeneity in pretty much everything I've been working with, whether it's aging and cognitive impairment, schizophrenia and other neuropsychiatric disorders, or brain cancer; there's a lot of heterogeneity. So not one pattern fits all. That raises the need for a lot of data from multiple sites, which brings about problems such as harmonization and the need for advanced computational tools that do not require too much babysitting, because if you have 10- or 20,000 scans, you cannot really be prepping your data too closely; you need fairly automated and robust computational tools that are feasible at that scale. To address some of these issues, we've been leading the effort on a couple of consortia. One is iSTAGING, on imaging biomarkers of aging and neurodegenerative diseases, which puts together data from, I think, 13 studies; right now we have more than 30 thousand MRI scans. A subset of them have amyloid PET, and a subset have resting-state fMRI. The age range is from 45 and up, but the majority is in the 60s and 70s, though there is actually a fair number in younger ages as well. Some of them come from 1.5 T scanners and some from 3 T and so forth, so there are challenges with harmonization. The second consortium is the PHENOM consortium, which evaluates heterogeneity in schizophrenia and psychosis more generally, and it involves groups all over the world, from London to Munich to Utrecht to Australia to the U.S. to China, and we're trying to put together all this data.
It's about 3,000 MRI scans, and some of them have longitudinal follow-ups. We're trying to better understand how reproducible and generalizable these imaging signatures of schizophrenia are, and to tease out heterogeneity. We had to solve the problem of harmonization, and we've been using a combination of ComBat, which is a multivariate harmonization method that people in genomics have been using, along with GAMs, generalized additive models using splines to model nonlinear effects such as age effects. This effort has been led by Ray Pomponio, a research assistant in my lab, who is co-supervised by Haochang Shou in biostatistics at Penn. And we managed to obtain reasonable, at least seemingly reasonable, harmonized brain volumetric measurements of different parts of the brain. These are different studies in different colors, and you see the underlying model of aging trends, the one obtained by the spline GAM. Now that we've put the data together, and I apologize that I'm skipping all the technical details, because otherwise I would have to give a lecture on each of these methods, we were able to observe a lot of interesting things that before were limited to small datasets. For example, this is the SPARE-AD score as a function of age in over 5,000 people, and you see that it's negative for the most part, which means that they're normal, they don't express this brain atrophy phenotype, but after some age the trend goes upward, in fact nonlinearly, perhaps exponentially, and a lot of people develop this AD-like atrophy, which of course agrees with the frequency of the disease as a function of age. This is from a number of different studies, and recently the UK Biobank has been a major contributor for this.
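The actual harmonization combined ComBat (with its empirical Bayes machinery) and spline-based GAMs. The sketch below shows only the location/scale core of a ComBat-style site correction, with a linear age model standing in for the splines, on synthetic data; it is not the published pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: one volumetric feature measured at two "sites/scanners" with
# different offsets and scales, on top of a shared age effect.
n = 300
age = rng.uniform(45, 85, size=n)
site = rng.integers(0, 2, size=n)
vol = 100.0 - 0.3 * age + rng.normal(0, 2, n)    # shared biology + noise
vol += np.where(site == 1, 5.0, 0.0)             # additive site offset
vol *= np.where(site == 1, 1.5, 1.0)             # multiplicative site scale

# Location/scale core of ComBat-style harmonization: regress out covariates,
# standardize residuals within each site, then restore the covariate model.
# (No empirical Bayes shrinkage, and linear age instead of a spline GAM.)
A = np.c_[age, np.ones(n)]
beta, *_ = np.linalg.lstsq(A, vol, rcond=None)
resid = vol - A @ beta
harmonized = np.empty(n)
for s in (0, 1):
    m = site == s
    harmonized[m] = (resid[m] - resid[m].mean()) / resid[m].std()
harmonized = harmonized * resid.std() + A @ beta

# After harmonization the site means should be nearly equal
print(abs(harmonized[site == 0].mean() - harmonized[site == 1].mean()))
```

Real ComBat additionally pools information across features with empirical Bayes so that small sites are not over-corrected, and the GAM handles the nonlinear age trends shown on the slide.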
So now that we've put the data together, we want to start playing with this dimensional neuroimaging idea and create brain charts. Everybody likes growth charts, I'm not sure how useful they are, but they're fun to look at, and brain charts might hopefully be a little more useful and predictive, though for the most part they're also fun to look at. Mohamad Habes has played a key role in this work; he's now an assistant professor at the University of Texas in San Antonio. Of course I cannot show you 20-dimensional brain charts, but I can plot three things at a time. So this axis is age, this is ischemic disease, white matter lesion volume, and the color, or the isocontours in this particular case, is brain age, SPARE-BA, which is basically the MRI-based estimate of your brain age. You start seeing interesting things, such as that a 61-year-old who has a relatively low load of ischemic lesions has a brain that looks like that of a 59-year-old, whereas if they have a relatively large volume of these ischemic lesions, their brain looks like a 69-year-old's brain. So you lose at least a decade or more if you have a relatively large volume of lesions. What is low and what is large is not absolute; here it's all relative to the population. This is the entire population of six thousand seven hundred people, and you see that if you're an outlier here, your brain doesn't look very good. This is memory now, I don't remember which memory score this is, but these are isocontours, so as you slide along this curve you look at people who have the same memory performance, and you see that a 60-year-old who has a high white matter lesion load cognitively looks like a 74-year-old who has a relatively low lesion load.
So again, more than a decade of difference. These ischemic lesions are not a good thing, and they are at least one thing that could perhaps be controlled by aggressive treatment of hypertension, diabetes, and the other comorbidities, cardiovascular risk factors, basically. A finding that was perhaps the most provocative of all was that the presence of these white matter lesions doubles your probability of testing amyloid-positive, even though, presumably, the two are not directly related to each other. Potentially they are, or there is an underlying latent factor that drives both of them up. But whether or not the relationship is causal, these ischemic lesions double your probability of testing positive for amyloid, which is quite important, I think. Now, I told you earlier about our 2005 study where we looked at the schizophrenia pattern. In the PHENOM consortium we were able to look at almost a thousand people, and we had a couple of papers. The first paper came out in 2017, and there's another one in press now in Brain. We found a very robust pattern of reduced brain volumes, prefrontal, orbitofrontal, some visual areas, perisylvian areas, which in combination created a pattern that allowed us to build an ROC curve that's fairly respectable, 0.83 or so. We looked at it using leave-one-site-out, 10-fold cross-validation, split samples, and so forth. So it's a fairly robust imaging signature, even though this was across Germans, Chinese, and Americans, and I was less optimistic in the beginning, but it turns out that there is a fairly reproducible neuroanatomical imaging signature for schizophrenia.
What is interesting is that, as we had reported in 2017, and ENIGMA and other studies have reported the same findings, there are effect sizes of reduced brain volumes, these green-blue areas, that go all the way up to 0.5, and then some increases, which are larger ventricles, something that has also been amply reported in the schizophrenia literature, of course. But the internal capsule and the pallidum here were reported as significantly larger, particularly the pallidum, and both we and ENIGMA reported this, yet we didn't realize exactly where it comes from; it was just reported as part of the imaging signature. It turns out, and this is very interesting and relates to heterogeneity, that two years later we actually elucidated where these two findings come from, so I'll highlight them now and come back to them in a second. Like everybody else, we have been transitioning from conventional machine learning to some of these monstrous networks. I no longer know exactly what they do, but they fit the data, basically, and it turns out that it doesn't quite matter what architecture you use, I'm exaggerating on this, as long as you have enough degrees of freedom to fit the data and you make sure you don't overfit. This is something that one of my students has been playing with in terms of brain age. The nice thing is that you no longer need any pre-processing of the data, so you can estimate brain age from raw MRI scans without any processing, and you can apply it to an independent set (this is UK Biobank) and get fairly good predictions of age. Again, it's a mundane task, but the idea is that if your brain age shows that your brain is older or younger than expected, that says something about your brain health.
The one thing that we found, and a lot of people use transfer learning or pre-training of these networks, is that if we take the network that was trained to estimate brain age and use it to build classifiers of Alzheimer's disease, MCI, schizophrenia, and so forth, then across the board the network pre-trained on brain age and transferred to these problems does better than the vanilla version, the exact same architecture trained on ImageNet, which is of course the millions of natural images that people have typically used to train these networks. So pre-training on brain MRI scans, with the mundane task of trying to predict brain age, helps build a network with weights that are more relevant to classification of AD, MCI, schizophrenia, and potentially other pathologies, especially if you have small sample sizes, which is all the things on the right that you see here. The blue is what you get with the pre-trained network, the green is what you get with the vanilla-trained network, and you see that especially for small sample sizes you start to lose your ability to build a classifier. So for smaller studies it's important to have a pre-trained network, and this was just a fairly straightforward application. We're working on the idea of domain-specific networks that presumably capture the characteristics of brain scans and not of cats and dogs and tables and so forth. [In response to a question:] No, these are initialized with the weights that you get from brain age, but they change afterwards in order to build a network that differentiates between AD and controls, or schizophrenia and controls. So these three networks here end up having different weights, because they each try to achieve a different task.
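As a deliberately linear caricature of the pre-training result, the sketch below replaces the deep networks with ridge regression: a "representation" (a weight direction) learned from a large brain-age regression is reused for a disease classifier, with only a small readout refit, and compared against fitting all weights from scratch on a tiny labeled sample. Everything here is synthetic; the real result involves convolutional networks on raw MRI.

```python
import numpy as np

rng = np.random.default_rng(4)

d = 100
v = rng.normal(size=d)
v /= np.linalg.norm(v)                # latent direction shared by both tasks

def make_data(n):
    X = rng.normal(size=(n, d))
    age = X @ v * 10 + 60 + rng.normal(0, 2, n)                  # age depends on v
    label = ((X @ v + rng.normal(0, 0.4, n)) > 0).astype(float)  # so does disease
    return X, age, label

def fit_ridge(X, t, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)

# "Pre-training": plenty of scans with known age, no disease labels needed
Xp, agep, _ = make_data(5000)
w_age = fit_ridge(Xp, agep - agep.mean())      # learned representation direction

Xs, _, ys = make_data(30)                      # tiny labeled clinical sample
Xt, _, yt = make_data(2000)                    # held-out test set

# Transfer: keep the pre-trained direction, fit only a 1-D readout
z = Xs @ w_age
a = fit_ridge(z[:, None], ys - 0.5)
pred_transfer = ((Xt @ w_age)[:, None] @ a > 0).astype(float)

# Scratch: fit all 100 weights on the 30 labeled scans (overfits)
w_scr = fit_ridge(Xs, ys - 0.5)
pred_scratch = (Xt @ w_scr > 0).astype(float)

acc_t = (pred_transfer == yt).mean()
acc_s = (pred_scratch == yt).mean()
print(acc_t, acc_s)
```

The mechanism mirrors the talk's claim: the auxiliary task (age) shapes the representation toward disease-relevant structure, so the small labeled sample only has to fit a low-dimensional readout.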
It's just the starting point, which is important because these are highly nonlinear functions to optimize; the starting point was the weights that we had found to predict brain age. Alright, so heterogeneity was the second major challenge, and I'll talk about that now. A few years ago, and we get a lot of bright engineering students, I sent a couple of them off to study this problem of describing multiple or heterogeneous pathologies, deviating from the assumption that we're looking for THE pattern of schizophrenia, which is essentially a direction in the high-dimensional space that separates patients from controls. Of course we have nonlinear classifiers, kernel machines, and now deep learning, which build highly nonlinear hypersurfaces that presumably separate patients from controls, but they are not very interpretable, because you have a continuously varying pattern, so you cannot answer the question: what does the pattern of brain atrophy in schizophrenia patients look like? So we wanted to find a middle-ground solution where we had separable but nonlinear transformations between controls and patients. Aoyan Dong, who is now in industry, approached this from a generative perspective. If this is the probability density function of the controls, and these are the patients, we put all the patients in the same bucket without worrying, at this point, about clinical phenotypes and potential subtyping. You try to be as broad as possible here, for example including people with early-stage MCI, late-stage MCI, and AD all together, and ask the question: how do these people differ from controls? So if this is the probability density function that the controls have, you need a number of transformations to map it onto, and to generate, the probability density function of the patients.
And it turns out this is a generative method that solves this problem. A more discriminative analog is that of Erdem Varol, who is now a postdoc at Columbia University: he found a convex polytope that separates the control population from all the patients, and of course you need multiple hyperplanes to take care of multiple potential disease subtypes, or patterns, if you want. We tested this in various contexts of simulated atrophy and so forth, but then we started to apply these methods to different problems, and we found, as expected of course, that there is tremendous heterogeneity, even in cases where we didn't expect as much heterogeneity as in a more general patient population. For example, this is data from ADNI: we put all the early MCIs, late MCIs, and ADs together into a bucket of people with some pathology, and then we found that there are four patterns in which these people differ from controls. There were 30 percent of these patients who didn't differ at all from controls; this might be because we were not able to pick them up, or perhaps because some of the controls were not exactly healthy, in that they have pathology but haven't yet progressed to early- or late-stage MCI. 44 percent of the people had a more full-blown pattern of brain atrophy that especially included the medial temporal lobe and the hippocampus, and some other areas throughout the cortex. This is what you expect from Alzheimer's patients, who have fairly pronounced atrophy in these areas. 15% of the people had very focal hippocampal and other temporal lobe atrophy, very focal, while the rest of the brain was intact. And then 28% had a generally global, spread-out pattern of atrophy that included the hippocampus, otherwise they couldn't have been part of ADNI, but also a number of other areas.
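The actual methods (the generative mixture-of-transformations approach and the max-margin convex polytope) are considerably more sophisticated. The sketch below conveys only the basic move they share, namely characterizing patients by how they differ from controls and then looking for subtypes in those differences, using plain k-means on simulated data with two built-in "atrophy" patterns.

```python
import numpy as np

rng = np.random.default_rng(5)

d = 30
controls = rng.normal(0, 1, size=(300, d))
# Two simulated patient subtypes with distinct "atrophy" patterns
p1 = rng.normal(0, 1, size=(150, d)); p1[:, :10] -= 1.0    # pattern A
p2 = rng.normal(0, 1, size=(150, d)); p2[:, 10:20] -= 1.0  # pattern B
patients = np.vstack([p1, p2])
true_subtype = np.r_[np.zeros(150), np.ones(150)]

# Express each patient as a difference from the control mean, then cluster
# those differences (rather than the raw patient data).
diffs = patients - controls.mean(axis=0)

def kmeans(X, k, iters=30, restarts=5):
    best_labels, best_cost = None, np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            dist = ((X[:, None] - centers[None]) ** 2).sum(-1)
            labels = dist.argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        cost = ((X - centers[labels]) ** 2).sum()
        if cost < best_cost:
            best_labels, best_cost = labels, cost
    return best_labels

labels = kmeans(diffs, 2)
# Agreement with the simulated subtypes (up to label swap)
agree = max((labels == true_subtype).mean(), (labels != true_subtype).mean())
print(round(agree, 2))
```

Clustering difference patterns rather than patients themselves is what makes the real methods less sensitive to confounds such as demographics, as the speaker notes later.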
What was interesting is that if you follow these four subgroups over time, and this is a long time, of course, six years, eight years, you find that the people who had fairly normal, at least seemingly normal, anatomy at first had, in general, the best prognosis over time (that's the red group, number one). Of course, this shows only data for late-stage MCI; if you are at late-stage MCI, the chances that you will remain stable over time are low, but the people who had this pattern had the best chances of remaining stable. People who had the full-blown type of AD atrophy, the green here, were the people who had the worst prognosis; they converted fairly rapidly to Alzheimer's disease. People who had the focal but pronounced hippocampal atrophy, the purple here, initially had a good prognosis, but then they started to deviate and rapidly progressed to Alzheimer's disease, to the point that the ultimate outcome eight years down the road, in terms of survival curves, was about the same as the green group's. So these are presumably the late-stage MCI patients who have early-stage but progressive disease. And the people with the more generalized pattern of atrophy, the blue-ish line, number three here, initially declined, about fifty percent of them, but then the rest, forty to fifty percent, remained stable over many years, which potentially means that at least half of these people did not have Alzheimer's, or rather that the dominant pathology was not Alzheimer's but something else, some other pathology.
So you see that even in a fairly homogeneous group of patients, who were selected to have amnestic MCI and to be good candidates for having Alzheimer's disease, even in them there is heterogeneity, and this heterogeneity maps to longitudinal outcome; I think some of these unsupervised machine learning methods can tease this out. In schizophrenia we had very interesting results; this is a paper that is in press in Brain now, led by Ganesh Chand, who is in my group, with Guray Erus and Dominic Dwyer, who was in Munich. We did a number of experiments showing that, repeatedly, reproducibly, and robustly, there were two subtypes. This shows the number of clusters we got from the data. Now remember, we're not clustering patients; we're clustering differences between patients and controls, and this is very important because it makes the analysis considerably more robust to various confounds. Otherwise we could end up clustering the data according to demographic or genetic variables that might not be relevant to schizophrenia. So this shows the number of clusters, or subtypes: we tried different numbers of clusters, and for two we got a very reproducible Adjusted Rand Index (ARI) of up to 0.7, highly significant compared to the null distribution we got from permutation tests.
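The reproducibility check described here (split the sample, cluster each half, compare the two solutions with the Adjusted Rand Index, and build a null distribution by permutation) can be sketched as follows. This is a hypothetical, generic version using k-means rather than the paper's semi-supervised pipeline, and every function name and parameter is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def split_half_ari(X, k=2, seed=0):
    """Cluster two random half-splits and compare their labelings of the full sample."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    a, b = idx[:len(X) // 2], idx[len(X) // 2:]
    km_a = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[a])
    km_b = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X[b])
    # both models label the full sample; ARI is invariant to label permutation
    return adjusted_rand_score(km_a.predict(X), km_b.predict(X))

def permutation_null(X, k=2, n_perm=100, seed=0):
    """Null distribution of split-half ARI after destroying joint cluster structure."""
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_perm):
        # shuffle each feature independently across subjects
        Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
        null.append(split_half_ari(Xp, k=k, seed=int(rng.integers(1 << 31))))
    return np.array(null)
```

On data with real structure the split-half ARI sits far above the permutation null, which is the logic behind trusting k = 2 here while distrusting the higher-k solutions that lined up with sex, age, and site.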
So we looked at these two subtypes in more detail. We did get very significant clustering for higher numbers of clusters, but those solutions tended to line up with sex, age, and site, so we didn't really trust them; at this point we only had about a thousand people in the analysis, so we focused on k equals 2. We found that about two-thirds of the people, the ones in cluster one, had the typical pattern of brain atrophy that we expected and that everybody knows of: a lot of orbitofrontal, prefrontal, temporal, and perisylvian atrophy, or reduced volumes, present in this subgroup of patients compared to controls. These people also had seemingly progressive brain atrophy; this is a cross-sectional study, so I shouldn't say progression, but the atrophy correlated with illness duration. But the one-third of the people who came out as a separate cluster had a seemingly normal brain except for increased striatal volume, and these are the people whose brain volumes didn't correlate with disease duration, so presumably, if we had longitudinal data, they might be more stable. We didn't find any other difference between these two subtypes, other than the fact that the second subtype was significantly more educated. The striatal effect was not explained by medication. These results were robust to split-sample analysis, cross-validation, permutation tests, and everything else we did, and they were consistent between men and women, so there has to be something there. At the bottom is the difference between the two subtypes.
But you see tremendous heterogeneity here: one-third of the patients are the ones who contributed to these two findings here, in the internal capsule and the pallidum, that we had reported before, and we didn't realize it at that point, but now we know that all the reduced cortical volume comes from about two-thirds of the people, while one-third of the people contribute these increases. I still don't know what the explanation is. This paper is in press in Brain; we tried to publish it in JAMA Psychiatry, and people of course always raised the question "so what does it mean?" But I think it's important to understand that, at the very least, we have to look at these two groups of people separately. And now we are going back to a larger consortium with 3,000 people that includes first-episode individuals, where we look at richer phenotypes, and hopefully genetics as well, to see what is different between these subgroups. It's an example that I think very vividly shows that we need to be very cautious about interpreting group analyses, even machine learning based ones, because ultimately under these umbrella diseases there are subtypes that may be completely different, as in glioblastoma, for example: we now know that glioblastoma might not be a single disease but multiple diseases, depending on what mutations the cancer has. So I think we're moving rapidly towards precision diagnostics, which is important. I'll skip through this fairly quickly and just say that there are unsupervised methods we'll be using as well. This is my former student Harini Eavani, who now leads a research group at Microsoft.
She applied sparse non-negative decompositions to capture covariations in resting-state fMRI data and come up with a number of brain networks: each component is a brain network, or brain pattern if you want, and we found them to be very reproducible. This is essentially analogous to ICA, independent component analysis, except that we've always thought that sparse non-negative decompositions are probably a more natural way to describe these functional covariations in the data, for a couple of reasons. Number one, they allow overlap of the components, whereas independent component analysis, by virtue of enforcing independence, doesn't like to have a lot of overlap between components; even though the final results might have components that overlap, in principle it doesn't like it, because overlap violates the independence assumption. Sparse connectivity components allow partial overlap, presumably because one brain region might mediate multiple networks and multiple functions. And the second thing is that with unconstrained decompositions I find it difficult to interpret the components, whereas with non-negativity constraints the contributions are purely additive, which makes the components more interpretable. We have applied similar methods with Ari Sotiras, who was also with me and is now faculty at Washington University; we had a couple of papers in PNAS and NeuroImage where we applied orthonormal projective non-negative matrix factorization to both cortical thickness and regional brain volumetric data, and I was kind of astonished by the result: it basically parcellated the brain. There was no a priori assumption here.
We just said: look at the regional grey matter volumes or regional cortical thickness, look at how they co-vary across individuals, and try to decompose this covariation into a number of components using non-negative matrix factorization methods. It ended up providing almost segmentation-like components: the caudate, putamen, and thalamus went together into one component; the sensorimotor strip is somewhere here (I don't see it in this plot); the supplementary motor cortex was there; the anterior and medial cingulate came out here versus the posterior cingulate; and the fusiform gyrus came out as a different region. Again, there is no a priori assumption here. What it tells you is that the cortical thickness and regional volumes of these regions vary in such a systematic way across individuals that there is probably an underlying neurobiological reason that causes this coordinated variation. And it's kind of interesting, because we've started to use this as a data-driven parcellation, rather than regions of interest derived from anatomical atlases, which might or might not be the best way to partition the brain. So that leads me to revisit the dimensional neuroimaging slide, where now these dimensions are potentially not going to be principal or independent components, but dimensions such as: are you expressing the pattern that we've seen in subtype one of schizophrenia, or this other pattern? Are you expressing this or that pattern that we've seen in Alzheimer's disease? What are the patterns of amyloid deposition, where we have similar results, and so forth. Each of these dimensions has been informed by clinical, genetic, or other biologically interpretable variables, which potentially creates a dimensional neuroimaging system that is both concise and interpretable.
So finally, we've been doing this in neuro-oncology, looking at glioblastomas. Again, this is another sort of complex data: our standard clinical protocol includes the conventional images (T1, T1 with gadolinium, T2, FLAIR), perfusion series which capture the dynamics of absorption of the tracer and which of course depend on the tumor, derivatives from diffusion tensor imaging, spectroscopy data, and there are biophysical models of tumor growth that we've been developing over the years. And it's very interesting, because with machine learning we can actually predict where the tumor is going to recur in the future. Glioblastoma has a very grim prognosis: it always comes back. What they do right now is resect the enhancing part of the tumor; this is the baseline scan. Around it is the peritumoral edematous region, which, as we all know, is both edema and infiltrated tissue. But they don't resect this area, because they don't know what is likely to recur. These are our predictions of what is likely to recur in this patient, and this is where the recurrence appeared months later. This is another case over here, where we predicted that this area had been heavily infiltrated, and it recurred early. So now there are a couple of clinical trials at Penn. One is delivering dose-escalated radiation to these areas, 75 Gy instead of the 60 Gy they usually deliver, to potentially delay recurrence, at least in this area. And we've also been talking about an extended-resection clinical trial that would go after the areas that show a high probability of recurring.
You cannot cut out the whole area, of course, because you would kill the patient or create serious morbidity, but if you can go in and target it in a focal way, that would potentially change the outcome. We can predict patient survival as well, which is response to standard treatment if you want, by using all sorts of measures from imaging and demographics, and even morphologic parameters such as the size of enhancing versus non-enhancing tumor versus edema. Putting it all together with machine learning, we showed back in 2015 that you can predict someone's survival. Like I said, glioblastoma has a very poor prognosis: within three years the majority of patients are unfortunately dead, but some of them have very poor prognoses and some have better ones. And it's important to know who has a good prognosis, because someone with a good prognosis might go on standard treatment and be evaluated through progression, whereas someone with a poor prognosis might not want to wait until the tumor comes back; they might want to be placed on immunotherapy or some other trial or therapy that might be more effective for them. Also, our ability to detect treatment effects can go up dramatically; this is a paper we have in the works now. Say you're trying to test a treatment, the test treatment is red, so it prolongs patient survival, and this is the histogram of the patients who undergo standard of care. If you have 20 or 30 patients you're unlikely to find significance for this treatment; you would potentially need 100 or 200 people. But if you had a perfect predictor of survival at baseline, so that you know how long each of these patients is going to survive, you could take the black curve and squeeze it to almost a delta function.
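The "squeeze the histogram" argument can be made concrete with a small simulation (the numbers here are entirely hypothetical): if a baseline model predicted each patient's expected survival, testing the treatment on the residual, observed minus predicted, removes the between-patient heterogeneity, so the same effect becomes detectable with far fewer patients.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 25                                                # patients per arm
base = rng.gamma(shape=2.0, scale=8.0, size=2 * n)    # heterogeneous expected survival (months)
noise = rng.normal(0.0, 2.0, size=2 * n)              # variation no predictor could explain
effect = np.r_[np.zeros(n), np.full(n, 3.0)]          # treatment adds ~3 months
observed = base + noise + effect
predicted = base                                      # idealized "perfect" baseline predictor

control, treated = slice(0, n), slice(n, 2 * n)
# raw comparison: ~11 months of between-patient spread swamps a 3-month effect
p_raw = stats.ttest_ind(observed[treated], observed[control]).pvalue
# residual comparison: only ~2 months of unexplained noise remains
resid = observed - predicted
p_resid = stats.ttest_ind(resid[treated], resid[control]).pvalue
```

With 25 patients per arm, the 3-month effect is essentially invisible in the raw survival times but highly significant in the residuals, which is the sense in which a good baseline predictor shrinks the sample size a trial needs.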
So this shows the difference between the actual survival and the predicted survival. Because you can predict everybody's survival, the predicted and actual survival fall on this line, and people who responded to the treatment fall below it. Now all you need is a few patients to detect treatment effects. More generally, the situation is going to be like this, but essentially what this says is that if I know upfront how long a patient is going to survive, I can evaluate whether the treatment worked much better than if I don't. If we try to evaluate a treatment in glioblastoma, where survival varies from three months to three-plus years, it's a very heterogeneous group; but if I know that this patient was expected to survive roughly 12 months and they survived 15 months, then the treatment must have done something. That's the idea here, and I think machine learning has a lot of potential in this area in terms of informing clinical trials and treatments. So finally, this is from the field of radiogenomics, something that was very surprising even to me. We published this in Neuro-Oncology a couple of years ago. We showed that, looking at different imaging features from the diffusion and perfusion signals, the T2 signal, and so forth, you can tell whether or not a patient's cancer has the EGFRvIII mutation, which is an important target for immunotherapies and other trials. This is the ROC that we got in the cross-validated discovery set and in the replications. It was a fairly robust signal, but a small N; at that point it was about 135 people. There were a bunch of imaging features, and I'm not going to go through the details: the perfusion signal and the diffusion signal were different, and there were these islands of pre-necrotic tissue that characterized EGFRvIII-positive tumors.
But what was really striking, and what we didn't expect at all, was this thing on the left here. This is the spatial distribution of the tumors in patients who did not have the mutation, down here, and who had the mutation, up here: the probability of finding a tumor in each location of the brain, after deformable registration into an atlas. These are almost disjoint distributions. So the location of the tumor relates to its molecular characteristics, at least in terms of the EGFRvIII mutation. I didn't expect that, and I don't know exactly why: potentially the cells of origin, potentially the microenvironment, potentially a higher migratory behavior. It's not clear, especially since we don't have longitudinal data, but it's a very striking discovery of something we didn't quite expect, even though I've been working on deformable registration of brain tumor images since 1997. It was more of an academic exercise for years, and now that we see the results I understand that sometimes findings come from where you don't expect them. For cancer imaging, we have tried to bundle a lot of our software into CaPTk, the Cancer Imaging Phenomics Toolkit, based on funding that we got from the ITCR, the Informatics Technology for Cancer Research program at NCI, and we're glad to be able to do that. I will close my talk here by saying that high-dimensional medical images and multiparametric images hide a lot of patterns that we cannot really appreciate, but they are there, they carry a lot of information, and machine learning can tease them out.
And finding such patterns is important for diagnostic and predictive purposes and for understanding disease processes or patterns of functional activity. Machine learning can help us see things that we cannot otherwise see, and quantify such patterns in a way that can lead to dimensional neuroimaging that is both concise and interpretable, informed by variables of clinical and biological interest. I'd like to thank the many people who have come through my lab; many of them have gone off to form their own labs or to industry and so forth. And thank you for listening; I'll take any questions that you have. [Applause]
