National Cancer Institute

Paul Macklin

University of Southern California

Paul Macklin is a mathematician at the University of Southern California and a member of the USC PS-OC. He develops mathematical and computer models that simulate cancer in individual patients. He explains how he goes about the process.

Transcript

Paul Macklin: Cells, like anything else, should obey the same laws of physics, and it’s up to us as modelers to write those into a computer program. And so the key, once you have a computer program with the basic laws of physics in it, is coming up with all the little tiny parameters, the little control knobs of the model, to make it work with the patient’s data. So we take things like the patient’s pathology and say, “This is the number of cells dividing, so we’ll make our cells in the model do that; this is the number of cells that are dying, so we’ll make our cells in the model do that. Here’s how we think the oxygen is in the patient’s tissue, and we’ll try to make the oxygen diffuse in our model in the same way.” So we take all these inputs that are based upon the patient, we tune up all the knobs in the computer model, we run it, and we see what it predicts: How big is the tumor? How fast does it grow? Does it have a lot of necrosis, like we see in some images from pathology, or is it very well oxygenated? Once we have a model that can predict what is happening in the patient, how [the tumor] grows, how it responds to therapy, then maybe we can start saying, “Given this model, can I change it now? Can I give therapy to my virtual cancer?” and say, “In your tumor, I predict that it shrinks in this way; in your tumor, I predict that this would be a terrible therapy, so don’t do that.” That’s the goal, I think, of any computational model.
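As a rough illustration of the calibration loop described above, here is a toy Python sketch: measured per-patient rates of cell division and death, plus an oxygen level, become the “control knobs” of a simple growth simulation. The function, parameter names, and the oxygen-scaled growth law are illustrative assumptions, not Macklin’s actual model.

```python
# A toy calibrate-and-predict loop (illustrative only). The rates would be
# estimated from the patient's pathology, as described in the interview.

def simulate_tumor(birth_rate, death_rate, oxygen_level, days, n0=1e6):
    """Step a toy tumor-cell population forward one day at a time.

    birth_rate, death_rate : per-day rates estimated from pathology
    oxygen_level           : 0..1 factor scaling division (hypoxia slows growth)
    """
    n = n0
    history = [n]
    for _ in range(days):
        growth = birth_rate * oxygen_level * n  # dividing cells need oxygen
        loss = death_rate * n                   # apoptosis and necrosis
        n = max(n + growth - loss, 0.0)
        history.append(n)
    return history

# "Tune the knobs" from one hypothetical patient's pathology, then predict:
cells = simulate_tumor(birth_rate=0.05, death_rate=0.02, oxygen_level=0.8, days=30)
print(f"predicted cells after 30 days: {cells[-1]:.3e}")
```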

Pauline Davies: Are you trying these out with animals yet, or have you tried it in patients?

Paul Macklin: Well, we’re trying bits and pieces of it in patients, in the sense that I’ve been doing a lot of work on ductal carcinoma in situ; this is a type of breast cancer that is non-invasive so far. It’s the most common precursor, and it’s found in a lot of women. And the interesting thing is, we’re very good at treating DCIS. We can do a lumpectomy and long-term survival is just as good as with a mastectomy, so it’s really incredible. But even in that case, we don’t fully know what we’re doing. It turns out that even today, state of the art, over 50% of women, when they get their samples back from pathology, find out that the [leftover cancer] cells are too close to the edge of the surgical sample; the surgeons didn’t cut enough tissue out to be safe and feel comfortable about it. So the surgeons have to go in a second time, and this is a cancer we are very good at treating. So there is not a very good understanding of how mammograms and other information we’re getting from patients really map onto what’s actually going on in that cancer. And so one of the early projects I worked on was trying to develop a tool that can say, “Give me your pathology sections, and I’m going to try to calibrate the model and predict how quickly your DCIS grows and what the relationship is between the calcifications in your mammogram and the actual [cancer] cell locations that a pathologist sees.” So we have done some testing in patients. As part of our PS-OC, we are developing a mouse model, and so we have some people measuring proteomics and all sorts of very state-of-the-art molecular stuff. We have other people developing ways to image how cells move around inside the mouse, giving us all sorts of new information there. And then we can put the mouse under a special kind of imager that looks for something called bioluminescence: we make the cancer cells glow, and the imager detects that glow to see where the cancer is in the mouse. So we have data coming all the way from the molecular scale, stuff that is just horribly small, all the way up to the whole-mouse scale. And the work I am doing now is trying to find ways to mash up all these different bits of data together to give a better picture of what’s going on in the mouse than any one piece gives you alone. And if we can figure it out in the mouse, then we hope we can try it on patients.
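As one hedged sketch of “mashing up” data across scales, the snippet below combines three hypothetical readouts of tumor burden (molecular, cell-imaging, whole-mouse bioluminescence) by inverse-variance weighting, a standard data-fusion rule. This is an assumed approach for illustration, and all numbers are invented.

```python
# Combine independent estimates of the same quantity; noisier sources get
# less weight. Each entry is (value, variance) for one hypothetical readout.

def fuse(estimates):
    """Inverse-variance weighted combination of (value, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value and its (smaller) variance

readouts = [
    (2.1e7, 4.0e13),  # molecular-scale estimate: precise but indirect
    (1.8e7, 9.0e13),  # intravital cell-imaging estimate
    (2.5e7, 2.5e14),  # whole-mouse bioluminescence: coarse
]
value, var = fuse(readouts)
print(f"fused tumor-burden estimate: {value:.2e} cells (sd {var ** 0.5:.1e})")
```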

Pauline Davies: And would you produce a database that would be of use to oncologists around the world?

Paul Macklin: Well, that is an interesting point. Right now I think we’re all very much in the model building and testing stage, but down the road, the hope would be that you have a few good models that have a track record, and you start building this database, saying that this model applied to this cancer patient worked; this model applied to that cancer patient didn’t, and why. Eventually you get this large database of model successes and failures and how they relate to the type of patient you put in, and maybe then, down the road, an oncologist can say, “Based upon my patient, I most closely match these patients in the database, so I’ll use that model to make my prediction and plan the therapy.” That would be the long-term goal of the database of models and results.
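A minimal sketch of how such a database of model successes and failures might be queried: match a new patient to the nearest past profile and recommend the model that worked there. The feature names, records, and distance metric are all hypothetical.

```python
import math

# Hypothetical records: (patient profile, model used, did its prediction hold up?)
records = [
    ({"age": 54, "grade": 2, "proliferation": 0.12}, "model_A", True),
    ({"age": 61, "grade": 3, "proliferation": 0.30}, "model_B", True),
    ({"age": 47, "grade": 1, "proliferation": 0.05}, "model_A", False),
]

def distance(a, b):
    # Euclidean distance over shared features; real features would be normalized.
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def recommend(new_patient):
    """Return the model that succeeded on the most similar past patient."""
    successes = [r for r in records if r[2]]
    best = min(successes, key=lambda r: distance(new_patient, r[0]))
    return best[1]

print(recommend({"age": 58, "grade": 3, "proliferation": 0.25}))  # -> model_B
```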

Pauline Davies: So that’s what you’re working towards?

Paul Macklin: That’s what we’re working towards. It’s a bit of a holy grail, but that’s what we would all like to see.

Pauline Davies: I think you mentioned in your talk that you need standardization throughout the entire process, from taking the samples from the patients to whatever other information you put into your models, just so you can make sure you’re comparing like with like.

Paul Macklin: Well, that’s a good point, and that’s a big problem that we’re facing. It’s a daunting challenge right now, because we don’t have this standardization. What this means is that if one scientist runs an experiment, say at Moffitt, and reports it as a paper, and I would like to use that information in my model, the only way I can do it is by re-reading the paper, fully understanding their experiment, trying to pluck out the right numbers, tuning them in some way to my model, and then trying to run it. And that’s very, very inefficient, and it’s probably error-prone. And that’s going to be an issue more and more as we try to get different doctors working with different models. Probably what we are going to need is the same way to describe the patient, or the same way to describe the experiment, for everybody, so that all the models can read the same data in the same way and output results in the same way, too. It should cut down on error and make it a lot easier to trade data between groups.

Pauline Davies: How would you ever get this standardization? Who would be responsible for saying we want it all reported in this particular way?

Paul Macklin: That’s a good question. It’s a bit of a chicken-and-egg problem: who’s going to come and give you data in your standard if you don’t have a standard, and how do you plan a standard without any data? So it’s a bit interesting. I just think someone needs to step forward, show leadership, and try to get a small working group together, and at the end of the day, the perfect is the enemy of the good. I think you start small and give it a go, and you add more to your standard as you need it. So maybe version one is, let’s say, how quickly the cells divide and how often they do it, how quickly they die, what their oxygen level is, and maybe their positions. That can be version one of this standard, and a few of us try it out and see what we can do. I think it really comes down to a starting group of people and a simple starting point, and you grow it as you need it.
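A minimal sketch of what that “version one” standard could look like, using exactly the fields listed above (division rate, death rate, oxygen level, positions); the field names, units, and JSON serialization are assumptions for illustration.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CellDataV1:
    """Hypothetical 'version one' record every model could read and write."""
    division_rate: float  # divisions per cell per day
    death_rate: float     # deaths per cell per day
    oxygen_mmHg: float    # tissue oxygen tension
    positions_um: list = field(default_factory=list)  # (x, y, z) in microns
    version: str = "1.0"

record = CellDataV1(
    division_rate=0.04,
    death_rate=0.01,
    oxygen_mmHg=38.0,
    positions_um=[(12.5, 3.0, 0.0), (20.1, 7.4, 0.0)],
)
print(json.dumps(asdict(record)))  # the same payload any group's model can parse
```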

Pauline Davies: Now, another thing that certainly grabbed my attention during your talk was when you described models that led some people to believe that Hurricane Sandy would miss the U.S., but there was one model that said it would make landfall. And you said that by averaging the different models, you get a more accurate prediction. Is that correct?

Paul Macklin: More or less. The National Hurricane Center has a number of models that they run on the same data, so it’s fascinating to me as a modeler. They take in this meteorological data, things from the weather stations, things from the satellites, all sorts of data sources, hurricane hunter aircraft, and they plug the same data into multiple models, and each one makes its own prediction of where that hurricane is going to go. And more or less they are doing a good job. I mean, some models do well and some do poorly, but it’s never the same model that does a bad job, or else they would just throw it out; and it’s never the same model that does a great job, or else they would just use that one. So what they find is that, generally speaking, when they look at the hurricane tracks for a five-day prediction of where it’s going, there are going to be outliers, and there is no way to know if that outlier is going to be right or wrong. Hurricane Sandy just happened to be a fascinating one, in that several days before landfall, most of the models were saying it was going out to sea, and one of the models, I think the European model, said it was going to make landfall. And if your average takes that outlier into account, it shifts your prediction of where the storm is going to go a little bit, and it makes people pay a little bit more attention to it. And the meteorologists in the National Weather Service are very, very good at doing this. They have had an incredible amount of improvement in their landfall predictions in the last 10 years, and a lot of that is due to learning how to average these models together. So with Sandy, there was one model that went [that way], and over a few days the other models came over to that same point of view too, as the data started feeding them the right way. But there is no way to know which model is going to give you the right answer in advance, so you pay attention to all of them.
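A minimal sketch of that consensus idea: several models predict a track from the same data, and the forecast is their average at each time step, so a single outlier pulls the consensus toward it. The tracks below are invented (longitude, latitude) points, not real forecast data.

```python
# Average several models' predicted positions at each forecast time.

def consensus(tracks):
    """Mean (lon, lat) across all models at each time step."""
    return [
        (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
        for pts in zip(*tracks)  # group positions by forecast time
    ]

model_tracks = [
    [(-75.0, 25.0), (-72.0, 29.0), (-68.0, 33.0)],  # heads out to sea
    [(-75.0, 25.0), (-73.0, 29.5), (-70.0, 34.0)],  # heads out to sea
    [(-75.0, 25.0), (-75.5, 30.0), (-74.5, 39.0)],  # outlier: toward landfall
]
for lon, lat in consensus(model_tracks):
    print(f"consensus position: {lon:.1f}, {lat:.1f}")
```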

Pauline Davies: And how would that apply to cancer?

Paul Macklin: Well, I think with cancer, we’re in a very interesting situation right now. We’re very much at the beginning, in my opinion. We have a lot of groups that have developed a good model of this, a good model of that, and they are all very good models; they have great insights. But all models are models. That means they have weaknesses, and they have assumptions that aren’t always right. And with cancer, you never fully know which assumptions are going to be right or wrong. So some models do great on some cancers, and some models do great on other cancers. But right now, what we tend to have is a group here that applies its one model to a bunch of cancers, and another group over there that applies its one model to a bunch of cancers, which means that no one group is doing a great job on all of their cancers. So probably, instead of having many groups each with a single model, it would be better to have many groups putting their models together into an aggregate model, so everybody gets to try all the models on all of their cancers. You can imagine if the weather service didn’t work this way: CBS on one station would use one model, and NBC on another station would use another model. Sometimes you would be lucky, listen to the right station, and get the right prediction, and sometimes you would be unlucky, listen to the wrong station, and get the wrong prediction. Instead, the National Weather Service said, “No, let’s just put them all together; we’ll take care of it and send the same predictions to everybody.” That’s the sort of thing we need here: the same models, accessible by many people, and the same data inputs, accessible by many people, and to really try to make this work.
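A minimal sketch of such an aggregate, assuming two toy models and a simple mean over a shared registry; in practice the models and the combination rule would be far richer.

```python
# Two toy "group" models making growth predictions from the same patient input.
def model_a(patient):
    return 0.8 * patient["proliferation"]

def model_b(patient):
    return patient["proliferation"] - 0.5 * patient["death_rate"]

REGISTRY = [model_a, model_b]  # shared, so every group can run every model

def aggregate_prediction(patient):
    """Average all registered models, like a consensus hurricane forecast."""
    predictions = [m(patient) for m in REGISTRY]
    return sum(predictions) / len(predictions)

patient = {"proliferation": 0.20, "death_rate": 0.06}
print(f"aggregate growth estimate: {aggregate_prediction(patient):.3f}/month")
```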

Pauline Davies: How do you think the PS-OC can help do this?

Paul Macklin: The PS-OC has a whole lot of talent here. We have a lot of modeling groups that have developed a track record in applying math, computer science, and engineering to cancer. We also have a lot of biologists who are very accustomed to working with modelers, and a large part of the beginning of this network was us learning how to speak each other’s languages. How do biologists speak math? How do mathematicians speak biology? How do clinicians speak these languages? We have finally reached a point where all the right people can speak the same language and are making good headway on projects; they are generating data, and they are generating models with a track record. Importantly, the modelers are learning how to work with data, and that’s actually a very difficult problem. So I think all the elements are here. We have people who are used to working with data; we have people who generate data and are used to working with modelers; and it is now just a matter of getting the willpower for people to put it together for the common good. I think there is a lot of desire out here to do that, because everyone wants to make life better for the patients. I think this is one way towards that.

Pauline Davies: Would you have been doing cancer research apart from the PS-OC?

Paul Macklin: Yes, I would. I started working on cancer modeling as a master’s student in 2003. At first it was an interesting numerical problem to me as a mathematician, but the more I learned about it, the more I found the cell biology fascinating. And I was very excited to be involved in a problem that could make a real impact on people’s lives. We’ve all been touched by cancer, we’ve all had loved ones we’ve lost, and I want to make a difference on that. I am passionate about this and I want it to work.

I hope the PS-OC is around for a long time!