I’m taking a philosophy class that touches a lot on what cognition really means, which got me thinking – if we’re getting closer and closer to developing artificial intelligence that rivals human intelligence, could we develop artificial intelligence to solve problems within healthcare?
Solutions are already in the works. A 2013 study from Indiana University suggested that artificial intelligence could diagnose patients and reduce the cost of healthcare better than physicians, cutting costs by roughly 50%. Using 500 randomly selected patients for simulations, the researchers compared actual doctor performance and patient outcomes against sequential decision-making models, all drawing on real patient data. They found a large disparity in the cost per unit of outcome change: $189 for the artificial intelligence model versus $497 for treatment as usual.
However, one problem with replacing physicians with artificial intelligence is that it could remove the doctor-patient relationship from the equation, undermining the importance of human relationships in the treatment process.
We are reaching a time in our society in which we are slowly developing the tools needed to create intelligent beings that could solve problems. But a key distinction so far is that our goal in artificial intelligence has always been to create something as good as or better than an average human.
But what if we switched that around? What if our goal was actually to create an artificial intelligence that had a problem itself? For example, could we develop an artificial intelligence that thinks like a patient in order to understand patient behavior?
There are plenty of virtual reality programs that let doctors practice surgical skills on specific parts of the human body, and we now know artificial intelligence could even replace doctors in diagnosis. But could there one day be an artificial intelligence modeled after a sick person – an intelligent agent that is not biologically sick (a robot, after all, may not be made of biological parts) but into which we install a state of mind that makes it behave as if it were? I’m talking about creating a robot patient that we would somehow program into thinking it has cancer, so that doctors could talk to it and it would respond and behave the same way a cancer patient would. It would be a great tool for doctors learning to understand patient behavior and how to meet patients’ needs relationally, and I can see uses in studying psychology and philosophy as well.
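To make the idea concrete, here is a minimal, purely illustrative sketch of what the simplest version of such a "robot patient" might look like in software – a scripted persona rather than anything approaching real intelligence. Every class name, condition, and canned response below is hypothetical, invented for this example and not drawn from any real system:

```python
# A toy, rule-based "simulated patient" agent -- purely illustrative.
# All names and responses here are hypothetical, not from any real system.

class SimulatedPatient:
    """A scripted persona meant to mimic how a patient might respond."""

    def __init__(self, condition: str, mood: str):
        self.condition = condition  # e.g. "cancer"
        self.mood = mood            # e.g. "scared"

    def respond(self, doctor_utterance: str) -> str:
        """Return a canned reply keyed off simple phrase matching."""
        text = doctor_utterance.lower()
        if "how are you" in text:
            return (f"Honestly, I'm {self.mood}. "
                    f"The {self.condition} diagnosis is hard to take in.")
        if "treatment" in text:
            return "Can you explain the side effects? I'm worried about my family."
        return "I'm not sure what to say. Could you explain that more simply?"

patient = SimulatedPatient(condition="cancer", mood="scared")
print(patient.respond("How are you feeling today?"))
```

Of course, this is exactly the kind of shallow mimicry the post goes on to question: a lookup table of responses behaves like a patient in a narrow sense without understanding anything at all.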
As a Cognitive Science major, I can’t help but wonder: since scientists, philosophers, and engineers have not been able to agree on an exact theory of – let alone a replica of – an artificial intelligence that represents a normal, healthy human, how much harder would it be to create an accurate artificial replica of someone who is sick? After all, to model something we might call defective, don’t you need a complete understanding of the original, non-defective object first?
Another complication would be distinguishing whether we could create such a patient using what is called “weak artificial intelligence” vs. “strong artificial intelligence”. Weak artificial intelligence means creating a machine that merely behaves intelligently, while strong artificial intelligence means creating a machine that can actually think. The current goal of researchers is to create strong artificial intelligence, which is why we have supercomputers like Watson, which can apparently solve problems and answer questions by finding the information on its own. So if we were even able to create a machine that can behave like a patient, would it be because it has weak or strong artificial intelligence?
I believe there are many factors to consider, both in philosophy and in technology, before this possibility could ever be achieved. But for now, perhaps the best way to understand patient behavior is to communicate with the patient.