Category Archives: Humanity

Trust the Knife

Last summer my father had a basal cell carcinoma removed. It was a dime-sized patch of skin just to the left of his nose. Although this form of cancer is rarely deadly, it was still a sobering experience for my family, especially considering we are all fair-skinned and highly susceptible to skin cancer. Living in southern California does not help either. The surgery was successful, and after one year and several cortisone shots, one can barely tell my dad had a chunk of flesh taken out of his face.

My dad did, however, have one problem with his treatment process. It wasn’t the hospital facilities or the painful tending to his wound every night after the surgery. Instead, his biggest issue with the whole experience was that his doctor rarely talked during checkups. Throughout the process, I remember him constantly bringing up how the doctor would come into the room, examine him, and then most often leave without uttering a single word. When my dad tried to ask how everything was going, the doctor would nod and mutter inaudibly under his breath. The only words the doctor ever said to my dad concerned what he was going to do and that my dad had to make another appointment with his secretary. The nurse was responsible for explaining why they were doing surgery and providing background information on this form of cancer. My dad was really turned off by his doctor’s lack of enthusiasm and transparency. I was shocked that a doctor, whose job it is to form a bond with his or her patient and instill trust, would not share information face to face and would instead use nurses to convey the reasoning for the treatment.


To me this kind of doctor seems to be of the old-school type: those who believe "you do what I say and everything will be okay." While many younger doctors focus on good bedside manner, many still practice these old-fashioned principles. Granted, my dad’s doctor is in his late seventies, so he is most likely a byproduct of this archaic brand of practicing medicine. Nonetheless, this example draws attention to the necessity of doctor-patient communication. It is important not only that communication take place regularly but that the patient feels he or she is on a level playing field and can speak freely. The best way to ensure patient involvement is for the doctor to speak more often, using language the patient can understand, in a pleasant and familiar tone. In this class we have learned a lot about how technology can enhance communication, but it is vital we do not forget that quality care involves personal conversation, the kind that creates an atmosphere conducive to establishing trust.

Artificial Intelligence and Healthcare

I’m taking a philosophy class that touches a lot upon what cognition really means. That led me to thinking: if we’re getting closer and closer to developing artificial intelligence that rivals human intelligence, could we develop artificial intelligence to solve problems within healthcare?

Solutions are already in the works. A 2013 study from Indiana University showed that artificial intelligence models could improve patient outcomes and reduce the cost of healthcare compared to physicians by roughly 50%. Using 500 randomly selected patients for simulations, the researchers compared actual doctor performance and patient outcomes against sequential decision-making models, all using real patient data. They found a great disparity in the cost per unit of outcome change: $189 for the artificial intelligence model versus $497 for treatment as usual.
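To get a feel for the size of that disparity, here is a quick back-of-the-envelope check. The two dollar figures come from the study as quoted above; everything else (variable names, the derived percentage) is just illustrative arithmetic, not from the study itself.

```python
# Figures quoted from the Indiana University study; the computation is mine.
ai_cost = 189.0     # AI model's cost per unit of outcome change ($)
usual_cost = 497.0  # treatment-as-usual cost per unit of outcome change ($)

savings = usual_cost - ai_cost
savings_pct = savings / usual_cost * 100

print(f"Absolute savings: ${savings:.0f} per unit of outcome change")
print(f"Relative savings: {savings_pct:.0f}%")  # roughly 62%
```

So by these numbers the simulated models cost about 62% less per unit of improvement, which is in the same ballpark as the "50% better" headline claim.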

However, one problem with replacing physicians with artificial intelligence is that it may remove the doctor-patient relationship from the equation and undermine the importance of human relationships in the treatment process.

We are reaching a time in our society in which we are slowly developing the tools needed to create intelligent beings that could solve problems. But a key distinction so far is that our goal in artificial intelligence has always been to create something as good as or better than an average human.

But what if we switched that around? What if our goal was actually to create an artificial intelligence that had a problem itself? For example, could we develop an artificial intelligence that thinks like a patient in order to understand patient behavior?

There are plenty of virtual reality programs that let doctors practice surgery on specific parts of the human body, and we now know artificial intelligence could even replace doctors in diagnosis. But could there one day be an artificial intelligence modeled after a sick person: an intelligent agent that may not be biologically sick (if it’s a robot, it may not be made of biological parts), but into which we install a state of mind that makes it behave as if it were sick? I’m talking about creating a robot patient that we would somehow program into thinking it has cancer, so that doctors could talk to the robot and it would respond and behave the same way a cancer patient would. It would be a great tool for doctors to understand patient behavior and how to meet patients’ needs relationally, and I can see the uses it may have in studying psychology and philosophy as well.

As a Cognitive Science major, I can’t help but wonder: if scientists, philosophers, and engineers have not been able to agree on an exact theory and replica of an artificial intelligence that represents a normal, healthy human, then how much harder would it be to create an accurate artificial intelligence that is a replica of someone who is sick? After all, to model something we might call defective, don’t you need a complete understanding of the original, non-defective object first?

Another complication is distinguishing whether we could create such a patient using what is called “weak artificial intelligence” versus “strong artificial intelligence.” Weak artificial intelligence means creating a machine that behaves intelligently; strong artificial intelligence means creating a machine that can actually think. The current goal of researchers is to create strong artificial intelligence, which is why we have supercomputers like Watson, which can apparently solve problems and answer questions by finding the information on its own. So even if we were able to create a machine that behaves like a patient, would it be because it has weak or strong artificial intelligence?

I believe there are many factors to consider both in philosophy and in technology before this possibility could ever be achieved. But for now, perhaps the best way to understand patient behavior is to communicate with the patient.


Guided Medicine or Big Brother: A Thought Experiment

Self-tracking devices have been lauded as the potential solution to filling in the gaps in traditional clinical data collection.  Oftentimes, measurements in the doctor’s office are not truly indicative of the patient’s everyday behavior and lifestyle; patients may experience white coat syndrome, or increased anxiety in the presence of the doctor.  Automatic self-tracking in everyday living may provide more accurate data because the data is collected in more natural settings.

One of the goals of self-tracking is to model and predict human behavior.  This sounds quite promising; however, how does this automated self-tracking actually come about?  Would we want our personal handheld devices to predict our next moves?  And what a fascinating thought experiment it would be to have our phones, these inanimate devices, give us life suggestions.  But oh wait, they do.

Google Now carefully watches its users’ every interaction to improve its efficacy.  It can predict where you will go judging by your past behavior.  It can detect that on Wednesdays, you like to get a Grande green tea frappe at Starbucks before your Russian literature class, and sometimes, when you’re having a particularly packed week, you treat yourself and venture into the bold Venti end of the spectrum.  While Google Now could notify you of a promotion on green tea frappes, it might instead subtly suggest another drink, perhaps one with fewer calories and a lower fat content.
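Google Now’s actual models are proprietary and far richer than this, but the core idea of predicting your next move from your past behavior can be sketched with a toy frequency-based predictor. The visit log and function below are entirely hypothetical, just to show the principle:

```python
from collections import Counter

# Hypothetical visit log of (weekday, place) pairs. A real system would use
# location history, time of day, calendar data, and much more; this sketch
# only captures the simplest signal: where you usually go on a given day.
visits = [
    ("Wed", "Starbucks"), ("Wed", "Starbucks"), ("Wed", "Library"),
    ("Thu", "Gym"), ("Wed", "Starbucks"), ("Thu", "Gym"),
]

def predict_destination(weekday, history):
    """Predict the most frequently visited place for a given weekday."""
    counts = Counter(place for day, place in history if day == weekday)
    return counts.most_common(1)[0][0] if counts else None

print(predict_destination("Wed", visits))  # Starbucks
```

Even this crude version "knows" about the Wednesday Starbucks habit after a handful of observations, which hints at how little data such predictions really need.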

Popular Science named Google Now its 2012 “Innovation of the Year” for its potential to serve as an “intelligent personal assistant.”  It can infer your age bracket from your recent searches and tailor advertisements to your curated predilections.  For your mother, it can suggest her favorite hair dyes or jewelry boutiques, but what if one day following her sixtieth birthday, it begins suggesting cholesterol medicine and life insurance?  While this teeters on the edge of being mildly insensitive, it may regrettably be a sensible recommendation.

But it doesn’t stop there.  Google Now has a minute-by-minute map of your life.  Not only can it suggest nearby attractions and events, but it can also summarize your daily physical activity.  Given your latest late-night food adventures, it could now suggest restaurants with healthier vegetarian options.  It could also suggest a route that requires more physical exertion (to make up for that discreet donut run that you thought went undetected), and in your hurry, you wouldn’t notice that it was slightly more strenuous, with a steeper incline of about two degrees.

Physicians have the potential to produce mobile health applications that use the same tracking devices as Google Now.  While they have the promise of displaying customized content and advertisements, they can also subtly suggest healthier eats and longer walking routes.  With smartphones constantly linking accounts and contacts, mobile health applications will soon be connected to the information collected by Google Now.  And suddenly, without your conscious awareness, you will be forced to be utterly and irrevocably healthy.

Instant Access to Yourself

With our constant obsession with technological advancements and the fashionable desire to be the first owner of the newest products, we must remember what we already have.  And this isn’t just a banal platitude about being grateful for what we have.  Even though the answers to the world’s problems seem to lie in the continued miniaturization of sensors and further embedded systems, have we forgotten what is already available to us?  Perhaps we should shift the focus from finding the most sophisticated devices to becoming more proficient with what already exists.

In the summer of 2013, I took a psychiatry course at the Geffen School of Medicine at UCLA.  The course had the rather grandiose title “Personal Brain Management,” yet that was exactly what the physician taught.  It turns out that having greater control over what we think and how we think can protect us from a wealth of illnesses.  The only technological advancement I needed to supplement my project was a thermometer, yet that was enough.

My independent project focused on utilizing biofeedback for Mindfulness-Based Stress Reduction (MBSR).  MBSR advocates that practicing mindfulness meditation can help reduce stress and promote greater mental and physical health.  By using a simple stress thermometer, I was able to increase my awareness of my body temperature.  While such a physiological marker may seem to be beyond our control, managing our internal thermostat is surprisingly possible. Roughly speaking, more relaxed states are correlated with increased body temperature, and the thermometer served as a means to quantify these changes.

With just a crude thermometer in hand, I was able to cultivate my relaxation response (in contrast to the familiar stress response).  At the end of a six-week trial, I found that I was better able to control my body temperature, and I scored significantly lower on a battery of stress measures.  For my project, I did not need a smartphone or the newest Nike product or the most sensitive sensors.  I needed myself and 30 minutes of my day.  And am I really so important that I cannot sacrifice the entirety of 30 minutes to myself?
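The analysis behind a trial like this can be remarkably simple. The readings below are hypothetical stand-ins (I no longer have the original log), but the comparison of a baseline week against a final week is exactly the kind of arithmetic the project required:

```python
# Illustrative only: hypothetical hand-temperature readings (degrees F) from
# a biofeedback log, one reading per session. Roughly speaking, higher
# temperature tracks a more relaxed state.
baseline_week = [88.1, 88.4, 87.9, 88.6, 88.2]
final_week    = [91.3, 90.8, 91.6, 91.1, 91.4]

def mean(xs):
    return sum(xs) / len(xs)

# A positive change suggests greater relaxation by the end of the trial.
change = mean(final_week) - mean(baseline_week)
print(f"Average change: {change:+.1f} degrees F")
```

No sensors, no apps: a thermometer, a notebook, and a subtraction are enough to see whether the relaxation response is taking hold.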

In our constant and desperate search for what is new, let’s not forget that we have instant access to ourselves.  While innovative electronic devices can help us organize data and take measurements, let’s not get carried away with their seemingly whimsical promises.  It is as much our duty to discover and invent as it is to make more effective use of what already exists.  By remembering that the first generation of iPhones was released in 2007, we become aware of the humbling reality that perhaps society can function without a supercomputer in hand.

While simple and sophisticated mobile health applications can encourage patients to become more empowered, decreased reliance on digital technology is in its own right just as empowering.  My project at UCLA showed me that I could become more self-sufficient and cultivate my body’s natural capacity to heal with a minimalist approach to technology use.

Taking Heart Transplants to the Next Level…But Should We?

It’s crazy how relevant this is to our project: I saw this news story shared on my Facebook newsfeed. Link here because I can’t embed the video for some reason.

While we’re working on telling the story of artificial hearts in the Texas Medical Center, researchers at the Texas Heart Institute right here in Houston are taking heart transplants to the next level. Bypassing even a total artificial heart transplant, they are now using stem cells to turn pigs’ hearts into hearts that can work in humans. In the video, you can see that so far they have successfully transformed the cells of a pig’s heart into a mold of a human heart, and the next step is to seed cells inside the heart so that it will properly perform its pumping function. It was crazy how the reporter was able to hold this modified heart (still white from being grown with stem cells) and squeeze it like it was a toy.

However, the meat of the piece came when the reporter began questioning Dr. Doris Taylor, the head researcher, on the ethical implications of conducting this stem cell research. I was surprised at how quickly Dr. Taylor defended her work, probably because this question is commonly asked and attacked. Instead of dwelling on the lives that might be lost by using stem cells, she reasoned that because she had the ability and the tools to save lives, even if those tools were stem cells, it would be “morally wrong not to go forward using those tools.”

I noticed how this video used emotional (pathos) and ethical (ethos) appeals to convince the viewer to support the stem cell research. The beginning of the news piece features a young woman who, suffering from a terminal heart disease, waited for and eventually received a traditional heart transplant from a dying man. I was confused at first because I thought the piece was going to be this young woman’s story, but it instead turned into a story about stem cell research. However, they brought her back at the end and asked whether she would support someone receiving a heart made from stem cells; with tears in her eyes, she talked about how lucky she was to get a heart and how, if it were possible in any way for others in need to get the same, she was all for it. Now, I do have my own opinions about whether it’s morally right to conduct stem cell research, and I won’t be sharing them here, but to me it was an obvious storytelling tactic to get viewers to sympathize with and support stem cell research.

The concept of ethical conduct in research and treatment has long been an issue for physicians. Dr. Akers faced similar concerns and backlash when he tested artificial hearts on animals, and in society today the hot topic is the consequences of using stem cells. I am not a medical student, but I have heard that when students first enter medical school they recite the Hippocratic Oath, vowing to take care of the patient as best they can and to do no harm. But for the physician (and the government), is the best way possible a solution that involves stem cells, and should stem cell research be considered unfairly taking one life to save another? Or is it indeed morally wrong not to use whatever means possible to save a person’s life?
