Human nature, and our ability to adapt seamlessly to new technologies that simplify our lives, creates a space for A.I. to be abused. In effect, we allow it to take over primary roles once assumed to be the exclusive domain of highly trained medical professionals. A.I. isn’t to blame for this.
We can build software to do quite literally anything; the sky and the extent of our creative processes are the only limits. Facial recognition software is a great example. It has made spectacular advances in the last five years, and not even a mask can help you evade its ever-present gaze. Unlimited budgets, presented by eager backers from the NSA and almost every other intelligence and law enforcement agency on the planet, have fostered a focused industry dedicated to perfecting the human version of “Where’s Wally”.
It may interest you to know that the deep learning company Megvii, in association with Huawei, has developed A.I. that can, with a reasonable degree of accuracy, determine the ethnicity of an Asian face. The machine learning model, described in a recent paper, successfully predicted Chinese Uyghur, Tibetan, and Korean ethnicity. Impressive, you might say, and particularly helpful in a dystopian world where the Chinese state allegedly seeks to identify certain segments of its population.
Quite a few recent A.I. projects have focused on identifying race, gender, age, and facial features simply from the sound of your voice. The systems then generate a face, constructed from whatever the software has identified in your voice. These tools have created an ethical minefield, made all the more complex by the gender debate and by the rigidly defined sexes the software assumes, parameters the developers never addressed.
There is nowhere on the planet that you can escape the reach of A.I. It has infiltrated our lives: it lives in our pockets via our smartphones, runs our cars and our homes, and is now making choices for us. It’s been doing that for a while. Think of the route Siri worked out for you this morning to avoid traffic. It’s a simple but effective example of the impact of A.I.
Did you decide how you reached your place of work, or did A.I. deliver you, safe and gridlock-free, to the office? Who just stocked your refrigerator, you or Alexa? Heard any tracks from your music library that you hadn’t played in a while? You do realize there’s nothing random about the random playlist option? A.I. doesn’t do random.
Ethics in Science
Medicine’s raison d’être is the care of human life. For this reason, and this reason alone, a carefully considered set of ethics developed alongside the profession, keeping everyone safe and ensuring medicine placed a premium on the value of human life. For generations, this ethical system of checks and balances has protected both the patient and the provider. That time is at an end, and it is not only A.I. that reflects this change in the field of medicine.
Encroachment and overlapping technologies from other fields of science and commerce have become commonplace in modern medicine. Technologies with the potential and ability to revolutionize and reshape the healthcare profession are introduced to a shell-shocked medical industry on an almost daily basis.
The scale of deployment of new technology is dizzying and unparalleled, and we are simply not prepared for it. Not ethically, not morally, and not as a society.
Medicine cannot rely on other fields of science, or on third-party agents, to respect its code of ethics. Unrelated fields, like the study and development of artificial intelligence, have, until recently, only touched on the moral and ethical implications of their work in passing. New committees and bodies are gradually emerging from within these industries to address societal ethics and safety, but they remain in their infancy as they grapple with the larger debates centering on societal values.
Unlike medicine, scientific projects and papers generated by the computational sciences aren’t held up to rigid ethical scrutiny, and it is easy to understand why. Until very recently, A.I. ethics occupied only the domain of philosophical discourse. Technology’s rapid rise caught everyone by surprise, including the technologists.
Take the challenge of building a face simply from a voice sample. Any self-respecting coder is going to jump at the opportunity to develop it. The thrill of the challenge aside, the potential applications for systems like this are legion, and that’s exactly where the rub comes in.
Technology, A.I. included, is neither self-aware nor prescient. It can, currently, only perform the tasks we assign it, and it does not provide flawless solutions. A.I. makes mistakes, simply because it is, for now and the foreseeable future, reliant on human coding, on our biases, and on the flawed models we provide it.
We are fallible and therefore A.I. is fallible. It is the one key hurdle A.I. may never overcome. To create itself, it requires systems. Systems we develop, reflecting our biases and flawed ethics.
Recognizing the Limits of Technology
Humans are unethical and immoral creatures. We are all guilty of crossing lines we recognize and choose to ignore, from small daily transgressions to world-altering genocide. Our world is inhabited by areas of gray, shades of right and wrong that we blend continually, most often for the benefit of self. Technology, and in particular A.I., is a tool, an extension of our interpretation of this world, and, as always, the tool is not inherently evil. Its fate, or intent, rests in the hand that wields it.
This is why the field of ethics matters in medicine. It is also why the medical industry cannot rely on outside models to dictate the safety and implementation of its patient-centered view of the world. To do so would be both irresponsible and naive, and a failure to address emerging technology now will result in a dystopian future for medicine, possibly within the decade.
Tools do not, and cannot, make decisions on our behalf or on behalf of the patient. They exist merely to serve provider and patient alike.
Rules of engagement are urgently required, rules that emanate from within medicine and dictate the reach and limits of artificial intelligence in human-based care settings.
Let’s examine that statement in relation to care settings with a simple example. We’ll return to our facial recognition software and its potential to interpret intent and emotion. Assume it’s deployed into casualty wards across the country to improve patient assessment. The software analyzes patients’ faces, picking up signs it interprets as distress, and places patients awaiting care into a queue based on its assessment of their distress.
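To see how little “intelligence” such a queue actually needs, here is a minimal sketch of the ordering step. Everything in it is hypothetical: the class name, the 0-to-1 “distress score”, and the idea that a model emits such a score are illustrative assumptions, not any real hospital system’s design. The point is that once patients are ordered by a machine-assigned score, the machine is, in effect, deciding who is seen next.

```python
import heapq
import itertools

class TriageQueue:
    """Hypothetical queue ordering patients by an A.I.-assigned distress score."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: preserves arrival order

    def add_patient(self, name, distress_score):
        # Higher distress should be served first, so negate the score
        # for Python's min-heap.
        heapq.heappush(self._heap, (-distress_score, next(self._counter), name))

    def next_patient(self):
        _, _, name = heapq.heappop(self._heap)
        return name

queue = TriageQueue()
queue.add_patient("patient_a", 0.40)  # scores would come from a face-analysis model
queue.add_patient("patient_b", 0.85)
queue.add_patient("patient_c", 0.40)

print(queue.next_patient())  # patient_b: highest score jumps the queue
print(queue.next_patient())  # patient_a: tie broken by arrival order
```

Note that nothing in this sketch asks a clinician anything; the score alone dictates the order, which is precisely the hand-off of decision-making the rest of this section warns against.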
I can almost guarantee you that a system similar to this is currently under development. Software used by the DOD to analyze faces for perceived threats would be an excellent starting point and it would make sense to incorporate this into a diagnostic system to prioritize triage.
The problem arises when we empower A.I. to make its own decisions, rather than using its output to inform ours. When A.I. decides who gets to see the surgeon next, we have a problem. That is unacceptable.
The true purpose of A.I. is to supplement the human brain, to allow for millions of variables we cannot process, to consider everything, and to increase or expand the options for human-based choice. The ways in which we choose to implement these digital gifts in the practice of healthcare are where the issues arise.
Redefining Medical Ethics for a Technological Future
A.I. exists to augment, not dictate. This one simple rule should form the basis for a new arm of medical ethics, one that engages specifically with technology. At the heart of this supplemental code of ethics, as ever, are the patients, their safety, their health, and their access to care.
It isn’t simply the care patients receive, but their access to it, and the spider of technology has spun an all-encompassing digital web that envelops all of modern healthcare. A.I. is everywhere. It controls data and mines patient information on behalf of third parties, health insurers, drug developers, and federal agencies.
Each A.I. system functions independently and is subject to the biases and preferences hardcoded by its developers, who instruct it according to their own pursuits and designs.
Patients are punished in a myriad of ways: refusal of care, higher insurance premiums, medications prescribed on the basis of algorithms rather than clinical evidence, flawed diagnoses; the list is endless. A.I.-based diagnoses of scans, for instance, pick up early signs of disease invisible to the trained human eye, leading to preemptive treatment for conditions that may, or may not, eventuate over time.
Evaluative software, which analyzes provider efficiency, is another example. Do you think for an instant that healthcare chains, hospitals, and large clinics deploy this software for the benefit of the patient? Results and rankings are based on profit and efficiency, not patient-oriented outcomes. This is the inevitable cost of developing healthcare solutions in commercial isolation.
The issue is broad, complex, and impacts every aspect of healthcare. Telehealth is another perfect example. What do we know of the companies that offer remote platforms for mental health care providers? Has it occurred to the proponents of these systems that, unbeknownst to you or your patient, the A.I. running the system may be under instruction? It is more than likely analyzing voice patterns and facial signals to decide which services would be most appropriate to offer both you and your patient the next time either of you opens a browser.
You aren’t alone in that virtual consultation. There is a third party present and it’s called A.I. To assume otherwise is to be naive.
Who checks? Who validates the software we have so willingly accepted, as beyond reproach, into the lives of providers and patients? It’s time to wake up from our self-indulgent siesta before commercial interests completely replace the inefficient and naive humans who inhabit healthcare. A.I. won’t be to blame; it is simply a tool in the hands of unethical forces who would have their way with the highly profitable platform of healthcare.
It is up to the industry to look out for itself, and now would be a good time to start. The clock has been running for a while. A.I. was never, and never will be, the enemy. It is an essential and integral part of healthcare’s future. We simply need to manage it responsibly, for the sake of our patients.