Over in North Central Pennsylvania, healthcare provider Geisinger is creating artificial intelligence models that can beat trained cardiologists with years of experience. The key ingredient is an electronic health record (EHR) that stretches back to the 1990s.
This is exactly the sort of research that proponents of electronic records have long put forward to justify their existence. Meanwhile, for the last year, Australians have wrestled with whether to submit to a national EHR regime or choose to opt out. As of February, around 10% of eligible people, over 2.5 million, had decided not to be part of the system.
Having looked at the My Health Record discussion, and the very real progress the likes of Geisinger have made thanks to electronic records, an unanswered question remains lurking in the back of my mind: If you opt out of an EHR, are you morally justified in benefiting from the advances made from the records of others?
To be clear, there is no suggestion that treatment would be withheld in the future from anyone who opted out, should a piece of research based on electronic records lead to the provision of better treatment.
But if the dreams of the opt-out campaign were realised, then My Health Record would not have a large enough patient base to be useful for research, and future systems trained on that EHR data could not be created.
Therefore, is it hypocritical to benefit from a regime that one fought against?
Not that the fight is without merit: Data breaches of medical history are very real. But there is a question of whether this is a necessary risk to ensure the progress of medicine.
In effect, are those who opted out asking the rest of society to carry the risk of signing up to a system, while they potentially benefit at no cost? Or to put it another way, if you opt out of an electronic health record, are you freeloading as others face the risk of privacy breaches or data misuse?
As someone who opted out of My Health Record, it's a moral question that has stayed in the back of my head for weeks. In some small way, in order to protect the privacy I regard as important, am I holding back the progress of medicine? In coming years, will there be a subtle haemorrhage that proves fatal to someone, one that could have been detected if only a neural network had been trained with more data?
It's a heck of a question to reckon with, but it's the kind of moral quandary that warrants thought as machine learning and artificial intelligence have an ever greater impact on our lives.
ZDNET'S MONDAY MORNING OPENER:
The Monday Morning Opener is our opening salvo for the week in tech. Since we run a global site, this editorial publishes on Monday at 8:00am AEST in Sydney, Australia, which is 6:00pm Eastern Time on Sunday in the US. It is written by a member of ZDNet's global editorial board, which is comprised of our lead editors across Asia, Australia, Europe, and North America.
PREVIOUSLY ON MONDAY MORNING OPENER:
- Cybersecurity is broken: Here's how we start to fix it
- Video games meet enterprise technology, business: The intersection blurs more
- Windows 7 versus Windows 10: Here comes the big push
- Interference of Things: A Sydney smart city story
- Cloud cost control becoming a leading issue for businesses
- Foldable phones could finally push office workers away from the PC
- Your smartphone is going to look a lot stranger next time around
- Good enough 5G fixed-wireless broadband could change everything
- 5G initial use cases are going to be all about business