Within the existing drugs market, the potential of AI for enabling new innovations has also been recognised, with a number of pharmaceutical companies dedicating significant resources to investigating machine learning techniques for drug discovery. Future applications are likely to include using AI techniques to identify treatments that could be repurposed4, or to detect new molecules or genetic markers for rare diseases5.
Artificial intelligence has also been embedded within more mainstream health products, with giants such as Apple, Amazon and Google making an obvious play toward the healthcare and wearables sector. A number of algorithms have gained FDA approval since 2014, starting with AliveCor’s algorithm for the detection of atrial fibrillation, which has also been approved for use in conjunction with the Apple Watch6,7. Another strong indicator of a technology-driven future for the healthcare industry came from a recent statement by Apple CEO Tim Cook, who proposed that Apple’s long-term legacy would lie not with its iPod or iPhone technology, but in its work in personal health8.
But are we data-ready?
However, harnessing the potential of AI within existing healthcare systems, both in the UK and internationally, will not come without its challenges. To work properly, AI systems need to be trained on vast amounts of high-quality data, which in turn requires standardised, electronic infrastructure. Currently in the UK, many providers use disparate IT systems and record and label their data in different ways, with some hospitals and practices still relying on paper-based records. The sheer volume of data that algorithms require for learning also brings practical challenges of its own in terms of storage and data management. Algorithms are ultimately only as good as the information they learn from, so new, fit-for-purpose data quality procedures will also be necessary to enable stakeholders to identify and address anomalies within datasets. With current estimates suggesting that less than 20% of the world's medical data is in a sufficient format to enable the routine use of AI9, it is clear that a huge amount of work remains before available data is truly usable.
Engaging stakeholders and evidence-based solutions
The successful use of AI within healthcare systems will also rely on the engagement of patients and healthcare professionals. The trust of patients will need to be earned, especially in light of increased awareness of data security across organisations and amongst the wider general public. This is something the NHS learned the hard way after entering a partnership with Google’s DeepMind in 2014: the Information Commissioner's Office later ruled that 1.6 million NHS patient records had been accessed and analysed without properly informing patients of how their details would be used10. Going forward, it will be of paramount importance that healthcare bodies demonstrate that procedures have been followed and that patients have been adequately informed about how their data is being used.
There are also misdiagnoses to consider, such as false positives, with experts warning that software may even be vulnerable to covert attacks that trick algorithms into misdiagnosing conditions11. Further legal and ethical questions will continue to arise, such as who is responsible when an algorithm misdiagnoses a patient, or how to manage the data of patients who do not consent to the use of their records.
Furthermore, despite claims that it will help reduce costs, the reliance on huge volumes of data and complex infrastructure means AI has the potential to be very costly, and stakeholders will need evidence that algorithms are truly effective and delivering genuine relief and impact for healthcare systems. Robust clinical and real-world evidence will be needed to convince stakeholders of the value and cost-effectiveness of new AI approaches. For instance, some have argued that GP at Hand was adopted too quickly, in the absence of robust evidence supporting its effectiveness for primary care12, which may become a significant barrier to its widespread adoption across the wider NHS.
A trusted sidekick (but not a replacement)
One of the biggest concerns among critics is that AI algorithms will be viewed as a replacement for experienced HCPs, with a resulting negative impact on patient care. Each patient and situation is different, and relying entirely on AI risks disregarding the valuable real-world experience, often gained from other parts of life, that healthcare professionals bring to their day-to-day work. As such, many feel that AI in clinical practice should be restricted to a supporting role, viewed only as an aid to, and not a replacement for, the expertise of trained medical professionals.
For AI to be routinely adopted for the benefit of patients there will need to be adequate infrastructure and standards in place that ensure the patient is kept at the heart of care. This concept has been recognised by NHSX, the new unit that plans to oversee digital transformation of the health and care system in the UK. One of the initial priorities for NHSX will be to create a “policy toolkit” in collaboration with patient and public groups that will help guide the responsible use of AI for healthcare delivery while ensuring the needs of the patient are not forgotten13.
Ultimately, while it does appear that the AI revolution has begun, bringing with it countless opportunities for our industry, there is undoubtedly further work to be done to instil confidence, generate evidence and truly harness the potential that artificial intelligence can offer.