Healthcare Organizations Must Address the Security and Privacy Risks of AI

Artificial intelligence has many compelling use cases in healthcare. Computer vision systems, for example, can identify patterns that humans might miss. A recent study published in The Lancet found that AI-assisted analysis of medical images improved breast cancer detection by about 20 percent, and a 2022 Mayo Clinic study found that AI-assisted colonoscopies cut the rate of missed cancers by 50 percent.

Machine learning systems can rapidly analyze vast clinical documentation and predict medical outcomes, enabling doctors to make more accurate diagnoses. AI can also identify previously unknown correlations in healthcare data, paving the way for new drugs and treatment plans.

AI systems based on large language models (LLMs) are increasingly accurate. A recent study published in the Journal of Medical Internet Research found that ChatGPT made accurate clinical decisions about 72 percent of the time. Marc Succi, M.D., one of the study's authors, said the chatbot's accuracy was comparable to that of an intern or resident.

Patients Wary of AI

Natural language processing shows great promise for enhancing the patient experience. NLP systems can quickly process information in electronic health records to identify potential risks and relevant treatments. Speech-to-text applications can transcribe clinical notes, enabling physicians to spend less time looking at computer screens and more time delivering personalized services. AI can also automate many administrative processes, saving time and money and minimizing human error.

But while clinicians are optimistic about the potential for AI in healthcare, patients are more cautious. A new Morning Consult survey of U.S. adults found that 70 percent are “concerned” about the increased use of AI in healthcare.

Almost half said they were comfortable with the use of AI for administrative tasks such as analyzing medical data and helping doctors take notes. The comfort level dropped sharply for clinical decisions: just 38 percent of respondents said they were comfortable with AI making diagnoses. More than three-quarters said they should be notified if AI is being used in their care.

Privacy and Security Concerns

Consumers’ concerns are not unfounded. AI systems come with a range of risks, including threats to the security and privacy of sensitive health information. When used in a healthcare environment, AI must be secured like any other application. However, organizations must also take additional steps to ensure that data is accessed and used appropriately.

For example, a hospital might want to analyze data across its environment to determine how many people suffer from X disease in a given timeframe. The objective might be to determine how large to make a particular hospital wing or how much to invest in certain types of diagnostic equipment. This is a great use of AI. An AI system can rapidly process vast amounts of data, extract the relevant information and produce the requested analysis.

To protect patient privacy, however, the hospital must ensure that the data is anonymized correctly. The data must also be properly tagged and categorized, and permissions set based on how the data is used rather than the role of the individual requesting the information.
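As a rough illustration of what use-based permissions might look like in practice, here is a minimal Python sketch. The field names, purpose labels, and functions are hypothetical assumptions for the example, not a real system's API: requests tagged with a population-level purpose receive only anonymized records.

```python
# Hypothetical sketch: purpose-based access to health records.
# Field names and purpose labels are illustrative assumptions.

PHI_FIELDS = {"name", "address", "dob"}  # direct identifiers to strip

def anonymize(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

def query(records: list, purpose: str) -> list:
    """Gate access on the purpose of the request, not the requester's role.

    Population-level analytics (e.g., counting disease incidence to size
    a hospital wing) only ever sees anonymized data.
    """
    if purpose == "population_analytics":
        return [anonymize(r) for r in records]
    raise PermissionError(f"purpose not permitted: {purpose}")
```

In a real deployment, the purpose tags and permission rules would come from the organization's data governance policy rather than being hard-coded.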

Context Is Key

If a doctor is using AI to analyze patient-specific information, the output should include the patient's name, address, health history and other personal data. If the same doctor asks a more general question, however, the output should not contain personal data. That's tricky. In most organizations, access control is black and white: either you have access to the information or you don't. With healthcare AI, it gets grayer. An individual might have permission to access the information, yet the data should still be excluded given the context of the request.
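The distinction above can be sketched in a few lines of Python. This is an illustrative assumption of how a context check might work, with made-up field and scope names, not an implementation of any particular product: the same authorized user gets identifiers only when the request context is patient-specific.

```python
# Hypothetical sketch: context-aware filtering of AI output.
# Scope labels and field names are illustrative assumptions.

IDENTIFIERS = {"name", "address"}  # personal data to withhold by default

def filter_output(record: dict, request_context: dict) -> dict:
    """Include identifiers only when the request is patient-specific.

    Even an authorized clinician asking a general question should
    receive de-identified data.
    """
    if request_context.get("scope") == "patient_specific":
        return dict(record)  # treating clinician, specific patient
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}
```

The point of the sketch is that the decision key is the context of the request, not the requester's role, which is exactly where conventional role-based access control falls short.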

That's where DeSeMa's expertise can help. We can set permissions on data and configure the AI system to recognize context when generating output, so it knows when to anonymize data and when it can safely identify individuals.

AI in healthcare holds great promise but also comes with significant risks. DeSeMa can help healthcare organizations tap into the value of AI while ensuring the privacy and security of sensitive health information.

Get Started Today!