Legal considerations when using artificial intelligence in practice
Legally Speaking
By Daniel F. Shay, Esq, March 1, 2025
DermWorld covers legal issues in “Legally Speaking.” This month’s author, Daniel F. Shay, Esq., is a health care attorney at Alice G. Gosfield and Associates, P.C.
ChatGPT launched on Nov. 30, 2022, propelling “artificial intelligence” (“AI”) from the pages of science fiction novels into one of the hottest buzzwords in popular culture. Today, AI is everywhere. We may not have holographic doctors and robot butlers yet (to say nothing of flying cars), but the tech industry continues to tout the potential that AI offers, while inserting AI functionality into a broad range of software. The medical industry is likewise already using AI, and likely will expand this use in the future. However, such use must be undertaken carefully, and smaller physician practices should be cautious about adopting AI without a good understanding of how it works, and the legal and practical risks it poses.
AI overview
At a baseline, AI in its current form, especially “generative” AI, is best understood as a sophisticated tool that is very good at recognizing and reproducing patterns. Without turning this article into a computer science primer, the AI currently under discussion usually uses a form of “machine learning” to process information rapidly and recognize the patterns within that information.
As a simple example of “machine learning,” consider a streaming music service that responds to your “thumbs up” and “thumbs down” selections by presenting you with different music that shares certain common features with the music you “thumb up.” That responsiveness is arguably an example of “machine learning.” The software has, either through human programming or on its own, developed the ability to recognize common features across songs (e.g., tempo, instrumentation, musical genre, etc.), and then responds to user input to further fine-tune the results it produces. This functionality distinguishes the software from a standard media player, which may only be able to “repeat” or “shuffle” a playlist and is otherwise limited to what the user inputs.
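To make that feedback loop concrete, the sketch below is a deliberately simplified, hypothetical illustration (it is not any streaming service’s actual algorithm): the “model” is nothing more than a running score for each song feature, nudged up or down by the listener’s reactions.

```python
# Hypothetical sketch of feedback-driven recommendation (illustrative only).
# Each song is described by simple features; the "model" is a running score per feature.

from collections import defaultdict

feature_scores = defaultdict(float)  # learned preference for each feature


def record_feedback(song_features, thumbs_up):
    """Nudge feature scores up or down based on the listener's reaction."""
    delta = 1.0 if thumbs_up else -1.0
    for feature in song_features:
        feature_scores[feature] += delta


def score(song_features):
    """Estimate how well a new song matches the learned preferences."""
    return sum(feature_scores[f] for f in song_features)


# The listener likes an up-tempo rock song and dislikes a slow ballad...
record_feedback({"rock", "fast_tempo", "guitar"}, thumbs_up=True)
record_feedback({"ballad", "slow_tempo", "piano"}, thumbs_up=False)

# ...so a new guitar-driven rock track now scores higher than a piano ballad.
print(score({"rock", "guitar"}))   # positive score
print(score({"piano", "ballad"}))  # negative score
```

Real recommendation systems are far more sophisticated, but the principle is the same: the software extracts patterns from data and adjusts its output in response to feedback, rather than simply replaying what the user selected.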
Generative AI tools, such as ChatGPT for text or DALL-E and Midjourney for images, use machine learning to create “statistically probable” outputs when prompted by a user. Note that “statistically probable” refers to the probability that the software produces a result akin to what the user requested and consistent with the training data provided to the software. For example, if prompted to describe the experience of eating a cheeseburger, a generative AI may rely on common factors and data that appear across a broad variety of sources (advertising copy, menu descriptions, restaurant and dish reviews, recipes, and so on) to produce a “statistically probable” description of “eating a cheeseburger.” This becomes more noticeable if one submits the same query multiple times, at which point one can sometimes spot repeated turns of phrase in each of the AI’s responses, underscoring how these tools deal primarily in patterns.
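As a loose illustration of “statistically probable” output, consider the toy sketch below. It is an assumption-laden simplification, not how ChatGPT or any commercial product actually works: it merely counts which word tends to follow which in a small training text and then generates new text by repeatedly picking a likely next word.

```python
# Toy "generative" model: a bigram chain that samples a statistically likely next word.
# A simplified illustration of pattern-based output, not any real AI product's method.

import random
from collections import defaultdict, Counter

training_text = (
    "the cheeseburger was juicy and the bun was soft and the cheese was melted "
    "and the cheeseburger was delicious"
)

# Count how often each word follows each other word in the training data.
next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1


def generate(start, length=8):
    """Generate text by repeatedly choosing a statistically likely next word."""
    output = [start]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed the last one.
        choices, counts = zip(*candidates.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)


print(generate("the"))  # e.g., "the cheeseburger was juicy and the bun was soft"
```

The output is fluent because it mirrors patterns in the training text, not because the software understands cheeseburgers; that distinction is at the heart of the risks discussed below.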
Health care uses
The integration of AI in the health care industry is still in its infancy, but we are likely to see greater adoption of this new technology over time. At present, AI is used most often in the field of radiology, but it is easy to see how the technology could be adapted to similar uses. Just as AI can, within radiology, distinguish between “normal” and “abnormal” imaging results, it is not hard to imagine how it could be used, for example, to distinguish between “normal” and “abnormal” pathology slides from biopsies in the dermatology space. Some hospitals are also using AI to detect malnutrition or Alzheimer’s disease in certain patient populations, or to screen records for patients due for a lung screening or a colonoscopy.
The ideal environments and use cases for AI tend to be those with a great amount of repetition but lower risk. This lends itself toward administrative tasks, although we will still see AI integrated into clinical settings where tasks are repetitive and risks can be mitigated. In the administrative realm, one can envision good use cases for AI in performing certain front desk tasks, such as sending appointment reminders, administering pre-visit web portal questionnaires, or performing certain intake services when patients arrive to assist with patient triage. Within the billing space, AI could be used to standardize the preparation and submission of claims, as well as to learn a physician’s shorthand to allow for better, more thorough notetaking. If the AI were sufficiently advanced, it could assist with prior authorizations as well (ideally, on both ends of the discussion, so there is less need for physician involvement). The key with any of these applications, however, is to ensure proper oversight from professionals, especially when AI is used in a clinical capacity.
Risks
Using AI in medical practice is not without risks, because AI is not infallible. Generative AIs are known to produce “hallucinations,” which occur when, in response to a user query, the AI generates a result based on non-existent patterns, or patterns unrecognizable to humans, or one that is simply nonsensical or otherwise inaccurate. For example, AI art programs may draw a human figure with too many fingers or teeth. Lawyers have been disciplined for submitting legal briefs with citations to non-existent cases that the AI “imagined” existed. In my own practice, two separate clients used ChatGPT to research Medicare billing rules and were given hallucinated results citing Medicare manual sections that do not exist.
This risk looms over all AI use, but especially clinical usage. Without human oversight, an AI could hallucinate an incorrect diagnosis and harm a patient, thereby exposing the physician to malpractice risk. For example, within a dermatology practice, one can imagine an AI being used to review biopsy slides or other test results and providing an incorrect diagnosis. If the physician fails to catch the error, the physician will be at fault. Similarly, a physician might query an AI about a patient’s symptoms and be provided with a response that is, in fact, incorrect. If the physician relies on the AI’s response without doing any additional research, the physician is still at risk for the misdiagnosis. Again, it bears repeating that AI is best used not for research, but for pattern recognition and reproduction.
Within the billing context, one can imagine how an AI could “hallucinate” that a specific upcoded service is appropriate, given a specific set of factors appearing in a patient’s chart. If such claims are submitted without effective oversight, the practice will still be held liable for any billing errors and will have to deal with overpayments. Failing to return federal overpayments in a timely manner could also expose the practice to False Claims Act liability.
The use of AI can also pose HIPAA risks, depending on how the AI uses patient information. If an EHR software suite rolls out an AI tool, that AI may be training itself on patients’ protected health information (PHI). This could implicate HIPAA if and when the practice using the software switches to a different EHR; the old software will still be (in a sense) “using” the practice’s PHI. The language of Business Associate Agreements may address this in some respects, but this is currently a regulatory grey area, and we shall see whether regulations are eventually produced to address it.

In the face of these risks, physician practices must be careful in their use of AI. Not every new use of AI is necessarily one a practice should adopt, depending on the risks involved and the practice’s capacity to mitigate them. Relatedly, when reviewing software license agreements for packages involving AI, physician practices should consider demanding that indemnification language be added to address the AI’s errors or omissions. The more autonomy the AI will have, the more the software vendor should be willing to stand behind its product. Traditionally, vendors have not offered such protections, but AI poses a new set of risks, and physicians should not be shy about requiring that the license agreement protect them from software malfunctions.
In addition to these efforts, physician practices may want to consider adjusting existing compliance policies and procedures to account for the use of AI. Knowledgeable health care counsel can assist in each of these efforts.
This article is provided for informational and educational purposes and is not intended to provide legal advice and should not be relied upon as such. Readers should consult with their personal attorneys for legal advice regarding the subject matter of this article.