Augmented intelligence in medicine: what the American Medical Association stands for and why it matters
The American Medical Association (AMA) has officially adopted the term augmented intelligence instead of artificial intelligence to describe the application of intelligent systems in medicine. This choice is not just semantics. It carries a clear philosophy: technology should serve as an assistant to healthcare professionals, amplifying the human capacity to diagnose, treat, and care for patients, without ever replacing clinical judgment. This distinction is essential to understanding the path that the largest medical organization in the United States has charted to guide the development, deployment, and use of AI in healthcare.
How augmented intelligence works in clinical practice
When we talk about augmented intelligence applied to healthcare, we are referring to a model where algorithms and intelligent systems act as a copilot for the medical professional. Picture a radiologist reviewing hundreds of imaging studies per day. Visual fatigue is real, and small details can slip through the cracks. With the support of artificial intelligence, that same professional receives alerts about suspicious areas, patterns that deserve a second look, and even differential diagnosis suggestions. The key point is that the final decision still belongs to the physician, who evaluates the patient’s clinical context, talks with them, reviews their history, and only then determines the course of action.
This collaborative dynamic between human and machine is what sets augmented intelligence apart from pure automation, and it is precisely why the concept has won over heavyweight organizations like the American Medical Association. The AMA House of Delegates formalized this approach as a conceptualization of artificial intelligence that emphasizes its assistive role, reinforcing that the design of these tools should enhance human intelligence rather than attempt to replace it.
In dermatology, for example, systems trained on millions of skin lesion images can identify patterns associated with melanomas with impressive accuracy. But no algorithm can ask the patient how long that spot has been there, whether there has been a recent medication change, or whether there is a family history of skin cancer. This human layer of interpretation, empathy, and clinical judgment is irreplaceable, and augmented intelligence explicitly acknowledges that.
The same applies to fields like cardiology, where wearables and smart monitors collect real-time data on heart rate and rhythm, but it is the cardiologist who knows the patient’s reality who decides whether to start or adjust a treatment. Technology delivers processed data and valuable insights, while the healthcare professional turns all of that into effective care.
The AMA’s policies for AI development, deployment, and use in healthcare
The AMA did not stop at choosing a nice term. The organization built a robust set of policies guiding how artificial intelligence should be developed, deployed, and used in the healthcare context. Its stated commitment is to ensure that AI reaches its full potential to advance clinical care and improve physician well-being. With a growing number of AI-enabled tools in the healthcare landscape, the AMA advocates that they need to be designed in an ethical, equitable, and responsible manner.
The guidelines published by the association address specific areas in considerable depth:
- AI oversight in healthcare — governance and monitoring mechanisms to ensure systems perform as expected
- Transparency — when and what to disclose to physicians and patients about the use of AI
- Generative AI policies — specific guidelines for language models and generative tools applied in the clinical setting
- Physician liability — defining the limits of a physician’s responsibility when using AI-enabled technologies
- Data privacy and cybersecurity — protecting patient information in an increasingly digital ecosystem
- AI use by health insurers — regulations on how payers and insurers use AI and automated decision-making systems
One point that deserves special attention is that the AMA recognizes AI is not limited to traditional medical devices. Increasingly, intelligent systems are used in healthcare administration and in reducing the bureaucratic burden that weighs on physicians. Clinical documentation, procedure coding, scheduling, and patient triage are all areas where AI already operates in a significant way. That is why the association’s policies cover both device and non-device applications, creating a more comprehensive and realistic regulatory umbrella.
The numbers that show how physicians view AI
In 2023, the AMA conducted a comprehensive study with more than a thousand physicians to understand how they viewed the use of artificial intelligence in healthcare. The survey assessed everything from current usage to motivations for future adoption, along with concerns, areas of greatest opportunity, and implementation requirements. Given the rapid pace of AI evolution, the study was repeated at the end of 2024 and again in 2026.
The most recent results are quite revealing. More than 80% of physicians report using AI in their professional work, double the rate recorded in 2023. Confidence has also grown significantly: in 2026, more than three-quarters of physicians say AI improves their ability to care for patients, a considerable jump from the 65% reported in 2023.
In 2023, only about 40% of American physicians used some form of artificial intelligence in their routine; the more-than-80% mark in 2026 shows an accelerated adoption curve. This shift did not happen by accident. Tools that genuinely function as support, without trying to replace clinical reasoning, earn professionals’ trust organically. When a physician realizes they can deliver better care, with more confidence and in less time, the natural resistance to technology gradually gives way to genuine integration into the workflow.
In 2026, the survey also expanded its scope to examine two additional areas: physicians’ perspectives on AI use by patients and medical training needs, including concerns about the potential loss of clinical skills as AI adoption grows.
At the same time, cautious optimism remains the dominant trait. About 40% of physicians say they feel both excitement and concern about the role of AI in healthcare. The main concerns revolve around protecting patient privacy and preserving the integrity of the physician-patient relationship. As adoption accelerates, solid clinical evidence and clear guidance for practical implementation continue to be essential.
The role of ethics and transparency in this transformation
With the rapid expansion of augmented intelligence in medicine, questions of ethics and transparency have moved to the center of the debate. It is not enough for an algorithm to be accurate if no one can explain how it reached its conclusion. This opacity, known as the black box problem of artificial intelligence, generates distrust among both professionals and patients.
If a system suggests that a particular lesion is malignant, the physician needs to understand which criteria were considered in order to validate or question that recommendation. Without this layer of explainability, the tool stops being a support and becomes a risk. That is why international guidelines increasingly require that developers of AI-based healthcare solutions adopt interpretable models, where each step of the algorithmic reasoning can be traced and understood by those at the point of care.
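To make the idea of traceability concrete, here is a minimal, purely illustrative sketch of how a linear risk score can itemize the contribution of each clinical finding. The feature names and weights are hypothetical, not drawn from any real clinical model:

```python
# Minimal sketch of an "explainable" risk score: a linear model whose
# per-feature contributions can be itemized for the clinician.
# All feature names and weights below are hypothetical illustrations.

WEIGHTS = {
    "lesion_diameter_mm": 0.30,
    "border_irregularity": 0.55,
    "color_variegation": 0.45,
}
BIAS = -2.0

def risk_score(features):
    """Return (score, contributions) so each term can be inspected."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return BIAS + sum(contributions.values()), contributions

score, parts = risk_score({
    "lesion_diameter_mm": 6.0,
    "border_irregularity": 0.8,
    "color_variegation": 0.5,
})

# Each contribution is traceable: the clinician can see which findings
# drove the score rather than having to trust an opaque output.
for name, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.2f}")
print(f"total score: {score:+.2f}")
```

Real clinical models are far more complex than a weighted sum, but the principle is the same: an output a physician can decompose and question functions as support, while an opaque score does not.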
The AMA itself addressed this issue in an article published in the Journal of Medical Systems, titled Trustworthy Augmented Intelligence in Health Care. The paper reviewed the literature on the challenges AI in healthcare presents and reflected on existing guidance, proposing practical paths toward trustworthy implementation.
Transparency also extends to the relationship with the patient. The AMA advocates that the use of AI in healthcare should be transparent to both physicians and patients. Knowing that technology is being used as a support tool, and not as a replacement for human judgment, strengthens the trust relationship. Some hospitals already include this information in consent forms, explaining in accessible language that intelligent systems assist in analyzing exams and formulating diagnostic hypotheses.
From an ethical standpoint, another significant challenge is algorithmic bias. Artificial intelligence systems are trained on datasets that do not always represent the real diversity of the population. If a dermatological algorithm was predominantly trained on images of lighter skin, its performance on darker skin tones can be significantly lower, leading to inaccurate diagnoses and widening inequalities in access to quality healthcare. Recognizing this problem and actively working to correct it is a shared responsibility among developers, research institutions, and healthcare systems.
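As a purely illustrative sketch of how such bias can be detected in practice, the snippet below computes sensitivity (true-positive rate) separately per skin-tone group on synthetic records; the data and group labels are hypothetical:

```python
# Illustrative check for performance disparity across subgroups.
# The records below are synthetic; in practice these would be model
# predictions on a held-out, demographically annotated test set.

records = [
    # (skin_tone_group, true_label, predicted_label) with 1 = malignant
    ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 0),
    ("darker",  1, 1), ("darker",  1, 0), ("darker",  1, 0), ("darker",  1, 0),
]

def sensitivity_by_group(rows):
    """True-positive rate (sensitivity) computed separately per group."""
    hits, totals = {}, {}
    for group, truth, pred in rows:
        if truth == 1:  # only positive cases count toward sensitivity
            totals[group] = totals.get(group, 0) + 1
            hits[group] = hits.get(group, 0) + (pred == 1)
    return {g: hits.get(g, 0) / totals[g] for g in totals}

# A large gap between groups signals the kind of dataset bias described
# above and calls for rebalancing the training data or retraining.
print(sensitivity_by_group(records))
```

Auditing metrics subgroup by subgroup on an annotated test set is a standard first step: the disparity must be measured before it can be corrected.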
Collaboration across medical specialties to shape the future of AI
The AMA created the AI Specialty Collaborative, an initiative bringing together 21 medical societies from different specialties. The goal is to ensure that physicians play a central role in defining how AI is developed and integrated into healthcare. This collaborative approach makes sense because each specialty has unique needs, workflows, and challenges. What works in radiology may not work in psychiatry, and vice versa. Bringing these diverse perspectives together in a single forum allows guidelines to be more comprehensive and applicable in real-world practice.
This kind of interdisciplinary collaboration is a major differentiator. Instead of letting technology companies define on their own how AI will be used in medicine, physicians themselves actively participate in the design, validation, and governance process. This increases the likelihood that the resulting tools will be genuinely useful in day-to-day clinical work and will respect the ethical principles that guide medical practice.
AI in medical education: training prepared professionals
Artificial intelligence is playing an increasingly important role across all stages of medical training. It functions both as a tool for educators and students and as a subject of study in its own right. The AMA recognizes that AI has the potential to transform the educational experience as part of precision education and, consequently, transform patient care as part of precision health.
In practice, this means future physicians are being trained not only to use AI tools but to understand their foundations, limitations, and ethical implications. This critical training is essential so that professionals know when to trust an algorithm’s suggestion and when to question its results. Medical education that integrates AI responsibly prepares a generation of professionals better equipped to navigate an increasingly tech-driven clinical landscape.
Recent updates and institutional milestones
The AMA has been moving consistently to position physicians at the center of healthcare’s digital transformation. In October 2025, the association launched the Center for Digital Health and AI, an initiative dedicated to putting physicians front and center in defining, guiding, and implementing AI tools and other technologies that are transforming medicine.
Additionally, the AMA publicly weighed in on the 2025 federal AI action plan from the U.S. government, signaling its willingness to work with the administration on key areas of regulation, policy, and artificial intelligence implementation. The organization also published a report on state-level legislative activities related to AI, discussing three priority areas: AI use by health plans, transparency, and physician liability.
In the area of coding and reimbursement, the CPT® (Current Procedural Terminology) system maintained by the AMA is being updated to classify various AI applications. The Digital Medicine Payment Advisory Group (DMPAG) identifies barriers to digital medicine adoption and proposes comprehensive solutions covering coding, payment, and coverage. This classification infrastructure is crucial for ensuring that AI solutions are properly reimbursed and sustainably incorporated into the healthcare system.
What healthcare professionals really think about all of this
Recent surveys paint an interesting picture of how physicians perceive augmented intelligence. Most professionals recognize the value of technology as a support tool, especially for repetitive tasks and the analysis of large volumes of data. At the same time, there is a legitimate concern about over-reliance on automated systems and the possibility of clinical skills eroding over time.
This balance between enthusiastic adoption and healthy caution reflects an important maturity within the medical profession, which does not want to simply embrace novelty without questioning its implications. The fact that the AMA chose the term augmented intelligence over artificial intelligence was not a casual semantic decision. It was a clear stance that technology should amplify, and never diminish, the central role of the healthcare professional.
On the front lines of patient care, physicians report that the most successful tools are those that integrate naturally into the workflow without adding unnecessary complexity. A system that requires fifteen extra clicks to function is unlikely to be adopted in a packed emergency department. On the other hand, solutions that run in the background and deliver relevant information at the right moment of decision-making are received with genuine enthusiasm.
The user experience makes all the difference. It does not matter if you have the most sophisticated algorithm in the world if the interface is confusing or if the response time compromises the speed of care. Companies that understand this dynamic and invest as much in the quality of the AI model as in the design of the interaction are earning real traction within healthcare institutions.
Educational resources and implementation support
The AMA also provides practical resources to help physicians navigate the rapid evolution of AI in clinical practice. The STEPS Forward® program offers a collection of digital health solutions that provide insights on how to integrate AI into workflows, reduce administrative burden, and enhance patient care, always addressing critical issues like ethics, bias, and professional well-being.
Through case studies, implementation strategies, and expert perspectives, the program equips physicians with the knowledge and tools to adopt AI responsibly and effectively. These resources are open access and eligible for continuing medical education credits, encouraging professionals to stay consistently up to date.
On AMA Ed Hub™ and the JAMA Network™, physicians find content that explores the components of AI in healthcare, diving into the challenges and opportunities this technology presents. This curation of educational content is an important differentiator for informed and critical adoption.
The road ahead: innovation with responsibility
The future points toward an increasingly deep and natural integration between healthcare professionals and intelligent systems, but this will only truly work if the foundations of ethics and transparency are well built from the start. Clear regulations, continuing education for professionals, active patient participation in decisions, and robust data governance are pillars that need to evolve at the same pace as the technology.
The AMA has emphasized the importance of continuously refining its policies as technology evolves. The reports from the organization’s Board of Trustees outline the need for additional AI policies, recognizing that the number of stakeholders and policymakers involved in the evolution of artificial intelligence in healthcare demands constant and adaptive oversight.
Augmented intelligence has enormous potential to improve diagnostics, personalize treatments, and save lives, but that potential is only fully realized when innovation walks hand in hand with responsibility. That is, without a doubt, the most important takeaway for anyone following this transformation closely.
