Both patient safety and risk management in the healthcare field have adapted to a variety of conceptual models drawn from different industries. Nowadays, patient safety should not remain anchored to the most classical models but should be complemented by innovative advances that allow a comprehensive view of all the key elements and the participation of the agents involved in this essential dimension of healthcare quality. This narrative review aims to analyze the approaches that have nurtured the science of safety over time and to offer a holistic and integrative vision that allows professionals, patients, and organizations to understand how we can move toward a risk-free healthcare system or, at least, make it safer. Although there are experiences in the healthcare field of applying the new safety paradigms, many questions remain to be solved before they can be integrated and applied in the real world.
The approach to Patient Safety (PS) and risk management in healthcare has progressively adapted to different conceptual models adopted from various industries. Today, PS is no longer anchored to the most classical models and is evolving toward a comprehensive (and innovative) view of all the key elements and agents involved in this essential dimension of healthcare quality. For this reason, this narrative review aims to analyze the approaches that have nurtured PS over time and to offer a holistic and integrative vision that allows professionals, patients, and organizations to rethink this science, act in our immediate environment, and increase knowledge. For although there are experiences in healthcare of applying the new PS paradigms, questions remain to be solved before they can be integrated and applied in the real world.
Professionals responsible for managing patient safety (PS) require frameworks to guide their clinical work. To this end, it may be useful to review the classical paradigms of safety and explore the new ones, likely concluding that their intersection may be the point of greatest benefit for our patients.1
It is well known that health care, and especially the practice of intensive care medicine, is becoming increasingly complex,2 influenced by both general and specific factors. The former are common across many specialties (e.g., aging and comorbidities).3 The latter, while not exclusive to our specialty, are more prominent in our setting: for instance, physical space, human factors (including cognitive load), the variability of processes and, of course, the influence of technology, which has enabled significant advances and improved outcomes but has also increased health care-related risks.4 Additionally, the growing volume of data translates into an overload of available information, which limits professionals' ability to interpret it and make decisions.5 The balance of these factors and the difficulty of understanding their interactions complicate the work of PS managers.6 A further challenge is the conceptual structure of PS management itself, which draws on ideas from other disciplines such as psychology, engineering, physics, and sociology.7 While this may initially seem a disadvantage, it actually expands the professional's resources for managing PS.
Considering these specific features of PS, it may help professionals to contextualize, historically as well, the tools available to manage PS from the perspective of the 3 main paradigms of risk management: 1) compliance-based safety theories; 2) culture-based safety theories; and 3) resilience-based organizational theories.8
Compliance-based safety theories
Compliance-based safety theories emerged in the early 20th century and are the most reductionist. They rest on the premise that systems are well designed and that the behaviors of the people working within them need to be controlled and adjusted. This perspective assumes that all accidents can be scientifically understood. Its most prominent example is Taylorism, a school of thought that views tasks as a sequence of standardized steps, aligned with work procedures that support efficiency, quality, and system safety. These sequences are non-negotiable, and professional training and supervision are key elements in improving safety.9 Closely related is the creation of protocols and standard operating procedures, and the perpetual tension surrounding adherence to them.10
Within this framework arises the well-known Domino Theory of incident causality, which posits a linear sequence of factors leading to an undesirable outcome. Closely related is Heinrich's Law, which comprises 2 fundamental principles: 1) the accident ratio: for every serious accident there are 29 minor accidents and 300 near misses, suggesting that analyzing minor incidents can prevent major ones; and 2) accident causality: 88% of accidents are caused by unsafe acts, 10% by unsafe conditions, and 2% by unavoidable causes. Both theories assert that unsafe acts and conditions are the main causes of accidents.11 In this model, safety is prescribed by management, and workers are rewarded or punished based on their compliance with the rules. This error-focused, punitive culture was the standard in health care organizations until recently, and still persists in some of them.12
Culture-based safety theories
Safety culture has been defined as the integrated pattern of individual and organizational behavior, based on shared beliefs and values, that seeks to minimize harm to patients.13 Organizations with a positive safety culture are characterized by communication built on mutual trust, a shared commitment to safety, and a belief in the effectiveness of preventive actions.14 Safety culture can be measured in health care organizations to identify their position on the safety maturity scale—from pathological to generative—based on trust, accountability, and shared knowledge.15 Safety culture has been associated with adverse event rates, although more in-depth studies are needed to fully understand this relationship, given the complexity of the environment.16,17
The professional is the key element of an organization’s culture, essential for managing the work—not merely a resource for executing it. This paradigm aims to encourage or discourage certain behaviors associated with safe or unsafe situations. Techniques derived from learning psychology are applied, such as the Safety Training Observation Program (STOP), which trains professionals to observe workers, reinforce safe practices, and correct unsafe acts and conditions.18 Practical examples of this paradigm include the use of checklists in “Zero Bacteremia” programs19 and real-time random safety audits,20 as well as the “Hearts & Minds” programs, which raise awareness of the importance of personal responsibility for safety and have recently been adopted in health care.21 These programs share a common goal: to engage professionals in the construction of safety. Some emphasize observation skills, while others focus on cultural change.
In this cultural paradigm, we should mention the concept of safety climate. This refers to professionals’ perceptions of how safety is expected and valued within their organizations. Safety climate can be enhanced by visibly demonstrating management’s commitment to safety, balancing the productivity/safety conflict, aligning safety policies with best practices, and promoting a just culture (shared responsibility between the system and professionals).22 Safety climate has been linked to the rate of adverse events such as drug-related errors.23
Ultimately, this paradigm is not focused on the absolute preventability of incidents—except as an aspirational goal—but rather on fostering intrinsic motivation, participatory prevention, performance measurement, and leadership that promotes professional commitment to patient safety.24
Resilience-based organizational theories
These theories incorporate several key concepts: 1) the complexity of systems (e.g., a health care organization, hospital, or department), where interactions across components (e.g., professionals) yield unpredictable results; 2) system flexibility and the ability to adapt (and recover under stress); and 3) the interaction between technology and professionals. In this framework, people are not the problem—they suffer the consequences of the work environment.
Safety-II
Recently, new concepts such as “resilience engineering” have emerged, defining safety as the ability to succeed under variable conditions, rather than assuming that deviating from the plan is always wrong. Alternatively, it can be seen as the intrinsic ability of a system to adjust its functioning in response to expected or unexpected changes in the environment in order to preserve its goals.25 Safety is treated not as a property of the system but as a property of how the system functions. Safety management is thus based on adaptive capacity and on identifying and analyzing variability to understand its system-wide impact. The flagship concept in this school of thought is Safety-II.26 The classical model (Safety-I) has focused on identifying adverse events (i.e., what goes wrong), measuring safety by counting incidents and adverse events. Risk management has revolved around identifying causes through reactive approaches, such as root cause analysis,27 or proactive methods, such as failure mode and effects analysis (FMEA).28 The focus is on what is not safe: learning from the absence of safety. It assumes a bimodal process: when work follows the plan (“work as imagined”), adverse events do not occur; when it does not (due to errors or non-compliance), incidents arise. The goal is to return to normal operations via barriers, regulations, standards, and training.29 However, while adverse events occur frequently, successful outcomes happen far more often, even under adverse conditions. Safety-II, by contrast, starts from the premise that most daily tasks are completed successfully, yet we learn little from this fact. Moreover, processes are not fully structured from the outset; they require continuous adjustment and adaptation (variability), which professionals provide.
One method for studying this type of safety is the Functional Resonance Analysis Method (FRAM), which models complex sociotechnical systems and visually contrasts tasks from 2 perspectives: 1) as imagined (and prescribed, e.g., by protocols); and 2) as actually performed (where variability is introduced).30 A more practical approach is to highlight and analyze successful tasks in the same way safety incidents are analyzed; not necessarily all of them, since health care systems are imperfect, but those that stand out for their excellence. Examples include the “Learning from Excellence” project at Birmingham Children’s Hospital ICU, which improved appropriate antibiotic prescribing based on this theory,31 or the recognition of success in emergency medicine residency programs to improve learning.32 Other examples include “positive/excellent” incident reporting systems, which acknowledge professionals’ achievements and enhance engagement and well-being,33 and appreciative inquiry, which holds that organizations change in the direction of what they study and how they study it: focusing on the positive moves action in that direction. This theory has been applied to health care,34 and although scientific evidence is still limited in this regard, qualitative and observational studies suggest it may positively impact clinical practice, organizational outcomes, and professional experience.
Sociotechnical systems
A key premise of safety in complex systems is that the environment is part of the system, which leads to concepts related to ergonomics and, secondarily, to human factors engineering. This engineering aims to improve efficiency, creativity, productivity, and job satisfaction, with the goal of minimizing errors and improving overall system performance.35 It links to resilience in that environmental, organizational, and labor factors can make it easier to “do the right thing” and harder to “do the wrong thing,” and ensure that when errors occur, they have less impact on patients.36 The SEIPS model (Systems Engineering Initiative for Patient Safety) is a theoretical model based on systems engineering focused on human factors/ergonomics. All versions of the model include 3 main components: 1) work system, processes, and outcomes; 2) key characteristics of each component; and 3) interactions between components. SEIPS 3.0, with a sociotechnical approach, expands the process component, using the concept of the patient journey to describe the spatiotemporal distribution of patient interactions with multiple health care environments over time.37 There are other examples of applying human factors to medical device design (e.g., redesigning pain pumps to facilitate their use and reduce associated risks).38 A systematic review of 28 studies and 3227 participants demonstrated the effectiveness of human factors–based interventions in health care, supporting the need for standardized guidelines on applying these principles to patient safety.39 A recent study shows that safety criteria based on human factors are rarely considered in selecting health care technology.40 Finally, cognitive systems engineering has emerged to complement human factors and ergonomics (initially focused on technology), with the goal of understanding human perspectives in relation to technology.
Probably the highest degree of organizational resilience is found in what are known as high-reliability organizations.41 These organizations have essentially constructive characteristics: they are preoccupied with errors, avoid simplifying problems (in-depth analysis), maintain a high degree of situational awareness (the ability to perceive and understand what is happening in their environment), and base their decisions on knowledge (the expert's role is valued). Although, in practice, developing this model to such a level of organization involves difficulties, it serves as an inspiring example.42
New safety paradigms
Safety-III
In contrast to Safety-II, other models have emerged, such as Safety-III, promoted by Leveson.43 It openly criticizes Safety-II and refutes the simplistic labeling of the diversity and richness of different theories and conceptual frameworks as Safety-I. Safety-III is based on systems theory, which understands accidents, or "losses," as the result of ineffective risk management. It analyzes why the system did not behave as it was designed to, seeking to identify which safety control structure failed, and attempts to eliminate, mitigate, or control risks. It understands that the system must be prepared to allow humans to be flexible and resilient and to handle unexpected events, and it seeks a design of all parts such that safety is maintained even in extreme situations. It proposes the System-Theoretic Accident Model and Processes (STAMP), which expands the classic causality model to capture greater complexity and unsafe interactions between system components. Rather than preventing failures, it imposes constraints on system behavior.
Table 1 illustrates a comparison across 3 models.
Table 1. Comparison of the Safety-I, Safety-II, and Safety-III models.
| Approach | What it studies | How it acts | Logic for improvement |
|---|---|---|---|
| Safety-I | Failures, errors | Identify the causes and correct them | Avoid what goes wrong |
| Safety-II | Everyday successes | Reinforce what goes well | Promote what works |
| Safety-III | Meaning, context | Reflect critically, transform | Change the system to support human response |
Safety Differently
Safety Differently, developed by Dekker, shares some concepts with Safety-II, focusing on people's positive capabilities rather than on adverse events. It considers safety an ethical, not a legal, responsibility, and seeks to de-bureaucratize and humanize safety. It is based on empowering professionals' autonomy and their capacity to decide how to perform their tasks through training and professional development: involving front-line personnel by asking what they need to work safely, influencing their behavior through the design of spaces, fostering collaboration and involvement, and evaluating the ability to do things well.44
Safety clutter
Safety clutter assumes that safety can be improved by reducing or simplifying certain activities. Although many procedures, functions, and activities are introduced for the sake of safety, they often add no value for the patient, are ineffective, and consume resources. This can happen when activities with the same function are duplicated, generalized without considering differentiation, or specified in an overly comprehensive way. In a survey of health care professionals, the main practices identified as adding no value were bureaucratic tasks (e.g., recording the same information in different records), duplication (double-checking medication), and intentional rounding.45 Before implementing or discontinuing a given safety action, it is proposed to evaluate its contribution to safety improvement, along with the confidence (or certainty) and consensus surrounding it.46
Graceful extensibility
Another innovative concept is "graceful extensibility," or sustainable adaptability: the ability of a system to expand its adaptive capacity in the face of surprising events that challenge its limits. All systems operate within a given performance and adaptability range (limits) because of finite resources and the inherent variability of their environment. This means there will always be uncertainty, so risk is never zero and always variable. This theory proposes decentralizing decision-making to the people closest to the information and the point of risk, rewarding reciprocity and mutual support among teams to prevent saturation, and sharing information in real time. As tools, it proposes raising awareness of unexpected surprises, evaluating the adaptive capacity of each unit, measuring maneuverability, and periodically checking the units' adaptive capacity. These theories are fully applicable to environments such as the COVID-19 pandemic.47
This multidimensional approach, managing PS through the multiple theories proposed in other organizations and industries, although challenging to apply in the health care setting,48 could help reduce health care-associated risks and overcome some of the barriers that have so far prevented sufficient progress in PS.49
Where do the articles in this series fit in? From classic models to new Patient Safety paradigms
The articles in this concluding series have reviewed classic concepts and models, as well as the key aspects that have allowed us to understand, improve, and strengthen PS in intensive care medicine over the past 2 decades: structural aspects of services and teams, system fragility, patient and family involvement, the importance of professional care and well-being, new training approaches, and the growing role of new technologies, especially artificial intelligence (AI).
The article on zero risk frames the debate on the real limitations of systems, referring to structures, health care organization, and the importance of tools and process systematization.50 All of this, together with the epidemiology of adverse events,51 aligns fully with the classic Safety-I framework.
The articles on the impact of safety on clinical outcomes52 and on professional well-being as a safety pillar53 introduce a more relational and dynamic approach. They address the need to ensure conditions that allow for both caring and being cared for, clinically and emotionally. These perspectives open the door to Safety-II thinking, focused on understanding why things usually go well and how to strengthen the adaptive mechanisms of teams and systems, along with concepts related to ergonomics and human factors.54 Along these lines, the past decade has seen growing advances and innovations in teaching. An article has been dedicated to clinical simulation, reinforcing the need to train teams to face system complexity. In-situ simulation, virtual reality, and debriefing centered on reflective judgment aim to empower professionals not only to execute tasks but also to adapt, collaborate, and learn from experience—fundamental principles in resilient safety models.55
AI is yet another step towards a new paradigm that is still difficult to define, but which will lead to the evolution of new models. In these models, professionals will need to participate more actively than ever to define how we will be able to combine technology and human interaction, with all their ethical implications.56
From this perspective, even seemingly technical tools, like simulation, can be seen as platforms for building trust and collective learning. It will be necessary to evaluate the impact of all these tools and methodologies, and not solely from the professional's perspective: the involvement of patients and families takes on a transformative role. From being informed, they will become key players, and their experience and vision will guide and shape the new models.57
Overall, this series not only maps the current state of critical patient safety but also proposes a necessary shift: from control to care, from error to learning, from norms to context, from technical analysis to collective reflection.
CRediT authorship contribution statement
All authors (MCMD, MB, MCH, GS) contributed to the conception of the article, its drafting, and the critical review of its intellectual content, as well as to the approval of the final version of the article.
Declaration of Generative AI and AI-assisted technologies in the writing process
The authors declare that they have not used generative AI or AI-assisted technologies in the drafting process.
Funding
None declared.



