The adoption of Artificial Intelligence (AI) technologies in organizational settings offers significant potential for efficiency, innovation, and competitive advantage. Despite these benefits, however, many organizations encounter notable resistance during implementation. This dissertation explores the multidimensional nature of resistance to AI adoption, arguing that resistance is not merely a reaction to technological disruption but a deeper, human-centered response shaped by organizational culture, individual identity, trust, and ethical concerns.

The literature review grounds the study in organizational change theory, technological resistance, and human resource management. It draws on established frameworks such as Kotter’s 8-Step Change Model and the Technology Acceptance Model (TAM) and identifies gaps in how traditional models overlook the emotional, psychological, and ethical dimensions of AI resistance. The review also examines emerging themes such as algorithmic bias, data privacy, the “black box” nature of AI, and the ethical leadership required to manage AI transformation responsibly.

To explore these issues empirically, the study employed a qualitative research design using semi-structured interviews with 25 participants across diverse sectors, including healthcare, finance, education, and technology. Participants ranged from front-line employees to managers and IT specialists. The data were analyzed using inductive thematic analysis to identify patterns of resistance, enabling factors, and perceptions surrounding AI implementation.

The findings reveal that resistance to AI is driven by a combination of individual-level concerns (fear of job displacement, mistrust of AI decisions, and the need for continuous upskilling) and organizational-level factors (weak communication, exclusion from decision-making, lack of ethical oversight, and insufficient training).
Participants reported mixed emotions, often expressing both excitement and concern about AI’s impact on their roles and futures in the organization.

Importantly, the study shows that organizational culture and leadership are central to shaping how AI is received. In organizations where leaders fostered inclusive decision-making, transparent communication, and ethical awareness, employees reported greater openness to AI. Conversely, top-down implementation strategies and a lack of support led to heightened anxiety and disengagement. Peer-led initiatives, AI “champions,” and training grounded in real-world examples were reported as effective in easing transitions and reducing fear.

From these insights, the study identifies organizational culture, leadership engagement, communication quality, and employee empowerment as core drivers of AI acceptance. This dissertation contributes to academic and practical understandings of technological change by framing AI implementation as a socio-technical process. It argues that resistance must be anticipated, understood, and managed through a holistic strategy that centers people, not just technology. The findings suggest that successful AI adoption relies on empathy-driven leadership, continuous learning opportunities, ethical data practices, and the active involvement of employees throughout the change process.

In conclusion, this research offers a human-centered perspective on AI adoption and provides actionable insights for leaders, policymakers, and change agents. By treating resistance not as a barrier but as a critical feedback mechanism, organizations can better align technological innovation with employee values, build trust, and ensure a more sustainable and inclusive digital transformation.