
What’s Really Holding Back AI Adoption in Manufacturing and Supply Chains? (Part 3 — Ethical and Governance Challenges)

  • Writer: Yvonne Badulescu
  • Nov 24, 2025
  • 6 min read

This article is part of a 3-part series exploring the key barriers preventing successful AI adoption in manufacturing and supply chains, based on recent research and industry insights. In Part 1, I examined organizational and cultural barriers. Part 2 addressed the technical and data challenges. This final article focuses on the ethical and governance issues that shape responsible and sustainable AI adoption.



In Part 1 of this series, I examined how organizational and cultural barriers, including lack of strategy, siloed teams, and resistance to change, often derail AI initiatives before they can scale. Part 2 focused on technical and data-related challenges, where issues like legacy systems, poor data quality, and fragmented infrastructure limit AI’s effectiveness across supply chains and manufacturing environments. In this third and final article, I turn to the ethical and governance dimensions of AI adoption.


Even when strategy, leadership, infrastructure, and data are in place, organizations still face critical questions about transparency, fairness, privacy, and accountability. As AI tools take on a greater role in decision-making, trust becomes just as important as accuracy. Without strong ethical principles and clear governance structures, even technically successful AI systems may fail to gain acceptance or deliver sustainable value.


This article explores the core ethical risks and governance challenges organizations must navigate, and presents practical strategies for building responsible, trustworthy AI into supply chain and manufacturing operations.


Ethical Considerations in AI Deployment

While AI offers significant potential for enhancing decision-making in manufacturing and supply chains, its adoption raises important ethical and governance challenges that organizations must address. Beyond technical readiness, companies need to build trust, ensure fairness, and establish clear accountability in their AI systems (Kar et al., 2022; Kaggwa et al., 2024; Stone et al., 2020).


  1. Transparency and Explainability: AI systems often operate with complex models that are difficult to interpret, leading to what is commonly referred to as the "black box" problem. This lack of transparency can create mistrust among users and stakeholders who need to understand how AI-driven decisions are made. In supply chains, a demand forecasting model might adjust order quantities based on subtle correlations in customer behavior. Without transparency, planners may distrust the recommendation and override it, leading to missed optimization opportunities.

  2. Bias and Fairness: AI systems trained on historical data risk reproducing existing biases, leading to unfair outcomes that can affect employees, customers, or suppliers. For instance, AI models used in supplier selection or workforce planning might reflect patterns from biased historical data unless corrective measures are in place. 

  3. Privacy and Data Protection: AI systems rely on large volumes of data, raising concerns about privacy and security. Supply chains often involve sensitive data such as customer information, pricing, or supplier contracts, which must be protected against misuse or breaches. 

  4. Accountability and Responsibility: Organizations need to establish clear lines of responsibility for managing AI systems, ensuring that ethical considerations are upheld and that there is recourse in the event of errors or adverse outcomes. For instance, if an AI system flags a supplier as high risk and procurement cancels the contract, responsibility becomes unclear if the flag was based on outdated or incorrect data, especially when financial or legal consequences follow. 
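Bias risks like those in point 2 can often be surfaced with simple statistical checks before a model reaches production. The sketch below is a minimal, hypothetical illustration: it computes selection rates for suppliers grouped by region and flags any group whose rate falls below four-fifths of the best-treated group's rate (the common "four-fifths" rule of thumb). The data, group labels, and threshold are illustrative assumptions, not part of the article's research.

```python
# Minimal disparate-impact check for a supplier-selection model's outputs.
# Group labels, data, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> {group: selection rate}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best-treated group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

decisions = [
    ("region_a", True), ("region_a", True), ("region_a", False),
    ("region_b", True), ("region_b", False), ("region_b", False),
]
rates = selection_rates(decisions)
print(flag_disparities(rates))  # ['region_b'] -> warrants a fairness review
```

A flag like this does not prove bias, but it gives procurement and compliance teams a concrete trigger for the kind of review the article calls for.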


Governance Challenges in AI Integration

Even as organizations work to adopt AI responsibly, many face structural and procedural gaps in their governance capabilities. Without clear oversight, cross-functional coordination, or consistent monitoring, AI initiatives risk becoming disconnected from broader ethical and operational priorities (Kar et al., 2022; Stone et al., 2020).


  1. Lack of Regulatory Frameworks: The pace of AI innovation has outstripped the development of formal regulatory frameworks, leaving many organizations without external guidance. In cross-border supply chains, companies using AI for supplier vetting or logistics optimization may face uncertainty about compliance with emerging rules around algorithmic fairness, automated decision-making, or cross-border data transfers.

  2. Cross-Functional Coordination: AI governance is not solely a technical or legal issue; it spans multiple functions, including IT, operations, compliance, and legal. Misalignment between these groups can create bottlenecks or blind spots in how AI systems are developed and used. A supply chain optimization tool developed by data scientists without input from operations may propose schedules that don’t account for warehouse constraints, leading to inefficiencies or manual workarounds.

  3. Continuous Monitoring and Evaluation: Once deployed, AI systems must be continuously monitored to ensure they perform as intended, adapt to changing conditions, and remain aligned with organizational goals. In a dynamic supply chain, an AI model that predicts shipment delays must be updated as new suppliers, routes, or weather patterns emerge; otherwise, its recommendations may quickly become inaccurate.

  4. Stakeholder Engagement: Effective governance requires transparency not only within the organization but also with external stakeholders. When AI tools are used to allocate delivery windows or prioritize orders, engaging key logistics partners ensures that constraints are understood and embedded, improving system accuracy and fostering trust.
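The continuous-monitoring challenge in point 3 is often operationalized as a drift check: compare the model's recent error against the error observed at deployment, and trigger a review when it degrades past a tolerance. The sketch below is a hedged illustration using mean absolute error on shipment-delay predictions; the metric, window, and 1.25x tolerance are assumptions, not prescriptions.

```python
# Hypothetical drift check for a shipment-delay model: flag for review
# when recent mean absolute error (MAE) degrades past a tolerance over
# the baseline error measured at deployment. All numbers are illustrative.

def mean_abs_error(predicted, actual):
    """Average absolute gap between predicted and observed delays (hours)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def needs_review(baseline_mae, recent_mae, tolerance=1.25):
    """True if recent error exceeds the deployment baseline by > tolerance x."""
    return recent_mae > tolerance * baseline_mae

baseline = mean_abs_error([4, 6, 5], [5, 7, 4])  # MAE = 1.0 at deployment
recent = mean_abs_error([4, 9, 2], [6, 5, 5])    # MAE = 3.0 this week
print(needs_review(baseline, recent))            # True -> flag for retraining
```

In practice the same pattern runs on a schedule against production logs; the point is that "continuous monitoring" can be a concrete, automatable check rather than an aspiration.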


Strategies for Ethical and Effective AI Governance

Building ethical and effective AI governance requires organizations to move beyond technical implementation and address the principles, structures, and behaviors that shape responsible AI use. The literature highlights several key actions that help organizations embed ethical considerations into AI development and deployment (Kar et al., 2022; Kaggwa et al., 2024; Stone et al., 2020).


  • Develop a Comprehensive AI Ethics Policy: A clear and organization-wide AI ethics policy provides direction and consistency across AI initiatives. Defining principles such as fairness, transparency, accountability, and respect for privacy ensures that AI projects align with broader organizational values. This policy establishes expectations for ethical behavior and serves as a reference point when addressing complex or sensitive decisions.

  • Establish an AI Governance Committee: Forming a cross-functional AI governance committee brings together representatives from IT, legal, compliance, operations, and business functions. This team provides oversight, manages risk, and ensures that ethical considerations are integrated into AI strategy and operations. Cross-department collaboration also strengthens internal alignment and facilitates faster, more informed decision-making.

  • Implement Training and Awareness Programs: Promoting awareness of ethical AI practices across the organization supports a culture of responsibility and transparency. Training programs help employees understand how AI technologies function, their potential risks, and their role in ensuring ethical use. Equipping staff with this knowledge encourages responsible behavior in the development, deployment, and use of AI tools.

  • Conduct Regular Audits and Assessments: Ongoing audits and performance assessments are essential for monitoring AI systems and identifying risks or deviations from ethical standards. Regular reviews allow organizations to detect and correct issues before they escalate, ensuring that AI systems remain aligned with their intended purpose and organizational values over time.

  • Engage in Industry Collaboration: Participating in industry forums, partnerships, and regulatory discussions allows organizations to stay informed about emerging best practices in AI governance. Collaborating with peers, regulators, and academic experts helps companies anticipate changes in the regulatory landscape, access shared resources, and contribute to shaping ethical standards for AI adoption.
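Several of the practices above, particularly regular audits, can be anchored in lightweight tooling rather than ad-hoc reviews. As one hedged illustration, the sketch below runs a recurring audit over a set of named checks and produces a pass/fail report that a governance committee could review; the check names and pass criteria are invented for illustration only.

```python
# Illustrative recurring-audit harness: each check is a named function
# returning True/False; the report gives the committee a reviewable record.
# Check names and pass criteria below are invented for illustration.
from datetime import date

def run_audit(checks):
    """checks: {name: zero-arg callable -> bool}. Returns an audit report."""
    results = {name: bool(check()) for name, check in checks.items()}
    return {
        "date": date.today().isoformat(),
        "results": results,
        "passed": all(results.values()),
    }

checks = {
    "model_documented": lambda: True,           # model card on file
    "drift_within_tolerance": lambda: True,     # monitoring check passing
    "selection_rates_reviewed": lambda: False,  # fairness review overdue
}
report = run_audit(checks)
print(report["passed"])  # False -> escalate to the governance committee
```

A dated, itemized report like this also supports the accountability goal discussed earlier: when something goes wrong, there is a record of what was checked and when.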


Ethical and governance challenges may not be as visible as technical ones, but they are often the reason AI fails to gain traction in real-world operations. Responsible AI adoption requires more than a high-performing model; it requires clarity, transparency, fairness, and trust.


Across this series, I’ve explored three categories of barriers that continue to hold AI back in manufacturing and supply chains:


  • Organizational and cultural misalignment

  • Technical and data readiness gaps

  • Ethical and governance risks


Each of these areas presents its own challenges, but also its own opportunities. And importantly, they are all interconnected. A strong AI strategy is not just about technology; it's about the systems, people, and principles that surround it. As organizations move forward with AI adoption, success will depend on their ability to align across all three fronts. That means building capabilities, engaging the workforce, investing in infrastructure, and embedding ethical thinking from the very beginning.


Getting this right won’t be easy. But it’s the difference between AI that looks impressive on paper and AI that actually works and lasts in practice.


References

Kar, S., Kar, A. K., & Gupta, M. P. (2022). Modeling Drivers and Barriers of Artificial Intelligence Adoption: Insights from a Strategic Management Perspective. Intelligent Systems in Accounting, Finance and Management, 28(4), 217–238. https://doi.org/10.1002/isaf.1503

Kaggwa, S., Eleogu, T. F., Okonkwo, F., Farayola, O. A., Uwaoma, P. U., & Akinoso, A. (2024). AI in Decision Making: Transforming Business Strategies. International Journal of Research and Scientific Innovation, 10(12), 423–432. https://doi.org/10.51244/IJRSI.2023.1012032

Stone, M., Woodcock, N., Machtynger, J., & Heppel, M. (2020). Artificial Intelligence in Strategic Marketing Decision Making: A Review. The Bottom Line, 33(3), 205–214. https://doi.org/10.1108/BL-03-2020-0022


©2025 by Yvonne Badulescu.
