Responsible AI at Manulife
Building trust through ethical, transparent, and human-centered AI
Principles don't live in documents. They live in deployments.
Artificial intelligence offers powerful opportunities to improve customer experiences, strengthen decision-making, and enhance how organizations operate. At Manulife, we recognize that realizing these benefits comes with a responsibility to use AI thoughtfully, safely, and transparently.
We apply AI across our business to support better outcomes for customers and colleagues while maintaining strong governance, ethical standards, and human oversight.
Manulife has articulated six Responsible AI Principles to guide how we build, deploy, and scale artificial intelligence. We move beyond theory to put these principles into practice across our global operations every day.
Our approach is supported by enterprise-wide governance and risk management frameworks that guide how AI is assessed, approved, deployed, and monitored across its full lifecycle.
- 37,000+ colleagues with AI access
- 70% workforce engagement rate
- $1B enterprise value goal* by 2027
We will use data and AI in ways that align with our Code of Business Conduct and Ethics and our Impact Agenda.
We will prioritize the safety of our customers, colleagues, and organization through sound delivery and governance processes.
We will endeavor to align our AI efforts with our commitment to a sustainable future by designing energy-efficient AI solutions and partnering with companies that share our values.
We will implement practices intended to make our AI solutions and their use of data free from bias, explainable, and reliable, while maintaining appropriate accountability for decision-making.
We will prioritize human agency and empower our colleagues to use AI tools to enhance their skills and experience, knowing these are crucial for the future.
We will continually learn from and work with industry partners and AI experts to foster innovation and evolve our commitment to Responsible AI. AI governance and best practices evolve rapidly, and Manulife actively invests in continuous learning to stay ahead of technological, ethical, and regulatory developments.
AI and machine learning models at Manulife are governed through our enterprise Model Risk Management (MRM) framework. MRM sets consistent standards for how models are designed, tested, approved, and monitored over time.
Before approval, models undergo independent validation and reliability reviews proportionate to their risk: the scope, depth, and frequency of assessments are calibrated to the model's materiality, complexity, and potential impact. Once deployed, models remain subject to risk-based monitoring.
MRM also helps embed ethical considerations early in AI development. Teams complete structured checks to identify ethical and customer risks.
These steps aim to ensure AI solutions are evaluated not only for performance, but for alignment with Manulife’s ethical standards, legal requirements, and commitment to customer and stakeholder trust.
To support clear ownership, Manulife is developing Model Owner training, helping model owners understand their responsibilities for ethical use, appropriate controls, and ongoing oversight throughout a model’s lifecycle.
MRM follows a three-lines-of-defense approach, with defined responsibilities across model owners, independent risk teams, and internal audit. This structure treats reliability and explainability as ongoing obligations, not one-time reviews.
The result:
AI models are deployed with clear documentation, independent oversight, and continuous monitoring. This supports reliable outcomes, understandable decisions, and clear accountability in line with Manulife’s Code of Business Conduct and Ethics and enterprise risk expectations.
Manulife's partnerships with AKKA and Adaptive ML directly support our sustainability principle by embedding efficiency and energy-conscious design into how AI solutions are developed and scaled.
Through these partnerships, we:
- Develop AI solutions optimized for cost, speed, and efficient resource use
- Use smaller, more efficient models where appropriate, reducing computing power and energy requirements
- Optimize how computing resources are used during model training and tuning
- Regularly review and improve our infrastructure to help reduce overall energy consumption
By optimizing how AI solutions are built, leveraging smaller models where appropriate, and partnering with values-aligned vendors, Manulife seeks to integrate sustainability and energy-efficiency considerations directly into the AI development lifecycle.
Manulife maintains a dedicated AI Governance team responsible for defining enterprise-wide standards, policies, and control requirements for AI. This team works in close partnership with legal, compliance, privacy, risk, and model owners. Model owners sit within business and function teams and are accountable for ensuring models align with business objectives and governance requirements, including by setting tolerance limits and performance-monitoring thresholds.
Strategic oversight is provided by the AI Governance Council, operating as a GLT-level steering committee. This council brings together senior leaders to:
- Set enterprise direction for AI adoption
- Prioritize high-impact AI use cases
- Ensure accountability and consistency across businesses and functions
This multilayered governance structure aims to ensure that safety, compliance, and ethical considerations are embedded from ideation through deployment and ongoing monitoring.
This layered oversight model helps ensure AI solutions operate within defined risk tolerances while remaining adaptable as technologies, regulations, and expectations evolve.
Manulife invests in continuous learning to stay aligned with evolving AI technologies, governance practices, and regulatory expectations.
We recognize that generative AI is evolving rapidly, and our governance, controls, and practices are designed to evolve with it: they establish a clear baseline while we continuously learn from new risks, technologies, and regulatory developments.
Our senior AI leaders regularly participate in and speak at Responsible AI and AI governance conferences, contributing practical experience and engaging with peers on the safe and ethical use of AI.
We collaborate with leading academic institutions, including Harvard University, MIT, York University, the University of Waterloo, and the National University of Singapore, to stay closely connected to emerging research and applied best practices.
Manulife also supports public dialogue through initiatives such as the Hinton Lectures, which convene global experts to explore AI safety, ethics, and societal impact.