
Holly Ferron | February 12, 2025
WPS in the Digital Age: Investigating AI and International Cybersecurity Policy
The rise of artificial intelligence (AI) poses new dangers and new peacekeeping opportunities for Women, Peace, and Security (WPS). How can AI policies, practices and regulations ensure that this technology promotes inclusivity and well-being for all, rather than amplifying discrimination and insecurity?
Key Results:
Data discrimination causes more gendered exclusion, stereotyping, and insecurity.
AI tools, AI surveillance, and the military applications of AI pose unique threats to the WPS agenda.
AI can be used deliberately for conflict prevention, response, and recovery.
International policy and regulation of AI have improved; however, more effort is needed to ensure accountability, awareness, and responsible use of AI in compliance with WPS efforts.
What’s at stake?
Over the last decade, artificial intelligence (AI) has become deeply integrated into society and now influences many aspects of daily life. It uses datasets and algorithms to simulate human-like patterns and decision-making, performing functions such as retrieving information, supporting criminal justice decisions, and determining credit scores (O’Neil, 2016).
However, because AI systems are built on human information and social, political, and economic structures, they are capable of replicating and reinforcing the gender biases and inequalities that already exist. This is known as a “feedback loop”: humans both influence and are influenced by AI systems (O’Neil, 2016). For example, when developers train an AI system on historical data that is gender biased, the system treats that data as accurate and continues to replicate pre-existing biases, exacerbating the problem. As society’s reliance on and trust in AI systems grows, questions arise about how we can reduce the negative repercussions and increase accountability.
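The feedback-loop mechanism can be illustrated with a minimal sketch. The data and the decision rule below are entirely invented for illustration: a toy "model" that simply learns hiring rates from biased historical records will reproduce the imbalance, and if its decisions are fed back in as new "history," the bias persists.

```python
from collections import Counter

# Hypothetical historical hiring records (group, hired?) with a
# built-in imbalance: group "m" was hired far more often than "f".
history = [("m", True)] * 80 + [("m", False)] * 20 \
        + [("f", True)] * 30 + [("f", False)] * 70

def train(records):
    """Learn the historical hire rate for each group."""
    hired = Counter(g for g, h in records if h)
    total = Counter(g for g, h in records)
    return {g: hired[g] / total[g] for g in total}

model = train(history)
print(model)  # {'m': 0.8, 'f': 0.3} -- the bias is learned as-is

# Feedback loop: decisions made with the biased model become the
# next round of "historical" data, so the imbalance is reinforced.
new_round = [("m", model["m"] > 0.5), ("f", model["f"] > 0.5)] * 50
model2 = train(history + new_round)
print(model2["f"] < model2["m"])  # True -- the gap remains
```

Nothing in the sketch corrects for the skewed input, which is precisely the point: a system trained to treat historical data as ground truth carries that history forward.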
The growing use and integration of AI in everyday life also pose unique challenges to the Women, Peace and Security (WPS) agenda. AI technology disproportionately affects women; they can be targeted through weaponized technologies, privacy breaches, and mis- and disinformation campaigns (UN Women, 2024). These unique threats to women are often overlooked in AI and WPS policy considerations.
This brief presents an overview of the current challenges, policy frameworks, and areas for improvement of AI and WPS to inform individuals and policymakers of ways to reduce harmful impacts of AI.
Research Approach:
The following recommendations are based on a comprehensive analysis of the current policy landscape and academic literature on AI’s challenges, advantages, and regulations. By comparing a range of literature to identify gaps and trends, this approach ensures that the recommendations are evidence-based and grounded in the latest developments in AI governance.
Key Findings:
Study findings point to several AI threats to WPS, as well as several opportunities to address these concerns and promote inclusivity in compliance with WPS efforts. Recent international policy shows increased interest in digital security, with much room for further development.
Data discrimination causes more gendered exclusion, stereotyping, and insecurity.
Data discrimination refers to the unfair treatment of certain individuals or groups as a result of the human biases embedded in the data used to train AI models (Chun, 2021). Because data is collected and analyzed for purposes such as job hiring, criminal justice, credit scoring, and healthcare, data discrimination reaches into every aspect of our lives. Not only do these biases undermine efforts to promote gender equality, but they can also result in economic displacement and unequal access. This systemic issue works against the WPS agenda.
AI tools, AI surveillance, and the military applications of AI pose unique threats to the WPS agenda.
AI tools can be used to facilitate gender-based violence (GBV) by creating harmful, unregulated content and targeted mis/disinformation. For example, deepfake technologies can be used to cyberbully women online or smear their reputations (Sharland et al., 2021, 27).
AI surveillance technologies can pose a threat to women when used to track and target them. In conflict areas, for example, AI biometrics such as mass facial scanning can be used to monitor the activities of women human rights defenders and members of the LGBTQ+ community in order to target attacks (UN Women, 2024, 30). Likewise, when sensitive information relating to fertility, contraception, and pregnancy is processed by AI systems, it can breach patient privacy, threatening the well-being of women and girls (Capitology Blog, 2024).
Autonomous weapons systems using AI can be programmed to strike based on a target profile, meaning that no person controls exactly who, when, and where they strike. These systems, which can take many forms such as robots, driverless tanks, and drones, can be used to target women and cause insecurity within communities (Our Secure Future, 2; UN Women, 2024, 29).
AI can be used deliberately for conflict prevention, response, and recovery.
AI can support conflict prevention, response, and recovery by tracking and predicting outbreaks of violence online. As an early warning and response system, it can also be used to monitor and remove harmful speech online. Both of these solutions can be helpful, but they still risk data discrimination by grouping people and predicting their behaviour (UN Women, 2024, 21).
Alternatively, AI surveillance can be used to track migration movements across borders and protect women against sex-trafficking schemes, and purpose-built AI chatbots and tools can be developed to support peace efforts and prevent misinformation. Creating tools that promote awareness can help increase AI and data literacy, supporting WPS objectives (UN Women, 2024, 22).
International policy and regulation of AI have improved; however, more effort is needed to ensure accountability, awareness, and responsible use of AI in compliance with WPS efforts.
This study finds an overall lack of research on AI in relation to WPS, especially regarding the positive ways that AI can be used for humanitarian response. Although some actionable steps can be taken, the systemic gendered implications of AI, including data discrimination, still need to be addressed. Even some of the positive uses of AI involve grouping data about people, which can harm those who are incorrectly categorized and targeted by AI systems.
Many academics have begun to expose the dangers of AI and advocate for increased regulation, policy, and literacy measures to mitigate impacts (O’Neil, 2016; Chun, 2021; Crawford, 2021).
In 2023, the UN began to consider AI in the context of international security, and systems of global governance have begun to take shape (e.g., the EU AI Act, the Global Partnership on AI, and the Partnership for Global Inclusivity on AI).
Policy Lessons:
AI concerns need to be integrated into WPS policy, and AI governance needs to consider the WPS agenda
Often, AI security considerations fail to account for the WPS agenda, and likewise, WPS objectives fail to consider the unique risks of AI. These topics must not be considered in a vacuum; they must be mutually integrated. This will ensure that the goals of both AI security and the WPS agenda are met.
Women and diverse actors should meaningfully participate in the technological development and training of AI models
The lack of meaningful participation by women and diverse actors in the development and training of AI has exacerbated the persistence of technological bias and data discrimination. To promote inclusivity and gender equity, and to minimize the creation of harmful tools, women of various intersectional identities must participate in the development and training of AI technology.
Continued research and monitoring of the gendered impacts of AI needs to take place
To inform future AI policy, regulation, and governance, research on the impacts and capabilities of AI must continue. By monitoring and evaluating the use of AI with gender-disaggregated data, policymakers and AI developers can ensure that unique risks to women and vulnerable groups are mitigated.
Continued development of AI tools that specifically leverage and assist the WPS agenda is necessary
Considering that many AI tools can be used to create insecurity for women, tools must also be developed with the express purpose of advancing the WPS agenda. Examples include programs that fact-check and counter disinformation, chatbots that provide information and support, and biometric migration tools that combat insecurities such as human trafficking (UN Women, 2024, 22).
Holly Ferron is an MA student at the University of Ottawa.
References:
Capitology Blog. (2024, March 21). Artificial Intelligence and Its Unique Threat to Women. Capitol Technology University. Retrieved November 30, 2024, from https://www.captechu.edu/blog/artificial-intelligence-and-its-unique-threat-women
Chun, W. H. K. (2021). Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. The MIT Press.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. (n.d.). Retrieved November 30, 2024, from https://artificialintelligenceact.eu/
Global Partnership on Artificial Intelligence. (n.d.). OECD. Retrieved December 12, 2024, from https://www.oecd.org/en/about/programmes/global-partnership-on-artificial-intelligence.html
ISED (Innovation, Science and Economic Development Canada). (2020, June 14). Joint Statement from founding members of the Global Partnership on Artificial Intelligence. https://www.canada.ca/en/innovation-science-economic-development/news/2020/06/joint-statement-from-founding-members-of-the-global-partnership-on-artificial-intelligence.html
Office of the Spokesperson. (2024, September 23). United States and Eight Companies Launch the Partnership for Global Inclusivity on AI. United States Department of State. Retrieved December 12, 2024, from https://www.state.gov/united-states-and-eight-companies-launch-the-partnership-for-global-inclusivity-on-ai/
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
Our Secure Future. (n.d.). WPS Message Guide: A Gender Perspective on AI Risks to National Security. https://oursecurefuture.org/sites/default/files/2024-03/OSF-GenderPerspectiveOnAIRisks.pdf
Sharland, L., Goussac, N., Currey, E., Feely, G., & O’Connor, S. (2021). System Update: Towards a Women, Peace and Cybersecurity Agenda. United Nations Institute for Disarmament Research (UNIDIR). https://unidir.org/wp-content/uploads/2023/05/UNIDIR_System_Update.pdf
UN Women and UNU Macau. (2024). Artificial Intelligence and the Women, Peace and Security Agenda in South-East Asia. UN Women Regional Office for Asia and the Pacific. https://unu.edu/sites/default/files/2024-05/Artificial%20Intelligence%20and%20the%20Women%2C%20Peace%20and%20Security%20Agenda%20in%20South-East%20Asia.pdf