In March 2015, over 2,800 government representatives from around the world met in Sendai, Japan, for the Third United Nations Conference on Disaster Risk Reduction.
During this event, the Sendai Framework for Disaster Risk Reduction (2015-2030) was adopted, outlining clear priorities for humanity to prevent new and reduce existing disaster risks.
To this day, the Sendai Framework remains the sole multilateral instrument dedicated to coordinating disaster risk reduction on an international scale. Its community has historically focused on natural hazards.
Fast forward to 2022 and early 2023, when the Sendai Framework underwent a comprehensive period of review. This gave rise to new perspectives and ideas on disaster risk, a key one being the need to focus on risks from emerging technologies, such as artificial intelligence (AI) and biotechnology.
The Simon Institute for Longterm Governance (SI) took a leading role in championing this idea, and used various avenues to shed light on this neglected issue.
In Spring 2022, SI’s Co-CEO Max Stauffer attended the 7th session of the Global Platform on Disaster Risk Reduction (GPDRR) and delivered two statements on risks from emerging technologies.
The GPDRR, established at the request of the UN General Assembly, is the highest-profile forum for disaster risk professionals from across the world, and meets every two years. Held from 23 to 29 May 2022, the 7th session was attended by 3,000 people in-person, with another 1,500 online.
A few months later, SI delivered the first ever thematic study on Existential Risk and Rapid Technological Change upon a request from the United Nations Office for Disaster Risk Reduction (UNDRR).
In the study, SI explains the existential nature of various technological risks, and outlines 30 strategies for governing advances in artificial intelligence and biotechnology.
The Sendai Framework Mid-Term Review Report, published in March 2023, makes reference to this study several times. It notably makes mention of existential risks, dual-use applications in synthetic biology, and risks from advanced AI – topics commonly absent from discussions on risk reduction.
Going forward, the report will provide valuable and internationally legitimate substance for multilateral actors to address technological risks.
In early 2023, SI was invited alongside UN partners, member states and other key stakeholders, to contribute to a UN High-Level Meeting on Disaster Risk Reduction to reflect on the Midterm Review of the Sendai Framework.
During this meeting, several panelists highlighted the need for increased engagement from the disaster risk reduction community on technological risks.
Below are a few excerpts.
Duncan Cass-Beggs, former Councillor for Strategic Foresight at the OECD, described the unique nature of AI risk as an issue without a dedicated space in the UN system, and underscored the need for the disaster risk reduction community to voice concerns on risks from emerging technologies:
“The people who work most closely on the latest AI systems are very concerned. That’s why Sam Altman, the head of OpenAI, is talking about this issue with US Senators. It’s why the so-called Godfather of Machine Learning, Geoffrey Hinton, just resigned from Google to devote himself to raising the alarm. It’s why thousands of AI experts signed an open letter calling for a pause on developing the most dangerous kinds of AI, until the safety and control mechanisms can catch up.
Given the scale of this issue, it might seem like something that is too big for the Sendai Framework to deal with. And it is indeed too big for any of us to deal with alone. But one of the problems with this issue of global AI risk is that it doesn’t fit neatly, or have an obvious home, within government or within the UN system. The government departments most involved with AI are usually trying to promote its development rather than look out for the risks. Very few are doing the foresight about where this could all lead. We need a strong voice from the disaster risk reduction community about why this and other emerging risks from technology need to be taken seriously, and what could be done about them.
Managing these risks will require regulation first within the US, but then, also by the broader international community. And achieving such agreement will be very challenging. As we look forward to the SDG summit this September and to the Summit of the Future in 2024, it is quite possible that you will need a strong global coalition to pressure leading AI powers to act responsibly on behalf of humanity.”
The distinguished representative of India, Pramod Kumar Mishra, spoke on the need to balance the risks and benefits of artificial intelligence, and explained this need by drawing parallels to past technological advancements such as nuclear energy and the Internet:
“I fully agree there are some risks [with regard to artificial intelligence]. And of course, a key one is to balance the benefits and costs. But the question is, how do we do that? Even today there are two lines of thinking. The regulators talk about implementing restrictions, but very often the policymakers feel that artificial intelligence has great potential.
If we talk about such risks, many other scientific developments in the past have led to such dilemmas and debate, starting with nuclear energy. Similarly with the internet — something that has been a great game changer for development, but has also led to cyber attacks and risks.
So when we look at artificial intelligence, do we want to create a fear that it can destroy the whole world, or do we try to encourage it? I think here, we need to have a more informed approach to artificial intelligence. And the dilemma will be whether to focus on the benefits or the potential costs.”
Maxime Stauffer, co-CEO of SI, emphasized the significance of multilateral coordination in tackling technological risks and underscored the necessity for inclusive technology governance, putting forward five propositions:
“How can the current multilateral architecture reduce threats from technological developments? The good news is that we don’t necessarily need new knowledge, new foresight, or new processes to start making progress. And today, I want to put forward five propositions.
First, we need a greater focus on technological risks. These risks are traditionally neglected in the multilateral system. One reason for this is that their origins are very local, and often based in the Global North. But all countries are at risk of being exposed. The SDG mid-point summit this September provides an avenue to make the case that development drives the creation of risk, and that while AI can boost the SDGs, we also need to account for the risks.
Second, we need improved coordination. The consequences of technological risk are multisectoral – meaning all sectors will be exposed and affected by AI. Technological risks are already part of the Sendai Framework, but are not tackled by the disaster risk reduction community. We conducted a text analysis of all the national reports submitted to the Midterm review – and almost none of them mention technological risk. So we don’t necessarily need a new framework, or process – we just need to do better at implementing the existing one.
Third, we need more prevention. Historically, disaster response, while extremely costly, generally works. But today, and in the future, prevention will be increasingly crucial, due to the severity of technological risks. However, prevention is much easier said than done. Worldwide research shows that democracies, autocracies, and international organizations only change their budgets in response to disasters. Therefore, if we want to start doing prevention, we need to transform institutions and adopt fundamentally new ideas on how we govern risk. The Our Common Agenda report, the Futures Lab, and the Summit of the Future all provide avenues to think about such questions.
Fourth, we need to involve the private sector. With technological development, most risks originate from private sector activity. We need better models to keep member state, private sector, and civil society actors well-coordinated. The International Labour Organization pioneered one of these models, but we need others. To do so, we can leverage technical international organizations in the multilateral system, like the International Organization for Standardization, or the International Atomic Energy Agency.
Fifth, and most importantly, we need extremely strong inclusion. Discussions about tech development cannot only happen in the Global North. As tech developments amplify inequalities, inequalities also amplify risks. Reducing AI risk therefore needs to go hand in hand with reducing the Global Digital Divide. And one path to that is via the Global Digital Compact. The African Union's consultation on the Global Digital Compact already spotlights the need to highlight both the benefits and risks of artificial intelligence. Another path is via the SIDS conference — SIDS countries need access to technology, but are also extremely exposed to the risks.
In summary, the challenge we face is to address these new risks from development that human activity creates, and especially technological risks. But to be clear, we are extremely far behind. And technology is only developing faster. This will not take foresight or new processes. We will not reinvent the wheel. It will only take political will.”
With multiple panelists echoing the need to address risks from technology, a strong message was conveyed. The official summary of the meeting states:
“Panelists encouraged the DRR community to apply strategic foresight in current decision-making and planning, and to incorporate and advocate for technological risk management approaches.”
The findings of the midterm review of the Sendai Framework will play a crucial role in shaping key upcoming events and processes, including the September 2023 SDG summit, the 2023 Global Risk Report, and the 2024 Summit of the Future.
The Global Digital Compact and upcoming SIDS conference will also offer avenues for increased engagement by low- and middle-income countries in these discussions — helping to ensure that these dialogues are as inclusive as possible.
Each of these occasions will offer further opportunities for the disaster risk reduction community to raise awareness about technological risks and underscore the need for strong technology governance.
With a deadline set for 2030, the Disaster Risk Reduction community continues to strive towards its goals.
As efforts progress, the DRR community's improved understanding of extreme, human-made risks helps ensure that humanity can continue the pursuit of a safer and more secure tomorrow.
A story by Sofia Mikton and Cecilia Saura Drago for the Simon Institute for Longterm Governance, with the valuable contribution of Max Stauffer and Konrad Seifert.
The Simon Institute (SI) is a non-profit organization based in Geneva, Switzerland, with the mission of supporting multilateral governance actors to think long-term and develop instruments that reduce global catastrophic risks, improve quality of life, and ensure agency for present and future generations alike.
You can read more about SI’s work or subscribe to their newsletter at