Example of stag hunt in international relations

If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. In addition to leadership, the formation of a small but successful group is also likely to influence group dynamics. I refer to this as the AI Coordination Problem. Here, we have the formation of a modest social contract. Published by the Lawfare Institute. As described in the previous section, this arms race dynamic is particularly worrisome due to the existential risks that arise from AI's development, and it calls for appropriate measures to mitigate it. [7] E.g., Carol M. Rose argues that the stag hunt theory is useful in "law and humanities" theory. Structural conflict prevention refers to long-term interventions that aim to transform the key socioeconomic, political, and institutional factors that could lead to conflict. [21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more realms of society. One example payoff structure that results in a Deadlock is outlined in Table 9. [50] This is visually represented in Table 3, with each actor's preference order explicitly outlined. [1] Kelly Song, "Jack Ma: Artificial intelligence could set off WWIII, but humans will win," CNBC, June 21, 2017, https://www.cnbc.com/2017/06/21/jack-ma-artificial-intelligence-could-set-off-a-third-world-war-but-humans-will-win.html. In a case with a random group of people, most would choose not to trust strangers with their success.
As is customary in game theory, the first number in each cell represents how desirable the outcome is for Row (in this case, Actor A), and the second number represents how desirable the same outcome is for Column (Actor B). [31] Executive Office of the President, National Science and Technology Council, Committee on Technology, Preparing for the Future of Artificial Intelligence (October 2016), https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf; Artificial Intelligence, Automation, and the Economy, Executive Office of the President of the United States (December 2016), https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF. This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies. David Hume provides a series of examples that are stag hunts. Evidence from AI Experts (2017: 11-21), retrieved from http://arxiv.org/abs/1705.08807. However, both hunters know the only way to successfully hunt a stag is with the other's help. Table 5. Payoff variables for simulated Chicken game. This is taken to be an important analogy for social cooperation. Two hunters can either jointly hunt a stag (an adult deer and rather large meal) or individually hunt a rabbit (tasty, but substantially less filling). Jean-Jacques Rousseau (1712-1778): Parable of the Stag Hunt. Table 9.
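The Row/Column payoff convention described above can be made concrete in a small sketch. The ordinal payoffs below are illustrative stand-ins (4 = most preferred, 1 = least), not values taken from the essay's tables, and `preference_order` is a hypothetical helper:

```python
# Ordinal payoffs for a 2x2 game, keyed by (row_action, col_action).
# First element of each tuple: Row (Actor A); second: Column (Actor B).
stag_hunt = {
    ("C", "C"): (4, 4),  # both cooperate (hunt the stag together)
    ("C", "D"): (1, 3),  # A cooperates, B defects (B takes the hare)
    ("D", "C"): (3, 1),  # A defects, B cooperates
    ("D", "D"): (2, 2),  # both defect (both hunt hares)
}

def preference_order(payoffs, player):
    """Return outcomes sorted from most to least preferred for a player (0 = Row, 1 = Column)."""
    return sorted(payoffs, key=lambda outcome: payoffs[outcome][player], reverse=True)

print(preference_order(stag_hunt, 0))  # [('C', 'C'), ('D', 'C'), ('D', 'D'), ('C', 'D')]
```

Reading the output as CC > DC > DD > CD reproduces the preference-order notation the essay uses for each actor.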
[17] Michele Bertoncello and Dominik Wee, "Ten ways autonomous driving could redefine the automotive world," McKinsey & Company, June 2015, https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/ten-ways-autonomous-driving-could-redefine-the-automotive-world (suggesting that driverless cars could reduce traffic fatalities by up to 90 percent). Together, this is expressed as E_A[Coordination Regime] = P_b|A(A∩B) x d_A x b_A - P_h|A(A∩B) x h_A. One last consideration to take into account is the relationship between the probabilities of developing a harmful AI for each of these scenarios. In this paper, I develop a simple theory to explain whether two international actors are likely to cooperate or compete in developing AI and analyze what variables factor into this assessment. [41] AI, being a dual-use technology, does not lend itself to unambiguously defensive (or otherwise benign) investments. Furthermore, in June 2017, China unveiled a policy strategy document outlining grand ambitions to become the world leader in AI by 2030. In the stag hunt, what matters is trust: can actors trust that the other will follow through? That depends on what they believe about each other; what actors pursue hinges on how likely the other actor is to follow through. Game theory is the theory of strategic interaction. As a result, it is conceivable that international actors might agree to certain limitations or cooperative regimes to reduce insecurity and stabilize the balance of power. Each can individually choose to hunt a stag or hunt a hare. A classic game-theoretic allegory best demonstrates the various incentives at stake for the United States and Afghan political elites at this moment. The remainder of this subsection briefly examines each of these models and its relationship with the AI Coordination Problem. Therefore, "an agreement to play (c,c) conveys no information about what the players will do, and cannot be considered self-enforcing."
The complex machinations required to create a lasting peace may well be under way, but any viable agreement (and the eventual withdrawal of U.S. forces that it would entail) requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism. A person's choice to bind himself to a social contract depends entirely on his beliefs about whether the other person or people will choose the same. Huntington[37] makes a distinction between qualitative arms races (where technological developments radically transform the nature of a country's military capabilities) and quantitative arms races (where competition is driven by the sheer size of an actor's arsenal). To reiterate, the primary function of this theory is to lay out a structure for identifying what game models best represent the AI Coordination Problem and, as a result, what strategies should be applied to encourage coordination and stability. As of 2017, there were 193 member states of the international system as recognized by the United Nations. In this final section, I discuss the relevant policy and strategic implications this theory has on achieving international AI coordination and assess the strengths and limitations of the theory outlined above in practice. Actor A's preference order: CC > DC > DD > CD. Actor B's preference order: CC > CD > DD > DC. Intriligator and Brito[38] argue that qualitative/technological races can lead to greater instability than quantitative races. Additionally, both actors can expect a greater return if they both cooperate rather than both defect.
The area of international relations theory that is most characterized by overt metaphorical imagery is that of game theory. Although this imagery would suggest that the games were outgrowths of metaphorical thinking, the origins of game theory are actually to be found in mathematics. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas. Those in favor of withdrawal are skeptical that a few thousand U.S. troops can make a decisive difference when 100,000 U.S. soldiers proved incapable of curbing the insurgency. [26] Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, "Transcendence looks at the implications of artificial intelligence, but are we taking AI seriously enough?" The Independent, May 1, 2014, https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html. [16] Google DeepMind, "DeepMind and Blizzard open StarCraft II as an AI research environment," https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/. Hunting stags is most beneficial for society but requires a great deal of trust among its members. Under the assumption that actors have a combination of both competing and common interests, those actors may cooperate when those common interests compel such action. This technological shock factor leads actors to increase weapons research and development and to maximize their overall arms capacity to guard against uncertainty. As a result, a rational actor should expect to cooperate. Table 3.
As a result, security-seeking actions such as increasing technical capacity (even if not explicitly offensive, which is particularly relevant to the wide-encompassing capacity of AI) can be perceived as threatening and met with exacerbated race dynamics. For Rousseau, in his famous parable of the stag hunt, war is inevitable because of the security dilemma and the lack of trust between states. Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players that the result of the game depends on. Aumann's assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that. How a given player will behave in a given game, thus, depends on the culture within which the game takes place."[8] [12] Apple Inc., Siri, https://www.apple.com/ios/siri/. This same dynamic could hold true in the development of an AI Coordination Regime, where actors can decide whether to abide by the Coordination Regime or find a way to cheat. [47] George W. Downs, David M. Rocke, and Randolph M. Siverson, "Arms Races and Cooperation," World Politics 38, no. 1 (1985): 118-146. In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma, or common interest game, describes a conflict between safety and social cooperation. In order to assess the likelihood of such a Coordination Regime's success, one would have to take into account the two actors' expected payoffs from cooperating with or defecting from the regime. [58] Downs et al., "Arms Races and Cooperation," 143-144. For example, Jervis highlights the distinguishability of offensive and defensive postures as a factor in stability. The United States is in the hunt, too.
A hurried U.S. exit will incentivize Afghanistan's various competing factions more than ever before to defect in favor of short-term gains, on the assumption that one of the lead hunters in the band has given up the fight. Actor A's preference order: DC > DD > CC > CD. Actor B's preference order: CD > DD > CC > DC. [7] Aumann concluded that in this game "agreement has no effect, one way or the other." Here, both actors demonstrate varying uncertainty about whether they will develop a beneficial or harmful AI alone, but they both equally perceive the potential benefits of AI to be greater than the potential harms. However, in Deadlock, the prospect of both actors defecting is more desirable than both actors cooperating. This makes the risk twofold: the risk that the stag does not appear, and the risk that another hunter takes the kill. This table contains an ordinal representation of a payoff matrix for a Chicken game. In addition to the pure-strategy Nash equilibria, there is one mixed-strategy Nash equilibrium. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry. Namely, the probability of developing a harmful AI is greatest in a scenario where both actors defect, while the probability of developing a harmful AI is lowest in a scenario where both actors cooperate. [30] Today, government actors have already expressed great interest in AI as a transformative technology. The hunters hide and wait along a path. For example, suppose we have a prisoner's dilemma as pictured in Figure 3.
Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of the actor's perceived likelihood that such a regime would create a harmful AI, expressed as P_h|A(A∩B) for Actor A and P_h|B(A∩B) for Actor B, times each actor's perceived harm, expressed as h_A and h_B. One significant limitation of this theory is that it assumes that the AI Coordination Problem will involve two key actors. [3] While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Despite the large number of variables addressed in this paper, this is at its core a simple theory, with the aim of motivating additional analysis and research to branch off. This democratic peace proposition not only challenges the validity of other political systems (i.e., fascism, communism, authoritarianism, totalitarianism), but also the prevailing realist account of international relations, which emphasizes balance-of-power calculations and common strategic interests in order to explain the peace and stability that characterizes relations between liberal democracies. [51] An analogous scenario in the context of the AI Coordination Problem could be if both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain. For example, can the structure of distribution impact an actor's perception of the game as cooperation- or defection-dominated (and if so, should we focus strategic resources on developing accountability strategies that can effectively enforce distribution)? Continuous coordination through negotiation in a Prisoner's Dilemma is somewhat promising, although a cooperating actor runs the risk of a rival defecting if there is not an effective way to ensure and enforce cooperation in an AI Coordination Regime.
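The probability-times-magnitude structure of these expected payoffs can be sketched numerically. All parameter names and numbers below are illustrative assumptions, not the essay's estimates; the formula mirrors the perceived-probability-of-benefit times benefit-share term minus the perceived-probability-of-harm times harm term:

```python
def expected_regime_payoff(p_benefit, p_harm, benefit, harm, share):
    """Expected value an actor assigns to joining an AI Coordination Regime:
    the perceived chance the regime yields a beneficial AI, times the actor's
    share of the distributed benefits, minus the perceived chance the regime
    yields a harmful AI, times the perceived harm."""
    return p_benefit * share * benefit - p_harm * harm

# Illustrative numbers only: an actor who expects half the regime's benefits,
# sees a 50% chance of a beneficial outcome and a 25% chance of a harmful one.
print(expected_regime_payoff(p_benefit=0.5, p_harm=0.25, benefit=100, harm=50, share=0.5))  # 12.5
```

A positive result under an actor's own beliefs is what makes joining the regime preferable to going it alone in this framework.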
Similar to the Prisoner's Dilemma, Chicken occurs when each actor's greatest preference would be to defect while their opponent cooperates. Solving this problem requires more understanding of its dynamics and strategic implications before hacking at it with policy solutions. If all the hunters work together, they can kill the stag and all eat. [10] "AI expert Andrew Ng says AI is the new electricity | Disrupt SF 2017," TechCrunch Disrupt SF 2017, TechCrunch, September 20, 2017, https://www.youtube.com/watch?v=uSCka8vXaJc. Payoff matrix for simulated Deadlock. [49] For example, by defecting from an arms-reduction treaty to develop more weapons, an actor can gain the upper hand on an opponent who decides to uphold the treaty by covertly continuing or increasing arms production. An example of the game of Stag Hunt can be illustrated by neighbours with a large hedge that forms the boundary between their properties. But the moral is not quite so bleak. In this model, each actor's incentives are not fully aligned to support mutual cooperation, and thus this should present worry for individuals hoping to reduce the possibility of developing a harmful AI. Based on the values that each actor assigns to their payoff variables, we can expect different coordination models (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) to arise. The game is a prototype of the social contract. For instance, if the expected punishment is 2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction. I introduce the example of the Stag Hunt Game, a short, effective, and easy-to-use activity that simulates Jean-Jacques Rousseau's political philosophy. This may not amount to a recipe for good governance, but it has meant the preservation of a credible bulwark against state collapse.
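The mapping from payoff values to one of the four coordination models can be sketched as a lookup over Row's preference ordering. The canonical orderings below follow the preference orders used in the essay; `classify` is a hypothetical helper, and the payoff numbers are illustrative:

```python
# Canonical Row (Actor A) preference orders over the four outcomes
# (CC, DC, DD, CD) for the four 2x2 models discussed in the text.
MODELS = {
    ("CC", "DC", "DD", "CD"): "Stag Hunt",
    ("DC", "CC", "DD", "CD"): "Prisoner's Dilemma",
    ("DC", "CC", "CD", "DD"): "Chicken",
    ("DC", "DD", "CC", "CD"): "Deadlock",
}

def classify(payoffs):
    """payoffs: dict mapping 'CC', 'DC', 'DD', 'CD' to Row's ordinal payoff.
    Returns the matching model name, or 'unclassified'."""
    order = tuple(sorted(payoffs, key=payoffs.get, reverse=True))
    return MODELS.get(order, "unclassified")

print(classify({"CC": 4, "DC": 3, "DD": 2, "CD": 1}))  # Stag Hunt
print(classify({"CC": 3, "DC": 4, "DD": 2, "CD": 1}))  # Prisoner's Dilemma
```

Note that in Chicken, mutual defection (DD) is the worst outcome, whereas in Deadlock mutual defection is preferred to mutual cooperation; the lookup encodes exactly those distinctions.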
To begin exploring this, I now look to the literature on arms control and coordination. Instead, each hunter should separately choose the more ambitious and far more rewarding goal of getting the stag, thereby giving up some autonomy in exchange for the other hunter's cooperation and added might. Moreover, the AI Coordination Regime is arranged such that Actor B is more likely to gain a higher distribution of AI's benefits. The closest approximation of this in international relations are universal treaties, like the Kyoto Protocol environmental treaty. In order for human security to challenge global inequalities, there has to be cooperation between a country's foreign policy and its approach to global health. In this section, I briefly argue that state governments are likely to eventually control the development of AI (either through direct development or intense monitoring and regulation of state-friendly companies),[29] and that the current landscape suggests two states in particular, China and the United States, are most likely to reach development of an advanced AI system first. Deadlock is a common if little-studied occurrence in international relations, although knowledge about how deadlocks are solved can be of practical and theoretical importance. It sends a message to the country's fractious elites that the rewards for cooperation remain far richer than those that would come from going it alone.
This is visually represented in Table 3, with each actor's preference order explicitly outlined. To be sustained, a regime of racial oppression requires cooperation. For example, in a scenario where the United States and Russia are competing to be the one to land on the moon first, the stag hunt would allow the two countries to work together to achieve this goal when they would otherwise have gone their separate ways and attempted the lunar landing on their own. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria:[2] one where both players cooperate, and one where both players defect. But, after nearly two decades of participation in the country's fledgling democratic politics, economic reconstruction, and security-sector development, many of these strongmen have grown invested in the Afghan state's survival and the dividends that they hope will come with greater peace and stability. [52] Stefan Persson, "Deadlocks in International Negotiation," Cooperation and Conflict 29, no. 3 (1994): 211-244. (e.g., including games such as Chicken and Stag Hunt). Each model is differentiated primarily by the payoffs to cooperating or defecting for each international actor. Civilians and civilian objects are protected under the laws of armed conflict by the principle of distinction: parties to an armed conflict must always distinguish between civilians and civilian objects on the one hand, and combatants and military targets on the other. Nations are able to communicate with each other freely, something that is forbidden in the traditional PD game. For the cooperator (here, Actor B), the benefit they can expect to receive from cooperating would be the same as if both actors cooperated: P_b|B(A∩B) x b_B x d_B.
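The contrast between the stag hunt's two pure-strategy equilibria and the prisoner's dilemma's single one can be checked mechanically. A minimal sketch with illustrative ordinal payoffs (4 = best, 1 = worst), where `pure_nash_equilibria` is a hypothetical helper:

```python
def pure_nash_equilibria(payoffs, actions=("C", "D")):
    """Find pure-strategy Nash equilibria of a 2x2 game: profiles where
    neither player can gain by unilaterally switching actions."""
    equilibria = []
    for r in actions:
        for c in actions:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

stag_hunt = {("C", "C"): (4, 4), ("C", "D"): (1, 3), ("D", "C"): (3, 1), ("D", "D"): (2, 2)}
prisoners_dilemma = {("C", "C"): (3, 3), ("C", "D"): (1, 4), ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

print(pure_nash_equilibria(stag_hunt))          # [('C', 'C'), ('D', 'D')]
print(pure_nash_equilibria(prisoners_dilemma))  # [('D', 'D')]
```

In the stag hunt, mutual cooperation and mutual defection are both self-enforcing; in the prisoner's dilemma, only mutual defection survives the unilateral-deviation check.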
In the stag hunt, payoffs satisfy a > b ≥ d > c, where a is the payoff when both hunt the stag, b the payoff for hunting hare while the other hunts stag, d the payoff when both hunt hare, and c the payoff for hunting stag alone. One example addresses two individuals who must row a boat. A great example of Chicken in IR is the Cuban Missile Crisis. This translates the informal arguments of Hume and Hobbes into an exact version of the two-person Stag Hunt. The remainder of this subsection looks at numerical simulations that result in each of the four models and discusses potential real-world hypotheticals these simulations might reflect. Using their intuition, the remainder of this paper looks at strategy and policy considerations relevant to some game models in the context of the AI Coordination Problem. As the infighting continues, the impulse to forego the elusive stag in favor of the rabbits on offer will grow stronger by the day. Additionally, this model accounts for an AI Coordination Regime that might result in variable distribution of benefits for each actor. The responsibility to protect begins with (1) the responsibility of the state to protect its own population from genocide, war crimes, ethnic cleansing, and crimes against humanity, and from their incitement.
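Under the a > b ≥ d > c ordering, the mixed-strategy equilibrium mentioned earlier follows from the indifference condition: each player mixes so that the other is indifferent between Stag and Hare. A sketch with illustrative payoffs a = 4, b = 3, d = 2, c = 1:

```python
def mixed_equilibrium_p(a, b, c, d):
    """Probability of playing Stag in the symmetric mixed-strategy equilibrium
    of a stag hunt with a > b >= d > c, where a = (Stag, Stag), b = Hare against
    Stag, c = Stag against Hare, d = (Hare, Hare). At this p, the expected
    payoffs of Stag and Hare are equal: p*a + (1-p)*c == p*b + (1-p)*d."""
    return (d - c) / ((a - b) + (d - c))

p = mixed_equilibrium_p(a=4, b=3, c=1, d=2)
print(p)  # 0.5
# Indifference check: both strategies earn the same expected payoff at p.
print(p * 4 + (1 - p) * 1, p * 3 + (1 - p) * 2)  # 2.5 2.5
```

A higher p means the other player must be more confident in cooperation before hunting the stag becomes a best response, which is one way to read the essay's emphasis on trust.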
2020 Yale International Relations Association | New Haven, CT. Payoff variables:
P_b|A(A): probability Actor A believes it will develop a beneficial AI
P_b|B(A): probability Actor B believes Actor A will develop a beneficial AI
P_b|A(B): probability Actor A believes Actor B will develop a beneficial AI
P_b|B(B): probability Actor B believes it will develop a beneficial AI
P_b|A(A∩B): probability Actor A believes an AI Coordination Regime will develop a beneficial AI
P_b|B(A∩B): probability Actor B believes an AI Coordination Regime will develop a beneficial AI
d_A: percent of benefits Actor A can expect to receive from an AI Coordination Regime
d_B: percent of benefits Actor B can expect to receive from an AI Coordination Regime
b_A: Actor A's perceived utility from developing beneficial AI
b_B: Actor B's perceived utility from developing beneficial AI
P_h|A(A): probability Actor A believes it will develop a harmful AI
P_h|B(A): probability Actor B believes Actor A will develop a harmful AI
P_h|A(B): probability Actor A believes Actor B will develop a harmful AI
P_h|B(B): probability Actor B believes it will develop a harmful AI
P_h|A(A∩B): probability Actor A believes an AI Coordination Regime will develop a harmful AI
P_h|B(A∩B): probability Actor B believes an AI Coordination Regime will develop a harmful AI
h_A: Actor A's perceived harm from developing a harmful AI
h_B: Actor B's perceived harm from developing a harmful AI
In 2016, the Obama Administration developed two reports on the future of AI. The academic example is the Stag Hunt, which can be viewed through the lens of international relations. Assurance game is a generic name for the game more commonly known as Stag Hunt. The stag hunt problem originated with the philosopher Jean-Jacques Rousseau in his Discourse on Inequality. One example payoff structure that results in a Prisoner's Dilemma is outlined in Table 7. This is taken to be an important analogy for social cooperation. The dynamics change once the players learn with whom they interact. Depending on the payoff structures, we can anticipate different likelihoods of and preferences for cooperation or defection on the part of the actors. This subsection looks at the four predominant models that describe the situation two international actors might find themselves in when considering cooperation in developing AI, where research and development is costly and its outcome is uncertain. Here, values are measured in utility. The response from Kabul involved a predictable combination of derision and alarm, for fear that bargaining will commence on terms beyond the current administration's control. This could be achieved through signaling a lack of effort to increase an actor's military capacity (perhaps by domestic bans on AI weapon development, for example). [6] Aumann proposed: "Let us now change the scenario by permitting pre-play communication." The question becomes: why don't they always cheat?
Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. Here, if they all act together they can successfully reproduce, but success depends on the cooperation of many individual protozoa. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.[26] As we discussed in class, the catch is that the players involved must all work together in order to successfully hunt the stag and reap the rewards; once one person leaves the hunt for a hare, the stag hunt fails and those involved wind up with nothing. They will be tempted to use the prospect of negotiations with the Taliban and the upcoming election season to score quick points at their rivals' expense, foregoing the kinds of political cooperation that have held the country together until now. Game Theory 101: William Spaniel shows how to solve the Stag Hunt using pure-strategy Nash equilibrium. For example, international sanctions involve cooperation against target countries (Martin, 1992a; Drezner, ...). In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. [13] Tesla Inc., Autopilot, https://www.tesla.com/autopilot. The 18th-century political philosopher Jean-Jacques Rousseau famously described a dilemma that arises when a group of hunters sets out in search of a stag: to catch the prized male deer, they must cooperate, waiting quietly in the woods for its arrival.
Additionally, the feedback, discussion, resource recommendations, and inspiring work of friends, colleagues, and mentors in several time zones, especially Amy Fan, Carrick Flynn, Will Hunt, Jade Leung, Matthijs Maas, Peter McIntyre, Professor Nuno Monteiro, Gabe Rissman, Thomas Weng, Baobao Zhang, and Remco Zwetsloot, were vital to this paper and are profoundly appreciated. Meanwhile, the escalation of an arms race where neither side halts or slows progress is less desirable for each actor's safety than both fully entering the agreement. In this game, "each player always prefers the other to play c, no matter what he himself plays." The field of international relations has long focused on states as the most important actors in global politics. [11] McKinsey Global Institute, Artificial Intelligence: The Next Digital Frontier, June 2017, https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx: 5 (estimating that major tech companies spent $20-30 billion on AI development and acquisitions in 2016). Interestingly enough, the Stag Hunt theory can be used to describe social contracts within society, with the contract being to hunt the stag together and achieve mutual benefit. Another example is the hunting practices of orcas (known as carousel feeding). Impressive victories over humans in chess by AI programs[14] are being dwarfed by AI's ability to compete with and beat humans at exponentially more difficult strategic endeavors like the games of Go[15] and StarCraft. Most events in IR are not mutually beneficial, like in the Battle of the Sexes.
Sharp's consent theory of power is the most well-articulated connection between nonviolent action and power theory, yet it has some serious shortcomings, especially in dealing with systems not fitting a ruler-subject dichotomy, such as capitalism, bureaucracy, and patriarchy. Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. [4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. As a result, it is important to consider Deadlock as a potential model that might explain the landscape of AI coordination. Table 4.
