Use of Artificial Intelligence in Future Armed Conflicts

The increase in the use of Unmanned Aircraft Systems (UASs) in military and commercial environments has led to a heated debate over a ban on “killer robots” (Human Rights Watch, para 1). These machines, which can operate on the ground or in the air, could hypothetically execute missions autonomously when combined with artificial intelligence (AI). The debate, which involves many stakeholders, raises the serious question of whether AI systems should be allowed to carry out combat operations, especially when human casualties are likely. Given the complexity of the issue, a working understanding of AI is necessary. The following study analyzes the significance of AI in the military setting and some of the advantages and limitations of using AI in drones during combat.

In general terms, AI is the ability of a computing machine to perform activities that would otherwise require human reasoning and decision-making, such as speech recognition, visual discrimination, and analytic decision-making (Russell and Norvig 2). However, this definition leaves open what counts as reasoning, and it has always been subject to debate.

By that definition, a household controller such as a thermostat could be considered intelligent because the device can sense and change the temperature (Castelfranchi and Lespérance 345). This is not the same as the artificial intelligence usually envisioned for autonomous weapons systems, in which a drone selects and plans courses of action without significant human intervention. Another key factor in discussions of autonomous weapons is the growing difficulty of separating military drone autonomy from commercial autonomy (Poss, para 3). As the business sector develops autonomous land and air systems, there is clear evidence of AI capability shifting from the military to commercial industry (Poss, para 5). Restricting autonomous innovation for military purposes may therefore be unrealistic, since the same technology may become commercially available at the low end of the consumer market. Moreover, as commercial autonomous systems improve on their own trajectory, governments and armies would no longer dominate the technology, which could result in unsafe and dangerous AI systems, both semi-autonomous and autonomous.

One concept that is often interlinked with AI is drone swarming. The technology behind drone swarming allows drones to make decisions autonomously based on decisions shared by other drones (Longo 1). This is promising because it could revolutionize drone applications in military warfare. Countries such as China have succeeded in developing drone swarm technology (Romaniuk and Burgers 1). The US, meanwhile, has been testing the Low-Cost UAV Swarming Technology (LOCUST) program, which applies AI across a number of drones to help gather data on the battlefield (Longo 1). Swarms have the potential to revolutionize military operations because they are applicable in every area of national and homeland security. For example, naval forces can use swarms of drones to search the ocean for enemy submarines. Drones can also be deployed over large areas to help identify enemy hideouts and eliminate air defense systems. Drone swarms can act as a shield by helping intercept incoming hypersonic missiles, and they can be equipped with chemical, biological, radiological, and nuclear (CBRN) sensors to help identify and respond to such threats.
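
To make the idea of shared decision-making concrete, the following is a minimal Python sketch of how a swarm might fuse each drone’s local sensor estimate into one collective decision. Every name here (Drone, swarm_decision, the 0.7 engagement threshold) is a hypothetical illustration rather than any fielded system, and a real swarm would exchange these values over a radio mesh rather than inside one process.

```python
# Minimal sketch of swarm-level decision fusion (illustrative only;
# Drone, swarm_decision, and the threshold are invented names/values).
from dataclasses import dataclass
from statistics import mean

@dataclass
class Drone:
    drone_id: int
    target_confidence: float  # local sensor estimate in [0.0, 1.0]

def swarm_decision(drones, threshold=0.7):
    """Fuse each drone's local estimate into one shared decision.

    A real swarm would broadcast these values over an inter-drone
    link; here the fusion is simply the mean of local confidences.
    """
    fused = mean(d.target_confidence for d in drones)
    return fused >= threshold, fused

swarm = [Drone(1, 0.9), Drone(2, 0.65), Drone(3, 0.8)]
engage, confidence = swarm_decision(swarm)
print(f"engage={engage}, fused_confidence={confidence:.2f}")
```

Averaging is only one possible fusion rule; a deployed system might instead use voting or Bayesian updating, but the point is that the decision emerges from information shared across the swarm rather than from any single drone.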

AI and Military Application

The fate of artificial intelligence in combat scenarios is primarily linked to the capacity of specialists to develop systems that can autonomously apply knowledge and logical reasoning to the data they are fed (Kibble). Currently, no such systems are in operation. Most fielded drones are remotely piloted, meaning a human retains direct control from a distance, for example via a satellite link. Most uncrewed military aircraft are only gradually gaining semi-autonomy, meaning they can navigate and land by themselves without human input (Allen and Chan 27). However, almost all of them require some degree of human input to complete their combat operations. Even drones that take off, fly to a specific target to take pictures, and return to base operate on an automated system that cannot be compared to the operations of a semi-autonomous, let alone a fully autonomous, drone.

Although current systems are programmed to be automatic rather than autonomous, research into autonomy continues. In many countries, air, land, surface, and underwater vehicles have gradually improved their military capabilities, with varying results (Allen and Chan 13). Autonomous helicopters that an operator can direct from a cell phone are under development in the United States, Europe, and China, and autonomous drones are being researched and developed rapidly across the world. However, the organizations building these systems still find it difficult to make a qualitative leap in operational performance (Harwood). There are many reasons for this, including unexpected costs and unforeseen technical problems, although organizational and social barriers also slow the adoption of autonomous UAS. The United States, for example, has struggled to bring autonomous drones into operation mainly because of hierarchical infighting and the prioritization of crewed aircraft (Spinetta and Cummings 11).

Some service members regard drone systems as suitable only for support roles such as rescue operations, yet feel their standing threatened as drones become qualified to perform the most prestigious and advanced tasks. Other hierarchical problems also restrict the operational use of autonomous aircraft, and an increasingly pressing issue is the commercialization of AI drone technology. Cargo delivery services such as UPS, FedEx, and DHL are exploring the application of military drone technology to cargo delivery (Poss, para 7), and a figurative arms race is under way among companies to improve autonomous systems. Improvement in autonomous military systems has been, at best, moderate and steady, and has not matched the progress made in commercial contexts such as driverless vehicles and, in particular, drones.

Several roboticists and military experts have argued that autonomous drones should not be regarded as a threat on ethical grounds but should instead be preferred over human fighters. According to roboticist Ronald C. Arkin, future autonomous drones may exercise better judgment on the battlefield because they need not be programmed for the self-preservation instinct that drives human behavior (Arkin 338). In this case, drones could make ethical decisions that avoid “shoot first and ask questions later” scenarios (Arkin 338). Furthermore, autonomous drones would not be plagued by human emotions such as fear in combat. Such systems could handle far more incoming information than humans can and thus improve some battlefield outcomes.

According to Lt. Col. Douglas A. Pryer, there are ethical advantages to implementing autonomous weapons such as drones and removing humans from high-stress combat zones (Pryer 14). He notes that, from a neurological perspective, the neural circuits responsible for receiving and relaying information in the brain tend to shut down when overloaded with stress, which can lead soldiers to lose control of themselves and commit war crimes (Pryer 14).

There are many potential benefits to expanding AI’s combat role. As AI capabilities mature, systems become better able to verify decisively whether an approach is sound or mistaken (Allen and Chan 28). Intelligent drones could be designed to reduce the number of war crimes against individuals, such as the massacre of civilians or prisoners of war, the destruction of civilian property, and the torture of prisoners, while limiting collateral damage (Kibble 2). Artificial intelligence may become more competent than humans at modern combat strategies and methods, and errors in the war zone would be greatly reduced. It is already possible to calculate the expected degree of destruction of an attack in advance, which can reduce the injuries that would otherwise result from an unpredictable strike (Cass 1017). Further human rights violations, such as gratuitous aggression and suffering, could likewise be curtailed once AI becomes the predominant method of combat.

Drone swarms offer customization and flexibility to military commanders, allowing them to add or remove drones from a formation as necessary (Kallenborn, para 12). However, this requires a new form of inter-drone communication protocol under which drones can easily be added to or removed from a swarm, whether by command or through hostile action (Kallenborn, para 13). The adaptability of intelligent drone swarms means the drones can adjust to the needs of the situation and relieve commanders of small tasks such as managing individual aircraft. Customization means the size of the swarm can be tuned to the mission, and commanders can equip the drones with sensors, weapons, or other payloads as required. Drone swarms can also break into small groups or merge into a single unit, which can change battlefield dynamics quickly (Kallenborn, para 14). For example, a swarm can split into small groups to conduct reconnaissance over an area and, if it spots an adversary, merge again to eliminate the target.
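
As an illustration of what such a membership mechanism might look like, here is a minimal Python sketch of heartbeat-based roster tracking: any drone whose periodic announcement stops arriving, whether it was removed by command or lost to hostile action, is silently dropped from the formation, and a newly added drone joins simply by announcing itself. The SwarmRoster class and its five-second timeout are hypothetical choices for the example, not an actual military protocol.

```python
# Hypothetical sketch of heartbeat-based swarm membership; the class
# name and timeout are invented for illustration.
import time

class SwarmRoster:
    """Tracks active swarm members, dropping any drone whose last
    heartbeat is older than `timeout` seconds."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}  # drone_id -> time of last heartbeat

    def heartbeat(self, drone_id):
        # Receiving an announcement refreshes an existing member and
        # also serves as the join mechanism for a newly added drone.
        self.last_seen[drone_id] = time.monotonic()

    def active_members(self):
        now = time.monotonic()
        return [d for d, t in self.last_seen.items()
                if now - t <= self.timeout]

roster = SwarmRoster(timeout=5.0)
roster.heartbeat("drone-1")
roster.heartbeat("drone-2")
print(roster.active_members())  # both drones currently active
```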

Limitations of Implementing AI-Based UAS in the Military

Artificial intelligence is designed for specific purposes and is dependent on context; it works successfully on discrete, narrowly scoped tasks. AI also requires large labeled datasets, and the time and speed needed to assemble them challenge the defense sector’s conventional practice of managing sensitive information through restricted access and silos (Allen and Chan 10). Misunderstanding AI’s limitations, often out of an eagerness to field intelligent machines, can in fact exacerbate risk and increase the potential for setbacks. Artificial intelligence introduces new weaknesses as well as opportunities into systems (Kumar et al. 2). Although system failure in combat is not usually linked to AI, AI failures can occur in unfamiliar, novel, and sudden ways. It can also be difficult to verify that an AI system works as intended: applying AI means identifying unwanted behavior and ensuring that the system no longer exhibits it. Introducing artificial intelligence into combat therefore requires an assessment of previous failures and their consequences. Even in major security applications, such as drone video assessment, AI remains an unsatisfactory solution if it depends only on current machine capabilities (Allen and Chan 13).

Some choices faced in combat must remain with humans. While artificial intelligence is well suited to security challenges that are clearly scoped and well characterized, people must resist the urge to hand it their most difficult problems, namely judging how and when AI weapons should be used. Moreover, the use of machine learning, AI, and data analytics is not a mechanism by which people can waive their obligation to make decisions for themselves (Kibble 1).

Artificial intelligence is likely to be just one of many tools in the computational toolbox. Given the particular character of its development, learning-based systems may not be the best solution for some problems. Nonetheless, the military enterprise has areas well suited to artificial intelligence, and such restrictive considerations should not be absent from discussions on the use of deadly military force.

The 2008 cyber attack on the Pentagon, headquarters of the United States Department of Defense, is an excellent example of how cyber tools could today be used to deceive autonomous weapons, causing havoc while creating serious problems of accountability among states (BBC, para 1). One can even imagine a circumstance in which the developer of malicious code is an artificial intelligence itself, which complicates the situation further (Simonite, para 2, 8). Assigning accountability in such a scenario becomes a challenge, as no party can clearly be blamed.

Drone swarming also faces challenges in areas such as electronic warfare, where certain vulnerabilities threaten the program. The functionality of a drone swarm depends on the ability of its drones to link with one another and communicate. If the drones cannot communicate because of signal jamming, the entire swarm can fail catastrophically (Lachow 98). However, some drone swarms can be hardened to withstand signal jamming, which can play a major role in military warfare. The AI on board the drones can establish alternative communication relays in a jamming scenario or, alternatively, warn commanders of an impending jamming attack.
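
The relay idea can be sketched in Python by modeling the drones as nodes in a connectivity graph: when jamming severs a direct link, the swarm searches for any surviving multi-hop route through other drones and raises an alert if none exists. The graph, node names, and jamming scenario below are all invented for illustration.

```python
# Illustrative only: rerouting swarm traffic around a jammed link by
# breadth-first search over the drones' connectivity graph.
from collections import deque

def find_relay_path(links, src, dst):
    """Return any surviving multi-hop route from src to dst,
    or None if the swarm has been partitioned by jamming."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route left: warn the commander of jamming

# The direct link A -> D has been jammed, but B and C can relay.
links = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}
print(find_relay_path(links, "A", "D"))  # ['A', 'B', 'C', 'D']
```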

Advances in drone technology can also help drones withstand signal jamming. One concept recently applied to drones is stigmergy, a means of indirect communication found in swarming insects such as ants (Cimino, Lazzeri, and Vaglini 1). Stigmergy allows insects to pass information to one another by leaving traces in their environment (Cimino et al. 2). Applied to intelligent drones, it could let them share information about impending threats and help formulate real-time responses.
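
A toy Python sketch of stigmergic coordination follows, assuming a shared two-dimensional “pheromone” grid stands in for the environmental traces insects leave: each drone marks every cell it visits and steers toward its least-visited neighboring cell, so the swarm spreads across a search area without exchanging any direct messages. The grid size and movement rule are arbitrary choices for the example.

```python
# Toy stigmergy sketch: the shared grid itself carries the
# coordination signal; no drone messages another directly.
import random

GRID = 8
pheromone = [[0.0] * GRID for _ in range(GRID)]  # shared "trace" map

def step(pos):
    """Move one drone to its least-visited neighbor, then mark it."""
    x, y = pos
    neighbors = [(x + dx, y + dy)
                 for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if 0 <= x + dx < GRID and 0 <= y + dy < GRID]
    # Random tie-breaking keeps drones from clumping on equal cells.
    nx, ny = min(neighbors,
                 key=lambda c: (pheromone[c[0]][c[1]], random.random()))
    pheromone[nx][ny] += 1.0  # deposit a trace for other drones
    return (nx, ny)

drones = [(0, 0), (7, 7)]
for _ in range(20):
    drones = [step(p) for p in drones]
print(drones)  # final positions after a short stigmergic search
```

The appeal described above is that the coordinating information is embedded in a shared medium rather than in point-to-point messages between specific drones, so the scheme degrades more gracefully when individual links are jammed.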

Protecting the Military from the Negative Effects of AI

The potential for AI to be deliberately turned to mass killing in combat is a pressing issue. The United States and other countries are striving to create military AI, such as autonomous drones and weapons, that enhances capabilities in war zones while exposing fewer of their own soldiers to injury or death. For the United States, this would be a natural extension of the current drone program, which has itself created another problem by increasing the number of terrorists (Hussain, para 3). The Pentagon states that it does not intend to exclude people from the decision-making process that authorizes the use of lethal force (Ackerman, para 1). Still, AI has at times proven to outperform humans in decision-making. This has raised concerns that global competition could trigger an arms race toward fully autonomous weapons that, to some extent, may not be capable of sound decisions. As a result, there have been persistent calls to regulate the use of AI in autonomous weapon systems such as UAS.

The United Nations is examining how universal limits on lethal autonomous weapons might be implemented. In 2015, a large number of AI researchers proposed restricting autonomous weapons that operate without human control, because such weapons are well suited to purposes such as destabilizing countries, carrying out assassinations, and oppressing populations, particularly ethnic groups (Berkeley Engineering). These calamities could be carried out by AI systems that already exist or soon will, in which features such as facial recognition make it easier for a drone to hunt down and assassinate a particular target. Worse still, an algorithm loaded onto a drone could select targets based on skin color.

One notable aspect of protecting the military and national infrastructure is that machines, not humans, may come to make essential decisions in the world of national security. Human-created AI will affect the administration, operation, and development of military power. Beyond swarms of autonomous drones that target enemies more effectively, it offers military commanders new and tested alternatives in combat. Although the Department of Defense has promised that people will always make the final decision to kill another person, real questions remain about what that promise means if AI can improve the performance of weapons to the point that they independently identify and choose alternative courses of action to achieve mission objectives from parameters fed to them in real time (Ilachinski 11).

The ability to develop artificial reasoning will affect each of the three phases of national strategy development: identification, decision-making, and evaluation, likely providing both benefits and drawbacks. After locating and examining large datasets, policymakers will have more detail than ever in their briefs on a wide range of topics, from security developments to troop movements and enemy military reconnaissance (Horowitz 41).

Military and security organizations can also use artificial intelligence as a targeting system. AI can identify and present new threats and provide effective countermeasures to mitigate them (Soffar, para 14). This can neutralize long-distance threats in the combat zone and provide methods for countering improvised explosive devices (IEDs). For example, drones mounted with AI can scan an area to identify threats to infrastructure such as dams. One perceived threat is that terrorists could attach small explosives to drones, operated autonomously by an AI system, and fly them over a military facility as remote-controlled bombs (Wallace and Loffi 11). However, military drones fitted with AI technologies can detect such drones and engage signal-jamming capabilities to disable the threat (Beaudoin et al. 5).

 

 

Works Cited

Ackerman, Spencer. “Pentagon: A Human Will Always Decide When a Robot Kills You.” WIRED, 2012, https://www.wired.com/2012/11/human-robot-kill/.

Allen, Greg, and Taniel Chan. Artificial Intelligence and National Security. Belfer Center for Science and International Affairs, 2017.

Arkin, Ronald C. “The case for ethical autonomy in unmanned systems.” Journal of Military Ethics 9.4 (2010): 332-341.

BBC. “US Military ‘Hit In Cyber Strike.'” BBC News, 2010, https://www.bbc.com/news/world-us-canada-11088658.

Beaudoin, Laurent, et al. “Potential threats of UAS swarms and the countermeasure’s need.” 2011.

Cass, Kelly. “Autonomous Weapons and Accountability: Seeking Solutions in the Law of War.” Loyola of Los Angeles Law Review 48 (2014): 1017.

Castelfranchi, Cristiano, and Yves Lespérance, editors. Intelligent Agents VII: Agent Theories, Architectures, and Languages. 7th International Workshop, ATAL 2000, Boston, MA, USA, July 7-9, 2000, Proceedings. Springer, 2003.

Cimino, Mario G. C. A., Alessandro Lazzeri, and Gigliola Vaglini. “Combining Stigmergic and Flocking Behaviors to Coordinate Swarms of Drones Performing Target Search.” 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA), IEEE, 2015.

Harwood, Robert. “The Challenges to Developing Fully Autonomous Drone Technology.” Ansys.com, 2019, https://www.ansys.com/blog/challenges-developing-fully-autonomous-drone-technology.

Horowitz, Michael C. “Artificial Intelligence, International Competition, and the Balance of Power.” Texas National Security Review (May 2018).

Human Rights Watch. “Arms: New Campaign to Stop Killer Robots.” Human Rights Watch, 2013, https://www.hrw.org/news/2013/04/23/arms-new-campaign-stop-killer-robots.

Hussain, Murtaza. “Retired General: Drones Create More Terrorists Than They Kill, Iraq War Helped Create ISIS.” The Intercept, 2015, https://theintercept.com/2015/07/16/retired-general-drones-create-terrorists-kill-iraq-war-helped-create-isis/.

Ilachinski, Andrew. Artificial Intelligence and Autonomy: Opportunities and Challenges. No. DIS-2017-U-016388-Final. Center for Naval Analyses Arlington United States, 2017.

Kallenborn, Zachary. “The Era Of The Drone Swarm Is Coming, And We Need To Be Ready For It – Modern War Institute”. Modern War Institute, 2020, https://mwi.usma.edu/era-drone-swarm-coming-need-ready/.

Kibble, Rodger. “Can an Unmanned Drone Be a Moral Agent? Ethics and Accountability in Military Robotics.” The Machine Question: AI, Ethics, and Moral Responsibility (Proceedings of the AISB/IACAP 2012 Symposium), edited by D. J. Gunkel, J. J. Bryson, and S. Torrance, 2012.

Kumar, Ram Shankar Siva, et al. “Failure Modes in Machine Learning Systems.” arXiv preprint, 2019.

Lachow, Irving. “The Upside and Downside of Swarming Drones.” Bulletin of the Atomic Scientists 73.2 (2017): 96-101.

Longo, Maria. “The advantage of autonomous swarming drones in the military.” Unpublished manuscript (2016).

Poss, James (Maj Gen, Ret.). “The Military-to-Commercial Drone Market: Is It a Two-Way Street?” Inside Unmanned Systems, 2020, https://insideunmannedsystems.com/the-military-to-commercial-drone-market-is-it-a-two-way-street/.

Pryer, Douglas A. “The Rise of the Machines: Why Increasingly ‘Perfect’ Weapons Help Perpetuate Our Wars and Endanger Our Nation.” Military Review 93.2 (2013): 14.

Romaniuk, Scott N., and Tobias Burgers. “China’s Swarms of Smart Drones Have Enormous Military Potential.” The Diplomat: Asia Defense, February 3 (2018).

Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson Education Limited, 2016.

Simonite, Tom. “Google’s AI Software Is Learning to Make AI Software.” MIT Technology Review, 2017, https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/.

Soffar, Heba. “Unmanned Aerial Vehicle (UAV) (Drones) Uses, Advantages and Disadvantages.” Science Online, 2020, https://www.online-sciences.com/robotics/unmanned-aerial-vehicle-uses-advantages-and-disadvantages/.

Spinetta, Lawrence (Lt Col), and M. L. Cummings. “Unloved Aerial Vehicles.” Armed Forces Journal 150 (2012).

Wallace, Ryan J., and Jon M. Loffi. “Examining unmanned aerial system threats & defenses: A conceptual analysis.” International Journal of Aviation, Aeronautics, and Aerospace 2.4 (2015): 1.
