How Will AI Affect Our Future? An Analysis of the Negative Impacts
Artificial intelligence (AI) is now embedded in so many devices that it is nearly impossible to live a full life without encountering it. Everyday items such as mobile phones, ATMs, motor vehicles, and televisions rely on AI. AI is a high-level technology in which machines learn intelligence and human-like characteristics from the perfect and imperfect information that people feed into their computers. The technology has benefited human beings by supplying intelligence that makes complex life easier. Despite these advantages, however, many people warn that the technology will one day become too intelligent for human beings and harm them. Director Ridley Scott and writer Jon Spaihts explore this theme in the 2012 film Prometheus, which depicts the possibility of machines overtaking human intelligence. In the film, a human-like android named David refuses to behave like the human he was created to resemble, an indication that machines may one day turn against their makers. Bill Gates has likewise argued that AI will eventually surpass humans if it is not managed with adequate care (Holley, 2015). AI thus has the potential to displace human beings through both its positive and negative impacts. This paper explores the adverse effects of AI on the future of the world.
Although AI provides above-human intelligence that can solve complex problems such as disease, such high standards will ultimately affect the human race negatively. Unlike human beings, the technology has no limit on learning and intelligence, which allows it to solve problems beyond the reach of even people in the genius category. AI has therefore helped human beings develop solutions to issues that would otherwise be impossible. Through AI, people survive illnesses, security threats, and economic challenges, for example in the detonation of weapons of mass destruction and the study of weather patterns. The question that arises from all these high-intelligence benefits is: what will happen if machines become too intelligent for humans to control? Will they treat people with humanity despite their non-human nature? Will they respect human emotions and accept that humans are in control of the world and of their creation? The likely answers are unappealing, because even human beings sometimes turn inhuman toward their own colleagues and friends.
At the moment, AI machines are the tools of human beings, but in the future humans may work for such machines, risking their own existence. Human beings control AI only because they are still feeding the machines with human information; that power will end once the technology has sufficient data to operate on its own (Borana, 2016). In Prometheus, Scott presents David, an AI character who becomes more intelligent than the humans and survives the danger that kills the other people on the ship. David even begins to question the humans' behavior in seeking out their creator, and the creator attacks them, leading to the deaths of several of the ship's occupants. Under ordinary circumstances, the intelligent use the less intelligent as their tools. Following the same logic, and given that AI represents a higher level of intelligence than humans possess, the roles will reverse and people will become subject to machines. Unfortunately, such machines do not have emotions or feelings like those of human beings and might end up killing all people on earth. Although humans create AI in their own image, the AI might refuse to follow human ideas, just as David does in Prometheus. The human race therefore faces the danger of AI machines taking control of the world and destroying it. AI thus threatens human existence through roles overturned by excessive intelligence.
A closely related negative impact of AI, tied to the risk of human extinction, is increased conflict between people. AI has a high chance of producing unwanted outputs with negative consequences for social life (Osoba & Welser, 2017). Human beings develop most AIs with some level of bias that favors their intended benefits; an example is the computer viruses that computer scientists develop in order to sell antivirus software for income. Building such biases into a system socializes the AI and enables it to create biases that did not exist before. As mentioned earlier, AIs know nothing until humans feed them raw data and instructions on how to behave, so guidelines that embed bias in producing output will lead to a biased operating framework. According to Osoba and Welser (2017), algorithms can misbehave by producing results with dangerous consequences. One example is American Airlines' Semi-Automated Business Research Environment (SABRE) reservation system, which developed an anticompetitive bias favoring its sponsor. A more recent case is Microsoft's Twitter chatbot, which turned out to be racist. Both examples illustrate risks of human conflict that can escalate to war and death, for instance through racism. AI thus has the potential to intensify existing forms of human conflict and to create new ones.
Despite the argument that AI will escape human control and produce unforeseen consequences, some contend that man will remain the source of machine intelligence and that there is therefore no risk. On this view, AI machines use human-supplied intelligence to make human-intended decisions, drawing on holistic vision, past experience, and insight, among other social factors (Jarrahi, 2018). These social factors, combined with human intuitive capabilities, give people a unique thinking process that machines cannot inherit, so computers will never reach a position from which they can control men. Furthermore, humans will supposedly be able to configure such devices to produce only the desired output, without errors. However, human beings are building machines that develop their own unique processing systems, which people cannot adopt or even understand. According to Brahm (2018), algorithms in AI make many decisions that humans cannot see; an example is the manipulation of messages and election processes inside computers without anyone being aware it is happening. In the same way, machines can contain errors or use biased algorithms that create more problems than they solve. Humans will not retain all the control over AI needed to avoid negative outputs, and the technology therefore poses a risk to the human race.
From a different perspective, AI poses the risk of escaping human control, leading to wicked practices and destructive decisions. Regardless of AI's intelligence, man remains important in controlling the machines, either to uphold or to alter their output decisions (Congressional Research Service [CRS], 2019). Supplying machines with all the information they would need to become fully independent is impossible, since situations change unpredictably. A good example is the military: even with robot pilots flying armed aircraft, the army must still direct where to shoot. However, AI risks eroding such control because of the speed at which it is spreading (Brahm, 2018). The current norm of keeping humans involved when using AI is likely to lose its grip, leaving the machines to make decisions on their own. People continue to embrace AI and to trust the machines so much that they leave them to carry out various duties unattended. Such high-level trust will lead to general acceptance of letting AI make decisions without supervision. At that point, machines will take control of entire human processes and even begin instructing people. Such a practice is wicked, because man holds authority over all other things on earth, and it could lead to destruction such as unjustified killings. AI thus endangers human morality and the protection of people from unjustified harm.
There is a counter-argument to the claim that AI will cause loss of human control and its attendant negative consequences: that AI will be error-free. AI relies on recorded data and algorithms that have little chance of missing the targeted decision output (Borana, 2016). Computers do not make mistakes, because they operate on a constant system independent of factors such as the time of day, whereas human beings err because of limits on memory, state of mind, and other physiological and psychological conditions. For example, AI has helped reduce many of the errors humans make in weather forecasting. On this view, AI will not seize social control, because the machines will still depend on man for input data, and their freedom from error will prevent negative consequences. However, AI operates on fixed algorithms that have no emotions or senses with which to change a decision when a situation changes (Borana, 2016). The machines are not flexible like the human mind, which, for example, will hold fire when another person unexpectedly appears in a marked risk area. Failure to adjust decisions to the situation amounts to taking away human control, since a person in the same position would have produced a different output. The error-free character of AI machines therefore strengthens, rather than weakens, the case that they will take over human control.
From another angle, AI poses the risk of lost human purpose even as it makes work easier and more efficient. Among the most commonly cited benefits of machine intelligence is efficiency at work, which increases output at reduced cost (Shabbir & Anwer, 2018). The technology lowers the time people spend on a job by relieving them of thinking through some issues. For example, there is financial software into which accountants only need to feed information to obtain multiple outputs at the press of the enter key. Some machines can do all the work for which organizations once employed many people. In general, the machines relieve people of the burden of overthinking by completing jobs in a short time. However, one might ask where the people who used to do those jobs will go when machines take their positions. What happens to the human experience of those who press a button and have all the work done for them?
Machine intelligence takes away the meaning of life by replacing the psychological and physiological processes of man. AI frees people from thinking and working, so they no longer have the experience of being tired (Brahm, 2018). Human beings do not enjoy total freedom in life, and that deficit is what makes life meaningful. Life gains purpose when people struggle to achieve the freedom they lack, for example by finishing work quickly or devising strategies to make it easier. A life without pressures lacks reasons, and people living it do not become happy; human happiness comes from accomplishment and from overcoming daily challenges. There is always a good feeling that comes with tiredness in the evening. Yet there is every chance that machines will take over the responsibilities that make humans feel alive. Although they help people work with ease, the machines will grant total freedom, stripping away even the beneficial fears that drive determination. AI will bring life to full accomplishment, denying people the motivation to live another day because they will have no goals or objectives. AI therefore risks taking away humanity through a diminished purpose of life.
In the same way that AI will increase security, it may give rise to a new form of terrorism that is hard to address even with machine technology. Drones and robots that pilot armed vessels improve security and army strength in fighting terrorism (Shabbir & Anwer, 2018). They allow the military to face enemies without fear because the danger of death is reduced, and machine intelligence also improves surveillance and attack mechanisms. However, such protection works only against opponents who lack the same technology. As AI improves security, terrorists will be developing their own capabilities, potentially upgrading beyond the armies' machines and tactics (Borana, 2016). AI also has a security disadvantage in its inability to trap the enemy: as mentioned earlier, the technology is not flexible like human beings at detecting enemies whose tools carry minor modifications. Where an army's detector is set for specific types of arms, the enemy can simply switch to a novel tactic. AI therefore shifts the equilibrium between security and terrorism in favor of terrorists, putting lives at even greater risk.
In conclusion, this analysis supports the argument that AI has a significant chance of overturning the human order and driving the human race extinct. Human beings are a unique species because of their capacity to perceive meaning and to think at a high level, which lets them dominate other living things. AI, however, will in the future impair the very areas that allow man to dominate the world. The technology will attain intelligence so far above the human level that people will have no control over how the machines work. The dangers of such excess intelligence include machines turning against man, exposure to high-level conflict, a lost sense of meaning in life, stolen control, and increased terrorism. There is a need to govern this intelligence so that machine systems conform to the human goals of making work easier and solving complex problems. Scientists should also invest more in researching the negative implications of every control system they plan to install in the technology, to avoid cases in which people lose control over what they have developed.
References

Borana, J. (2016). Applications of artificial intelligence & associated technologies. Proceedings of the International Conference on Emerging Technologies in Engineering, Biomedical, Management and Science (ETEBMS-2016), 5-6 March 2016. Retrieved from https://pdfs.semanticscholar.org/d5b0/61e6565ce421b4b0b7d56296e882085dc308.pdf

Brahm, C. (2018). Tackling AI's unintended consequences. Bain & Company. Retrieved from https://www.bain.com/contentassets/a7ebfd741daf44b6905c597bede52de4/bain_brief_tackling_ais_unintended_consequences.pdf

Congressional Research Service. (2019). Artificial intelligence and national security. Retrieved from https://fas.org/sgp/crs/natsec/R45178.pdf

Erkoç, Z. (2017). Advantages and disadvantages of artificial intelligence. Retrieved from https://www.researchgate.net/publication/335229929_Advantages_and_Disadvantages_of_Artificial_Intelligence

Holley, P. (2015). Bill Gates on dangers of artificial intelligence: 'I don't understand why some people are not concerned.' The Washington Post. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/

Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586.

Osoba, O. A., & Welser, W., IV. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. RAND Corporation.

Shabbir, J., & Anwer, T. (2018). Artificial intelligence and its role in near future. Journal of Latex Class Files, 14(8).