Policies and Regulation of Artificial Intelligence


The changes since the invention of the modern personal computer, followed by the internet, have been immense. Most people in developed countries, and a substantial percentage in the developing world, now carry computing and informational power a few million times greater than that which powered the first voyage to the moon, all in a device usually less than seven inches across: a smartphone. Thanks to the internet, the smartphone can also access more information than even the most extensive libraries the world has ever seen. The smartphone is a perfect example of how computing has grown in both the commercial and personal spheres over the last few decades, in ways that would have been unimaginable fifty years ago. While these developments have been widely welcomed for solving a myriad of communication and social problems, they have also introduced Artificial Intelligence (AI) to the world, and AI presents both opportunities and challenges. As with any new technology, the question of how and when to regulate AI has occupied legal experts. This work will first explore the sectors most impacted by AI, then present arguments for why AI regulation should not be driven by paranoia, and finally present the counterarguments. A three-tiered regulatory system for AI encompassing international law, domestic legislation, and self-regulation offers the best opportunity for the field to grow while keeping invention flowing.

Issue Statement and Problem Questions

The question of AI is an imperative one in the current era and for the future. As such, industry regulation has become a highly debated issue that needs resolution, and the various methods of regulation proposed each have a range of drawbacks. These issues raise several questions:

  1. How can regulation be harnessed to ensure both safety and continued innovation in AI?
  2. Which are the best methods for AI regulation?
  3. What is the role of national and international regulation in AI development?

Legal Significance: AI and the World

AI can impact large sectors of the economy and people's daily lives. However, some areas will see more significant change from AI than others. The first is likely to be transport. As the current surge in building autonomous cars has shown, AI will soon change the way transport occurs, with smart vehicles and smart cities becoming common. The second is the use of AI technologies by governments and law enforcement personnel to execute legislation governing criminal activity and other matters. The use of AI by police and government agencies is already apparent in several countries, including China, where AI-based facial recognition is widespread. The application of AI in medicine will also likely increase in the next few years as the invention of AI-capable medical devices continues. One area where AI may have an even more substantial impact than expected is currency. Bitcoin and other autonomous payment methods have already made an impact, and their further development, aided by AI, is likely to make them even more central to the commercial world of the future.

AI will also impact areas as disparate as finance, media, and legal analysis. The use of AI in the elucidation of complex financial statements is already in its nascent stages; such applications save human workers hundreds of hours poring over multifaceted data looking for patterns. Insurance companies are using AI to predict how to set the premiums they charge their clients, and further development will see AI used to predict legal outcomes. The Associated Press, one of the world's biggest news agencies, already uses Automated Insights to produce almost four thousand earnings reports annually, which has increased its productivity in this sector nearly fourfold. Consequently, the use of AI is becoming central even to fields that need a human touch.

More importantly, the impact of AI on the employment sector will be profound. While policymakers and proponents of AI have discussed AI-powered robots taking over a majority of manufacturing jobs as something still in the future, its early stages are already apparent. A range of manufacturing tasks, from the assembly of components to stacking, can already be performed mostly by AI-powered robots with no human supervision. In the future, AI-based robots may even perform maintenance with little human input. The result could be even greater socio-economic inequality as people are rendered jobless and destitute. Thus, while some have dismissed the question of AI taking over jobs as technophobia, its effects are already apparent.

The effect of AI on citizens' civil rights is also relevant. Questions about the violation of privacy, with AI as a central method, are significant here, especially where government agencies and even large corporate entities are concerned. The question of confidentiality is closely associated with the issues of deep-fakes and algorithmic bias resulting from bad data in AI-based identification by police. More dangerously for human existence, the development of AI in the weapons industry will see further automation of weapons, where a single algorithmic failure could result in launches and the deaths of millions. The invention of more destructive weapons is also likely to become central to most countries' aims as AI matures. In this sense, the long-held fear that AI will become more powerful than human intelligence and learn to operate outside human control would make the world an even more dangerous place. Thus, AI will provide both opportunities and challenges.

Regulation and Self-Regulation in AI: International Co-operation, National Legislation, and Self-Regulation as the Basis for a Safe and Beneficial AI

As is apparent from the preceding section, AI is not a panacea that cures every societal malady, but it will go a long way toward solving some of the issues bedeviling the world. As a consequence, AI needs a level of regulation that ensures the growth of the field while at the same time ensuring that it does not become a destructive technology: a practical law that does not stifle innovation that will result in human good. Such regulation involves the use of international and national legislation and policy, coupled with self-regulation within the sector. This three-pronged regulatory regime would address questions that bedevil the regulation of technology, such as the Pacing Problem: the observation that innovation tends to outpace laws.[1] There is also an understanding that national and international laws on the regulation of technology tend to suffer from "stagnation, ossification and bureaucratic inertia."[2] A three-pronged model also provides a chance to resolve the Collingridge Dilemma, also known as the Uncertainty Paradox, which states that regulation imposed immediately after an innovation is introduced risks being counterproductive, as there is usually little understanding of the change and its societal impact.[3] The third issue the three-sided approach would address is the precautionary principle, which seeks to bar a technology until its social impact can be fully understood.[4] Such a method is dangerous, as it can lead to stifled innovation. As a consequence, providing for self-regulation in addition to national legislation and international law on AI is imperative. The alternatives are stringent international and national laws, or no regulation at all.

International Regulation as The First Facet

The need for multifaceted regulation in the sector is apparent in several ways. The first is that international instruments present the best way to ensure standard best practices in the field of AI across borders. The result would be a uniform system in which one set of countries does not develop harmful AI while the others try to rein in its detrimental effects. International cooperation is already apparent on issues such as nuclear weapons and ballistic missile development. AI should be the next field in which countries agree to rein themselves in, in the interests of human survival. Elon Musk, the head of Tesla and someone well-versed in questions of AI, has noted that "AI is more dangerous than nuclear weapons."[5] The international community should heed his advice and create an international body to oversee the development of standards in the AI industry. Such an organization, preferably an arm of the United Nations, should create universal rules that would ensure AI is developed not for human destruction but for humankind's evolution.

International cooperation in the AI field is necessary and central to the further development of an AI-proficient world. Countries such as the US and China have invested large sums of money in the development of AI, unlike the Third World. International regulation would govern issues such as technology transfer in AI to the third world, helping erase decades of underdevelopment as the globe moves to the next phase of expansion. The development of AI in the hands of a few countries would lead to continuing economic and social differences between the developed and developing worlds, almost certainly relegating underdeveloped countries to economic subservience in perpetuity. While agreements on technology transfer may prove hard to negotiate, an international instrument should govern it.

More importantly, through international cooperation via the requisite bodies, foreign policies, and conventions, agreements to stop the development of AI in weaponry could be reached. Trepidation about the development of a super-soldier in the form of an AI-fueled robotic soldier, and its impact on international peace, is valid. As technology development is usually a private-company affair rather than a direct state concern, the matter is even more pertinent. The development of such soldiers by private entities would allow wealthy people to field non-human mercenaries at their behest, which would threaten the safety and stability of entire regions and even the globe. Such issues are only governable through the use of international instruments and bodies.

Domestic Law and Policy

The need for multifaceted regulation and policy in AI is also apparent in that some issues are best regulated by domestic law and policy, the second pillar of the AI framework. For instance, while AI presents opportunities for law enforcement, it also offers opportunities for criminals to threaten digital security. These threats are apparent in criminals using computing power to hack or socially engineer their victims with AI at superhuman levels. Such threats are best controlled at the national level, as each country faces its own unique challenges in this respect. The matter of security and AI also extends to physical security; for instance, malicious individuals may weaponize civilian drones to attack civilians. These issues call for national legislation and policy to control them.

Another aspect that needs national policy and laws, especially in democratic states, is political security. Political security grants citizens of any particular country certain fundamental rights that the state cannot infringe without legitimate reasons and court authorization. These include privacy and related rights. AI, as a foundational tool, has the potential to lead to surveillance akin to that predicted by George Orwell, where governments watch the citizenry continuously with no legal basis.[6] Cameras are already ubiquitous in most public places in the developed world. The combination of those cameras and AI in the coming decades could produce privacy-eliminating surveillance at levels George Orwell could not have anticipated. Such shadowing enables profiling and repression by countries intent on non-democratization and the rise of totalitarianism. In the current era, most internet giants, including Facebook, Google, and Amazon, derive their profits from the violation of their consumers' privacy rights. While these companies may never weaponize this data, states will inevitably use it for nefarious purposes. For instance, the dangers of AI, lack of privacy, and disinformation are already apparent in China's Orwellian use of facial recognition software and its social credit system.[7] Governing these issues is only possible at the national level, with the state passing laws prohibiting the use of AI to intrude on citizens' rights to privacy. The loss of privacy in democratic countries is the first step in the growth of repression; thus, national laws should regulate the matter.

The question of AI bias in criminal law enforcement should also face national regulation. Inevitably, AI algorithms tend to be coded to identify the dominant traits in a population. Consequently, misidentification by AI facial-recognition technology, or the misallocation of characteristics, may prove troublesome. Facial recognition technology has already proven adept at misidentifying people of color when law enforcement has used it.[8] Consequently, the possibility of such technology leading to the arrest of the wrong people is significant. More worryingly, courts have started using AI in sentencing by determining defendants' "risk."[9] The algorithms used in making these decisions are usually proprietary, meaning that the judge has little to no understanding of how they work. The lack of transparency in such cases is astounding, as the American example of Wisconsin v Loomis has shown. Being sentenced by a machine is potentially hazardous to both personal rights and the legal system. Further, the development of such algorithms into neural networks that are impossible to assess may pose an even greater danger. Such a situation calls for national laws that either scrap the use of such algorithms or make them transparent.

Self-Regulation as the Third Prong

Many industries have decried regulation by state entities as overbearing and unnecessary, yet a lack of regulation has resulted in epic failures in some areas, such as finance in the 2008 financial crisis. Consequently, while national law by the requisite authorities is necessary, some aspects have to be left to the AI industry to self-regulate. The combination of international, domestic, and self-regulation is the best bet for the growth of an AI industry that serves humanity and deflects the harms that may come with AI. As the third prong of the regulatory scheme, a self-regulatory regime would be expedient, as the relative novelty of AI creates a range of organizational challenges that national and global law may not govern effectively.

Arguably, considerable self-regulation is imperative, as the industry has a comprehensive understanding of the dangers AI can pose to human existence in the long term. Collective action by major industry players would give self-regulation a significant advantage as part of a model approach to AI. Two methods have proven especially effective in the regulation of dangerous technology globally. The first is the Responsible Care (RC) program, created by the American chemical industry after the Bhopal disaster. The second is the Institute of Nuclear Power Operations, which set industry-wide performance aims, standards, and strategies to ensure excellence and safety in nuclear power plant operations.

Self-regulation would involve the formulation of principles that align with industry objectives. Such a normative framework would insist on codes of conduct defining industry practices that ensure both innovation and safety. More particularly, such principles should clearly explain the issues that self-regulation seeks to control and their impacts on the security and development of AI. Among other things, they should rest on the understanding that while regulation by formal institutions will almost certainly remain a step behind innovation, the industry is duty-bound to ensure the safety of AI. The agreement should also rest on the understanding that the harms posed by AI affect everyone, including the industry and its bottom line.

Secondly, self-regulation is likely to succeed because individual researchers have an intrinsic motivation to pursue collective-action solutions. While extrinsic factors such as public relations and protecting the bottom line play a central part in the development of self-regulation in any industry, intrinsic motivation is a better predictor of a sustained ethical system. An understanding of the moral basis for self-regulation is central to harnessing the opportunities AI provides while keeping its dangers at bay. While formal mechanisms in the form of national and international laws may be useful in regulating the known and extant facets of AI, the world has to rely on self-regulation for the yet-to-be-discovered facets of AI. Thus, self-regulation is central to AI as an industry.

Thirdly, the development of AI inherently relies on cooperation between industry titans, both nationally and internationally. This collaboration presents a unique opportunity for a self-regulatory system with credible enforcement mechanisms. After the rules and regulations are established, non-complying bodies can face ostracization from the rest of the industry, which would ensure compliance across the board. This enforcement mechanism would force corporations to engage in collective action on safety and security while competing in other areas. Measures such as naming-and-shaming bodies that fail to adhere to basic safety and security standards would improve the AI industry while ensuring growth. Notably, self-regulation can serve as the basis for national and international regulation by establishing the requisite rules in areas where laws do not yet exist. The role of self-regulation in AI is thus indisputable.

Counterarguments: AI, International Conventions, Domestic Legislation, and Industry Self-Governance

This paper has taken a three-pronged approach to the regulation of AI, and there are various counterarguments to the method. The first counterargument is that AI is intrinsically an international issue. Thus, AI should have international conventions and bodies as the primary mode of regulation, with national regulation only filling the gaps that global conventions fail to cover. An example is the law on the development of nuclear weapons, whose governance is almost exclusively an international affair. With the dangers of AI predicted to be worse than those of nuclear weapons due to its potentially pervasive influence in all fields of human existence, the argument runs that a strict international regime on AI is necessary for the survival of humanity.

The second counterargument is that it may not be possible to regulate AI internationally due to its unique challenges. AI development happens at the national level, with companies in various countries playing individual roles. For instance, companies as diverse as Google in the US, Huawei in China, and Samsung in South Korea have invested heavily in AI development, with support from their governments. Thus, it is questionable to what extent these companies and their governments would be willing to cede their advantage in the interests of the common good.

The third counterargument holds that national and international laws cannot effectively govern AI, and thus only self-regulation is viable. The argument rests on the Pacing Problem and the Collingridge Dilemma: national and international legislation cannot move fast enough to anticipate the changes in AI, and when it does act, its effect is to stifle the growing industry rather than help it grow. On this basis, the assertion calls for self-regulation as the primary basis for AI governance.

Refutation: AI Regulation Is Only Possible with the Employment of the Three Fields of AI Governance

Needless to say, those who see AI as an existential threat to humanity call for a robust formal governance system for the technology. They advocate strict international rules on its development, national laws that stifle its growth, or both. The myopia of this approach is apparent in several ways. First, strict domestic and global rules would stifle innovation, stalling an industry that promises so much, from medical innovation to entertainment. Second, excessive regulation can drive the growth of AI underground, which would limit the long-term development of the sector. As a consequence, a balance between strict formal regulatory mechanisms and more informal self-regulation is imperative.

Secondly, some AI industry experts and activists who see AI as a panacea for all the problems facing humanity call for an exclusively self-governing mode of AI regulation. The foundation of this avowal is that AI development may happen faster than regulation, and that heavy-handed regulation may curtail the positive attributes of AI. However, examining recent failures in industries that have lacked adequate supervision decimates this argument; the most famous example is banking and its myriad failures due to under-regulation. Consequently, AI deserves both legislation that will prevent abuse and self-regulation that will ensure continued innovation.

Conclusion

From the analysis, it is apparent that AI, as an industry, should have a governance system based on international law, national legislation, and self-regulation. This three-pronged method is essential in balancing the various innovation and regulatory needs of AI. As the first prong, international law can assist in the development of rules that bind nations on issues such as the development of AI in warfare and weapons. National law would then govern AI issues that are uniquely national, such as deep-fakes, privacy, and identity theft. National and international rules would leave space for self-regulation by the industry, ensuring a balanced approach to the issue.

With these issues in mind, recommendations are necessary to ensure an effective system. The first is an international covenant that governs AI. Such a covenant can be region-based, as is the case with the General Data Protection Regulation; other regions have been quick to adapt what was originally a European Union law in various ways. However, a law with global implications, such as one from the UN, would be more impactful, especially if backed by a UN body and the major world powers. Secondly, states should address the dearth of national legislation on AI; for instance, in countries such as the UK, there is no statutory definition of AI.[10] National legislation should, however, not stifle growth. Lastly, national and international industry associations for self-regulation should be formed. These three recommendations will ensure an AI that puts the world on the path to prosperity.

[1] Butenko and Larouche 2015

[2] Marchant 2011, 199

[3]

[4] Butenko and Larouche 2015

[5] Butenko and Larouche 2015

[6]

[7]

[8]

[9] Jason Tashea, “Courts are using AI to sentence criminals. That must stop now,” WIRED, last modified April 17, 2017, https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/.

[10]
