
Google Artificial Intelligence

Executive Summary

This report offers an evaluation and analysis of Google's AI principles. It analyses their context, content, and stakeholders, both internal and external, as well as Google's ethical stances that align with Beard and Longstaff's ethical design principles. Internal stakeholders of the Google AI principles include the data science, product development, communications, and human resources teams. External stakeholders include ethicists, governments, and non-governmental organizations. These stakeholders ensure that the Google AI principles promote fairness, are socially beneficial, incorporate confidentiality, and restrict abusive applications. The report also outlines the contents of the Google AI principles, including Google's mission of organizing the world's information and making it universally accessible. It also covers the objectives for AI applications, the AI applications that the company will not pursue, and the principles' conclusion. The report then highlights the various strengths and limitations of the Google AI principles. Recommendations include addressing these limitations by developing consistent principles and committing to independent, transparent review to minimize challenges during AI deployment. Google should follow generally accepted ethics when operating and designing AI technologies and ensure that its work is consistent with the principles of international law and human rights.

Critical Review of an Ethical AI Framework

Introduction

Google Artificial Intelligence (AI) is transforming the process of doing business, from automation to augmentation and beyond. Through these roles, it opens up practically infinite possibilities to benefit the entire world. Businesses in every segment are eager to claim their piece of the prospective AI windfall; according to a PwC study, this windfall could reach approximately US$15.7 trillion by 2030 (Auth et al., 2019). Yet the level of understanding and application of accountable, principled AI practices among respondents was in most situations immature. Amid this promise, the swift pace and noteworthy scale of transformation resulting from Google AI systems, and the progressively more invasive human/machine interactions they involve, are also giving rise to noticeably contradictory concerns among company leaders and customers. Consumers want the expediency of services tailored to their requirements, while companies are exploring the opportunities presented by Google AI and, simultaneously, educating themselves about the possible risks (Jobin et al., 2019). Alongside these risks, the rise of Google AI brings intrinsic challenges around trust and responsibility; to tackle these well, organizations should recognize the challenges and dangers around Google AI and consider them in their design and operations. This report discusses the context, content, and stakeholders of the Google AI framework. It also explores the ethical stance of Google AI with respect to Beard and Longstaff's ethical framework, Google AI governance, and the strengths and weaknesses of the Google AI framework.

Context, Content, and Stakeholders

Context

The Global Human Capital Trends report details that the managers surveyed believe Google AI will be extensively deployed in the coming years (Kraus et al., 2019). This path is also being followed by the public sector, which seeks to adopt applications that improve public services and support investigation and decision-making (Kraus et al., 2019). Cognitive Google AI applications are essential in minimizing backlogs, cutting costs, forecasting fraudulent transactions, and identifying criminal suspects through facial recognition. Adopting them for automation can help governments focus on more innovative and intricate features of service delivery to people. Google's recent release of ethical principles and responsible practices reflects one more company's recognition of how important it is to get out in front of regulators, potential litigants, and adversaries by adopting ethical AI design principles. Google intends these principles to help shape the "ideal Google AI algorithm": one that is socially beneficial, unbiased, tested for safety, accountable, private, and scientifically rigorous. Google released these principles of self-regulation as it was retreating from using its image-recognition capabilities to enhance military drone-strike targeting. Nonetheless, they serve as a useful starting point for exploring which principles should find their way into any set of AI ethical standards. The interaction between AI technologies and social, political, and economic institutions is intensifying (Jobin et al., 2019), yet these institutions have not defined its consequences well.

Google's decision to draw up its own ethical principles mirrors the immediacy of the concerns that arise with AI applications and demonstrates the importance of such principles. Firstly, Google wanted to frame acceptable uses itself and head off over-reaching recommendations that could lead to some form of regulatory overreach. According to the company's Chief Executive Officer, the market should not dictate how technology is applied; big tech firms like Google have the role of ensuring that technology is implemented for good and is accessible to everyone. In 2018, Google established its AI principles to offer direction, along with open-source tools and policy, for the ethical development of AI that also avoids bias and ensures confidentiality. The policy also outlined Google's opposition to mass surveillance and to breaches of human rights. Google therefore partnered with regulators to extend their skills and instruments and to navigate issues like human rights and ethical principles together.

Content

The Google AI principles open with an introductory paragraph giving an overview of Google's aspiration to create technologies that solve fundamental issues and help individuals in their daily lives. This paragraph states that Google is confident about the incredible potential of AI and other complex technologies to empower individuals and companies and to benefit future generations. It also states that AI technologies will endorse and further the company's mission of organizing the world's information and making it universally accessible and useful.

The Google AI principles also include the objectives for AI applications. The first is to be socially beneficial, having transformative influence in fields including healthcare, security, energy, transport, manufacturing, and entertainment. Another goal is to avoid creating or reinforcing unfair bias, recognizing that distinguishing fair from unfair biases is challenging and differs across cultures and societies. The third objective is to continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. Additional goals include being accountable to the public, incorporating privacy design principles, upholding high standards of scientific excellence, and making AI applications available only for uses that accord with these principles.

The principles also highlight factors that the company will consider when evaluating the likely uses of AI. These factors include the primary purpose and probable use of a technology and application, including how closely the solution is related or adaptable to a harmful use; the nature and uniqueness of the technology; its scale; and the nature of Google's involvement. A minimal sketch of how these factors might be recorded in a review checklist follows below.

The principles further outline the AI applications that Google will not pursue, including technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose is to facilitate injury to people; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights. Lastly, the principles conclude with how the company will approach its work with humility, dedication, and a readiness to adapt its approach as it learns.
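To make the evaluation factors above concrete, the following is a minimal sketch in Python, assuming a simple checklist representation; the field names, rubric, and example values are hypothetical illustrations, not Google's actual review tooling.

    from dataclasses import dataclass

    @dataclass
    class UseCaseReview:
        """Hypothetical record of the evaluation factors named in the principles."""
        primary_purpose: str       # primary purpose and probable use
        adaptable_to_harm: bool    # how closely related or adaptable to a harmful use
        unique_technology: bool    # nature and uniqueness of the technology
        scale: str                 # e.g. "pilot", "regional", "global"
        involvement: str           # nature of Google's involvement

        def flags(self) -> list:
            """Return the factors that trigger closer scrutiny under this toy rubric."""
            issues = []
            if self.adaptable_to_harm:
                issues.append("closely adaptable to a detrimental use")
            if self.scale == "global":
                issues.append("global scale warrants a broader impact review")
            return issues

    # Example: a benign, regional use case raises no flags under this rubric.
    review = UseCaseReview("flood forecasting", False, True, "regional", "custom solution")
    print(review.flags())  # []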


Stakeholders

Internal Stakeholders

Depending on the nature of the business, there are various internal stakeholders in the Google AI principles. Firstly, there is the research and data science team, which reflects on the ethical and societal implications of AI models and discloses them. Secondly, the product development team, including designers and engineers, looks for correlated features and data leakage and considers the implications of using a protected data class in a model, because that may result in biased outcomes and decisions (Dent, 2018). They also conduct an ethics pre-check to discuss whether a proposed product, service, or feature could produce unintended results; a sketch of one such pre-check appears below. The legal team helps interpret data confidentiality laws to make sure that business people do not expose the company to risk. Human resources revises employment processes to account for concerns and opportunities raised by the Google AI principles. The communications team works with marketing and PR to shape how the company engages with the public.
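As referenced above, here is an illustrative sketch of one part of such an ethics pre-check, assuming tabular data in pandas: it flags features that correlate strongly with a protected attribute, since such proxies can leak the protected class into a model. The function name, threshold, and column names are hypothetical, not Google's tooling.

    import pandas as pd

    def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.6):
        """Return features whose absolute correlation with the (numerically
        encoded) protected column exceeds the threshold."""
        corr = df.corr(numeric_only=True)[protected].drop(protected)
        return corr[corr.abs() > threshold].index.tolist()

    # Hypothetical usage: a zip-code-level income feature may proxy for a protected class.
    # proxies = flag_proxy_features(applications_df, protected="protected_class")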

External Stakeholders

Companies connect with NGOs and other professional advocacy groups to make sure that people learn best practices and share their experiences. Companies also engage with relevant ethicists to gather insight on possible ethical concerns. Governments, for their part, help maintain a dialogue with policy advisors on issues related to using Google AI fairly and for the public good. Lastly, shareholders constantly test ways to assess the influence of ethical technology use on key metrics such as risk mitigation, customer satisfaction, and other key performance indicators (KPIs).

Ethical Stance of the Google AI Principles

Google seeks to make technologies that can help solve important issues, and its ethical stances apply the Beard and Longstaff framework. For Google, AI technologies promote innovation and advance its mission to organize information and make it accessible and valuable (Beard & Longstaff, 2018). For this reason, Google AI takes social and economic aspects into consideration by being socially beneficial, enhancing people's ability to comprehend the implications of content at scale (Farthing et al., 2019). Google also avoids unfair bias against people, which is likewise included in the Beard and Longstaff framework: since technology designs can be biased, some customers may be treated poorly on the basis of irrelevant attributes such as race or ethnicity, and fairness requires that Google AI provide justifications for any such differences. The design of Google AI is appropriately safeguarded by its best AI practices.

Google maximizes the freedom of those affected by its designs by being accountable to people (Beard & Longstaff, 2018); the system does this by providing proper opportunities for feedback, relevant explanations, and appeal. Google also applies Beard and Longstaff's principle of anticipating and designing for all potential users. Even though designers are not answerable for the actions of customers, they are answerable for the integrity of their frameworks (Safdar et al., 2020). They incorporate privacy design frameworks by providing opportunities for notice and consent, encouraging architectures with privacy safeguards, and providing appropriate transparency and control over data use. Lastly, Google upholds high standards of technical excellence by unlocking new areas of scientific research in critical domains such as the environmental sciences (Beard & Longstaff, 2018). The company designs its applications with integrity, simplicity, and fitness of purpose; each Google AI design is tailored to the problem it is intended to solve. Google also works to restrict potentially destructive or abusive applications and weighs the significant impacts of these restrictions (Beard & Longstaff, 2018). This principle maximizes good and minimizes bad influences, and Google is mindful of the harmful side effects of its technology.

Google AI Governance

Google has written a white paper on the practical issues that it believes would make a provable impact in helping to guarantee the responsible use of AI. This white paper outlines five specific areas where existing, context-specific regulation from governments and civil society would assist in advancing the lawful and ethical development of AI.

Firstly, Google calls for explainability standards that clarify why an AI system works in a certain way. These clarifications heighten people's assurance and confidence in the accuracy and suitability of the system's predictions. The explainability standards bring together a collection of best practices, along with remarks on their desirable characteristics, to offer real insight. They also offer guidelines for hypothetical use cases so businesses can determine how to balance the benefits of applying multifaceted AI systems against the constraints imposed by differing standards of explainability. Lastly, they describe the minimum acceptable standards in diverse business sectors and application contexts. One widely used explainability technique is sketched below.
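As a concrete illustration, the sketch below implements permutation feature importance, one common explainability technique; it is an assumption that a team would choose this particular method, since Google's white paper discusses explainability standards rather than a specific algorithm. The model is assumed to expose a scikit-learn-style predict method.

    import numpy as np

    def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
        """Estimate each feature's importance by shuffling it and measuring
        how much the model's score degrades."""
        rng = np.random.default_rng(seed)
        baseline = metric(y, model.predict(X))
        importances = []
        for j in range(X.shape[1]):
            scores = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
                scores.append(metric(y, model.predict(X_perm)))
            importances.append(baseline - np.mean(scores))  # larger drop = more important
        return np.array(importances)

A large drop in the score when a feature is shuffled indicates that the model relies on it, which gives reviewers a starting point for explaining individual predictions.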

Unfair labels and negative associations entrenched in algorithmic systems can cause or intensify serious and long-lasting damage. These unfair prejudices not only affect the social structure but also risk propagating injustice in access to opportunities such as education or finance. Google ensures that its AI principles balance the competing objectives and meanings of fairness; it also clarifies the relative prioritization of opposing factors in some general hypothetical conditions, acknowledging that these will differ across cultures and geographies. Google also has safety considerations that highlight fundamental workflows and documentation standards for certain application settings, sufficient to demonstrate due diligence in carrying out safety checks, and it establishes safety certification marks to indicate that a service has been evaluated as meeting the requirements.
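Two textbook fairness metrics make the competing objectives above concrete; these are standard definitions from the fairness literature, not Google's internal metrics. In general the two gaps cannot both be driven to zero at once, which is precisely the trade-off the principles ask teams to prioritize.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """Difference in positive-prediction rates between groups 0 and 1."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equal_opportunity_gap(y_true, y_pred, group):
        """Difference in true-positive rates between groups 0 and 1."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
        return abs(tpr(0) - tpr(1))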

Through human-AI collaboration guidelines, Google decides on contexts in which an AI system should not fully automate decisions; it also evaluates the various strategies that enable people to review and oversee an AI system (a minimal routing sketch follows below). Lastly, the liability framework evaluates the potential weaknesses that exist. It considers safe harbour frameworks and liability caps where there is a concern that liability laws might otherwise deter socially beneficial innovation.
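A minimal sketch of such human-AI collaboration, assuming a model that reports a confidence score: decisions below a hypothetical threshold are routed to a human reviewer rather than fully automated.

    REVIEW_THRESHOLD = 0.90  # hypothetical cut-off; would be tuned per application

    def route_decision(prediction: str, confidence: float):
        """Automate only high-confidence decisions; escalate the rest to a person."""
        if confidence >= REVIEW_THRESHOLD:
            return ("automated", prediction)
        return ("human_review", prediction)  # in practice, appended to a reviewer queue

    print(route_decision("approve", 0.97))  # ('automated', 'approve')
    print(route_decision("deny", 0.62))     # ('human_review', 'deny')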

Individuals and companies should ensure that they comply with Google's AI principles by influencing the degree to which their organizations can achieve and appraise fairness when using Google's AI applications. Policymakers and specialists should also team up to identify where unplanned, counter-intuitive damage arises from existing regulations and seek practical solutions. Individuals and companies can also apply AI to discover links between input data and output forecasts and so surface any fundamental prejudices entrenched in existing processes. This will help improve the consistency and fairness of decision-making.

Individuals should also demonstrate compliance by taking precautions against both the unintentional and intentional exploitation of AI in ways that create safety risks. However, these precautions need to be within reason, in proportion to the harm that could occur and the feasibility of the recommended precautionary steps, across technological, legal, financial, and cultural elements. It is also essential to take psychological factors into consideration. Companies should appropriately promote user trust, for example by adding stop buttons or voice recordings that provide crucial reassurance to those using Google AI applications. If people encounter errors in a system, the risk that they will choose to ignore safety-critical guidance increases, even when the system is almost always right, because of a single bad experience.

Companies that pose potential threats to Google's AI principles can be blocked by Google and denied access to the services offered by the AI. Detecting such threats may rely entirely on rules and signatures: an enabled device can flag a suspicious request and alert a human operator, who then checks it; once the operator determines that the request is malicious, exact matches to it are blocked thereafter (a sketch of this workflow follows). According to Google's CEO, Europe's General Data Protection Regulation provides a good starting point and a strong foundation for developing ethical principles.
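The workflow described above can be sketched as follows; the signatures, data structures, and function names are illustrative assumptions, not Google's actual abuse-prevention system.

    KNOWN_MALICIOUS = {"drop table users", "mass-scrape profiles"}  # confirmed signatures
    review_queue = []  # requests awaiting a human operator's judgment

    def handle_request(request: str) -> str:
        normalized = request.strip().lower()
        if normalized in KNOWN_MALICIOUS:
            return "blocked"  # exact match against a confirmed signature
        if any(sig in normalized for sig in KNOWN_MALICIOUS):
            review_queue.append(request)  # alert a human operator to inspect it
            return "held_for_review"
        return "allowed"

    def operator_confirm(request: str) -> None:
        """Operator confirms a held request is malicious; block exact matches next time."""
        KNOWN_MALICIOUS.add(request.strip().lower())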

Summary, Strengths, and Weaknesses

In summary, the Google AI framework is essential because it transforms the process of doing business by automating activities, thus benefiting the world. It involves various stakeholders, both internal and external, such as data science, human resources, governments, and NGOs. These stakeholders have the responsibility of ensuring that AI designs align with the company's principles, including confidentiality, social benefit, and avoiding unfair bias, and that the company does not build abusive applications, maximizing good and minimizing harm. The ethical considerations discussed above reveal that Google AI applies the Beard and Longstaff ethical framework. The Google AI principles are well thought out and promising because people can implement new methods of making businesses efficient, create opportunities, and allow AI designs to be accurate; Google also seeks new ways to align its work with the AI principles by increasing transparency.

One concern, however, is that by declining to specify which international laws and regulations it will follow, Google may be avoiding some harder questions. It is not at all settled, in terms of worldwide conformity and comparative law, how key international law and human rights principles should be applied to various AI technologies and applications. This lack of precision is one of the crucial reasons that firms like Google should think carefully about their role in establishing and deploying AI technologies, especially in military contexts. Google should follow widely accepted principles when deploying and designing AI technologies and ensure that its work is consistent with the principles of international law and human rights.
