Deontological Ethics, Utilitarianism and AI
by Anton Voichenko (aka Anton Vokrug)



Too Long; Didn't Read

The fear associated with strong AI is that it may perceive humanity as a threat or as an inefficient agent for achieving optimal results.

In the modern world, where artificial intelligence (AI) plays an increasingly important role, the question of which ethical principles should guide AI behavior becomes particularly relevant. It takes on even greater significance in the context of developing powerful artificial intelligence capable of independent thought and self-awareness. To understand the potential effects of ethics on AI, it is worth reviewing the two main approaches to ethics: deontological and utilitarian.


In the age of rapid technological development, artificial intelligence (AI) has become a major transformative force. Although AI holds tremendous promise in a variety of fields, from healthcare to finance, it is important to acknowledge and consider the potential dangers it poses (source).


Deontological ethics, or duty-based ethics, is founded on principles considered to be absolute. In philosophy, deontological ethics refers to ethical theories that focus on the relationship between duty and the morality of human actions. The term “deontology” is derived from the Greek deon, “duty”, and logos, “science”. In deontological ethics, an action is considered morally good because of some property of the action itself, not because the product of the action is good. Deontological ethics holds that at least some actions are morally obligatory regardless of their consequences for human welfare. This ethics is captured in expressions such as “Duty for duty's sake”, “Honesty is its own reward”, and “Let justice be done though the heavens fall” (source).


This approach assumes that certain actions are right or wrong regardless of their consequences. The philosopher Immanuel Kant was one of the main proponents of deontology, arguing that behavior should be based on duties and universal moral laws. On this view, moral principles are unchangeable rules that must be observed.


Utilitarianism, in turn, is primarily based on the works of philosophers such as Jeremy Bentham and John Stuart Mill and evaluates actions by their consequences. The fundamental idea of utilitarianism is that actions that benefit the majority of people are considered right. This approach is focused on achieving the common good, even if it requires sacrifices from some individuals.


Utilitarianism is an ethical theory that distinguishes good from evil by focusing on results; it is a form of consequentialism. Utilitarianism holds that the most moral choice is the one that brings about the greatest good for the largest number of people. It is the only moral framework that can be used to justify military force or warfare, and it is also the most common approach to moral reasoning in business because of the way it accounts for costs and benefits. However, since we are unable to predict the future, it is difficult to know with certainty whether the consequences of our actions will be good or bad. This is one of the limitations of utilitarianism (source).


The difference between deontology and utilitarianism is that the former requires adherence to certain principles even when doing so may have a negative impact on society, whereas the latter can justify morally questionable actions if they serve the common good.
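To make the contrast concrete, here is a minimal sketch in Python of how the two decision rules might diverge for a machine agent. The actions, utility values, and the violates_rule flag are invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utilities: dict      # welfare change per affected party (illustrative)
    violates_rule: bool  # breaks an absolute duty, e.g. a ban on deception

# Hypothetical candidate actions for an agent deciding whether to lie.
ACTIONS = [
    Action("tell_truth", {"patient": -2, "family": -1}, violates_rule=False),
    Action("white_lie",  {"patient": +3, "family": +1}, violates_rule=True),
]

def deontological_choice(actions):
    """Reject any action that violates a duty, regardless of outcomes."""
    permitted = [a for a in actions if not a.violates_rule]
    # The theory itself gives no further ranking here; return the first
    # permitted action purely for illustration.
    return permitted[0] if permitted else None

def utilitarian_choice(actions):
    """Pick the action with the greatest total welfare, duties aside."""
    return max(actions, key=lambda a: sum(a.utilities.values()))

print(deontological_choice(ACTIONS).name)  # tell_truth
print(utilitarian_choice(ACTIONS).name)    # white_lie
```

The same candidate actions yield opposite choices: the duty-based rule rejects the lie outright, while the utilitarian rule selects it because it maximizes total welfare.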

The first thing to realize is that there are four different types of artificial intelligence, distinguished by the degree to which a system can reproduce human capabilities:


• Reactive machines

• Limited memory

• Theory of mind

• Self-awareness (source)


Regardless of which moral values guide artificial intelligence, there is a common assumption that AI tends to be benevolent (although it is not endowed with any feelings at this point). The reasoning is that AI contributes to optimizing the goals encoded in its algorithms. In practice, this means making decisions based on a calculation of the greatest common good, even if that calculation has negative consequences for particular individuals or groups.


It is important to note that the fear associated with strong AI is that it may perceive humanity as a threat or as an inefficient agent for achieving optimal results. This raises the concern that strong AI could make decisions dangerous to humanity. But it is important to keep in mind that, at this point, AI systems are only as impartial as the data they learn from. If the data used to train AI models are biased, the models can preserve and even amplify social biases. This can lead to discriminatory outcomes in areas such as employment, lending, and criminal justice (source).
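As a rough illustration of what auditing for such bias can look like, the following Python sketch compares positive-decision rates across two groups, a simple demographic-parity check. The groups and predictions are invented for illustration.

```python
from collections import defaultdict

# (group, model_decision) pairs — e.g. 1 = loan approved, 0 = denied.
# These records are hypothetical.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

for group in totals:
    rate = positives[group] / totals[group]
    print(f"{group}: approval rate = {rate:.0%}")

# A large gap between groups (here 75% vs. 25%) suggests the model has
# absorbed a bias from its training data and warrants a closer audit.
```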


However, an important question must be raised: why do we believe that strong AI would be motivated by self-interest? After all, we don't really know how moral principles will shape the behavior of an advanced AI. Perhaps the AI of the future will combine different ethical systems to arrive at a more balanced and safer way of making decisions.


In any case, it is worth remembering that artificial intelligence has already reached an extraordinary level of development. ChatGPT, for example, seemingly an ordinary chatbot, turned during its first year of existence from a plain text generator into a comprehensive system capable of generating images and producing tables and charts.


The study of these questions is critically important to the development and progress of artificial intelligence, which currently works for the benefit of humanity (except when it is used for dangerous purposes), and it requires further analysis and debate among experts in artificial intelligence, philosophy, and ethics.