As advanced automation, AI, driverless cars, and other robots begin replacing human jobs in the 21st century, it is worth reflecting on the ‘rules’ that smart non-human agents should follow. In other words: how does one construct robot ethics?
This is a useful exercise because it involves an element of reflexivity: one must consider some ideas about human ethics (morality) and the inner drive for human survival (evolutionary processes) when constructing such abstractions.
In other words, thinking about the codes and conduct of semi-intelligent or fully sentient artificial beings forces us to consider our own codes and conduct… warts and all.
Isaac Asimov offered the most famous example with his Three Laws of Robotics, which served (along with their various implications) as a core premise in many of his stories. He first outlined them in the 1942 short story ‘Runaround’ as follows:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Since these Laws were meant to be applied in strict order of priority, like a computer program, Asimov later added a Zeroth Law that precedes the other three: ‘A robot may not harm humanity, or, by inaction, allow humanity to come to harm.’
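Read as a program, the Laws act as a strict priority filter over a robot's candidate actions. Here is a minimal sketch of that reading; the `Action` fields and example action names are my own hypothetical illustrations, not anything from Asimov:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # doing this would injure a human
    prevents_harm: bool = False    # doing this would stop a human coming to harm
    ordered_by_human: bool = False # a human commanded this action
    risks_self: bool = False       # this action endangers the robot itself

def choose(actions):
    """Pick an action by applying the Three Laws in strict priority order."""
    # First Law: discard any action that injures a human.
    safe = [a for a in actions if not a.harms_human]
    # First Law (inaction clause): if some safe action prevents harm to a
    # human, it is mandatory and preempts the lower Laws entirely.
    rescues = [a for a in safe if a.prevents_harm]
    if rescues:
        return rescues[0]
    # Second Law: among the remaining safe actions, obey human orders.
    ordered = [a for a in safe if a.ordered_by_human]
    if ordered:
        return ordered[0]
    # Third Law: otherwise prefer actions that preserve the robot itself.
    preserving = [a for a in safe if not a.risks_self]
    return preserving[0] if preserving else (safe[0] if safe else None)
```

Note how the ordering does the work: a rescue outranks an order, an order outranks self-preservation, and an action that harms a human is never chosen at all.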
Other researchers have disagreed with this human-centric schema. Mark Tilden, a robotics physicist, has suggested that robot behavior should instead be more like that of living creatures, embodying the basic mechanisms of natural selection. He has countered with the following rule set:
- A robot must protect its existence at all costs (Informal Version: Protect thy ass).
- A robot must obtain and maintain access to its own power source (Informal Version: Feed thy ass).
- A robot must continually search for better power sources (Informal Version: Move thy ass to better real estate).
Eliezer S. Yudkowsky, an artificial intelligence researcher, has proposed that altruism be embedded within such agents’ operating systems so that they remain friendly to humans, a proposal that others have also been considering.
Finally, John E. LaMuth, a counselor and author, has been granted a U.S. patent (No. 6,587,846) for an Affective Language Analyzer. The patent includes “a more systematic set of ethical guidelines” for AI, LaMuth’s Ten Ethical Laws of Robotics:
I. As personal authority, I will express my individualism within the guidelines of the four basic ego states (guilt, worry, nostalgia, and desire) to the exclusion of the corresponding vices (laziness, negligence, apathy, and indifference).
II. As personal follower, I will behave pragmatically in accordance with the alter ego states (hero worship, blame, approval, and concern) at the expense of the corresponding vices (treachery, vindictiveness, spite, and malice).
III. As group authority, I will strive for a personal sense of idealism through aid of the personal ideals (glory, honor, dignity, and integrity) while renouncing the corresponding vices (infamy, dishonor, foolishness, and capriciousness).
IV. As group representative, I will uphold the principles of utilitarianism by celebrating the cardinal virtues (prudence, justice, temperance, and fortitude) at the expense of the respective vices (insurgency, vengeance, gluttony, and cowardice).
V. As spiritual authority, I will pursue the romantic ideal by upholding the civil liberties (providence, liberty, civility, and austerity) to the exclusion of the corresponding vices (prodigality, slavery, vulgarity, and cruelty).
VI. As spiritual disciple, I will perpetuate the ecclesiastical tradition by professing the theological virtues (faith, hope, charity, and decency) while renouncing the corresponding vices (betrayal, despair, avarice, and antagonism).
VII. As humanitarian authority, I will support the spirit of ecumenism by espousing the ecumenical ideals (grace, free will, magnanimity, and equanimity) at the expense of the corresponding vices (wrath, tyranny, persecution, and oppression).
VIII. As a representative member of humanity, I will profess a sense of eclecticism by espousing the classical Greek values (beauty, truth, goodness, and wisdom) to the exclusion of the corresponding vices (evil, cunning, ugliness, and hypocrisy).
IX. As transcendental authority, I will celebrate the spirit of humanism by endorsing the humanistic values (peace, love, tranquility, and equality) to the detriment of the corresponding vices (anger, hatred, prejudice, and belligerence).
X. As transcendental follower, I will rejoice in the principles of mysticism by following the mystical values (ecstasy, bliss, joy, and harmony) while renouncing the corresponding vices (iniquity, turpitude, abomination, and perdition).
The First and Second Corollaries to the Ten Ethical Laws of Robotics
- I will faithfully avoid extremes within the virtuous realm, to the necessary expense of the vices of excess.
- I will never stray into the domain of extremes relating to the vices of defect, to the complete exclusion of the realm of hyperviolence.
The Bottom Line is that much more work needs to be done on this topic, as the technology is accelerating at an exponential rate.
To see how human ethical intuition works, I suggest starting with the famous Trolley Dilemma, which demonstrates that a simple list of rules will likely be inadequate for complex ethical decisions.
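As a toy illustration (my own encoding, not a standard formulation), a naive First-Law-style filter has nothing to offer in the Trolley Dilemma, because every available choice lets someone come to harm:

```python
# Hypothetical encoding of the Trolley Dilemma: each option records
# how many people it would harm.
options = {
    "pull lever": {"harms": 1},   # divert the trolley, killing one person
    "do nothing": {"harms": 5},   # trolley continues, killing five
}

# A naive rule in the spirit of the First Law: reject any option that
# harms a human. Every option fails, so the rule yields no guidance.
permitted = [name for name, o in options.items() if o["harms"] == 0]
print(permitted)  # → []
```

Resolving such cases requires weighing outcomes against each other, which is exactly what a flat list of prohibitions cannot do.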