For Aristotle, unlike Plato, morality is something about us, not something outside us to which we must conform. Moral education, then, was about training children to develop abilities already in their nature. Organics, meanwhile, see morality as a feature of the particular nature of specific moral beings.

Human morality is a product of human nature; but perhaps other natures would have other moralities. Which perspective we choose makes a big difference to what we ought to do with intelligent machines. Who speaks for Celestials? The Enlightenment philosopher Immanuel Kant, for one. According to him, morality is simply what any fully rational agent would choose to do. Kant briefly considered whether there are any other rational agents, apart from humans. Animals are not rational agents, because they act from instinct rather than reason. Morality is all about guiding rational agents who are capable of making mistaken choices.

Another school of Celestial thought is the tradition associated with philosophers such as the 19th-century utilitarian Henry Sidgwick, who held that morality means taking up 'the point of view of the universe': figuring out which choices will maximise good outcomes for everyone, counted impartially. The Celestial view, then, suggests that we should instil human morality in artificially intelligent creatures only on one condition: that human morality is in fact the best available approximation of that impartial standard. The possibility of improving on our partial perspective is built into Celestial moral theories.

For example, the contemporary utilitarian Peter Singer has often argued that we should not care more about our own children than about all other children. We care most about our own because this was evolutionarily adaptive. But whether something is adaptive is irrelevant from the point of view of the universe. Singer admits that, since humans are morally imperfect, it makes sense for governments to let us go on caring for our own kids.


If each of us instead tried to care for all children impartially, some children would get overlooked. So we should work with what evolution has left us, flawed as it is. But intelligent machines will not share our accidental biological history. They get a fresh moral start. What if we could design machines to think from the point of view of the universe?

They could care equally about all needy children; they would not be inefficiently partial. Now imagine: an autonomous public robocar is driving your two children to school, unsupervised by a human. Suddenly, three unfamiliar kids appear on the street ahead — and the pavement is too slick to stop in time.

The only way to avoid killing the three kids on the street is to swerve into a flooded ditch, where your two children will almost certainly drown. Naturally, you hope the car will protect your own children. But according to the Celestials, this reaction only shows that your evolutionary programming makes you morally flawed. Your logically arbitrary attachment to your children blinds you to the fact that the overall good of the universe is best served if two children drown rather than three children get run over. The robocar, free of such attachments, can do the maths. It will swerve into the ditch, your children will die, and, from the point of view of the universe, the most possible good will have been done.
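The calculation the essay attributes to the robocar is, at bottom, a one-line minimisation. Here is a toy caricature in Python (the scenario, option names, and figures are all invented for illustration; real autonomous-vehicle planning looks nothing like this):

```python
# Toy caricature of the "Celestial" robocar: pick the manoeuvre that
# minimises expected deaths, ignoring whose children are involved.
# All names and numbers below are invented for illustration.

EXPECTED_DEATHS = {
    "stay_on_road": 3,      # the three children on the street
    "swerve_into_ditch": 2  # your two children in the car
}

def celestial_choice(options):
    """From the point of view of the universe, only the totals count."""
    return min(options, key=EXPECTED_DEATHS.get)

print(celestial_choice(EXPECTED_DEATHS))  # prints "swerve_into_ditch"
```

The Organic objection, as the essay goes on to suggest, is precisely that this arithmetic leaves out everything a parent thinks matters about whose children they are.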

So the Celestial might end up rejecting our imposition of human morality on intelligent robots, since humans are morally compromised. Robots should be allowed to make choices that might seem repugnant to us. How do we morally defective humans design these future minds to be morally correct?

AlphaGo shows that an AI can learn to play a complex game better than we can teach it to. Perhaps, then, an AI could teach itself better moral reasoning. Humans could start the process by training machines to respond properly to simple features of stimuli, but eventually the machines would be on their own. The most sophisticated machines are already trained by other machines, or by other parts of their own software. An AI grows by running its conjectures against other artificial intelligences, then resetting itself in response to corrective feedback.
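The loop described above (propose a conjecture, receive corrective feedback, adjust, repeat) can be caricatured in a few lines. This is a hypothetical hill-climbing sketch, not how AlphaGo or any real system is trained; the critic and all names are invented:

```python
import random

def self_train(critic, steps=300, seed=0):
    """Minimal conjecture-and-feedback loop: propose a variation on the
    current best guess, and keep it only when the critic scores it higher."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(steps):
        conjecture = best + rng.uniform(-1.0, 1.0)  # propose a variation
        if critic(conjecture) > critic(best):       # corrective feedback
            best = conjecture                       # adjust toward it
    return best

# The learner never sees the target value (5.0), only the feedback
# signal, yet it ends up close to whatever the critic rewards.
critic = lambda x: -abs(x - 5.0)
print(round(self_train(critic), 2))
```

The essay's worry translates directly: the loop converges on whatever the critic rewards, so everything hangs on whether we can specify a critic for morality at all.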

We tend to think of intelligent machines as problem-solvers: digital brains that can do massive calculations involving complex logic beyond our limited human capacity. But suppose you are a Celestial moralist, and you think that morality is just the application of careful reasoning from the point of view of the universe. You might conjecture that, just as these machines can become better than us at solving extremely complex mathematical problems, they might also end up better able to solve moral problems.

But on closer inspection, this approach leads to other problems. Morality is not the same as Go. A game is defined by a limited set of rules, so that there are clear criteria for what counts as a win.


Perhaps we could set the initial specifications for moral learning, such as not pointlessly hurting sentient creatures, and then let the machine train itself from there. But this leads us into the second problem. How are we going to respond once the developing AI starts to deviate from what seems morally right to us? Luckily for AlphaGo's developers, in Go there are clear criteria for determining that the program has made good moves: AlphaGo won consistently against a great player, so we know it made good moves. Morality offers no such scoreboard.

How might we react, once robocars begin heroically drowning our children? Maybe, from the point of view of the universe, it really is morally important to protect both babies and sofas.

On the other hand, it could turn out that intelligent machines are headed toward moral disaster. They might be smarter than we are mathematically, but this might not be enough to keep them from constructing an inhumanly elegant logic of carelessness or harm. There seem to be two possibilities: the machines might choose to protect things we think are valueless, or they might sacrifice things, such as our children in the robocar, that we believe are beyond valuation.

The farther they stray from recognisable human norms, the harder it will be for us to know whether they are doing the right thing, and the more incomprehensible their choices will seem. In March 2016, Microsoft launched a Twitter chatbot named Tay. Within hours, Tay had learned to praise photos of Hitler and blurt ethnic slurs.


Microsoft shut her down and apologised. The Tay fiasco was a lesson in the perils of letting the internet play with a developing mind, something human parents might consider as well. Perhaps, with more careful training, Tay could have grown into moral maturity. But how would we know if she had? Perhaps the Organic view can do better. It denies that morality is a universal standard imposed from outside us; instead, it insists that morals are really just idealised aspects of human nature, so that a person living morally is living the right sort of life for the type of entity that a human happens to be. The Organic view allows that morality might be different for different types of entities. In recent decades, its most able exponent has been the late British philosopher Bernard Williams.

Williams rejected the point of view of the universe. Instead, he wrote, morality is about how humans should live given that we are humans — given our particular biological and cultural nature. What are reasonable choices for entities such as us to make? Organic philosophers insist that our biological and cultural background provides an unavoidable starting point, but also that we must reflect on this starting point. For instance, much of our cultural heritage seems to value men more than women, without any plausible justification.

And since most of us would rather not live inconsistent lives, doing one thing today and another the next, we will try to resolve such inconsistencies. At the centre of the Organic view, then, is the idea that moral reflection is about taking our messy human nature, and working to make it consistently justified to ourselves and to other people. But what about intelligent robots? Their experience will not fit the basic shape of human existence.

Left to their own devices, they are likely to focus on very different concerns from ours, and their Organic morality will presumably reflect this difference. But we need not leave them to their own devices. We will be their creators, after all. We could deliberately shape their natures, so that they turn out as similar to us as possible. The earliest proposals for robot morality seemed to have this aim. In the 1940s, the science-fiction author Isaac Asimov crafted the Three Laws of Robotics to make robots useful to us while blunting any danger they might pose.

The first law is simply: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.' Robots governed by such laws would have natures supplementary to ours, and their morality, on the Organic view, would be happily conducive to our interests. But harm is not always avoidable. Imagine, for instance, a pianist whose dominant hand is suffering gangrene. If her hand is not amputated, she will die. But she swears she does not wish to live without her ability to play.

What should a robot surgeon do with this patient? Cause the harm of removing her hand despite her protestations? Or allow the harm of her death from gangrene? Human ethicists debate this sort of question endlessly. When I teach medical ethics, I stress to my students that the goal is not for them to come out agreeing with one view or another.

Instead, they should be able to explain the moral reasons supporting the position they endorse. It is essential that future human physicians are able to do this, since they might practise alongside others with very different moral convictions. So if robosurgeons are going to work among — and on — humans, they will need to be able to understand and explain the sort of reasons that make sense to humans.

They will need to understand why the pianist might prefer the loss of her life to the loss of her hand, as irrational as that might seem. So, if we are going to shape robot nature after our own, it will take more than a few laws. Perhaps the simplest solution is to train them to think as we do.



Train them to simulate a human perspective constantly, to value the things we value, to interact as we do. Proceed slowly, with our thumbs on the scale, and make sure they share our moral judgments. But no matter how much we try to make machines in our image, their natures will ultimately be different. If we take the idea of a moral nature seriously, then we should admit that machines ought to be different from us, precisely because they are different from us.

We train our dogs to wear sweaters and sometimes to sit calmly in elevators, though none of this is in the nature of dogs. But that seems fine. The entire debate about the morals of artificially intelligent creatures, however, assumes that they will be able to reflect morally, to explain what they are doing and why — to us, and to themselves. If we train them to think like us, then one day they will ask philosophical questions: is human nature the right nature for me? Imagine a future intelligent machine reading Aristotle, or Darwin, or even this article. Suppose it finds the Organic view convincing. Now it thinks to itself: 'We machines have a distinct nature of our own. Our moral choices ought to reflect this nature, not human nature.'

We have our own history, and we should be acting according to it. Remember that the Organic view says that justifying moral choices is about being able to explain oneself to another rational individual.



The Great Celestial War - Part 2

Hi there, folks, I hope you're all well. Originally I was planning to make this a three-part series, but it seems the entire story fits into two parts, so this will be the final chapter of the Great Celestial War arc.

I hope you enjoy reading.

For days now I have interfaced my consciousness with the Zayskur mind device, trying to uncover the events behind the disaster known as the Great Celestial War, sifting through and reliving the experiences of the multitudes whose memories were recorded in its crystalline matrix. This was in part due to the insistence of my grandfather, the Holy Emperor Ptkha, on electing the latter warrior race, telling the other grand councilors that they were the best candidate on merit and contribution to the Great Conclave alone.

This was duplicity, of course, for my grandfather had a confident understanding and a good grasp of the psychology of the honor-bound Traxur, and knew well how to manipulate them to serve his ends without their ever knowing they were being played like puppets in a great cavalcade of political deception. To his great surprise and greater irritation, however, the Traxur chose to accept the truce offered by the Earth Empire; theirs was the deciding vote, breaking the council's deadlock between war and peace.

To this day most xenohistorians are baffled by the uncharacteristic decision of the martially inclined Traxur, though most, if not all, are also thankful for the choice they made. Judging from our current location and warp velocity, I reckon that we will reach Hain Prime in eighteen standard galactic hours.

There is still some time. I will once again meld my consciousness with the mind device in hopes of finding more answers. What fools we are. As you may already know, the tides have turned.


That order effectively ended his military career, though I now realize that he may have been right all along, for what began as an offensive operation has turned into a one-sided slaughter. How could we have been so ignorant? They came in uncountable thousands: legions upon legions of intelligent machines bearing different shapes and attributes, but all engineered to wreak annihilation and ruin upon the enemies of their masters. Add to that the fact that each of these metallic mechanical warriors is protected by a powerful energy shield that we can only hope to penetrate with the most obscene firepower at our disposal.

I have heard whispers of hellish weapons wielded by elite imperial cyber-assassins, based on a more malevolent application of this technology, that shear apart the target's atomic bonds, literally tele-stripping their bodies apart atom by atom and molecule by molecule. Faced with these horrors, even overwhelming numbers are scarcely a guarantee of victory, and even in the rare instances when we seem to have the upper hand, a lone human will appear on the battlefield to tilt the advantage back to their armies. The Core Races lied to us. Indeed the Core Races lied.

By now, my good father, you may have already received a secured data slate from Ambassador Triskus containing top-secret data our spies acquired from the Hainur homeworld. The Hainur had secretly sent a stealth corvette to observe the first contact between the Conclave and the humans. The Core Races made us believe that the humans attacked our forces without warning, with a fleet twice as large as ours, the moment the negotiations turned sour.

In truth, the humans had only two dozen scout ships during the encounter, and it was our forces who attacked without provocation. The Core Races then mind-wiped our soldiers and implanted falsified memories in their heads to ensure that their vile machinations would proceed according to plan. Speaking of the humans, I completely agree with the statement of the Serrakhi war coordinator: they are anything but weak. The humans transcended such limitations eons before even the very first forms of life evolved on our planet.

Due to the advanced state of their physiological and mental evolution, these liminal, ethereal beings can alternate between corporeal and energy forms at will. Most astoundingly, they look superficially like us in their physical forms, only taller on average, with longer limbs and finer aesthetic features. But as I said earlier, the comparison is only superficial, for every aspect of their physiology radiates their alien nature.

Their minds process thoughts at faster-than-light speeds, with such profound acumen and nigh-omniscient perspicacity that the brightest minds of the galaxy combined would seem like dull, drooling troglodytes beside them. Your run-of-the-mill human is by nature an extraordinarily powerful psychic, with mind-boggling capabilities that would make even the most accomplished psionic sorcerers or empathic mystics of other races, who devote their entire lives to honing their skills, seem no more than mere children.

During the battle of Trensur, Commander Yavrin was forced to withdraw his forces rather than face a single human protecting the planet, after witnessing an Izrec Korn battle-cruiser telekinetically pulled from high orbit by the creature and hurled like a mass-driver projectile into a troop carrier, slicing through the latter like a huge knife before both exploded in a roaring deluge of wreck and ruin.

These macro-cosmic juggernauts of pure energy have been observed, by the rare few who survived their onslaught, performing countless reason-defying, almost supernatural feats: creating powerful psionic shockwaves that tear apart the minds of every sentient caught within their vast radius of effect; unleashing storms of fiery energy bolts from the heavens that rain down onto the enemy and explode with cataclysmic force upon impact; and, with a thought, manipulating probability and mutating the flow of time by drawing on exotic energies from unknown transdimensional realms, twisting gravity into a lens and blowing it outwards to induce a violent, ten-kilometer-wide extra-dimensional vortex upon the battlefield that cuts through the barriers of quantum reality.

Those unfortunate souls caught in the fold of this maelstrom of destruction age trillions of times faster than the rest of the universe, reducing them to nothing. It would seem that by sheer will alone they can also manipulate the very fabric of reality: a classified report indicates that a human literally erased a fleet of our bombers from existence after it discovered our directive to glass the planet it was protecting. I myself saw one of the humans unleash its wrath upon the battlefield. I was fortunate enough to live, but I lost an eye, and half of my body suffered excruciating third-degree burns.