So humans have a kind of built-in capacity for learning moral systems from
our parents and other people. We're not born with any particular moral code, but with the
ability to learn one, just like we can learn languages. The problem, of course, is that
this built-in facility might have worked quite well
back in the Stone Age, when we were evolving in small tribal communities - but it doesn't
work that well when we're surrounded by a high-tech civilization, millions of other
people, and technology that could be potentially very dangerous. So we might
need to update our moral systems. And that is the interesting question
of moral enhancement: can we make ourselves more fit for the current world?
And what kind of fitness should we be talking about? For example, we might want
to improve on altruism - that we should be kinder to strangers. But in a big
society, in a big town, of course there are going to be some strangers that you
shouldn't trust. So it's not just blind trust you want to enhance - you actually
want to enhance the ability to make careful judgements; to figure out what's
going to happen and whom you can trust. So maybe you want to enhance some other aspect:
maybe the care - the circle of care - is what you want to expand. Peter Singer
pointed out that the circles of care and compassion have been slowly expanding:
from our own tribe and our own gender, to other genders, to other
people, and eventually maybe to other species. But this is still biologically
based: a lot of it is going on in the brain, and might be modified. Maybe we
should artificially extend these circles of care to make sure that we actually do
care about those entities we ought to be caring about. This might be a problem,
of course, because some of these agents might be extremely different from what we're
used to. For example, machine intelligence might produce
machines or software that are 'moral patients' - where we
actually ought to be caring about the suffering of the software. That might be
very tricky, because our pattern receptors up in the brain are not
very tuned for that - we tend to think that if it's got a
face and it speaks, then it's human and then we can care about it. But who
thinks about Google? Maybe we could get super-intelligences that we actually
ought to care a lot about, but we can't recognize them at all because they're so
utterly different from ourselves. So there are some easy ways of modifying
how we think and react - for example, by taking a drug. The hormone oxytocin is
sometimes called 'the cuddle hormone' - it's released when breastfeeding and when
having bodily contact with your loved one, and it generally seems to make
us more altruistic, more willing to trust strangers. You can sniff it,
run an economic game, and you can immediately see a change in response. It
might also make you a bit more ethnocentric: it does enlarge feelings of comfort
and family friendliness - except that it's only within what you consider to be your
family. So we might want to tweak that. Similarly, we might think about adding
links to our brains that allow us to think in better ways. After all,
morality depends on us being able to predict what's going to happen when
we do something. So various forms of intelligence enhancement might be very useful
for becoming more moral, too. Our ability to control our reactions - to allow our
higher-order values to control our lower-order values - is also important; that
might actually require us to literally rewire ourselves, or have biochips that help us do
it. But most important is that we get the information we need to retrain the
subtle networks in the brain in order to think better. And that's going to require
something akin to therapy - though it might not necessarily be about lying on a sofa and
telling your psychologist about your mother.
It might very well be a bit of training, a bit of cognitive enhancement, maybe a
bit of brain scanning - to figure out what actually ails you. It's probably going to
look very, very different from anything Freud or anybody else envisioned for the future.
But I think in the future we're actually going to try to modify ourselves so
that we become better - maybe even extra moral - so we can function in a
complex, big world.