Wisdom received, over and out

Imagine being able to gain wisdom instantly, use it to make a vital decision and then lose it again (if you wish) to wallow in ignorance. Would being able to do so, which is not a distant prospect, make us more or less human, asks Ian Pearson

June 18, 2009

Wisdom is traditionally considered the highest form of intelligence, combining systemic experience, deep thinking and knowledge. Human nature is a set of behavioural biases imposed on us by our biological heritage, built up over billions of years. As a technology futurist, I find it useful that, in spite of technological change, our human nature has probably remained much the same for the past 100,000 years, and it is this anchor that provides a guide to potential markets.

Underneath a thin veneer of civilisation, we are pretty similar to our caveman ancestors. Human nature is an interesting mixture of drives, founded on raw biology and tweaked by human evolution over millennia to incorporate some cultural aspects such as the desire for approval by our peer group, the need to acquire and display status, and so on. Each of us faces a constant battle between our inbuilt nature and the desire to do what we know is the "right thing" based on our education and situational analysis. For example, I love eating snacks all evening, but if I do, I put on weight. Knowing this, I can just about muster enough willpower to manage my snacking so that my weight remains stable. Some people stay even slimmer than I do, while others lose the battle and become obese. So already it is clear that on an individual basis, the battle between wisdom and nature can go either way. On a group basis, too, people can go either way, with mobs at one end and professional bodies at the other. But even in the latter, where knowledge and intelligence should hold sway, the same basic human drive for power and status corrupts institutional intellectual values, with the same power struggles and the same emotional levers that mob rulers exploit.

So, much as we would like to think that we have moved beyond biology, everyday evidence says we are still very much in its control, both individually and collectively. But what of the future? Will we for ever be ruled by our human nature? Will it always get in the way of the application of wisdom? Or will we find a way of becoming wiser? After 100,000 years of failure by conventional social means, it seems most likely that technology could help us. But what kind of technology would work?

Many biologists argue that for various reasons, humans no longer evolve along Darwinian lines. We mostly don't let the weak die, and our gene pools are well mixed, with few isolated communities to drive evolution. But there is a bigger reason why humanity has reached the end of the Darwinian road. From now on (well, a few decades from now on anyway), as a result of advancing biotechnology and increasing understanding of genetics and proteomics, we will, in essence, be masters of our own genome. We will be able to decide which genes to pass on, which to modify or swap, which to dump. One day, we will be able to design new ones. This will certainly not be easy. Most physical attributes arise from the interactions of many genes, so it isn't as simple as ticking boxes on a wish list, but technology progresses by constantly building on existing knowledge, so we will get there, slowly but surely, and the more we know, the faster we will learn. As we use this knowledge, future generations will start echoing the values and decisions of their ancestors, which, if anything, is closer to Lamarckian evolution than Darwinian.

So we will soon have the ability to redesign humanity from the ground up. We could decide which attributes to enhance, which to reduce or jettison. We could make future generations just the way we wanted, their human nature designed and optimised to our view of perfection. And therein lies the first fundamental problem. We don't all share a single value set, and will never agree on what perfection means. Our decisions on what to keep and dump wouldn't be based on wisdom, deciding what is best for humanity in some absolute sense; they would instead echo our value system at the time of the decision.

Worse still, it wouldn't be all of us deciding, but some mad scientist, power-crazy politician, celebrity, plutocrat or, worse still, a committee. People in authority don't always represent the best of current humanity; at best, they simply represent the attributes required to rise to the top, and there is only a small overlap between those sets.

Imagine if such decisions were to be made in today's UK, with a nanny state redesigning us to smoke less, drink less, eat less, exercise more - and to do whatever the state tells us without objection.

What of wisdom then? How often is wisdom obvious in government policy? Do we want a Stepford society? That is what evolution under state control would yield. Under the control of engineers or designers or celebrities, it would look different, but none of these groups represents the best interests of wisdom, either. What of a benign dictator, using the wisdom of Solomon to direct humans down the right path to wise Utopia? No thanks! I am not sure there is any committee, individual or role that is capable of reaching a truly wise decision on what our human nature should become. And even if there were, there is no guarantee that future human nature would be designed to be wise, rather than a mixture of other competing attributes.

The more I think about it, the more I think that is the way it ought to be. Becoming wise is certainly something to aspire to, but do you want everyone to be wise? Really? I would much prefer a society that is as mixed as today's, with a few wise men and women, quite a lot of fools and most people in between. Maybe more wise people and fewer fools would be nice, and certainly I'd like to adjust our institutions so that more wise people rise to positions of power, but I don't think it's a good idea to try to make humans better genetically. Who knows where that would end, with the free run of values that we seem to have now that the fixed anchors of religion have been lost. Each successive decision on optimisation would be based on a different value set, taking us on a random walk with no particular destination. Is wisdom simply not desired enough to make it a winner in the optimisation race, competing as it is against beauty, sporting ability, popularity, fame and fortune?

So if we can't safely use genetics to make humans wiser or improve human nature, is the battle between wisdom and nature already lost? Not yet: there are some other avenues to explore. Suppose wisdom were something that people could acquire if and when they wanted it. Suppose it could be used at will when our leaders were making important decisions, and the rest of the time we could carry on our lives in the bliss of ignorance and folly, without the burden of knowing what was wise. Maybe that would work. In this direction, the greatest toolkit we will have comes from IT, and especially from the field of artificial intelligence.

Much of the world's knowledge (of which only a rapidly decreasing proportion is human knowledge) is captured on the internet, in databases and expert systems, in neural networks and sensor networks. Computers already enhance our lives greatly by using this knowledge automatically. Yet they cannot think in any real sense of the word, and are not yet conscious, whatever that means. But thanks to advancing technology, it is becoming routine to monitor signals in the brain at millimetre resolution. Nanowires can even measure signals from different parts of individual cells. With more rapid reverse engineering of brain processes, and consequent insights into the mechanisms of consciousness, computer designers will have much better knowledge on which to base their development of strong AI, that is, conscious machines. Technology doesn't progress linearly but exponentially, with the rate of knowledge development increasing rapidly as progress in one area helps progress in others.

Thanks to this positive feedback effect, it is possible that we could have conscious machines as early as 2020, and that they will not just be capable of human levels of intelligence, but will become vastly superior in terms of sensory capability, memory, processing speed, emotional capability and even the scope of their thinking. Most importantly, from a wisdom viewpoint, they will be able to take into account many more factors at one time than humans. They will also be able to accumulate knowledge and experience from other compatible machines, as well as from the whole of the internet's archives, so every machine could instantly benefit from insights from any other, and could also access any sensory equipment connected to any other computer, pool computer minds as needed, and so on. In a real sense, they will be capable of accumulating many human lifetimes of equivalent experience in just a few minutes.

It would perhaps be unwise to build such powerful machines before humans can transparently link their brains to them; otherwise, we face a potential Terminator scenario, so this timescale could be delayed by regulation (although the military potential and our human tendency to want to gain advantage may trump this). By the time we actually build conscious machines that we can link to our brains, they will be capable of vastly higher levels of intelligence. So they will make superb tools for finding wiser solutions to problems. They will enable their human "wearers" to consider every possibility, from every angle, looking at every facet of the problem, to consider the consequences and compare with other approaches. And of course, if anyone can wear them, then the intellectual gap between dumb and smart people would be eliminated by the vast superiority of the added brainpower. This would make it possible to continue to select our leaders on factors other than intelligence or wisdom, but still enable them to act with much more wisdom when called to.

But this doesn't solve the problem automatically. Leaders would have to be compelled to use such machines when a wise decision is required; otherwise they could often choose not to, and sometimes still end up making very unwise decisions by following the drives of their nature. And if they did decide to use the machine, then inevitably some would argue that humans are becoming somewhat obsolete and that we are in danger of handing over decision-making to machines, another form of Terminator scenario, with the resulting decisions not being properly "human" ones. Somehow, we would have to crystallise out those parts of human decision-making that we consider to be fundamentally human, and important to keep, and ensure that any decision is subject to the resultant human veto. We could blend nature and wisdom to suit.

This route towards machine-enabled wisdom would still take a lot of effort and debate to make it work. Some of the same objections face this approach as the genetic one, but if it is only optional and the links can be switched on and off, then it should be feasible, just about. We would have great difficulty in deciding what rules and processes to apply, and it would take some time to make it work, but nature could be eventually overruled by wisdom using an AI "wisdom machine" approach.

Would it be wise to do so? Although I think changing our genetics to bias us towards wisdom is unwise, I believe that using optional AI-based wisdom is both feasible and wise in itself. We need to improve the quality of human decision-making processes if future generations are to live peacefully, get the best out of their lives and not trash the planet. If we can do so without changing the fundamental nature of humanity, then all the better. We can keep our human nature, and be wise when we want to be. If we can do that, we can acknowledge our tendency to follow our nature, and overrule it as required. Sometimes nature will win, but only when we let it. Wisdom will one day triumph. But probably not in my lifetime.
