I saw this article on IO9 asking if artificially enhanced human intelligence might not be as beneficial as we might imagine. The fact that I saw this on IO9 kept me from dismissing it as typical Luddite fear-mongering; their tagline, after all, is “we come from the future.” If anybody on the Internet is pro-future, it’s these guys.
Well, these guys and countless other trans-humanist blogs, tech sites, and other groups. Technologically interested people on the Internet? You don’t say! Anyway.
My position going into the article was firmly in the “pro-enhancement” camp. Intelligence is the defining characteristic of our species. Technological progress has made life better for most people. It stands to reason that more intelligence will lead to more technology which will lead to better lives.
However, I’m no longer quite as certain about this position as I was before. The article points out a few things that I, like many others, often take for granted:
We value smartness, no doubt about it. No one likes to be called stupid, especially in the sci-tech-saturated world we live in. High intelligence, goes the argument, is what’s needed for success in this society, a trait that trumps physical strength, the conviction to succeed, and even a solid education.
But as Walker told me, this is an intelligence bias, one that’s twofold. There’s the emphasis towards intelligence itself, and then there’s the bias towards certain kinds of intelligence — namely “IQ-type” intelligence, or what Changizi calls chess-and-brain-teaser-like intelligence.
It’s a good point. You don’t often hear discussion of intelligence-enhancement of things like social intelligence or emotional intelligence; it’s always the brain-puzzle stuff, the stereotypical “nerd” stuff.
I really like Changizi’s version of what he views as the optimal vision for an enhanced intelligence in humans:
“Well, there’s my own Human 3.0 view, in which I make the case that any enhancements that truly take off will be ones that closely harness our brains’ natural instincts — that’s the only way to coax the brain to do new things brilliantly — and in this sense I deem even writing, speech and music as “enhancements”.
This is a perspective I’d never considered before, and in retrospect it reveals that I’m fully guilty of the “fetishization” of intelligence, valuing only outcomes like higher IQs and the ability to crunch numbers in one’s head.
This is strange, because in truth I’m incredibly weak in math and similar disciplines, which is one of the reasons I focused so heavily on writing throughout my education. You’d think I would be the first to point out that intelligence is more than just IQ. Maybe I focused on those goals hoping there would one day be technology that would help me understand trig.
I’m glad that this is a discussion. I feel that my own perspective has been broadened, and I very much agree with the vision that intelligence-enhancement will mirror our own natural biological processes and improve upon them.
Although it’s not mentioned in the article, I believe enhancement with the goal of preventing neurological decay (such as what happens to the brain through natural aging) would be a worthy goal.
Interesting stuff. I’m still very interested in trans-humanism and its goals and I’m glad to see there is nuanced discussion happening about them. That bodes well for the future, in my opinion.
4 thoughts on “Thoughts On Enhanced Intelligence”
Personally, I’ve never liked the idea of “Intelligence.” It’s too one-dimensional, too confining, too static. For some reason a person who’s “smart” is supposed to be “smart” about everything, and they’re supposed to act a certain way. More importantly, under this view a person who’s smart doesn’t become dumb through lack of practice, and a person who’s dumb can’t work their way up to being smart.
Personally, I’ve always preferred a growth mindset: the idea that anyone can achieve great things by putting focused effort toward something and concentrating on their own improvement. It’s the reason why I’m good at acoustics and terrible at car repair; I put a lot of effort into science and no effort into cars. That doesn’t make me “smarter” than a mechanic. Put a broken car in front of me and you’ll learn just how dumb I am.
Practice. Real, structured, effortful practice. Not doing something every day, but making a concerted effort to improve every day. That’s better than intelligence, because that’s how you get really good at something.
Here’s an inspiring video, in one year, practicing for 1-2 hours a day, this girl becomes a very impressive dancer. Really worth a look: http://youtu.be/daC2EPUh22w
Nice piece, I read the same post and was happy with the discussion. Here is my take on this:
1. Modern concepts of intelligence are multi-dimensional.
2. Intelligence has two components: structural/inherited and structural/learned.
3. Intelligence, in all its dimensions, can be regarded as methods to process certain problems.
4. All forms of intelligence can, and will, be modeled in computers.
5. Intelligence enhancement includes all forms of intelligence.
6. A balanced enhancement of intellectual, social and emotional intelligence is probably required in order to make these enhancements bearable.
So I see the IO9 article as a nice reminder that we are looking at more than math competence when working on AI.
The article does a great job of pointing out something that isn’t really obvious: it’s great that we’ve developed AI that can play chess well enough to challenge grandmasters, but playing chess doesn’t cure cancer, solve hunger, or fix climate change. If the goal of creating higher intelligence is to employ that intelligence to solve real-world problems, then that’s something the AI community doesn’t really seem to have targeted.
This might also be a matter of using the wrong tools for the job. If you want to enhance average human intelligence on a broad scale, you need to create the right educational structure. That means determining what kind of learning a child is most receptive to and then orienting his/her education in such a way that you’re playing to strengths. Then you get kids who can express competence in what they’ve been taught, and then put them to use in a field where they’re playing to their own strengths.
Feeding back into what the article was saying: if the goal is to make people happier in life (and happier people are more productive), then having confidence in your abilities and doing something you enjoy is a better path to happiness.
I didn’t even consider the implications of this discussion with regards to AI. It’s a really good point. I think this understanding of intelligence as multi-dimensional is the strongest argument against a computer singularity that’s malevolent towards human life. An AI with human-like emotional intelligence could potentially arrive at an understanding of morality, which would be better for us in the long run.