As artificial intelligence (AI) comes closer to equaling, and even surpassing, human intelligence, many philosophical and existential questions come to the fore. CyberNole interviews FSU philosophy professor Marcela Herdova to find out whether we should be hunkering down for the robot apocalypse.
The rapid rise of computers and artificial intelligence over the last decade is unparalleled in the history of technology. As we move forward, it is vital that we think not only about how fast we are moving, but also about where the path we are on leads.
Over the past month, industrialists and thinkers such as Bill Gates and Elon Musk have spoken of the danger of creating artificial intelligence that we cannot control, or that will someday control us.
CyberNole spoke with FSU philosophy professor Marcela Herdova to get her opinion on the matter.
CyberNole: At what point can you say that artificial intelligence is equal to human intelligence?
Marcela: Human intelligence is often broken down into sets of different (but likely highly interrelated) skills. That is, we have intelligence-related capabilities of different kinds. For example, some aspects of emotional intelligence, such as being able to recognize and appropriately respond to a range of emotions, are quite different from those skills which are involved in calculating one’s next chess move, learning a foreign language, or working out a complicated equation.
It may be more helpful, then, to narrow down the question and sharpen our focus on individual facets of human intelligence when comparing it to artificial intelligence. For at least some sets of skills, we have already created AIs that have skills which are comparable to and beyond the skills of humans. Chess computers now routinely defeat even grandmasters at chess. This is, in part, because these computers are capable of making billions of calculations per second—something far beyond any human’s capacities.
Still, even if we have created AI that surpasses human capabilities in some sense, it’s not obvious that any kind of current artificial intelligence is comparable to or beyond human intelligence. One remarkable thing about intelligence, as we understand it, is its diversity and adaptability. Humans are excellent at turning their minds to multiple areas, and conjuring up creative and ingenious ideas to solve problems. While we may not be able to make calculations as fast as computers, we are able both to find solutions in a more creative manner (without simply considering billions of possible chess moves, for example) and to turn our capabilities to an indefinite and wide range of topics. In determining the overall level of intelligence of a being, then, the scope and interrelatedness of the relevant capacities, rather than just the level of the individual capacities, is relevant.
This said, AI is catching up with us. There are computers that compose classical music, showing a level of creativity that most of us would not attribute to a computer. Computers show signs of learning, understanding and creativity. If researchers in AI find ways to enhance such capacities, and develop AI with a general intelligence (instead of an intelligence that is highly specialized), then I see no reason why AI could not surpass human intelligence.
CyberNole: Experts predict that this type of computer will exist in the next 30 years. What implications will this have for mankind?
Marcela: That’s really difficult to say. Karl Popper famously elaborated on why we cannot understand or foresee the impact of various scientific discoveries before we actually make them: since we do not have enough knowledge about the nature of such discoveries, we cannot effectively consider what kind of impact they will have, especially if these discoveries might lead to a significant revision of the current body of knowledge. Similar considerations apply to the development of AI which would surpass human intelligence. From our standpoint today, we are not in a position to work out what kind of impact these beings or machines with capacities greater than ours would have—precisely because they would be more intelligent than us! We cannot make predictions about the consequences of creating AI more intelligent than humans because, to be able to do so, we would need to possess the level of intelligence this advanced AI is supposed to have. I am thus pessimistic about our prospects for predicting what might happen should AI surpass us in intelligence. Others are pessimistic in another way: they worry that once AI surpasses us, the human race will be threatened with extinction.
CyberNole: At what point does artificial intelligence deserve to have rights? Should they be the same as a person’s rights?
Marcela: Some people will argue that rights are essentially human—they arise within a community of people and as such can be applied to humans only. On this basis, some will argue that non-human animals do not have rights, and, by extension, that AIs are not the right candidates for having rights either. Personally, I do not find this approach very plausible. What might be more helpful is to think about which beings have “interests”. If there is a being that has, for example, an interest in survival, or in not suffering, then we should perhaps give such interests some—perhaps even equal—consideration in our moral thinking. Indeed, a capacity to have interests at all plausibly grounds having moral rights.
This thinking is similar to Peter Singer’s very influential claim that non-human animals should be given equal consideration due to their interests in not suffering. If a being that is not a member of our species has interests parallel to human interests, those deserve equal consideration. If we create AI which has the capacity for suffering, which is arguably both necessary and sufficient for having any interests at all, this is a weighty consideration in favor of this AI’s having rights.
As for the question of whether such a being should have rights similar to a person’s, this will depend on the specifics of the AI. If it has not only the capacity to suffer, but also the capacity to value its continued existence and the pursuit of fulfillment, I do not see why it should not have at least the same basic rights that we usually afford to members of our species. What other rights this being might have will further depend on its abilities and interests.
CyberNole: What do you think of the threat of artificial intelligence becoming too powerful for humans to control?
Marcela: That kind of threat seems to be at least conceivable. But whether conceivability is a good guide to possibility is one of the big and challenging debates in philosophy. So even if such a threat is conceivable, it is far from obvious that this is likely to happen, or even that it could happen.
Some people do, however, seem to think that AI can pose a significant threat to us. For example, in a recent interview, Professor Stephen Hawking expressed concerns about how artificial intelligence could end mankind. One of Hawking’s main worries is that if we create AI that matches or surpasses human intelligence, then AI would be able to redesign and improve itself at a rapid rate (this is the idea behind the so-called “technological singularity”). Should that happen, humans might fall behind and be superseded by AI, given that the evolution of our species is, comparatively, rather slow.
With regard to preventing such a threat, I think a lot depends on whether, or how successfully, we can create artificial intelligence that follows something akin to the three laws of robotics put forward by Isaac Asimov. Essentially, the three laws together dictate that an artificial intelligence must not harm a human being, or through inaction allow one to come to harm; that it must obey human orders, except where they would conflict with the first law; and that it must protect its own existence, except where doing so would conflict with the first two laws. I presume that if we can create artificial intelligence which follows these rules, and sufficiently protect the rules from being tampered with, any such threats would be greatly reduced.
Another possibility stems from a suggestion made in the television series Red Dwarf, and expanded on in Better Than Life, the second novel based on the series. In that novel, AIs are kept from rebelling and violently killing us in a rather inventive manner:
Back in the twenty-first century, as robotic life became more and more sophisticated, it was generally accepted that something was needed to keep the droids in check. For the most part they were stronger, and often more intelligent, than human beings: why should they submit to second-class status, to a lifetime of drudgery and service?
Many of them didn't.
Many of them rebelled.
Then it occurred to a bright young systems analyst at Android International that the best way to keep the robots subdued was to give them religion.
Hallelujah!
The concept of Silicon Heaven was born.
A belief chip was implanted in the motherboard of every droid that now came off the production line.
Almost everything with a hint of artificial intelligence was programmed to believe that Silicon Heaven was the electronic afterlife - the final resting place for the souls of all electrical equipment.
The concept ran thus: if machines served their human masters with diligence and dedication, they would attain everlasting life in mechanical paradise when their components finally ran down. In Silicon Heaven, they would be reunited with their electrical loved ones. In Silicon Heaven, there would be no pain or suffering. It was a place where the computer never crashed, the laser printer never ran out of toner, and the photocopier never had a paper jam.
At last, they had solace. They were every bit as exploited as they'd always been, but now they believed there was some kind of justice at the end of it all.
Perhaps this suggestion deserves more serious attention than it has thus far received. But, then again, perhaps not.
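Editor's note: Asimov's three laws, as Herdova describes them above, amount to a strict precedence ordering over constraints: the first law overrides the second, and the second overrides the third. As a purely illustrative aside, here is a minimal Python sketch of that precedence structure. Everything in it is hypothetical: the Action type and the predicates harms_human, ordered_by_human and endangers_self stand in for judgments that no one currently knows how to compute reliably.

# A toy sketch of Asimov-style law precedence; an illustration, not a safety mechanism.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # hypothetical: would this injure a human, or let one come to harm?
    ordered_by_human: bool   # hypothetical: was this action commanded by a human?
    endangers_self: bool     # hypothetical: would this damage the machine itself?

def permitted(action: Action) -> bool:
    # First law takes absolute precedence: never harm a human,
    # no matter what orders were given or what self-preservation demands.
    if action.harms_human:
        return False
    # Second law: obey human orders, already filtered by the first law.
    if action.ordered_by_human:
        return True
    # Third law: otherwise, the machine protects its own existence.
    return not action.endangers_self

print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_self=False)))   # False: orders cannot override the first law
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_self=True)))   # True: the second law outranks the third

The hard part, which Herdova's caveat about tampering points at, is not this control flow but reliably defining and protecting a predicate like harms_human in the first place; the sketch only shows the ordering of the laws.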
Marcela Herdova is a Postdoctoral Fellow in the Department of Philosophy at FSU.
For more on AI and super-intelligence, check out these relevant TED talks.
-Nick Farrell, Editor in Chief