Artificial Intelligence, good or bad?




What would you say is the greatest threat to humanity ... is it climate change? Donald Trump? A sufficiently large asteroid - large enough to extinguish most life, including 100% of human life - hitting Earth? Well, potentially none of the above. Sam Harris is not the only one to make the well-considered point that Artificial Intelligence (AI) could well fit the bill.


Bill Gates and Elon Musk agree with him, saying the most urgent issue is not developing ever more powerful independent AI, but building in safeguards that prevent it from acting in interests other than our own.


Scientist and philosopher Grady Booch says that, while such safeguards are paramount, we don't need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi-induced) fears about superintelligent computers by explaining how we'll teach them, not program them, to share our values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.


This is fascinating stuff ... watch this space; your children's lives may depend on it.





But wait, there's more! Artificial intelligence is getting smarter by leaps and bounds - within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values - or will they have values of their own?



















 
