Are you excited about what Artificial Intelligence can do for humanity, or are you concerned?
I just finished reading “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution” by Walter Isaacson. At the end of the book he talks in more detail about Artificial Intelligence, which I found very interesting.
I very much like Isaacson’s writing style. I thoroughly enjoyed reading his exclusive biography of Steve Jobs, which gave me a real understanding of Jobs’s personality and achievements.
I enjoyed The Innovators because it covers both the technology and the people behind it.
Isaacson strikes a good balance in the endless debates about who did what, when and where, who “borrowed” ideas from whom, and who deserves how much credit for different things.
The book starts with Ada Lovelace, the daughter of the poet Lord Byron. Many regard Ada as the first programmer. She was the assistant of Charles Babbage, who primarily thought of his Analytical Engine as a machine for crunching numbers. Ada recognised that the device had applications beyond pure calculation: it could store and manipulate anything that could be expressed in symbols, including logic, words and music.
It was fascinating to find out that back in the 1850s and 1860s someone was pondering the idea that someday any piece of content – a book, music, a photograph, a video – could be expressed and stored in digital form.
Ada’s more controversial contention was that no computer, no matter how powerful, would ever truly be a “thinking” machine. She believed that such devices could perform tasks as instructed, but that only people could bring creativity.
Alan Turing, however, disagreed with Ada’s contention. He wanted to prove, as many Artificial Intelligence enthusiasts do, that machines would replicate and surpass the human brain, and so prove Ada Lovelace wrong. But after all these years, that goal has remained elusive.
Alan Turing was an English mathematician who, during the Second World War, worked for the Government Code and Cypher School at Bletchley Park, Britain’s codebreaking centre that produced Ultra intelligence.
He played a decisive role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many important engagements, including the Battle of the Atlantic, and in so doing helped win the war.
I hadn’t heard of most of the innovators mentioned in the book until Walter Isaacson gets to the early 1970s. From then on I started to recognise names such as Paul Allen and Bill Gates, Steve Jobs and Steve Wozniak, Steve Case, and Larry Page and Sergey Brin.
Will Artificial Intelligence Replace Humans or Become Their Partner?
When IBM’s Deep Blue, a chess-playing machine, beat the world champion Garry Kasparov in 1997, and then Watson, its natural-language question-answering computer, won at Jeopardy! against champions Brad Rutter and Ken Jennings in 2011, the entire Artificial Intelligence community broke out in discussion.
But IBM CEO Ginni Rometty said that it was not a real breakthrough of human-like artificial intelligence. Computers are only as intelligent as people program them to be.
“Computers today are brilliant idiots,” said the company’s director of research, John E. Kelly III, after the Deep Blue and Watson victories. “They have tremendous capacities for storing information and performing numerical calculations – far superior to those of any human. Yet, when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans.”
Computers can do some of the toughest tasks in the world (assessing billions of possible chess positions, finding correlations in hundreds of Wikipedia-size information repositories), but they cannot perform some of the tasks that seem most simple to us humans.
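The brute-force search mentioned above – assessing huge numbers of possible chess positions – rests on a simple recursive idea called minimax. Here is a minimal sketch in Python over a toy game tree; the tree, the scores, and the function are all invented for illustration and bear no relation to Deep Blue’s actual, far more elaborate engine.

```python
# Minimal minimax sketch of how a game engine scores positions by
# exhaustive search. A node is either a number (a leaf position's
# static score) or a list of child nodes (positions reachable in
# one move). This toy tree is purely illustrative.

def minimax(node, maximizing):
    """Return the best score achievable from this node."""
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # our turn: pick the best child; opponent's turn: assume the worst
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: our three candidate moves, each answered
# by two possible opponent replies.
tree = [[3, 12], [2, 4], [14, 5]]
print(minimax(tree, maximizing=True))       # prints 5
```

The opponent is assumed to pick the reply worst for us, so each of our moves is worth only its minimum reply (3, 2 and 5), and we choose the maximum of those. Real engines add depth limits, pruning and handcrafted evaluation functions, but the skeleton is the same.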
Fantastic advances have been made in speech-recognition technologies such as Siri and other systems, but as we discover whenever we use them, we still can’t have a meaningful conversation with a computer.
The latest advances have led some to predict the moment when computers will be not only smarter than humans but also able to design themselves to be smarter still, and will thus no longer need us mortals.
However, people have heard similar predictions since the 1950s. Real artificial intelligence may take a few more generations or even a few more centuries. Personally, I hope it never happens.
Key Information from The Innovators Book
“Human ingenuity will never devise any inventions more beautiful, nor more simple, nor more to the purpose than Nature does,” wrote Leonardo da Vinci, whose Vitruvian Man, according to Isaacson, became the ultimate symbol of the intersection of art and science.
Ada Lovelace declared: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” She believed that machines would not replace humans but instead become their partners, and that humans would bring originality and creativity to the relationship.
The strategy of combining computer and human capabilities, of creating a human-computer symbiosis, turned out to be more fruitful than the pursuit of machines that could think on their own.
“The goal is not to replicate human brains,” says John Kelly, the director of IBM Research. Echoing Licklider, he adds, “This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership.”
The Computer Working With Doctors On Cancer Treatment
IBM decided that the best use of Watson, the Jeopardy-playing computer, would be for it to collaborate with humans. The Watson system was fed more than 2 million pages from medical journals and 600,000 pieces of clinical evidence, and could search up to 1.5 million patient records.
When a doctor entered a patient’s symptoms and vital information, the computer provided a list of recommendations ranked in order of its confidence.
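The workflow just described – match a patient’s symptoms against an evidence base and rank candidate recommendations by confidence – can be sketched in a few lines of Python. Everything here is hypothetical: the condition names, the symptom profiles, and the overlap-based scoring rule are invented for illustration, whereas the real Watson system mined millions of pages of medical literature.

```python
# Hypothetical sketch of ranking candidate diagnoses by confidence,
# in the spirit of the Watson workflow described above. All data and
# the scoring rule are invented for illustration.

patient_symptoms = {"fatigue", "weight loss", "night sweats"}

# Toy "evidence base": candidate conditions and their typical symptoms.
evidence = {
    "Condition A": {"fatigue", "weight loss", "night sweats", "fever"},
    "Condition B": {"fatigue", "headache"},
    "Condition C": {"weight loss", "nausea"},
}

def confidence(symptoms, profile):
    """Score a condition by the fraction of its profile the patient shows."""
    return len(symptoms & profile) / len(profile)

# Rank all candidates, highest confidence first.
ranked = sorted(
    ((name, confidence(patient_symptoms, profile))
     for name, profile in evidence.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

With these made-up profiles, Condition A matches three of its four typical symptoms and tops the list. A real system would weight evidence quality and patient history rather than count symptom overlap, but the shape of the output – a confidence-ranked list for the doctor to review – is the same.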
The IBM team realised that the machine needed to interact with human doctors. David McQueeney, the vice president of software at IBM Research, said: “We aim to combine human talents, such as our intuition, with the strengths of a machine, such as its infinite breadth. That combination is magic because each offers a piece that the other one doesn’t have.”
The goal could be to find ways to optimise the collaboration between human and machine capabilities – to forge a partnership in which we let the machines do what they do best, and they let us do what we do best.
As long as we remain a creative species, “the machines will be more rational and analytic. People will provide judgment, intuition, empathy, a moral compass, and human creativity,” said IBM’s research director John Kelly.
“We possess an imagination that brings together things, facts, ideas, conceptions in new, original, endless, ever-varying combinations,” said Ada Lovelace.
In conclusion, if you are interested in the history of computers, the internet, technology and the digital world in which we live, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution is an excellent book to read.
I would love to hear your opinion or thoughts about this topic. Please leave your comments in the form below.
To Your Success!