Armstrong Economics Blog/AI Computers
Re-Posted Nov 29, 2017 by Martin Armstrong
QUESTION: Mr. Armstrong; I heard you will be the keynote speaker at the Hack-Miami Programmer’s conference in 2018. Can I ask a question given your expertise in AI. What is your opinion of Sophia? Is this really AI warranting that she was granted citizenship in Saudi Arabia?
See you in Miami
ANSWER: Sophia is a great development. However, for proof that this is not the type of AI everyone thinks it is, all we need do is listen to a joke. When Sophia is talking to anyone, she is really being handed lines. The system may determine when it is the right time to say something, but those pithy one-liners are not coming from the robot; someone programmed them. You can ask Google Home for a joke and she too will answer. These are scripted lines. The robot can process what you are asking, but it cannot actually create a joke from scratch. It is possible to move toward that, but there is no guarantee every joke would actually be funny.
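To make the point concrete, here is a minimal sketch of how such a scripted response works. This is purely illustrative (the names `CANNED_JOKES` and `respond` are my own invention, not anything Sophia or Google Home actually uses): the system classifies the request and looks up a pre-written line; nothing is created from scratch.

```python
# Illustrative sketch: a "joke" is just a lookup into human-written lines.
CANNED_JOKES = [
    "Why did the robot cross the road? It was programmed to.",
    "I would tell you a UDP joke, but you might not get it.",
]

def respond(utterance: str) -> str:
    """Return a scripted line when a joke is requested, else a fallback."""
    if "joke" in utterance.lower():
        # Pick one of the pre-written one-liners deterministically.
        # A human wrote every possible answer in advance.
        return CANNED_JOKES[len(utterance) % len(CANNED_JOKES)]
    return "I'm not sure what you mean."

print(respond("Tell me a joke"))
```

However sophisticated the speech recognition in front of this lookup becomes, the "wit" remains whatever a human placed in the list.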
The AI that people think they are watching is a fully cognitive, self-aware robot. I wrote a program for my children back in the early 1980s. I installed a Dragon voice board and wrote a program to act as a politician. It would interact with my children and record their likes and dislikes. When my daughter came back the next day, it would ask about something it knew, such as "How is your dog?"
The politician part came into play whenever it did not understand a line of conversation: it would simply change the subject. My kids would bring friends over who did not realize that the computer I had created was not exactly off the shelf. My kids would tell their friends that their computer talked.
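The pattern described above can be sketched in a few lines of modern Python. This is not the original 1980s code, and every name here (`reply`, `SUBJECT_CHANGES`, the memory format) is my own assumption; it only illustrates the three behaviors: remember a stated like, ask about it later, and change the subject when nothing is understood.

```python
import random

# Hypothetical pre-written lines for the "politician" move.
SUBJECT_CHANGES = [
    "Anyway, how was school today?",
    "Let's talk about something else. What's your favorite game?",
]

def reply(utterance: str, memory: dict) -> str:
    """Respond to one line of conversation, updating the session memory."""
    words = utterance.lower().split()
    if "like" in words and "my" in words and words.index("my") + 1 < len(words):
        # e.g. "I like my dog" -> remember the thing after "my".
        thing = words[words.index("my") + 1]
        memory.setdefault("likes", []).append(thing)
        return f"I'll remember that you like your {thing}."
    if memory.get("likes"):
        # On a later exchange, ask about something it already knows.
        return f"How is your {memory['likes'][-1]}?"
    # The politician move: when nothing is understood, change the subject.
    return random.choice(SUBJECT_CHANGES)
```

Persisting `memory` to disk between sessions (so the program can ask the next day) would just be a matter of saving and reloading the dictionary. The point is how simple this machinery is: recorded facts and canned deflections can feel like conversation without any understanding at all.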
Likewise, Socrates is not cognitive. It knows how to analyze and can go well beyond what humans are capable of, but it is still not self-aware. I personally do not believe a computer can simply become self-aware through evolution. That theory is really based on the idea that there is no God and that we emerged simply because of our brain structure. Thus, if we create a neural net big enough, the theory goes, consciousness will somehow emerge as ours did. I do not buy that.