
Opinion / Columnist

Mutambara speaks on artificial intelligence

31 Oct 2019 at 16:47hrs
This is exciting stuff (the military robot video below). In this demonstration, the robot is programmed not to hurt humans; it is coded that way in this case. It has non-human targets that it must attack, so no matter how much provocation it gets from human beings, it will not attack a human. That is how it is programmed. The demonstration shows the effectiveness and robustness of the product.

Now, in a war between two groups of humans, the robot will be programmed to attack one group while sparing its own side. This could be achieved by identifying the friendly team using sensors mounted on the bodies of the friendly soldiers.
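The friend-or-foe identification described above can be sketched in simplified form. This is only an illustrative sketch, not any real targeting system; the beacon IDs, the `is_friendly` check, and the decision function are all hypothetical assumptions for the sake of the example.

```python
# Illustrative sketch of friend-or-foe identification (all names hypothetical):
# each friendly soldier carries a body-mounted transponder beacon, and the
# robot holds fire on any target broadcasting a registered friendly ID.

FRIENDLY_IDS = {"alpha-01", "alpha-02", "alpha-03"}  # registered friendly beacons

def is_friendly(beacon_id):
    """A detected target is treated as friendly if its beacon ID is registered."""
    return beacon_id in FRIENDLY_IDS

def engagement_decision(beacon_id):
    """Return 'hold fire' for a friendly beacon, 'engage' otherwise.

    A target with no beacon at all (beacon_id=None) is treated as enemy,
    which is why false negatives in the sensors would be so dangerous.
    """
    if beacon_id is not None and is_friendly(beacon_id):
        return "hold fire"
    return "engage"
```

A sketch like this also makes the ethical stakes concrete: a dead battery or jammed sensor turns a friendly soldier into a target, which is one reason deployment raises such serious mitigation questions.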

Alternatively, an army of robots (no boots on the ground!) can be deployed against a human enemy force. Again, the robots will be programmed not to shoot each other, but only to attack the enemy forces. The robots will also communicate among themselves.

All this is work in progress. There are lots of ethical and legal issues to resolve before the deployment of the product. But with advances in AI, the robots are getting very sophisticated, robust and effective. That's high science and technology for you. It is a double-edged sword. You can use it to build, develop, empower and protect; or you can use it to disempower, enslave and destroy!

Although the video demonstrates AI in military applications, the technology has potential impact and uses in every sector without exception. However, there are fundamental challenges in the field of Robotics and AI - moral, ethical and legal. There are also significant risks and dangers. I guess it comes with the territory of high science and technology. It is fraught with great hazards. There is a need for serious mitigation, in addition to the crafting of new laws, treaties and agreements, both within countries and among nations.

There is even greater cause for concern in the field of Artificial General Intelligence (AGI). This is a field of study where we build machines that can do any task, not just one particular assignment. Here we are talking about human-like intelligence. A human being has general intelligence about many things, not just one task. For example, the robot in the video only knows one mandate: shooting at non-human targets while avoiding harm to human beings. We want more than such capability. We want a robot that can do any task as a human being does. We want generalised multi-task capability in a machine. That is AGI. It is much harder than AI.

Furthermore, researchers are working on constructing a robot that has a mind of its own, by which it makes independent decisions. The objective is to create a robot that has the capacity to change its mind while executing a task. This is called building intentionality into a machine: constructing a robot which decides whether to follow or defy the computer program or programmer. This has not yet been achieved, but it is work in earnest progress.

Now you can imagine what will happen when the goal of AGI is achieved and intelligent machines also have intentionality - the risks and dangers will go exponential!

AI and AGI constitute a brave new world. Add to what I have outlined another effort (work in serious progress): building machines that manifest human emotions (love, guilt, joy, embarrassment, etc.) and consciousness. This is not yet done, but not impossible. It will all compound the complexity of the challenges and risks. In fact, it will alter the concept of what it means to be alive. The question of what a human being is will have to be deconstructed. Can a human being be created using AGI? If so, what does it all mean? Those are the imponderables and conundrums. However, there are also massive opportunities. High technology is a source of both progress and vulnerability. You cannot have one without the other, unfortunately.

To further compound the issues at hand, there is also the matter of artificial superintelligence. This is where a machine (a super-intelligent robot or agent) is created, which is smarter than the smartest human being. When that happens, obviously the robots have to be in charge. Humans will play second fiddle to the robots. Anyway, we are not there, YET. You can smile, for now.

Which of our institutions are best placed to drive the thinking and training needed? First and foremost, there must be buy-in at the highest level of public policy (government). Then well-funded university departments, think tanks, and bespoke partnership institutions (industry, academia, government, civil society).

Some might argue that AGI and artificial superintelligence will never happen. Well, you must always keep an open mind. Do not be afraid of (or cast aspersions about) things that you do not know or understand. Also, never say something cannot be done on the basis of your limited knowledge base or constricted intellectual exposure. Keep an open mind.

Here are anecdotes of incredibly wise and knowledgeable people in history who got things terribly wrong by using their limited and current prisms (and frameworks) as a basis of judgement:

1) In 1915, a famous, well-decorated professor declared that: ‘All useful scientific knowledge has been discovered. There are no more new things to uncover.' Of course, he was wrong.

2) When sound in movies was introduced (before that there were only silent movies), a well-respected movie producer declared that ‘It will not work. Why would someone want to listen to disturbing sounds when they are busy watching an interesting scene? People are used to watching silent movies, and they love it that way!' Well, history has proved him slightly wrong!

3) When PCs were invented, they were quite big (the size of a room) and awfully expensive. A well-respected business guru declared that: ‘The global market for the PC will be 150. Nothing more. There is no future for this product.' Well, what can I say about this prediction?

The moral of these anecdotes is this: never use your current but limited experience or knowledge base to judge what is possible or impossible in the future. It is vital to keep an open mind, even when the current evidence is not compelling.

All the questions, concerns and fears raised about AI and AGI are meritorious. There are significant issues and implications, more so for Africans. There are neither easy answers nor microwave prescriptions. Once the robot is smarter than the smartest person, it is game over for the humans. However, we are not there yet.

One further matter is the exciting field of human augmentation, what we call the development of Human 2.0. This is where we combine AGI or AI with human capability to produce a new capability. As indicated earlier, all these developments are characterised by severe moral, ethical and legal dilemmas. There are immense risks. Technology is a source of both strengths and dangers.

The key take-home message for the African continent is that technology is just a tool. The challenge for us as Africans is: ‘How can we make high technology work for us (solve our problems and achieve our collective socio-economic ambitions) after thoroughly understanding both the potential benefits and vulnerabilities?' Africans must be owners, drivers, creators and innovators of technology, and not just consumers, objects, victims or noisy bystanders.

We must step up to the AI and AGI plate.

Source - Arthur Mutambara
All articles and letters published on Bulawayo24 have been independently written by members of Bulawayo24's community. The views of users published on Bulawayo24 are therefore their own and do not necessarily represent the views of Bulawayo24. Bulawayo24 editors also reserve the right to edit or delete any and all comments received.