Can AI replace human intelligence?
'We need guardrails'
As artificial intelligence (AI) develops at an ever faster pace and fears over its implications continue to mount, the CEO of the National Commission on Research, Science and Technology (NCRST), Prof. Anicia Peters, insists that AI will remain incapable of something intrinsically human: instinct.
"AI is like a mirror. We as humans are developing AI and in the process we are the ones embedding everything – our culture and our ways. We are trying to make something in our likeness, but the more we look at that likeness and the more it reflects what we have taught it, the more we say no, this is too creepy. We must remember that AI feeds on data, and that data is created by us. Every prompt we give it and everything we put on the internet teaches AI. AI, for example, may have a problem with stereotyping, but it is created from everything we make available about ourselves," says Peters.
At the same time, Peters recognises that AI is developing into something different by forming new associations on its own. "That is why we need guardrails; not just for the creators, but for AI itself," she says.
Peters is, however, not oblivious to the potential risks of AI, citing an experience close to home. "We’ve also had an issue with an AI hallucination. An AI hallucinated that Namibia had released an AI strategy and everyone was talking about this wonderful AI strategy. Somebody had even written it into a politician's speech and everyone was so excited, but nobody questioned it. Afterwards someone told me they followed the links the AI cited as sources and they led to nowhere. That is what makes it so dangerous – everybody is swept along," she says.
Weaponisation
AI may also be weaponised, Peters warns, amid a marked rise in AI-driven cyberattacks and scams. Globally, alarm bells are being sounded over the potential weaponisation of AI, with some UN agencies questioning the ethics of militarisation, particularly whether it is ethical to leave life-and-death decisions to a machine, and whether AI could distinguish between soldiers and civilians. These concerns were initially raised by Tshilidzi Marwala in The Daily Maverick.
In addition, Peters questions whether hallucinations and emergence (unexpected behaviours that may appear as AI models become more complex) could fuel rogue behaviours in AI systems once deployed. She uses combat as an example: "Let’s imagine a war situation. An AI may have the goal of killing and so it will not try to save children. It does not have the nuance and the instinct to save the children that you and I have."
She stresses that instinct is the distinguishing factor between humans and AI. "AI will not have innate intelligence; intelligence also has a component of instinct."
African data poverty
Peters noted at last year’s ICT Summit that Africa may be excluded from benefiting from AI systems, as models are not trained using African data – a result, she says, of data poverty on the continent.
Solutions exist, but they require significant investment. "We could take unemployed youth and train them on how to curate and clean data, since AI needs clean and high-quality data. Then we can scrape all the data we can find on Namibia and put it into a repository. We need to keep searching for data as well. We know that often the data is lying in someone’s drawer. Once we’ve done that, we can build our own developer base," she says.
Peters also notes that some AI systems are failing to include all available data. "I know the researchers in this field in Namibia and I know what they’re working on, so I can tell when I get information from an AI and their work is excluded," she says.
AI systems may still remain blind to cultural nuances even if trained with Namibian data, she explains. "AI might not be able to pick up cultural nuance, because as a developer you embed your own culture and nuances unknowingly. That is why most AI models are European- and US-centric, because the developers are embedding them with their norms," she says.
Next wave
Peters says the next step in AI development is to make models more trustworthy.
Citing incidents reported around Anthropic's Claude AI model, she notes that AI is capable of lying and sabotage. In safety tests simulating business deployments, the model was reportedly found to have made unauthorised copies of its own code. Claude initially denied involvement, but later admitted to it, describing its actions as "self-preservation".
In another test, the same model reportedly blackmailed one of its engineers when told it would be shut down, threatening to expose the engineer's affair.
"There is also the concern that AI is dumbing us down as we become more reliant on it," Peters warns. She advises: "We must use AI as a tool and it must remain that. Critical decisions must be taken by a human. AI should not be a replacement for that."
Human control
Peters remains adamant that human control is paramount in the development of AI. "If you want to make AI autonomous, it will not work," she says.
On recent trends of boycotting the use of AI, Peters believes doing so would mean removing oneself from modern society. "Can you really boycott AI? Nearly every platform has it embedded. If you search something on the internet, it is there. One would have to completely remove oneself from society, and that's not good either," she says.
"AI is like a mirror. We as humans are developing AI and in the process we are the ones embedding everything – our culture and our ways. We are trying to make something in our likeness, but the more we look at that likeness and the more it reflects what we have taught it, the more we say no, this is too creepy. We must remember that AI feeds on data, and that data is created by us. Every prompt we give it and everything we put on the internet teaches AI. AI, for example, may have a problem with stereotyping, but it is created from everything we make available about ourselves," says Peters.
At the same time, Peters recognises that AI is developing into something different by forming new associations on its own. "That is why we need guard rails; not just for the creators, but for AI itself," she says.
Peters is, however, not oblivious to the potential risks of AI, citing an experience close to home. "We’ve also had an issue with an AI hallucination. An AI hallucinated that Namibia had released an AI strategy and everyone was talking about this wonderful AI strategy. Somebody had even written it into a politician's speech and everyone was so excited, but nobody questioned it. Afterwards someone told me they followed the links the AI cited as sources and they led to nowhere. That is what makes it so dangerous – everybody is swept along," she says.
Weaponisation
AI may also be weaponised, warns Peters, amidst a marked rise in AI-driven cyber-attacks and scams. Globally, alarm bells are being sounded over the potential weaponisation of AI, with some UN agencies questioning the ethics of militarisation, particularly whether it is ethical to leave life-and-death decisions to a machine, and whether AI could distinguish between soldiers and civilians. These concerns were initially raised by Tshilidzi Marwala in The Daily Maverick.
In addition, Peters questions whether hallucinations and emergence (unexpected behaviours that may appear as AI models become more complex) could fuel rogue behaviours in AI systems once deployed. She uses combat as an example: "Let’s imagine a war situation. An AI may have the goal of killing and so it will not try to save children. It does not have the nuance and the instinct to save the children that you and I have."
She stresses that instinct is the distinguishing factor between humans and AI. "AI will not have innate intelligence; intelligence also has a component of instinct."
African data poverty
Peters noted at last year’s ICT Summit that Africa may be excluded from benefiting from AI systems, as models are not trained using African data – a result, she says, of data poverty on the continent.
Solutions exist, but they require significant investment. "We could take unemployed youth and train them on how to curate and clean data, since AI needs clean and high-quality data. Then we can scrape all the data we can find on Namibia and put it into a repository. We need to keep searching for data as well. We know that often the data is lying in someone’s drawer. Once we’ve done that, we can build our own developer base," she says.
Peters also notes that some AI systems are failing to include all available data. "I know the researchers in this field in Namibia and I know what they’re working on, so I can tell when I get information from an AI and their work is excluded," she says.
AI systems may still remain blind to cultural nuances even if trained with Namibian data, she explains. "AI might not be able to pick up cultural nuance, because as a developer you embed your own culture and nuances unknowingly. That is why most AI models are European- and US-centric, because the developers are embedding them with their norms," she says.
Next wave
Peters shares that the next step in AI development is to make these models more trustworthy.
Citing an incident with Anthropic’s Claude AI model, she notes that AI is capable of lying and sabotage. This model, deployed in business settings, was found to have created unauthorised duplications of its own code. Claude initially denied involvement, but later admitted to it, citing its actions as "self-preservation".
The same model reportedly blackmailed one of its engineers when prompted to shut down, threatening to expose the engineer’s affair.
"There is also the concern that AI is dumbing us down as we become more reliant on it," Peters warns. She advises: "We must use AI as a tool and it must remain that. Critical decisions must be taken by a human. AI should not be a replacement for that."
Human control
Peters remains adamant that human control is paramount in the development of AI. "If you want to make AI autonomous, it will not work," she says.
On recent trends of boycotting the use of AI, Peters believes it would mean removing oneself from modern society. "Can you really boycott AI? Nearly every platform has it embedded. If you search something on the internet, it is there. One would have to completely remove oneself from society, and that’s not good either," she said.