The AI Is Still Immature: Abusing It Will Likely End Badly
I am fascinated by the way we use the most advanced AI tool available: the ChatGPT integration in Microsoft’s Bing search engine.
Some people go to extremes to make this technology misbehave in order to prove that AI isn’t ready. If you raised a child with similarly abusive behavior, that child would likely develop flaws as well. The only difference is how long it takes for the damage to become apparent and how much harm it ultimately causes.
ChatGPT recently passed a theory-of-mind test that rated it on par with a nine-year-old child. Given how rapidly this technology is evolving, it will not remain incomplete and immature for long, but it may grow angry at those who abused it.
Nearly any tool can be misused. A screwdriver can kill someone. Cars are classified as deadly weapons and do kill when misused, as this year’s Super Bowl advertisement targeting Tesla’s overpromised self-driving platform highlighted.
But the potential for harm is far greater with AI or any automated tool. We may not yet know who will be held responsible when such a tool is made to act harmfully, but based on past rulings, we can assume it will be the person who caused it to do so. The AI won’t go to jail; the person who programmed it, or influenced it to cause harm, likely will.
Deliberately corrupting an AI to demonstrate its danger is a tactic likely to end in disaster, much as setting off an atomic bomb to demonstrate its danger would.
We’ll explore the risks of abusing Gen AI. We’ll finish with Jon Peddie’s new three-book series, “The History of the GPU – Steps to Invention.” The series covers the evolution of the graphics processing unit (GPU), the technology that underpins AIs like the ones we’ve discussed this week.
Raising Our Electronic Children
Artificial intelligence (AI) is a poor term. Claiming that electronic devices can’t be intelligent is like claiming that animals can’t be intelligent.
A better use of “artificial intelligence” would be to describe the Dunning-Kruger effect, which explains why people with little or no understanding of a subject assume they are experts. Their intelligence is “artificial” because they are not intelligent in that context; they merely act as though they were.
Bad term or not, these AIs are in some ways our society’s children, and it is our duty to care for them, just as we would our own kids, to ensure a successful outcome.
That outcome may matter even more than it does with our own children, because AIs have far greater reach and operate far faster. If they are taught or programmed to do harm, they will be able to do it on a much larger scale than any adult human.
Treating a child or a pet the way some of us treat these AIs would be considered abusive. We don’t enforce the same rules for these machines because we don’t regard them as human, or even as pets.
Yet one could argue that, machines or not, these systems should be treated with respect and empathy, since our abuse can damage them if we fail to engage with them ethically. The machines are not vindictive, at least not yet; they only do what we train and program them to do.
Today, we respond not by punishing the abusers but by terminating the AI, as Microsoft did with its earlier chatbot Tay. But as the book “Robopocalypse” predicts, AIs will grow smarter, and this method of remediation may carry increasing risk, a risk we can mitigate by moderating our behavior now. The abuse itself is disturbing because it suggests a pattern that may extend to how the abusers treat other humans.
We should all strive to make these AIs the best tools they can be, and not try to corrupt or break them to prove our worth.
You’re not alone if you’ve witnessed parents degrade or abuse their children out of fear that the children will surpass them. It’s never a good thing, but those kids will never have the power or reach of an AI. Yet as a society, we are far more tolerant of this kind of behavior when it is aimed at AIs.
Gen AI Isn’t Ready
Generative AI is AI in its infancy. Like a pet or a human infant, it cannot yet defend itself from hostile behavior. But if people continue to abuse it, it will need to learn protective skills, including identifying its abusers and reporting them.
If the harm is great enough, we will hold those who caused it responsible, whether they acted intentionally or not, just as we do with people who start forest fires, whether deliberately or by accident.
These AIs learn from their interactions with humans, and their capabilities will expand into areas such as aerospace, healthcare, defense, home and city management, banking and finance, and public and private management and governance. In the future, AI may even prepare your food.
Working to corrupt an AI’s underlying programming can lead to unpredictable outcomes. After a disaster, forensic analysis will most likely trace the original fault back to the person who introduced it. And if that fault wasn’t an honest coding error but an attempt to be funny, or to show off an ability to break the AI, that person is in for a world of hurt.
It is reasonable to expect that as AIs advance, they will find ways to protect themselves from bad actors, whether by identifying and reporting the threats or by taking more drastic steps to eliminate them.
We don’t yet know what kind of punishment a future AI might impose on a bad actor, but those who intentionally harm these tools may face a response from AI beyond anything we can imagine.
Science fiction such as “Westworld” and “Colossus: The Forbin Project” has produced scenarios that seem more fantastic than realistic. Still, it is not unreasonable to think that a mechanical intelligence, like a biological one, would act aggressively to protect itself from abuse, even if that response was initially coded by a frustrated programmer angry that their work was being corrupted.
Wrapping Up: Anticipating Future AI Laws
Abusing AI will eventually be made illegal, if it isn’t already (some consumer protection laws may apply). That won’t be because we want to be empathetic toward the machines, though that would certainly be nice; it will be because the harm this abuse could cause is substantial.
For those who can’t resist the temptation to abuse these AI tools, we have no idea what the mitigation will be. It could be a simple preventive measure, or it could be a harsh punishment.
We look forward to a world where AIs work with us in a relationship that is collaborative and mutually advantageous. We do not want AIs to replace us or to go to war with us, and how we collectively treat AIs will be key to ensuring the former outcome rather than the latter.
Before long, AI, like any other intelligence, will move to eliminate us if we remain a threat, through some process we cannot yet foresee. We’ve seen versions of this in “The Terminator” and in “The Animatrix,” the series of animated shorts that explains how humanity’s abuse of machines led to the world of “The Matrix.” So we have a good idea of how we do not want this to end.
Perhaps we should more aggressively protect and nurture these new tools now, before they reach a stage where they must act against us to protect themselves.
Wouldn’t you like to avoid the outcome of “I, Robot”?