The time for AI hacking is closer than you think

And finally – sophistication: AI-assisted hacks open the door to complex strategies beyond those that can be devised by the unaided human mind. The sophisticated statistical analyses of AIs can reveal relationships between variables, and thus possible exploits, that the best human strategists and experts might never have recognized. That sophistication may allow AIs to deploy strategies that subvert multiple levels of the target system. For example, an AI designed to maximize a political party’s vote share could determine a precise combination of economic variables, campaign messaging and procedural vote tweaks that would make the difference between electoral victory and defeat, extending the revolution that mapping software brought to gerrymandering into all aspects of democracy. And that’s not even getting into the dirty tricks an AI might suggest for manipulating the stock market, legislative systems or public opinion.
At the speed, scale, scope and sophistication of computers, hacking will become a problem that we as a society can no longer handle.
I am reminded of a scene in the movie The Terminator, in which Kyle Reese describes to Sarah Connor the cyborg hunting her: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever…” We are not dealing with literal cyborg killers, but as AI becomes our adversary in the world of social hacking, we may find it just as difficult to keep up with its inhuman ability to prey on our vulnerabilities.
Some AI researchers worry about the extent to which powerful AIs can overcome their human-imposed limitations and – potentially – come to dominate society. While this may seem like wild speculation, it is a scenario worth at least considering and preventing.
However, today and in the near future, the hacking described in this book will be carried out by the powerful against the rest of us. All the AIs out there, whether on your laptop, online or embodied in a robot, are programmed by other people, usually in their interests and not yours. While an internet-connected device like Alexa may pretend to be your trusty friend, never forget that it’s designed to sell Amazon’s products. And just as Amazon’s website encourages you to buy its own brands instead of its competitors’ higher-quality items, it won’t always act in your best interest. It will hack your trust in it for the sake of Amazon’s shareholders.
In the absence of any meaningful regulation, there really isn’t anything we can do to prevent AI hacking from unfolding. We must accept that it is inevitable, and build robust governance structures that can quickly and effectively respond by normalizing beneficial hacks into the system and neutralizing the malicious or unintentionally harmful ones.
This challenge raises deeper, more difficult questions than how AI will evolve or how institutions might respond to it: What hacks count as beneficial? Which ones are harmful? And who decides? If you think government should be small enough to drown in a bathtub, then you probably think that hacks that reduce the government’s ability to control citizens are usually good. But you may not want to replace technological overlords with political ones. If you believe in the precautionary principle, you want as many experts as possible to test and judge hacks before they are incorporated into our social systems. And you might want to apply that principle further upstream, to the institutions and structures that make these hacks possible.
The questions continue. Should AI-created hacks be managed locally or globally? By administrators or by referendum? Or is there a way we can let the market or civil society groups decide? (The current effort to apply governance models to algorithms is an early indicator of how this will go.) The governance structures we design will empower some people and organizations to decide the hacks that will shape the future. We must ensure that that power is exercised with care.
Excerpt from A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back by Bruce Schneier. Copyright © 2023 by Bruce Schneier. Used by permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.