Transparency requirements cannot curb shady behavior

A new study suggests that transparency measures may not always be very effective. Nemanja Antic, assistant professor of managerial economics and decision science at Kellogg, and his colleagues find that people can often share enough information during deliberations to get the outcome they want while still maintaining plausible deniability about how much they knew, should their communications ever be made public.

In other words, transparency can often be gamed.

In many situations, “you can get the same result that you would have had without transparency, and that was very surprising to us,” says Antic.

Depending on your perspective, this ability to bypass transparency is either very good or very bad news.

Take individuals under surveillance, who of course may not be doing anything unethical or illegal at all. For them, the study offers a kind of road map for gathering information and using it to make the best possible decision, while retaining enough uncertainty about what you know to stay safe from possible punishment. This can be useful for activists who want to share information under the surveillance of an authoritarian government, or for business executives who want to discuss a possible acquisition without having their words later taken out of context by antitrust regulators (or really for anyone who forwards sensitive information via an electronic platform that may one day be hacked).

But the finding also has implications for authorities, regulators and others who design and use transparency measures. To ensure that their own interests are not trampled, they may have to rely on other strategies, such as limiting how and when information is shared.

Covert communication

To gain insight into how scrutiny can affect decision-making, Antic and his colleagues, Archishman Chakraborty of Yeshiva University and Rick Harbaugh of Indiana University, turned to game theory. They built a mathematical model in which two parties with a shared interest trade information back and forth before making a final decision, all under the eye of a watchful observer with slightly different interests.

To understand how their model works, and thus how transparency can be gamed, consider a scenario in which two executives work together to evaluate potential sites for a new mining operation. The first manager has information about the economic benefit of each site – small, medium or large – while the second is armed with information about its environmental cost, also small, medium or large. To know whether to proceed with a given site, both need to contribute their information to the decision.

Meanwhile, the general public also has an interest in whether a particular location is chosen, and it weighs the environmental impact more heavily than the company.

Sometimes the company and the public are on the same page. For “good” sites, where the economic benefit exceeds the environmental cost, both the firm and the public agree that the project should go ahead. And for “bad” sites, where the environmental cost exceeds the economic benefit, everyone agrees that the project should not go ahead. But when the economic benefit and environmental cost are roughly equal (for example, a project with medium economic benefit and medium environmental cost, or large benefit and large cost), the parties disagree. The company wants to move forward with these “mediocre” sites, while the public does not approve.
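The preferences in this stylized setup can be captured in a few lines. The sketch below is purely illustrative: the numeric levels, and the rule that the public approves only sites whose benefit strictly exceeds their cost, are assumptions made for this example, not parameters from the paper.

```python
# Hypothetical sketch of the site classification described above.
LEVELS = {"small": 1, "medium": 2, "large": 3}

def firm_wants(benefit: str, cost: str) -> bool:
    # Assumption: the firm proceeds whenever benefit at least matches cost.
    return LEVELS[benefit] >= LEVELS[cost]

def public_approves(benefit: str, cost: str) -> bool:
    # Assumption: the public weighs environmental cost more heavily,
    # so it approves only strictly "good" sites.
    return LEVELS[benefit] > LEVELS[cost]

def classify(benefit: str, cost: str) -> str:
    if public_approves(benefit, cost):
        return "good"      # both sides agree: proceed
    if firm_wants(benefit, cost):
        return "mediocre"  # the firm wants it, the public objects
    return "bad"           # both sides agree: stop
```

The disagreement lives entirely in the "mediocre" band, where benefit and cost tie, which is exactly where plausible deniability becomes valuable.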

Critically, the public can find out what information managers share with each other to make the decision. This means that managers cannot knowingly act on information in a way that goes against the public interest; otherwise, they will be punished. As a leader, Antic says, you have to show that “given all the available information you had at the time, you made a decision that was palatable to the public.”

If the public cannot be sure whether the leaders actually knew they were acting against the public’s wishes, however, leaders will be given the benefit of the doubt. This means that managers who move forward on a mediocre site can avoid penalty if they can show that, given the information actually discussed, the site could plausibly have been a good one.

The researchers find that for a scenario like the one described, it is always possible for managers to make the same decision they would have wanted to make anyway, while still maintaining plausible deniability. By carefully planning the order in which information is shared, and stopping before everyone’s cards are on the table, leaders can work together to decide whether to move forward, without publicly distinguishing between the good sites, which the public would approve of, and the mediocre ones, which it would not.

For example, a leader might ask “What do you think?” as a way to imply that they have information about the site that they can share only after the other leader provides additional context. Or a manager may say that the costs are “not high,” leaving unsaid whether they are small or medium.
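The effect of such a coarse statement can be seen in a tiny sketch. Here the pooling rule, which collapses “small” and “medium” costs into the single message “not high,” is a hypothetical illustration of the idea, not the equilibrium construction from the paper:

```python
def cost_message(cost: str) -> str:
    # Pool "small" and "medium" costs into one coarse message,
    # so the public record never separates the two.
    return "high" if cost == "large" else "not high"

# On the transcript, a medium-benefit site with medium cost (mediocre,
# which the public would block) looks identical to one with small cost
# (good, which the public would approve).
```

Because both cost levels generate the same message, an observer reading the transcript cannot prove the managers knew a site was merely mediocre.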

“These conversations are about providing information, but most importantly also about providing context for how any future comments will be interpreted,” says Antic. “You want enough information to make a decision, but not too much information.”

He points out that there are even cases where the parties may want a public investigation. “Let’s say there’s quite a bit of mistrust” between a company and the general public, Antic says. In situations where the public is likely to object to a firm’s decision, the firm may actually invite transparency: by revealing to the public exactly what it knew when the decision was made, the firm can demonstrate that the conflict of interest is not as great as the public imagines.

Preparing for scrutiny

The utility of plausible deniability is nothing new. Organizations that know they will be scrutinized may deliberately give their managers as little information as possible. Recall how, regarding the CIA’s “enhanced interrogation techniques,” the White House counsel told President George W. Bush, “Mr. President, I think for your own protection you don’t need to know the details of what’s going on here.” Similarly, a leader may deliberately avoid seeking information from a subordinate so as not to be held accountable for what they would learn. Of course, these strategies can lead to terrible decisions.

“But the surprising lesson from this paper is that it is often possible” to get the result you want while maintaining this plausible deniability, Antic says.

For people sharing sensitive information under threat of surveillance, the study suggests that there is little to lose—and much to gain—by communicating in a way that an outside observer cannot object to, even if that communication becomes somewhat roundabout. In the researchers’ model, this meant exchanging information back and forth with the understanding that the interlocutor’s future remarks will be understood in the context that your statements provide. Lawyers representing firms in antitrust cases, for example, often warn against using shorthand such as “Get rid of them” that could later be misinterpreted. They suggest that all sensitive details be discussed with the precise context in which they should be understood. In other words, it’s generally wise to stick to language that would be acceptable if made public, even if you don’t think it will be made public.

For those charged with implementing transparency requirements, the lesson is equally strong: other measures, such as limiting how the parties can communicate with each other, may be necessary to keep transparency from being gamed.

In fact, Antic suggests, this may be one reason why sunshine laws and other transparency requirements don’t always have much impact. He points to some of the guidelines intended to eliminate bias from hiring processes, such as blind hiring or auditing a hiring committee’s communications. If a committee is nevertheless able to shape the hiring process, and in particular to control what information is shared, it may be able to maintain plausible deniability without changing the final hiring decisions, and so the policies may not be particularly effective.

“It maybe shows why some of these policies don’t result in actual change,” says Antic.
