Foul play in information markets

Tags
Prediction markets

Information markets do a better job at predicting than experts. Examples include horse races, weather, politics, gas markets, etc. These are all cases where the prediction market can’t influence the actual outcome, either because the market is too small or because influencing it is impossible (or, as in horse racing, against the rules).
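As background (not from the note itself): the probability such a market “predicts” is just its price. A minimal sketch of one standard mechanism, Hanson’s logarithmic market scoring rule (LMSR), shows how a trade moves that price; the liquidity parameter and share amounts below are made up:

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Probability the market currently assigns to YES under a
    logarithmic market scoring rule with liquidity parameter b."""
    ey, en = math.exp(q_yes / b), math.exp(q_no / b)
    return ey / (ey + en)

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function; the price of a trade is the change in cost."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

p0 = lmsr_price(0, 0)                       # fresh market: 0.5
paid = lmsr_cost(100, 0) - lmsr_cost(0, 0)  # cost of buying 100 YES shares
p1 = lmsr_price(100, 0)                     # price after the purchase, ~0.73
```

Buying YES shares raises the YES price, so the market price aggregates what traders are willing to stake on.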

Evaluation: We’ll try to evaluate information markets against the regular alternatives not in the sense of ‘can they make mistakes’ (of course they can), but in the sense of whether they are more or less susceptible to foul play. Forms of foul play include ‘throwing the bet’ to generate a specific outcome, acting in the real world to change the outcome, and collusion. Hanson categorizes them as: lying, manipulation, sabotage, embezzlement, and retribution.

  1. Lying

Participants in the market can lie to each other and offer fake advice: “This is the better option” or “I’ll be betting on option B and so will they, so it’s in your best interest to bet as well”. These effects happen in traditional markets too. Ratings agencies, investment banks, and shareholders all signal different things to the markets, and people argue that their forecasts and opinions are not truly independent (examples: Goldman Sachs putting a buy rating on a company that another of its business units took public, or Moody’s assigning AAA ratings to bonds that shouldn’t receive them because it is paid to do so). Market participants should be skeptical, and having an economic incentive at stake can increase this skepticism, which is healthy.

💡
In crypto markets more pseudonymity is acceptable, so it’s hard to enforce who owns what or to receive disclosures regarding positions. This makes lying easier; however, the market dynamics still hold true.
  2. Manipulation

In this case someone manipulates their prediction to affect the overall prediction: they elect to lose money to affect the result, perhaps gaining more money from an external event. Two pushbacks to this approach: first, while possible, if this happens on a regular basis (which makes sense if there’s an incentive to do it), it doesn’t prevent the information market from working; it simply increases the range of outcomes. Good decision makers will recognize that the margin for error is larger, and the manipulation will lose effect. Secondly, and probably more importantly, it’s incredibly hard to manipulate ‘proper’ markets, ones that are large enough and have a distributed enough player base. This is because markets always have ‘noise trades’, trades made by fools, hedgers, or people who make mistakes; manipulators are simply part of this group, and as their numbers grow, the incentive for informed players to ‘fix’ these bets and profit from their ‘correct’ bets increases.

💡
This might not hold in edge cases or for edge events and needs to be looked into further. Further reading:
  1. Robin Hanson, Ryan Oprea, and David Porter, “Information Aggregation and Manipulation in an Experimental Market,” Journal of Economic Behavior and Organization (forthcoming, 2006).
  2. Wolfers and Zitzewitz, “Prediction Markets,” 107–26.

Another change in behavior is eliciting more research which is beneficial:

“The second change in behavior is that the increased profit opportunity from more noise traders increases the effort by other traders to obtain relevant information. So, on net, more noise trading should increase price accuracy. And, in fact, empirically it seems that financial and information markets with more noise trading, and hence a larger trading volume, tend to be more accurate, all else being equal.”
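The noise-trader argument can be sketched with a toy simulation (purely illustrative, not from the paper, with made-up parameters): informed traders push the price toward the true probability, noise traders trade randomly, and a manipulator applies constant one-sided pressure. As long as informed traders dominate, the price still settles near the truth:

```python
import random

def simulate(true_p=0.7, n_informed=50, n_noise=20, manip_sells=0,
             rounds=500, eta=0.0005, seed=1):
    """Toy price-impact market: each round the price moves in
    proportion to net order flow. All parameters are made up."""
    rng = random.Random(seed)
    price = 0.5
    for _ in range(rounds):
        # informed traders buy when the market underprices the true odds
        flow = sum(1 if price < true_p else -1 for _ in range(n_informed))
        # noise traders buy or sell at random
        flow += sum(rng.choice([1, -1]) for _ in range(n_noise))
        # a would-be manipulator dumps shares every round to depress the price
        flow -= manip_sells
        price = min(max(price + eta * flow, 0.0), 1.0)
    return price
```

With these numbers the price tends to settle near 0.7 whether or not the manipulator is active (e.g. `simulate(manip_sells=30)`); the one-sided pressure is absorbed like extra noise that informed traders profit from correcting.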

The real worry from manipulation is raised in this example:

💡
“For example, in a market estimating the chance of a terrorist attack, terrorists might perhaps arrange for the size of the attack to be correlated with the forecast error. The market might then become more accurate in estimating whether an attack would occur, but it would also miss the big attacks more often. In such a case, the expected harm from price errors could increase with more manipulation, even as the expected error decreased. One approach to mitigating this problem is via the parameters that markets estimate. The closer those parameters are to the actual decision parameters of interest, the less likely should be the existence of hidden states that modulate the magnitude of the harm from estimation errors and that are correlated with some manipulator bias. For example, it would be better for a terrorist attack market to estimate the harm caused by the attack and not just whether an attack occurs.”
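Hanson’s point can be made concrete with made-up numbers: suppose small attacks (harm 1) carry weight 0.9 and large attacks (harm 100) carry weight 0.1, and manipulation shrinks the forecast error on small attacks while growing it on large ones. The average error falls even as the expected harm from errors rises:

```python
# Each case is (forecast_error, harm, weight); the numbers are made up.
before = [(0.2, 1, 0.9), (0.2, 100, 0.1)]   # no manipulation
after  = [(0.1, 1, 0.9), (0.4, 100, 0.1)]   # error now correlated with size

def expected(metric, cases):
    return sum(metric(err, harm) * w for err, harm, w in cases)

err_before  = expected(lambda e, h: e, before)      # 0.20
err_after   = expected(lambda e, h: e, after)       # 0.13: average error fell
harm_before = expected(lambda e, h: e * h, before)  # 2.18
harm_after  = expected(lambda e, h: e * h, after)   # 4.09: expected harm rose
```

This is exactly why estimating harm directly, rather than just whether an attack occurs, closes the gap a manipulator could exploit.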
  3. Sabotage

A participant can make an external bet on the outcome of a process and then manipulate that outcome via sabotage in order to profit. While this can work in theory, there haven’t been many documented cases (for example, an employee shorting their company and then doing something incredibly harmful to it). Comparable cases do exist: shorting airline stocks and planting a bomb, buying life insurance ahead of an intended murder, or sabotaging an internal company initiative to win a bet.

Because these events are external one-offs, they are hard to rule out and hard to plan against. On the one hand, information markets are thinly traded, which makes them easier to manipulate; on the other hand, thin trading makes the upside much smaller and sabotage easier to detect, although the sabotage bet could also be placed in an external market.

It’s possible to limit the upside by placing bounds on positions; however, these again don’t preclude externally traded positions (derivatives in other markets).

  4. Embezzlement

There are concerns that information markets within companies could misdirect time, money, and credit, even maliciously. Real-money markets on company-related events could cause employees to neglect other tasks in order to participate, while play-money markets or real-money markets with small betting limits might not motivate employees to participate at all. The challenge is to create markets that induce enough effort without causing too much distraction, and without creating incentives inside the organization to withhold information or harm other bettors.

Adding monetary incentives to collaborative goals can cause mayhem, which is one of the reasons salaries are not tied one-to-one to specific outcomes but instead reward ‘work’ in a softer sense.

There are no clear escapes from these misaligned incentives. Possible solutions are:

  • Create a similar scheme where one’s betting history is tied to the results and rewarded via other mechanisms.
  • Bets are tracked and analyzed by ‘detectives’.
  • Introduce a hierarchy of bettors so that these incentives can be mitigated. For example, in an internal company decision the team working on it would have to bet first, thus forcing them to commit publicly and making any dramatic changes they later make more transparent.
  • Carefully vet the bets made, and who makes them, to avoid misalignment.
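The ‘detectives’ idea above could start as simply as a pass over the bet log that flags accounts whose bets repeatedly go against the eventual outcome, a pattern consistent with deliberate mispricing (though, as the manipulation section notes, it would also flag unlucky noise traders). A hypothetical sketch, with made-up names and log format:

```python
from collections import defaultdict

def flag_suspicious(bet_log, loss_threshold=3):
    """Flag bettors with at least loss_threshold bets that went against
    the eventual outcome. bet_log: (bettor, side, outcome) tuples."""
    losses = defaultdict(int)
    for bettor, side, outcome in bet_log:
        if side != outcome:
            losses[bettor] += 1
    return {b for b, n in losses.items() if n >= loss_threshold}

# Hypothetical log: mallory keeps betting 'no' on events that resolve 'yes'.
log = [("alice", "yes", "yes"), ("alice", "no", "no"), ("bob", "no", "yes"),
       ("mallory", "no", "yes"), ("mallory", "no", "yes"), ("mallory", "no", "yes")]
```

A real screen would weight stakes and timing, not just loss counts; this only illustrates that the audit trail markets generate is itself a mitigation.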
  5. Retribution

Many forecasts have an agenda; that’s the whole premise of UPOD (under-promise, over-deliver). Information markets can mess this up for those wishing to maintain an agenda-based forecast, who could threaten retribution and force groupthink.

This is most easily solved by anonymous trading.

Conclusion

The standard for evaluation of how good information markets are should be to compare them to competing forecasting institutions, which are rife with similar problems.

Overall, these problems of lies, manipulation, sabotage, embezzlement, and retribution are not, on the whole, worse in information markets than in competing institutions. However, some, such as sabotage and embezzlement, do require careful thought.

In Hanson’s words:

💡

Inducing lies is only a special concern of information markets when such markets have wider participation than other institutions. Reasonable solutions include having advisors trading instead of talking, or giving them the ability to show their neutral trading position. Manipulation seems a much weaker concern for information markets than for competing institutions, as manipulative trading should usually improve price accuracy. Manipulation should be a potential problem only when all traders are very risk-averse, or when the harm from price errors correlates in unusual ways with those errors. Sabotage is not a concern when markets estimate large social aggregates that are hard for individuals to influence, or when the trading stakes are too small to pay for any substantial sabotage efforts. When the events are small enough relative to the trading stakes for sabotage to be a concern, one can limit participation, reveal trades to investigators, or place bounds on individual trading stakes. To discourage the embezzlement of time, money, and credit within organizations, internal markets could trade a new color of money. When trading topics are subsidized at their value of information, those with consistent trading gains can take credit for adding so many dollars to the organization’s bottom line. Standard information sources should have special processes that trade on them before others, and teams should trade on team information before team members do. Retribution does not seem a special concern of information markets, and anonymous trading can greatly reduce the ability to suppress information through threats of retribution.