Online Privacy, a teaser
1) Privacy is free. Many privacy advocates believe it is a free lunch - that is, consumers can obtain more privacy without giving up anything. Not so. There is a strong trade-off between privacy and information: the more privacy consumers have, the less information is available for use in the economy. Since information helps markets work better, the cost of privacy is less efficient markets.
There are two problems with this statement. The first is the fallacy that more information, any information, makes markets work better; that just isn't true. Take a simple example: someone signs up for a golf magazine and is then spammed with so many adverts for golfing gear that they train their spam filter to block them. The company got some information and used it inappropriately, leading to the client making fewer purchases for no better reason than too much advertising. What's needed is a mechanism for the right information (i.e. what is necessary to enable consented activities in the consumer's interest) to get to the right companies (i.e. not spammy affiliates or surveillance groups). This is exactly what privacy advocates are currently working towards: controls that can enforce this, rather than the overly permissive status quo.
The second problem is that the cost goes both ways. Right now a consumer has to spend the effort to enforce their own privacy. The technical complexity of, for example, ensuring that cookies set by services you use are not used to correlate your identity across affiliate sites is high, and only the few who understand the implications, and care enough to do something about it, take it on. Thus the cost (understanding, technical ability, actual work required) is too high for most consumers to reasonably enforce their own privacy. This cost needs to shift to companies in order to reach a more reasonable middle ground.
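To make the cross-site correlation point concrete, here is a minimal sketch. All the log data and names are invented for illustration; the point is only that a third party whose cookie is embedded on many sites can trivially join the separate sites' logs into one profile, which is exactly what a consumer would have to fight to prevent:

```python
# Hypothetical illustration: a third party whose cookie is set on many
# sites can join per-site logs into one cross-site profile.
# All identifiers and pages below are invented for the example.

from collections import defaultdict

# Each first-party site only sees its own visits, but every request to
# the embedded tracker carries the same third-party cookie ID.
site_a_log = [("cookie-42", "/golf-clubs"), ("cookie-7", "/tents")]
site_b_log = [("cookie-42", "/mortgage-rates")]

def correlate(*logs):
    """Group page views from separate sites by the shared cookie ID."""
    profile = defaultdict(list)
    for site, log in enumerate(logs):
        for cookie_id, page in log:
            profile[cookie_id].append((f"site-{site}", page))
    return dict(profile)

profiles = correlate(site_a_log, site_b_log)
# cookie-42 is now one browsing history spanning both sites:
# [('site-0', '/golf-clubs'), ('site-1', '/mortgage-rates')]
```

Blocking this requires the consumer to understand and manage third-party cookies per site; the tracker only has to run one join.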
2) If there are costs of privacy, they are borne by companies. Many who do admit that privacy regulations restricting the use of information about consumers have costs believe those costs are borne entirely by firms. Yet consumers get tremendous benefits from the use of information.
Think of all the free stuff on the Web: newspapers, search engines, stock prices, sports scores, maps and much more. Google alone lists more than 50 free services - all ultimately funded by targeted advertising based on the use of information. If revenues from advertising are reduced or if costs increase, then fewer such services will be provided.
I don't see fewer services, in return for more control over what information is collected and how it is used, as a poor trade-off; it's a cost most consumers would be willing to bear. If anything, efficiencies may be generated in the market as weaker services that exist purely as third-party data collection points (spammers, personal data warehouses such as Acxiom, and other organisations that end up holding data from our primary service providers that we would prefer they didn't) are weeded out. It would be hard to argue that more privacy would result in all information-supported services disappearing.
Giving consumers more control over their information does not leave firms with worse information. If anything, firms are likely to gain access to higher-quality information and avoid many of the poor inferences current data sets lead to (e.g. googling for "bomb making" means you're a terrorist). The key quality differentiator is that a consumer can supply the right information for the intended use, because the firm discloses that intended use when gathering consent. The current situation is more akin to my bank knowing my shoe size, just because it can, and sharing that with affiliates, rather than the bank collecting credit-rating-specific data for its own calculations.
3) If consumers have less control over information, then firms must gain and consumers must lose. When firms have better information, they can target advertising better to consumers - who thereby get better and more useful information more quickly. Likewise, when information is used for other purposes - for example, in credit rating - then the cost of credit for all consumers will decrease.
4) Information use is "all or nothing." Many say that firms such as Google will continue to provide services even if their use of information is curtailed. This is sometimes true, but the services will be lower-quality and less valuable to consumers as information use is more restricted.
For example, search engines can better target searches if they know what searchers are looking for. (Google's "Did you mean..." to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.
Once again we have the counter-fallacy "more information == higher-quality service", coupled with a misunderstanding of what sort of control privacy advocates are looking for.
First, a large amount of the information currently collected is not collected for direct use by that service; Google search, for example, doesn't just collect your search term, it also correlates your use with its other services. If Google were to say "we collect exactly this information for this specific purpose; if you don't like it, leave", that would be a huge improvement over the current vague statement of "we collect some information, we share some of it, if you don't like it leave, but we'll still try to track you around the web."
Second, privacy advocates, for the most part, have no problem with Google collecting search terms and using that data for the typo correction example above. The problem is strongly associating those terms with an identity and then barely anonymising them. It would be quite possible for Google to collect the search terms and provide typo correction without knowing UserX searched for that term.
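To sketch why: a "did you mean" corrector needs only aggregate term frequencies, not identities. Here is a toy, Norvig-style example; the term counts are invented, and no per-user record exists anywhere in it:

```python
# Sketch (invented data): a "did you mean" corrector built purely from
# aggregate query frequencies. User identity is discarded at ingest,
# yet the quality signal (which spellings are common) survives.

from collections import Counter

# Aggregate counts of search terms, with no user IDs attached.
term_counts = Counter({"privacy": 120, "economics": 80, "advert": 30})

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def did_you_mean(query):
    """Suggest the most frequent known term within one edit of the query."""
    if query in term_counts:
        return query
    candidates = edits1(query) & term_counts.keys()
    return max(candidates, key=term_counts.__getitem__) if candidates else query

print(did_you_mean("privcy"))  # prints "privacy"
```

Whether Google's actual pipeline works this way is beside the point; the sketch shows the feature is separable from knowing that UserX typed the query.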
5) If consumers have less privacy, then someone will know things about them that they may want to keep secret. Most information is used anonymously. To the extent that things are "known" about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.
This "fallacy" is phrased incorrectly. It should be "If consumers have less privacy, then someone *could* know things about them they may want to keep secret." This is not a fallacy. Sure, for the most part there isn't a sweaty sysadmin reading each of my Yahoo mails (although research by others suggests there may be), but if a sysadmin/private investigator/government organisation wanted to they could. If the information is stored and identified then at some point someone will want to consume it. My experience in information security tells me that you can't provide perfect protection, and as the Saudi/RIM lawful intercept saga indicates, gov pressure to be able to violate your privacy/secrecy/confidentiality wins. As the Google/China hack indicates, lawful intercept gets used by the bad guys too.
What's more, the advanced data analytics performed by the likes of Facebook and Google allow additional secret information, which you may never have intentionally disclosed, to be discerned about you. In short, if the information isn't stored, it can't be compromised.
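The inference point can be illustrated with a toy example. Everything here is invented (the pages, the labels, the users); the real analytics are far more sophisticated, but even naive co-occurrence lets a firm guess an attribute the user never disclosed:

```python
# Toy illustration (all data invented): a firm holding raw activity
# data can infer attributes a user never disclosed, via simple
# co-occurrence with users who did disclose them.

# Past users who disclosed an attribute, with their page visits.
training = [
    ({"golf", "finance"}, "high-income"),
    ({"golf", "travel"}, "high-income"),
    ({"coupons", "lotto"}, "low-income"),
]

def infer(visits):
    """Guess the label whose disclosing users share the most pages."""
    def overlap(label):
        return sum(len(visits & pages) for pages, l in training if l == label)
    return max({l for _, l in training}, key=overlap)

# This user never stated an income bracket, yet:
print(infer({"golf", "news"}))  # prints "high-income"
```

The inferred attribute is then stored too, and inherits all the compromise risks above.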
6) Information can be used for price discrimination (differential pricing), which will harm consumers. For example, it might be possible to use a history of past purchases to tell which consumers might place a higher value on a particular good. The welfare implications of discriminatory pricing in general are ambiguous. But if price discrimination makes it possible for firms to provide goods and services that would otherwise not be available (which is common for virtual goods and services such as software, including cell phone apps) then consumers unambiguously benefit.

It may be because I'm not an economist, but it sounds like Rubin makes a weak point here (coupled with my observations in parentheses): "Differential pricing is bad (mostly to the poor), but some good could come from it (mostly to the rich), so it's okay." The way I see it, if one side has perfect information about the other, but not vice versa, then the negotiation is flawed and will not work to mutual benefit. Even if you could argue that this is not true, people who take steps to prevent their information from being collected and tagged with their identity would be in a stronger bargaining position and would benefit more than the consumers who didn't.
7) If consumers knew how information about them was being used, they would be irate. When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don't bother to learn about information use on the Web precisely because there is no harm from the way it is used.

It's true, harm from privacy violations is difficult to assess. If only someone wrote a book about it providing some sort of comprehensive taxonomy of privacy harms. In short, it is very short-sighted of Rubin to claim that violations of online privacy cannot lead to harm.

The panopticon is a well-understood and flawed model. Giving firms and governments all the information reduces consumer liberty and gives firms and governments all the power. There needs to be a balance: banks can't offer anonymous accounts, and governments can't allow anonymous passage through their borders. However, governments shouldn't be able to ask banks about all their customers just because they feel like creating some sort of creepy total-awareness office. If anything, allowing consumers more control over their information, and firms and governments less, makes it easier for consumers to keep those firms and governments honest, leading to a more efficient market.
8) Increasing privacy leads to greater safety and less risk. The opposite is true. Firms can use information to verify identity and reduce Internet crime and identity theft. Think of being called by a credit-card provider and asked a series of questions when using your card in an unfamiliar location, such as on a vacation. If this information is not available, then less verification can occur and risk may actually increase.
I'm calling wild assertion on this one. While the mass of information gathered is likely used for benign purposes, the exceptions which cause harm, and the potential for that harm to occur if no controls are in place, are enough to justify the controls' existence. That's why, even though the majority of the populace don't commit crimes, we still have police for the few who do.
9) Restricting the use of information (such as by mandating consumer "opt-in") will benefit consumers. In fact, since the use of information is generally benign and valuable, policies that lead to less information being used are generally harmful.
10) Targeted advertising leads people to buy stuff they don't want or need. This belief is inconsistent with the basis of a market economy. A market economy exists because buyers and sellers both benefit from voluntary transactions. If this were not true, then a planned economy would be more efficient - and we have all seen how that works.
If Communism is to economists as Nazism is to moralists, then I'm calling Godwin's Law (I know, I lose). That being said, I'm not going to defend this point, as it's a dumb one. Targeted advertising is much better than untargeted advertising, but guess what's better still for the consumer? No advertising, coupled with easy ways of finding out information on products they actually want to purchase. The only reason I allow advertising (and sometimes click the ads) is to support sites I like that rely on ad revenue; for the rest, there's ad-blocking. But I try not to let any of them profile me to offer targeted ads, and yet somehow I am still fully empowered to find products I want, research them in detail and purchase them from the companies selling them.
This brings us to the end. In short, I disagree with everything Rubin says. He misunderstands that privacy advocates are looking for a balance of controls, not extremes, and makes unsubstantiated assertions about how information inherently leads to all sorts of good economic outcomes. He also fails to consider abuses of information, which are the specific cases privacy advocates are trying to protect against.