The Oil of the Digital Age

A Michael Rigley video called “Network” recently caught our eye. A caption describing the video reads, “Information technology has become a ubiquitous presence. By visualizing the processes that underlie our interactions with this technology we can trace what happens to the information we feed into the network.” For policy wonks like us, this generally translates into one word: privacy.

“For a country seemingly obsessed with reality television and tabloid journalism, the United States is suddenly very worried about privacy,” wrote The Verge’s Joshua Topolsky in the Washington Post this week. And not celebrity privacy, but your privacy.

Using Information to Sell You Stuff
Joshua Brustein wrote in the New York Times that Facebook’s pending initial public offering gives credence to the argument that personal data is the oil of the digital age. The company was built on a formula common to the technology industry: offer people a service, collect information about them as they use that service and use that information to sell advertising.

That information – and that advertising – is becoming more and more targeted, too. “2012 is going to be a huge year in terms of innovation—not just with respect to being able to leverage location to contextualize the types of advertising and offers that a consumer receives, but also then to turn the corner on that and turn it into actual commerce in the physical world,” said Walt Doyle, CEO of the location-based service Where. In the not-so-distant future, a marketer could use technology like that of Where to track a person’s daily routine so precisely that it knows when she’s on her way to the office, the gym or home for the evening.
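
To make the mechanics of that scenario concrete, the first step in routine tracking is simply labeling each location ping with the nearest known place. The Python below is a toy sketch with invented coordinates and place names, not a description of Where’s actual technology:

    import math

    # Hypothetical "known places" a service might learn from a user's repeated visits.
    PLACES = {
        "home":   (40.7484, -73.9857),
        "office": (40.7527, -73.9772),
        "gym":    (40.7580, -73.9855),
    }

    def distance_km(a, b):
        """Great-circle (haversine) distance between two (lat, lon) points, in km."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    def label_ping(ping):
        """Tag a location ping with the nearest known place."""
        return min(PLACES, key=lambda name: distance_km(ping, PLACES[name]))

    print(label_ping((40.7581, -73.9850)))  # prints "gym"

A sequence of such labels over a day (home, office, gym, home) is precisely the kind of routine Doyle describes, which is why location data is so valuable to marketers.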

The New York Times is running a long piece this weekend on “How Companies Learn Your Secrets.” Almost every major retailer, from grocery chains to investment banks to the U.S. Postal Service, has a “predictive analytics” department devoted to understanding not just consumers’ shopping habits but also their personal habits, so as to more efficiently market to them. As the ability to analyze data has grown more and more fine-grained, the push to understand how daily habits influence our decisions has become one of the most exciting topics in clinical research, even though most of us are hardly aware those patterns exist. One study from Duke University estimated that habits, rather than conscious decision-making, shape 45 percent of the choices we make every day, and recent discoveries have begun to change everything from the way we think about dieting to how doctors conceive treatments for anxiety, depression and addictions. There is a calculus, it turns out, for mastering our subconscious urges. For companies like Target, the exhaustive rendering of our conscious and unconscious patterns into data sets and algorithms has revolutionized what they know about us and, therefore, how precisely they can sell.

Big Data and Elections
This use of “big data” is not limited to commercial use. Slate reports on Narwhal, the Obama reelection team’s project to link once completely separate repositories of information so that every fact gathered about a voter is available to every arm of the campaign. If successful, Narwhal would fuse the multiple identities of the engaged citizen—the online activist, the offline voter, the donor, the volunteer—into a single, unified political profile. More broadly, Narwhal would bring new efficiency across the campaign’s operations. No longer will canvassers be dispatched to knock on the doors of people who have already volunteered to support Obama. And if a donor has given the maximum $2,500 in permitted contributions, emails will stop hitting him up for money and start asking him to volunteer instead. Those familiar with Narwhal’s development say the completion of such a technical infrastructure would also be a gift to future Democratic candidates who have struggled to organize political data that has often been arbitrarily siloed depending on which software vendor had primacy at a given moment. [For more on campaigns and political data see this Stanford Law Review article]
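
Narwhal’s design has not been made public, but the basic record-linkage idea can be sketched. The Python below is purely illustrative, with hypothetical field names: records from each silo are keyed on a shared identifier (here, an email address) and merged into one profile:

    from collections import defaultdict

    # Hypothetical records from three separate campaign silos (field names invented).
    donations  = [{"email": "ann@example.com", "donated": 2500}]
    volunteers = [{"email": "ann@example.com", "shift": "Saturday canvass"}]
    voter_file = [{"email": "ann@example.com", "registered": True}]

    # Merge every silo's records into a single profile keyed on the shared identifier.
    profiles = defaultdict(dict)
    for silo in (donations, volunteers, voter_file):
        for record in silo:
            profiles[record["email"]].update(record)

    # Any arm of the campaign can now consult one unified record.
    ann = profiles["ann@example.com"]
    if ann.get("donated", 0) >= 2500:  # already at the legal contribution limit
        print("Stop donation asks; invite to volunteer instead")

In practice the hard part is the matching itself, since real silos rarely share one clean identifier, but the payoff is the same: one profile per voter that every tool in the campaign can query.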

Sharing Your Information Without Asking You
Although people may find it creepy when they learn how much companies know about them, the stakes go well beyond discomfort. For example, Arun Thampi, a programmer in Singapore, discovered that the mobile social network Path was surreptitiously copying address book information from users’ iPhones without notifying them. In the wrong hands, that kind of data can be dangerous. The most sought-after bounty for state officials is dissidents’ address books, used to figure out whom they are in cahoots with, where they live and who their family members are. In some cases, this information leads to roundups and arrests. A person’s contacts are so sensitive that Alec Ross, a senior adviser on innovation to Secretary of State Hillary Rodham Clinton, said the State Department was supporting the development of an application that would act as a “panic button” on a smartphone, enabling people to erase all contacts with one click if they are arrested during a protest.

Twitter, too, acknowledged that it stores user data – including information from contact lists and address books – on its servers, sometimes for months at a time. A mobile feature called Find Friends, which allows smartphone users to locate real-life friends on Twitter, gives Twitter permission to churn through the contacts on your phone. That data, including phone numbers and email addresses, remains on Twitter servers for up to 18 months. In a statement, Twitter promised more clarity on its privacy policies.

Reps. Henry Waxman (D-CA) and G.K. Butterfield (D-NC) sent a letter to Apple CEO Tim Cook asking if the company does enough to protect user information on iPhones. Specifically, they asked if Apple’s policies ensure developers can’t share or collect user data — such as iPhone contact lists — without permission. The concern comes after Path, an online diary, said it collected and stored users’ iPhone contact lists without explicitly asking for permission to do so. When launching the app, Path automatically uploaded contact data in order to “find friends” to connect to on the social networking app. The lawmakers say the practice “raises questions about whether Apple’s iOS app developer policies and practices may fall short when it comes to protecting the information of iPhone users and their contacts.” They asked Cook how many apps grab information from users’ iPhone contact lists and whether the apps ask for permission from users to access that data.

Almost immediately, Apple announced that apps that use address book data will require explicit user permission to do so. “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines,” Apple spokesman Tom Neumayr said. “We’re working to make this even better for our customers, and as we have done with location services, any app wishing to access contact data will require explicit user approval in a future software release.” So Apple has done the right thing, arguably something it should have done long ago: Assure users that no app can read their contact data without their permission.

In response to the uproar over how mobile iOS applications have had access to address-book data without having to inform the user, Google was all too happy to confirm that its development model for Android applications makes it impossible to share personal data with an app developer unless you agree to do so before installing the app. Tim Bray, Google’s head of Android developer relations, addressed Android’s take on the Path-inspired mess that forced Apple to acknowledge that it should have done a better job policing apps that uploaded address-book data from users without explicit permission. “Reading contacts on Android requires explicit OK,” he said, pointing to two Android development articles that address how Android deals with granting permission to access personal data.

But Google has raised a number of privacy concerns of its own. Google and other advertising companies have been bypassing the privacy settings of millions of people using Apple's Web browser on their iPhones and computers—tracking the Web-browsing habits of people who intended for that kind of monitoring to be blocked. The companies used special computer code that tricks Apple's Safari Web-browsing software into letting them monitor many users. Safari, the most widely used browser on mobile devices, is designed to block such tracking by default. Google disabled its code after being contacted by The Wall Street Journal.

Moreover, in late January, Google announced changes to its privacy policies and Google Terms of Service. The subject has garnered lots of press in the ensuing weeks. Late last week, Google sent a self-assessment report to the Federal Trade Commission saying that these changes – due to take effect March 1 – are fully in compliance with the company’s settlement with the federal government last year. But privacy advocates disagreed. “Google’s report makes clear that the company failed to comply with the obligations set out in the consent order, particularly with respect to the changes announced on Jan. 24, 2012. It is clear that the Federal Trade Commission will need to act,” said Electronic Privacy Information Center Executive Director Marc Rotenberg.

Targeting Children, Too
The Federal Trade Commission this week issued a staff report showing the results of a survey of mobile apps for children. The survey shows that neither the app stores nor the app developers provide the information parents need to determine what data is being collected from their children, how it is being shared, or who will have access to it.

The report notes that mobile apps can capture a broad range of user information from a mobile device automatically, including the user's precise geolocation, phone number, list of contacts, call logs, unique identifiers, and other information stored on the device. At the same time, the report highlights the lack of information available to parents prior to downloading mobile apps for their children, and calls on industry to provide greater transparency about their data practices.

Can Privacy Be Protected?
Getting personal information removed from websites that collect it can feel a lot like playing Whac-a-Mole. Lawmakers and regulators are trying to do more to address consumer concerns. There is no U.S. law, as there is in Europe, requiring companies to allow people to view or delete their personal data on file at an institution. Last year, Sens. John Kerry (D-MA) and John McCain (R-AZ) introduced legislation that would require most data brokers to let people view and make corrections to the personal data stored about them. The White House is expected to call for similar rights when it releases its "Privacy Bill of Rights" later this year.

People have been willing to give away their data while the companies make money. But there is some momentum for the idea that personal data could function as a kind of online currency, to be cashed in directly or exchanged for other items of value. A number of start-ups allow people to take control — and perhaps profit from — the digital trails that they leave on the Internet.

In response to the Path episode, Om Malik wrote an interesting commentary. He says the most important question is, ‘What do we learn from all this and where do we go from here?’ Today’s apps are inherently more social and thus by extension more human. The relationships on this social web are going from increasingly virtual to more real. In a sense, these apps have started to reflect our daily lives. Our daily lives have many layers of trust built into them. There is an implicit social contract that implies that trust. Doing business with your bank, dry cleaner, green grocer and coffee shop is built on that trust. We are friends with others whom we trust. We work with people we trust. And that trust is what drives us to do the right thing. Social apps of today need to understand this concept of trust and doing the right thing.

Joshua Topolsky says the question we should all be asking is ‘Why?’ Why is it necessary for services such as Path to take or hold our data at all? There are ways to match data without exposing it. One method is called “hashing,” which converts plain text data such as your name or phone number into fixed, anonymous-looking strings of numbers and letters. Using that method, applications pulling the same content will get clear matches while exposing zero user data to a third party. Your data stay private, but you’re still able to find your friends within a service.
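
To make the idea concrete, here is a minimal Python sketch of hash-based contact matching. It illustrates the general technique Topolsky describes, not any company’s actual implementation, and it assumes both sides normalize phone numbers identically before hashing:

    import hashlib

    def normalize_phone(raw):
        """Strip formatting so both sides hash an identical string."""
        return "".join(ch for ch in raw if ch.isdigit())

    def hash_contact(raw_phone):
        """Return a SHA-256 digest; the raw number never leaves the device."""
        return hashlib.sha256(normalize_phone(raw_phone).encode("utf-8")).hexdigest()

    # Client side: upload digests instead of the contact list itself.
    my_contacts = ["(555) 123-4567", "555.987.6543"]
    uploaded = {hash_contact(c) for c in my_contacts}

    # Server side: hash each registered user's number the same way and compare.
    registered_user = "5551234567"
    if hash_contact(registered_user) in uploaded:
        print("Friend match found without exposing the raw address book")

One caveat: a ten-digit phone number has so few possible values that a plain hash can be reversed by brute force, so a careful deployment would add protections such as salting. The sketch shows only the matching mechanics.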

The FTC report on children and mobile apps recommends:

  • All members of the "kids app ecosystem" – the stores, developers and third parties providing services – should play an active role in providing key information to parents.
  • App developers should provide data practices information in simple and short disclosures. They also should disclose whether the app connects with social media, and whether it contains ads. Third parties that collect data also should disclose their privacy practices.
  • App stores also should take responsibility for ensuring that parents have basic information. "As gatekeepers of the app marketplace, the app stores should do more." The report notes that the stores provide architecture for sharing pricing and category data, and should be able to provide a way for developers to provide information about their data collection and sharing practices.

Last month the European Commission proposed adding a new “right to be forgotten” to privacy law. The regulations must be approved by member states, but the language sent a jolt through companies such as Google and Facebook, which have built business models dependent on user data and could face multimillion-dollar fines for infractions. Scholars, lawyers and privacy advocates are also scrambling to sort through the implications of the rules, which set up a pitched battle between the right to privacy and freedom of expression online.

Former White House counterterrorism aide Richard Falkenrath wrote in the Financial Times this week that the stakes are huge. It is essential – both for Europeans and Americans – to protect personal privacy in the age of pervasive social media and cloud computing. Individual users of cloud services should have a legal right to be forgotten that supersedes whatever authorizations they (or their surrogates) granted when they created their accounts. Users should, in other words, have the right to change their minds as they learn the implications of that little box they unthinkingly ticked while signing up for the latest, greatest, cheapest cloud-based information service.

Conclusion
As Topolsky concluded, hopefully this is the start of a big wake-up call, because it seems clear that we all need to think more seriously about where and how our information is used. If there are better ways to protect privacy, we need to push back hard and make companies adopt those practices. Then we need to keep watching to make sure they stick to them.