Atlantic, The
Former NSA Chief Clashes With ACLU Head In Debate
Is the National Security Agency keeping us safe? That was the question that MSNBC used to frame a debate at the Aspen Ideas Festival, which The Atlantic co-hosts with The Aspen Institute.
The debate featured General Keith Alexander, former head of the National Security Agency; former Congresswoman Jane Harman; and former solicitor general Neal Katyal, who spoke in defense of the signals intelligence agency.
Anthony Romero of the ACLU, academic Jeffrey Rosen, and former Congressman Mickey Edwards acknowledged the need for the NSA but argued that it transgresses against our rights with unnecessary programs that violate the Constitution. The two teams also spent time arguing about Edward Snowden and whether his leaks were justified. By the end of the 90-minute session, the civil libertarian team had handily beaten the national security state team in audience voting.
Romero was at his strongest when pressing the other team to explain why the American people shouldn't have a right to privacy in their metadata, given how revealing it can be. He rejected the notion that the phone dragnet is permissible because, although the NSA keeps records of virtually every phone call made, it searches that database only under a narrow set of conditions.
The Test We Can -- and Should -- Run on Facebook
[Commentary] For a widely criticized study, the Facebook emotional contagion experiment -- which deployed new experimental techniques of its own -- managed to make at least one significant contribution: it has triggered the most far-reaching debate we’ve seen on the ethics of large-scale user experimentation, not just in academic research but in the technology sector at large.
Perhaps we could nudge that process with Silicon Valley’s preferred tool: an experiment. But this time, the experiment would be one we run on Facebook and similar platforms. Rather than assuming that Terms of Service are equivalent to informed consent, platforms should offer opt-in settings where users can choose to join experimental panels. If they don’t opt in, they aren’t forced to participate.
This could be similar to the array of privacy settings that already exist on these platforms. Platforms could even offer more granular options, to specify what kinds of research a user is prepared to participate in, from design and usability studies through to psychological and behavioral experiments.
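As a thought experiment, here is a minimal sketch of what such granular opt-in settings could look like as data; the `ResearchConsent` type, the category names, and the `eligibleForStudy` check are hypothetical illustrations, not any platform's actual API.

```typescript
// Hypothetical per-user research-consent record. Nothing here corresponds
// to a real Facebook setting; it only illustrates the granularity proposed above.
type ResearchCategory =
  | "design-usability"            // layout and feature A/B tests
  | "ranking-algorithms"          // changes to how the feed is ordered
  | "psychological-behavioral";   // emotion and behavior studies

interface ResearchConsent {
  userId: string;
  optedIn: boolean;                  // master switch; default would be false
  categories: Set<ResearchCategory>; // the kinds of studies the user allows
}

// A study could only enroll users who opted in to its category.
function eligibleForStudy(consent: ResearchConsent, category: ResearchCategory): boolean {
  return consent.optedIn && consent.categories.has(category);
}
```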
Of course, there is no easy technological solution to complex ethical issues, but this would be a significant gesture on the part of platforms toward less deception, more ethical research, and more agency for users.
[Crawford is a visiting professor at MIT’s Center for Civic Media, a principal researcher at Microsoft Research, and a senior fellow at NYU’s Information Law Institute]
How to Run Facebook's Mood Manipulation Experiment on Yourself
When news of Facebook’s attempt to emotionally manipulate its users emerged, debate quickly focused on the experiment’s ethics. Lauren McCarthy, though, kept thinking about the experiment itself. As the discussion went on, she found that “no one was talking about what the study might mean. What could we do beyond the ethics?”
Now, she has a preliminary answer. McCarthy has made a browser extension, Facebook Mood Manipulator, that lets users run Facebook’s experiment on their own News Feeds.
Just as the original 2012 study surfaced posts analyzed to be either happier or sadder than average, McCarthy’s extension skews users’ feeds either more positive or more negative -- except that, this time, users themselves control the dials.
Unlike the Facebook study, which only surfaced posts judged happier or sadder, McCarthy’s software also lets people see posts that use more “aggressive” or “open” words in their feed. The extension, in other words, lets users reclaim some control over their own feed. It lets users discover what it’s like to wrestle with their own attentional algorithm -- as subtle, or as stupid, as it can sometimes be.
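For readers curious how such an extension might work under the hood, here is a rough sketch of the feed-skewing idea in TypeScript; the word lists, the `score` heuristic, and the `skewFeed` function are illustrative stand-ins, not McCarthy's actual Facebook Mood Manipulator code.

```typescript
// Illustrative sketch only -- not the real extension. A real browser extension
// would run as a content script over the News Feed DOM; here we just score
// post text against tiny word lists and filter according to a user-set dial.
const POSITIVE = ["happy", "love", "great", "excited", "wonderful"];
const NEGATIVE = ["sad", "angry", "terrible", "awful", "hate"];

function score(text: string): number {
  const words = text.toLowerCase().split(/\W+/);
  const pos = words.filter((w) => POSITIVE.includes(w)).length;
  const neg = words.filter((w) => NEGATIVE.includes(w)).length;
  return pos - neg; // > 0 leans positive, < 0 leans negative
}

// dial > 0 keeps positive-leaning posts, dial < 0 keeps negative-leaning ones,
// dial === 0 leaves the feed untouched.
function skewFeed(posts: string[], dial: number): string[] {
  if (dial === 0) return posts;
  return posts.filter((p) => (dial > 0 ? score(p) >= 0 : score(p) <= 0));
}

// Example: turning the dial toward "positive" hides the gloomier post.
console.log(skewFeed(["I love this wonderful day", "Traffic was awful today"], 1));
```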
The Internet Is Fracturing Into Separate Country-Specific Networks
[Commentary] The World Wide Web celebrated its 25th birthday recently. Today the global network serves almost 3 billion people, and hundreds of thousands more join each day.
If the Internet were a country, its economy would be among the five largest in the world. Yet all of this growth and increasing connectedness, which can seem both effortless and unstoppable, is now creating enormous friction, as yet largely invisible to the average surfer. Fierce and rising geopolitical conflict over control of the global network threatens to create a balkanized system -- what some technorati, including Google’s executive chairman, Eric Schmidt, have called “the splinternet.”
Some experts anticipate a future with a Brazilian Internet, a European Internet, an Iranian Internet, an Egyptian Internet -- all with different content regulations and trade rules, and perhaps with contrasting standards and operational protocols. Whether this fragmentation can be managed is an open question, but at a minimum, this patchwork solution would be disruptive to American companies like Google, Facebook, Amazon, and eBay, which would see their global reach diminished. And it would make international communications and commerce more costly.
One Closed API at a Time, The Era Of The Open Web May Be Waning
[Commentary] APIs -- application programming interfaces -- are, essentially, enablers of remix culture, and what they mix is structured data. They are a way for companies and developers to talk to each other and build off of each other.
They're a means of converting the information a service contains into the stuff of the wider Internet. Because of all that, APIs have traditionally been seen as both practically useful and symbolically important -- emblems of the open web.
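As a concrete, and entirely hypothetical, illustration of that conversion, here is what consuming such an API typically looks like for an outside developer; the endpoint, the `Title` shape, and the `topComedies` function are invented for the example, not any real catalog API.

```typescript
// Hypothetical example of the "remix" an open API enables: an independent
// developer pulls structured data from a service and reshapes it into
// something new. The endpoint and response shape are invented.
interface Title {
  name: string;
  year: number;
  genres: string[];
}

async function topComedies(): Promise<string[]> {
  const res = await fetch("https://api.example.com/v1/titles?limit=50");
  const titles: Title[] = await res.json();
  return titles
    .filter((t) => t.genres.includes("comedy"))
    .map((t) => `${t.name} (${t.year})`);
}

// When a company closes its API, code like this -- and every independent
// app built the same way -- simply stops working.
topComedies().then(console.log).catch(console.error);
```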
So it's hard not to see the closure of the Netflix API, on top of the closure of all the other APIs, as symbolic in its own way -- of a new era of the web that is less concerned with outreach, and more concerned with consolidation. A web controlled by companies that prefer their own way of doing things, without external input. A web that takes the productive enthusiasms of independent developers and says, essentially, "Thanks, but no thanks."
The Promise of a New Internet
[Commentary] People tend to talk about the Internet the way they talk about democracy -- optimistically, and in terms that describe how it ought to be rather than how it actually is.
This idealism is what buoys much of the network neutrality debate, and yet many of what are considered the core issues at stake -- payment for tiered access, for instance -- have already been decided. Internet advocates have been asking what regulatory measures might help save the open, innovation-friendly Internet.
But increasingly, another question comes up: What if there were a technical solution instead of a regulatory one? What if the core architecture of how people connect could make an end run around the centralization of services that has come to define the modern net?
It's a question that reflects some of the Internet's deepest cultural values, and the idea that this network -- this place where you are right now -- should distribute power to people.
In the post-NSA, post-Internet-access-oligopoly world, more and more people are thinking this way, and many of them are actually doing something about it. Among them, there is a technology that's become a kind of shorthand code for a whole set of beliefs about the future of the Internet: "mesh networking." These words have become a way to say that you believe in a different, freer Internet.
Method Journalism
[Commentary] With the launch of new site after new site in 2014, it's been a fascinating time to watch digital media try to figure itself out.
Amid the turmoil of disruption, buffeted by tech companies' control over information distribution but aware of new fields of possibility, media companies spent the past few years defending legacy brands.
Some of the new sites are exciting and doing great work, and they are also making mistakes and doing weird stuff as they find their identities. But the more I thought about what's different in this era of media relative to earlier ones, the more one thing stood out: none of these sites is focused on an area of coverage. They are, instead, about the method of coverage.
In a world where traditional beats may not make sense, where almost all marginal traffic growth comes from Facebook, where subscription revenue is a rumor, where business concerns demand breadth because they want scale… a big part of the industry's response has been to create sites that become known by how they cover something rather than what.
Should US Hackers Fix Cybersecurity Holes or Exploit Them?
[Commentary] There’s a debate going on about whether the US government -- specifically, the National Security Agency and United States Cyber Command -- should stockpile Internet vulnerabilities or disclose and fix them.
It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that allows an adversary access to the affected system. Heartbleed is a recent example, but hundreds are discovered every year. Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected against them. Someone with one of those can attack systems worldwide with impunity.
When someone discovers one, he can either use it for defense or for offense.
Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but most users protect themselves by patching their systems regularly.
Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn't even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software's vendor finds out -- the timing depends on how extensively the vulnerability is used -- and issues a patch to close the vulnerability.
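To make the defense/offense distinction concrete, here is a generic sketch of what a vulnerability and its patch can look like at the code level; the tiny file server and the path-traversal bug below are invented for illustration and have nothing to do with Heartbleed or any specific product.

```typescript
// Illustrative only: a classic path-traversal mistake in a small Node/TypeScript
// file server, and the patched version of the same code.
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

const PUBLIC_DIR = path.resolve("./public");

http.createServer((req, res) => {
  const requested =
    new URL(req.url ?? "/", "http://localhost").searchParams.get("file") ?? "index.html";

  // VULNERABLE version: joining user input directly lets a request like
  // "?file=../../etc/passwd" escape the public directory.
  // const filePath = path.join(PUBLIC_DIR, requested);

  // PATCHED version: resolve the path and refuse anything outside PUBLIC_DIR.
  const filePath = path.resolve(PUBLIC_DIR, requested);
  if (!filePath.startsWith(PUBLIC_DIR + path.sep)) {
    res.writeHead(403).end("Forbidden");
    return;
  }

  fs.readFile(filePath, (err, data) => {
    if (err) {
      res.writeHead(404).end("Not found");
    } else {
      res.writeHead(200).end(data);
    }
  });
}).listen(8080);
```

Until the patched version ships and users actually install it, anyone who knows about the bug can read files they were never meant to see -- which is exactly the window a zero-day preserves.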
Antonin Scalia Totally Gets Net Neutrality
The Federal Communications Commission proposed new rules to regulate broadband Internet providers. Many supporters of an open web don’t like these rules. The agency’s suggested regulations, they say, will either sacrifice a key tenet of the Internet -- network neutrality, a storied and contested idea -- or prove ineffectual.
They say the agency must re-categorize broadband Internet providers so that they become utilities -- common carriers. It’s obvious, they say, that the FCC categorized broadband incorrectly in the first place.
Turns out a member of the nation’s highest court made their case for them almost a decade ago. That justice’s name? Antonin Scalia, who wrote that the FCC’s interpretation of the law around “information services” was “implausible.” With its decision to regard cable broadband as an information service, the agency had “[established] a whole new regime of non-regulation, which will make for more or less free-market competition, depending upon whose experts are believed.” In ruling that broadband was an information service, the FCC “had exceeded the authority given it by Congress.”
Justice Scalia based his argument on an interesting analogy: “If, for example, I call up a pizzeria and ask whether they offer delivery, both common sense and common ‘usage,’ […] would prevent them from answering: ‘No, we do not offer delivery -- but if you order a pizza from us, we’ll bake it for you and then bring it to your house.’ The logical response to this would be something on the order of, ‘so, you do offer delivery.’ But our pizza-man may continue to deny the obvious and explain, paraphrasing the FCC and the Court: ‘No, even though we bring the pizza to your house, we are not actually “offering” you delivery, because the delivery that we provide to our end users is “part and parcel” of our pizzeria-pizza-at-home service and is “integral to its other capabilities.”’”
The Library of Congress Wants to Destroy Your Old CDs
If you've tried listening to any of your old CDs lately, if you even own them anymore, you may have noticed that some of them won't play.
CD players have long since given up on most of the burned mixes I made in college. And while most of the studio-manufactured albums I bought still play, there's really no telling how much longer they will. My once-treasured CD collection -- so carefully assembled over the course of about a decade beginning in 1994 -- isn't just aging; it's dying. And so is yours.
"All of the modern formats weren't really made to last a long period of time," said Fenella France, chief of preservation research and testing at the Library of Congress. "They were really more developed for mass production." "If you want to really kill your discs, just leave them in your car over the summer."
France and her colleagues are trying to figure out how CDs age so that we can better understand how to save them. This is a tricky business, in large part because manufacturers have changed their processes over the years but won't say how. And so: we know a CD's basic composition -- there's a plastic polycarbonate layer, a metal reflective layer with all the data in it, and then the coating on top -- but it's impossible to tell just from looking at a disc how it will age.
"We're trying to predict, in terms of collections, which of the types of CDs are the discs most at risk," France said.