Digital Content

Information that is published or distributed in a digital form, including text, data, sound recordings, photographs and images, motion pictures, and software.

Tech’s Swift Reaction To Hate Groups Was Years In The Making

While tech’s crackdown on violence-inciting white nationalist sites came rapidly following the turmoil in Virginia, it took years of cajoling by activists and advocates to get Silicon Valley ready for action. “We put out our first report about cyberhate in 1985,” says Brittan Heller, director of technology and society for the Anti-Defamation League (ADL). In 2012, the ADL inaugurated its Working Group on Cyberhate. “This was one of the first bodies to get organizations across the tech industry to talk about these issues,” says Heller. The ADL doesn’t publish a list of its members, but Heller says it includes “all the major tech companies like Facebook and Google, Apple and Microsoft, Twitter.”

In 2014, the Working Group put out best-practice guidelines for tech companies to handle online hate—like clearly explaining terms of service for users and providing mechanisms for people to report abuse. That same year, the Southern Poverty Law Center began its Silicon Valley push. “In 2014, we decided that we needed to at least make an effort to work with the tech companies to de-monetize hate,” says Heidi Beirich, director of SPLC’s Intelligence Project.

Can Silicon Valley Disrupt Its Neo-Nazi Problem?

Tech leaders still have no coherent vision for how to police hate speech without becoming tyrants themselves.

Fighting Neo-Nazis and the Future of Free Expression

[Commentary] In the wake of Charlottesville, both GoDaddy and Google have refused to manage the domain registration for the Daily Stormer, a neo-Nazi website. Subsequently, Cloudflare, whose service was used to protect the site from denial-of-service attacks, also dropped it as a customer, with a telling quote from Cloudflare’s CEO: “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” We agree.

Even for free speech advocates, this situation is deeply fraught with emotional, logistical, and legal twists and turns. All fair-minded people must stand against the hateful violence and aggression that seems to be growing across our country. But we must also recognize that on the Internet, any tactic used now to silence neo-Nazis will soon be used against others, including people whose opinions we agree with. Those on the left face calls to characterize the Black Lives Matter movement as a hate group. In the Civil Rights Era cases that formed the basis of today’s protections of freedom of speech, the NAACP’s voice was the one attacked. Protecting free speech is not something we do because we agree with all of the speech that gets protected. We do it because we believe that no one—not the government and not private commercial enterprises—should decide who gets to speak and who doesn’t. For any content hosts that do reject content as part of the enforcement of their terms of service, we have long recommended that they implement procedural protections to mitigate mistakes. These are methods that protect us all against overbroad or arbitrary takedowns.

When the Government Rules by Software, Citizens are Left in the Dark

Governments increasingly rely on mathematical formulas to inform decisions about criminal justice, child welfare, education, and other arenas. Yet it’s often hard or impossible for citizens to see how these algorithms work and are being used.

San Francisco Superior Court began using PSA in 2016, after getting the tool for free from the John and Laura Arnold Foundation, a Texas nonprofit that works on criminal-justice reform. The initiative was intended to prevent poor people unable to afford bail from needlessly lingering in jail. But a memorandum of understanding with the foundation bars the court from disclosing “any information about the Tool, including any information about the development, operation and presentation of the Tool.” Many governments said they had no relevant records about the programs. Taken at face value, that would mean those agencies did not document how they chose, or how they use, the tools. Others said contracts prevented them from releasing some or all information. Goodman says this shows governments are neglecting to stand up for their own, and citizens’, interests. “You can really see who held the pen in the contracting process,” she says. The Arnold Foundation says it no longer requires confidentiality from municipal officials, and is happy to amend existing agreements to allow officials to disclose information about PSA and how they use it. But a representative of San Francisco Superior Court said its contract with the foundation has not been updated to remove the gag clause.

AI Programs Are Learning to Exclude Some African-American Voices

Some artificial intelligence (AI) systems are learning to be prejudiced against some dialects. And as language-based AI systems become ever more common, some minorities may automatically be discriminated against by machines, warn researchers studying the issue.

Anyone with a strong or unusual accent may know what it’s like to have trouble being understood by Siri or Alexa. This is because voice-recognition systems rely on natural-language technology to parse the contents of speech, typically using algorithms that have been trained with example data. If there aren’t enough examples of a particular accent or vernacular, then these systems may simply fail to understand you. The problem may be more widespread and pernicious than most people realize. Natural-language technology now powers automated interactions with customers, through automated phone systems or chatbots. It’s used to mine public opinion on the Web and social networks, and to comb through written documents for useful information. This means that services and products built on top of language systems may already be unfairly discriminating against certain groups.
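The failure mode described above — a system that has seen too few examples of a given vernacular — can be illustrated with a deliberately tiny sketch. All phrases and the keyword-overlap rule here are invented for illustration; real systems use trained statistical models, but the consequence of skewed training data is the same:

```python
# Toy intent matcher whose "training data" covers only one style of
# phrasing. A request expressed in an equally valid variant goes
# unrecognized -- the sketch's stand-in for an underrepresented dialect.
TRAINING_PHRASES = {
    "check_balance": ["what is my balance", "show my account balance"],
    "pay_bill": ["pay my bill", "make a payment"],
}

def match_intent(utterance):
    """Return the best-matching intent by naive word overlap, or None."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, phrases in TRAINING_PHRASES.items():
        for phrase in phrases:
            score = len(words & set(phrase.split()))
            if score > best_score:
                best, best_score = intent, score
    return best

print(match_intent("what is my balance"))    # phrasing seen in training
print(match_intent("how much money I got"))  # unseen phrasing -> None
```

The fix is not a smarter matching rule but broader training data: the system only "understands" the ways of speaking it was shown.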

Cox starts charging $50 extra per month for unlimited data

Cox is now charging its customers $50 extra each month for unlimited data.

Cox also introduced a $30-per-month charge that adds 500GB to the standard 1TB data plan. Cox customers who go over the 1TB cap without having purchased extra or unlimited data pay a $10 charge for each additional 50GB. Naturally, "unused data does not roll over," Cox says. Cox, the third-largest cable company in the US after Comcast and Charter, has about 6 million residential and business customers in 18 states. It has rolled the data caps out on a city-by-city basis, so not all Cox customers face the caps yet.
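The overage scheme quoted above reduces to simple arithmetic. A minimal sketch of the cost calculation, using only the figures reported in the article (the function name and the assumption that partial 50GB blocks are billed as full blocks are mine, not Cox's published billing rules):

```python
import math

def monthly_overage_charge(usage_gb, cap_gb=1000):
    """Overage under the quoted scheme: $10 per 50GB block past the cap.

    Assumes a partial block is billed as a full one; Cox's actual
    rounding behavior is not stated in the article.
    """
    over = usage_gb - cap_gb
    if over <= 0:
        return 0
    return 10 * math.ceil(over / 50)

print(monthly_overage_charge(1200))  # 200GB over -> 4 blocks -> $40
```

At these rates, a customer regularly running a few hundred gigabytes over the cap quickly approaches the $30 add-on or the $50 unlimited price, which is presumably the point of the tiering.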

Silicon Valley escalates its war on white supremacy despite free speech concerns

Silicon Valley significantly escalated its war on white supremacy, choking off the ability of hate groups to raise money online, removing them from Internet search engines, and preventing some sites from registering at all.

The new moves go beyond censoring individual stories or posts. Tech companies such as Google, GoDaddy and PayPal are now reversing their hands-off approach to content supported by their services and making it much more difficult for “alt-right” organizations to reach mass audiences. But the actions are also heightening concerns over how tech companies are becoming the arbiters of free speech in America. And in response, right-wing technologists are building parallel digital services that cater to their own movement. Gab.ai, a social network for promoting free speech, was founded shortly after the presidential election by Silicon Valley engineers alienated by the region’s liberalism. Other conservatives have founded Infogalactic, a Wikipedia for the alt-right, as well as crowdfunding tools Hatreon and WeSearchr. The latter was used to raise money for James Damore, a white engineer who was fired after criticizing Google’s diversity policy. “If there needs to be two versions of the Internet so be it,” Gab.ai tweeted. The company’s spokesman, Utsav Sanduja, later warned of a “revolt” in Silicon Valley against the way tech companies are trying to control the national debate.

Digital platforms force a rethink in competition theory

Anxiety about the health of competition in the US economy — and elsewhere — is growing. The concern may be well founded but taking forceful action will require economists to provide some practical ways of proving and measuring the harm caused by increasing market power in the digital economy.

The forces driving concentration do not affect the US alone. In all digital markets, the cost structure of high upfront costs and low additional or marginal costs means there are large economies of scale. The broad impact of digital technology has been to increase the scope of the markets many businesses can hope to reach. In pre-digital days, the question an economist would ask was whether the efficiencies gained by big or merging companies would be passed on to consumers in the form of lower prices. Another key question was whether it would still be possible for new entrants to break into the market. Digital platforms make these questions harder to answer.

  • One much-needed tool is a way to assess consumer benefits.
  • A second issue is how to take into account the interactions between markets, given that most platforms and tech companies steadily expand into other activities and markets.
  • A third issue, perhaps the most important, is the effect increasing concentration has on incentives to innovate and invest.

[Diane Coyle is professor of economics at the University of Manchester]

Lawsuit over false online data revived after US top court review

A federal appeals court revived a California man's lawsuit accusing Spokeo of publishing an online profile about him that was filled with mistakes.

The 9th US Circuit Court of Appeals ruled 3-0 in favor of Thomas Robins, 15 months after the US Supreme Court asked it to more closely assess whether he suffered the "concrete and particularized" injury needed to justify a lawsuit. Spokeo sells data aggregated from various databases to users including employers and people seeking romantic partners. Robins sued after learning that his profile, which carried someone else's photo, said he was married with children, affluent, in his 50s and employed, and had a graduate degree. He said all of this was wrong, and accused Pasadena, California-based Spokeo of willfully violating the federal Fair Credit Reporting Act, with potential damages of $1,000. The case was significant because Robins tried to pursue a class action, which if successful could expose Facebook, Alphabet's Google and other online data providers to mass claims in similar lawsuits.

This Was the Alt-Right’s Favorite Chat App. Then Came Charlottesville.

They posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber.” They organized last weekend’s “Unite the Right” rally in Charlottesville (VA), connecting several major white supremacy groups for an intimidating display of force. And when that rally turned deadly, with the killing of a 32-year-old counterdemonstrator, they cheered and discussed holding a gathering at the woman’s funeral.

For two months before the Charlottesville rally, I embedded with a large group of white nationalists on Discord, a group chat app that was popular among far-right activists. I lurked silently and saw these activists organize themselves into a cohesive coalition, and interviewed a number of moderators and members about how they used the service to craft and propagate their messages. I also asked Discord executives what, if anything, they planned to do about the white nationalists and neo-Nazis who had set up shop on their platform and were using it to spread their ideology. Several said they were aware of the issue, but had no concrete plans to crack down on any extremist groups. On Aug 14, Discord finally took action, banning several of the largest alt-right Discord communities and taking away one of the white nationalist movement’s key communication tools.