Julia Angwin

News Publishers Are Fighting Big Tech Over Peanuts. They Could Be Owed Billions.

A bitter battle is taking place between Big Tech and the free press over how to share in the income that news content generates for technology giants. The future of our news ecosystem, a linchpin of democracy, depends on the outcome. Platforms gained their audience in part by sharing news content for free.

Four Ways to Fix Facebook

For years, Congress and federal regulators have allowed the world’s largest social network to police itself — with disastrous results. Here are four promising reforms under discussion in Washington: 

  1. Impose Fines for Data Breaches
  2. Police Political Advertising
  3. Make Tech Companies Liable for Objectionable Content
  4. Install Ethics Review Boards

Facebook Job Ads Raise Concerns About Age Discrimination

The ability of advertisers to deliver their message to the precise audience most likely to respond is the cornerstone of Facebook’s business model. But using the system to expose job opportunities only to certain age groups has raised concerns about fairness to older workers. Several experts questioned whether the practice is in keeping with the federal Age Discrimination in Employment Act of 1967, which prohibits bias against people 40 or older in hiring or employment.

Facebook Allowed Political Ads That Were Actually Scams and Malware

Russian disinformation isn’t the only deceptive political advertising on Facebook. One pitch, designed to lure President Donald Trump’s critics, is among more than a dozen politically themed advertisements masking consumer rip-offs that ProPublica has identified since launching an effort in September to monitor paid political messages on the world’s largest social network.

Facebook (Still) Letting Housing Advertisers Exclude Users by Race

In February, Facebook said it would step up enforcement of its prohibition against discrimination in advertising for housing, employment or credit. But our tests showed a significant lapse in the company’s monitoring of the rental market. Last week, ProPublica bought dozens of rental housing ads on Facebook, but asked that they not be shown to certain categories of users, such as African Americans, mothers of high school kids, people interested in wheelchair ramps, Jews, expats from Argentina and Spanish speakers.

Facebook Says It Will Stop Allowing Some Advertisers to Exclude Users by Race

Facing a wave of criticism for allowing advertisers to exclude anyone with an “affinity” for African-American, Asian-American or Hispanic people from seeing ads, Facebook said it would build an automated system that would let it better spot ads that discriminate illegally.

Federal law prohibits ads for housing, employment and credit that exclude people by race, gender and other factors. Facebook said it would build an automated system to scan advertisements and determine whether they fall into these categories, and that it will prohibit the use of its “ethnic affinity” categories for such ads. Facebook said the new system should roll out within the next few months. “We are going to have to build a solution to do this. It is not going to happen overnight,” said Steve Satterfield, privacy and public policy manager at Facebook. He said that Facebook would also update its advertising policies with “stronger, more specific prohibitions” against discriminatory ads for housing, credit and employment.

Facebook Lets Advertisers Exclude Users by Race

Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers. That’s basically what Facebook is doing nowadays.

The ubiquitous social network not only allows advertisers to target users by their interests or background, it also gives advertisers the ability to exclude specific groups it calls “Ethnic Affinities.” Ads that exclude people based on race, gender and other sensitive factors are prohibited by federal law in housing and employment. The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an “affinity” for African-American, Asian-American or Hispanic people. When we showed Facebook’s racial exclusion options to the prominent civil rights lawyer John Relman, he gasped and said, “This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find.”

Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking

When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company’s “number one priority when we contemplate new kinds of advertising products.” And, for nearly a decade, Google did in fact keep DoubleClick’s massive database of web-browsing records separate by default from the names and other personally identifiable information Google has collected from Gmail and its other login accounts. But this summer, Google quietly erased that last privacy line in the sand – literally crossing out the lines in its privacy policy that promised to keep the two pots of data separate by default.

In its place, Google substituted new language that says browsing habits “may be” combined with what the company learns from the use of Gmail and other tools. The change is enabled by default for new Google accounts. Existing users were prompted to opt in to the change this summer. The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on the keywords they used in their Gmail. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.

What Facebook Knows About You

We live in an era of increasing automation. Machines help us not only with manual labor but also with intellectual tasks, such as curating the news we read and calculating the best driving directions. But as machines make more decisions for us, it is increasingly important to understand the algorithms that produce their judgments. We’ve spent the year investigating algorithms, from how they’ve been used to predict future criminals to Amazon’s use of them to advantage itself over competitors. All too often, these algorithms are a black box: It’s impossible for outsiders to know what’s going on inside them. On Sept. 28, we’re launching a series of experiments to help give you the power to see inside. Our first stop: Facebook and your personal data.

Facebook has a particularly comprehensive set of dossiers on its more than 2 billion members. Every time a Facebook member likes a post, tags a photo, updates their favorite movies in their profile, posts a comment about a politician, or changes their relationship status, Facebook logs it. When they browse the Web, Facebook collects information about pages they visit that contain Facebook sharing buttons. When they use Instagram or WhatsApp on their phone, which are both owned by Facebook, they contribute more data to Facebook’s dossier. And in case that wasn’t enough, Facebook also buys data about its users’ mortgages, car ownership and shopping habits from some of the biggest commercial data brokers. Facebook uses all this data to offer marketers a chance to target ads to increasingly specific groups of people. Indeed, we found Facebook offers advertisers more than 1,300 categories for ad targeting — everything from people whose property size is less than .26 acres to households with exactly seven credit cards.

Make Algorithms Accountable

[Commentary] Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives. Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality. Those computer-generated criminal “risk scores” were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing. The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy. This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results.

[Julia Angwin is a reporter at ProPublica.]