Creating a Culture of Consent for Our Digital Future: A Conversation with Tawana Petty

Benton Institute for Broadband & Society

Wednesday, March 6, 2024

Digital Beat


With support from the Benton Institute’s Opportunity Fund Fellowship, Greta Byrum (Benton Institute Fellow) and Ever Bussey (Just Tech Program Officer) interviewed community organizers, hacktivists, and cybersecurity experts to learn about the threats facing new digital adopters and how their work effectively mitigates these harms. With the consent of the interviewees, we are happy to share these conversations in hopes that they can help you navigate this historic push for internet adoption.

In this conversation, Ever Bussey and Just Tech Fellow Tawana Petty discuss how data extraction and surveillance impact marginalized communities, working with policymakers, and how to address the proliferation of artificial intelligence in an inclusive and consentful manner.

 

Ever Bussey

Ever Bussey (EB): In a couple of sentences, could you describe the online privacy, security, and safety training and preparedness work you do? Who is it directed toward and why?

Tawana Petty (TP): I don't consider myself a person who trains on those types of things like online privacy or cybersecurity. However, through my work with the Detroit Digital Justice Coalition, we convene DiscoTechs, short for Discovering Technology, and we bring in that expertise, whether we're using tools from Equality Labs, bringing in a digital security trainer like Sarah Aoun, or leaning into the expertise of Matt Mitchell. Those are folks who I would consider to have that expertise around online safety and how we secure our digital tools. My work is more so around trying to foster a culture of consent and pushing for the opportunity to opt out of systems, give away less of our data, or take time to read particular privacy policies, knowing that they aren't always super accessible. So, I'm more of a culture shifter than a person who teaches particular tools that you might leverage for digital privacy.

EB: Why is shifting the culture around our relationship with online consent important? Why are you interested in doing that work? And who is it for?

Tawana Petty

TP: I'm interested in doing that work because I recognize how pervasive dominant narratives are. In this work, specifically the data justice and digital justice work that I've been engaged in, a lot of folks have shifted to a mindset of powerlessness where they're like, "Well, they have our stuff anyway" or, "They're already going to take this." I've learned that the more you remind people that we still have a voice in the matter, the more folks tend to push back against systems that are unjust. And it's not a given that your data is going to be extracted and weaponized against you. We still have opportunities to mount a resistance against systems that are harmful. To me, that's the general public. That's anybody who is exposed to any type of system that's extracting our data. However, I do prioritize Black folks.

EB: Why Black folks? Say you're talking to me and I'm one of the people that's just throwing my hands up in defeat. Why should I be concerned about something like that? What do I stand to lose?

TP: There's a lot to lose. I always tell people who say, "I have nothing to hide," or, "They have everything about me anyway," that there's always something to hide. Just think about being in your apartment on any given day and the government or a tech company is able to see everything that you do inside your apartment without your consent. That's the way we have to operate. We have to think about those moments of privacy that are being eroded and how our data and information are being leveraged to target us to be consumers of systems, not producers of systems, and how agencies like law enforcement have leveraged our data in the past to create predictive policing systems or to hyper-surveil communities. We're looking at systems that are systemically racist.

And that's a reason why I prioritize Black people, because we are the most harmed by these systems, whether it's housing, law enforcement, education, or the medical industry. All of these institutions and corporations use massive amounts of data to make decisions about our lives. When they do that, they're not doing it from a racially equitable lens. We have not only a responsibility to push back on those types of systems, but it's the only way that we're going to be seen as fully human and not have systems that become artificially intelligent and hyper-racist on levels that are even more undesirable than the human aspect of racism.

EB: If you had to pick one or two threats to prepare for, which of the following do you think is most important to the people you work with: data privacy, protection against fraud or hacking, online harassment or threats, surveillance and discrimination, or something else?

TP: Oh, my goodness, it’s hard to pick. But I would say surveillance for sure, because the communities that I’m seeing be hyper-surveilled are already the most marginalized. We’re rapidly moving toward a social credit system in which the undesirable populations are the ones being contained within these surveillance mechanisms. And while folks who have greater wealth and are white have more opportunities to move about freely, to access more systems, and to have upward mobility, more and more Black communities are being hyper-surveilled, being squeezed out of systems, or having our civil liberties and human rights eroded.

Surveillance is a very prevalent harm that I am witnessing and can see being exacerbated. Secondly, I would say fraud targeting our senior citizens, who are particularly vulnerable to deepfakes and other types of replications of our voices and data, and who are being spammed in ways that allow access to their bank accounts and other private resources. The proliferation of access to our online data, and the ability to manipulate it using artificial intelligence, is going to be a pretty great risk for those folks who are not literate in these systems, and most times that's going to be our elders in the community.

EB: From your experience working with specific community groups, what should government officials know about what's at stake if internet security and safety concerns are not addressed?

TP: I would just reiterate those two. I think government officials should know there are some models out there. They are not the end-all, be-all, but I do think government agencies should prioritize the principles that are in the Blueprint for an AI Bill of Rights. I know it's not a governing document, but at the very least it includes human alternatives, and it has all of these opportunities to make sure that we're trying to challenge algorithmic discrimination. I think that should become systemic policy throughout agencies, with a particular emphasis on making sure that law enforcement does not have an exemption.

A lot of times we have policies like the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which is very clear on the different types of biases that you can experience within data systems, but none of these have a strong enough stance on the ways that law enforcement is able to freely leverage our data without many restrictions. So, I would say the five principles identified within the Blueprint clearly connect to the ways that the NIST AI Risk Management Framework defines biases, which are not just technical biases but human biases as well. Putting those two together could create legislative policy that makes sure we are not harmed.

NIST gives recommendations to the federal government on how we should think about AI, artificial general intelligence, automated systems, and so on. They've created a framework that offers some very useful ways to identify the types of biases that we should be thinking about when we're trying to challenge algorithmic discrimination. I think it's important to pull that out, apply it to the Blueprint for an AI Bill of Rights, and make it legislative policy.

EB: Say I'm a government official again, and you want to give me an example, something specific and tangible, about how people need to prepare for the impending privacy or surveillance threats as more people get online. Could you share one?

TP: I would use OpenAI and ChatGPT, not even just ChatGPT, but all of them. Now they have Auto-GPT and all these other ones. I would say that we are learning a lesson right now about the harms of extracting mass amounts of data with little transparency and of replicating misinformation without any real policies in place to challenge "hallucinations." I'm struggling with what to say, because I don't want to use the same language they use. But the fact of the matter is, the way ChatGPT is being marketed is that, despite its harms, it's some kind of autonomous, nonhuman, connected system that's making all these mistakes. I think we're sitting at a moment now for policymakers to hold actual individuals accountable for the harms that these systems are creating and not fall into the hype that makes it seem like there's just some rogue technology that's out here lying to everybody without the human contribution from the tech companies who have launched this upon our public.

And so, you're seeing class action lawsuits right now over the lack of transparency in the mass extraction of data, because a lot of people have no idea what data has been used, how it’s being used, who it is being shared with, or what impact it's having on communities. I would say policymakers should really take this moment to dissect what happened and ask how we got this far without regulations in place. We've seen what happens when you just let something be shared amongst millions of people, billions of people, without regulations. We’ve got to stop letting tech companies pay out little dollars, move on with their lives, and continue to create harm. We have to use the framework and the Blueprint to identify which principles are being violated by these companies and hold them accountable.

EB: Who represents a threat and why? So, for instance, regarding data privacy and the extraction of our data, who should we be concerned about and why? Is it corporate, state, or nonstate actors?

TP: It's definitely corporate, state, and nonstate actors, because there is no real regulation. I never thought I would be the person touting regulation, but I've seen the harms of not having solid regulation in place. I'm at the point now where I find myself championing regulation: seeing it come through the Federal Trade Commission (FTC), seeing it come through the Equal Employment Opportunity Commission (EEOC), seeing it come through the Consumer Financial Protection Bureau. These are the places that are in charge of how we experience the workforce, financial systems, and all of the ways that we interact with these automated systems online. So, seeing these agencies step up and say, "Hey, we actually have policies in place that are meant to protect human rights, and we're going to try to leverage some of that power in favor of actual human rights," gives me a little bit of hope.

I'm looking to see more of that, not that I have this blind trust in government, but I'm looking to see more agencies step up and say, "We believe in human rights. This is a violation of human rights. We believe in civil rights. This is a violation of civil rights," and exercise the power that they have to protect us. Everyone is, right now, liable to hurt us, because they're not being regulated or they're not following existing regulations.

EB: What does successful privacy and cybersecurity preparation look like?

TP: I'll channel Una Lee now. It's consent, right? We have what we said in the Consentful Tech curriculum, which to me is going to be the hardest thing to implement. But I'm here for the struggle of trying to push us toward a more consentful world. The five principles in the Blueprint for an AI Bill of Rights can be useful here:

  • Safe and effective systems—you should be protected from unsafe and ineffective systems.
  • Algorithmic discrimination protections—you should not face discrimination by an algorithm and systems should be used and designed in an equitable way.
  • Data privacy—you should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and explanation—you should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Human alternatives, consideration, and fallback—you should be able to opt out where appropriate and have access to a person who can quickly consider and remedy problems you encounter.

If anyone who uses the internet, leverages any kind of tool, or runs a tech company considers those principles, we will be much further along. And of course, I live by the Detroit Digital Justice Coalition’s Digital Justice Principles. Allied Media Projects has principles. We all have these principles that help us guide the way that we engage with communities, neighborhoods, and systems. All government agencies and tech companies should be forced to consider those five principles when they're designing, developing, deploying, or purchasing these systems. A healthy cyber ecosystem would be one that prevents those harms by following those principles.

EB: Would you say that your work is, or could be, to support us, and I'm using "us" in the general sense, in establishing these principles of what is consentful?

TP: I feel it's part of my future mission. As of late, my work has felt like it's been more about trying to shift narratives and politicize folks regarding what's out there that can be leveraged to push for more responsive legislation or more responsive policy on a local level. I've served as a community educator in that regard and have done consultation with policymakers, et cetera. But I would love to be part of the future of helping to implement some of these consentful systems through the leadership and guidance of Una Lee and the Consentful Tech project. I do see that as a passion. Truly consentful systems are going to be our hardest challenge, because agencies don't want to give you back that data. It's worth lots of money.

EB: Considering what you just said, what support do you think you would need to work successfully at scale?

TP: I would like for policy and decision-makers to move beyond partisanship and recognize that these systems impact all of us. There is no blue or red AI harm. We all leverage the internet. Our families have access to the internet, our elders, our young people, all of us. We should all be collectively creating a system that is not harmful, and that should not have a political line. I need policymakers to move. I need them to think more at a human level and less at a political level. And I know the personal is political, but my point is I need them to take these principles very seriously. I need them to push for them to become legislative policy within the agencies. I need them to hold the civil rights agencies and departments within all of these governmental agencies accountable. And I need folks who don't take that seriously to not be elected, or reelected.

EB: I hear that answer. Let’s say that I'm a policymaker again, and I'm meeting with you one-on-one. I want to make a task force and put you at the head of it, or not even make a task force, but I see the work you are doing already and I want to support that. What would you need from me specifically?

TP: See, that's what's complicated. There are so many task forces that exist for these things. It's difficult for me to say anything beyond: I need them to act on the millions of recommendations that they have already received. I need to see legislation actually move and not get stuck because of partisan politics. And so, I'm actually going to share a link with you to policy that exists right now that's just sitting there. But what I'm saying is I need a legislator who is an advocate, who purely listens to community voice and then takes that voice and makes policy that will pass. They need to be somebody who's willing to have nonpartisan or bipartisan cooperation to push through policies that are rooted in these principles, which need to become actual federal legislation. Now, beyond that, I don't think a policymaker can help me as an individual outside of passing a policy.

EB: Maybe policymaker is not the word I should be using. But with the investment that's coming down the pipeline, which is already actually law, certain provisions like the Digital Equity Act necessitate that some of the funding be spent on keeping the internet equitable. Do you think, as a government official, it would be possible for me to support you in accessing that funding?

TP: Yes, that's actually a conversation we've been having at the Detroit Digital Justice Coalition. What's happening with this $1.5 billion that's supposed to be making its way through these communities? How can local grassroots organizations that have been committed to this work be seeded with the resources to do on-the-ground engagement and hold real community trainings on how to be consentful, how to secure your data, how to engage with the political system, and how to make a public comment? How do you pursue legislation? How do you ensure that the legislation you're pursuing makes it to passage? I definitely think that money should be leveraged to seed those organizations that are committed to doing that work but don’t have the funds to do it.


Ever Bussey is a social researcher and creative media maker. The nature of their practice brings the intricacies of human social relationships into focus through storytelling and collective world-building. Ever was introduced to digital media making through Allied Media Projects, where they learned to apply a social justice lens to their creative practice. They were able to expand their background in creative media and Participatory Action Research while pursuing graduate studies at The New School, where they received their MA in media studies. Outside of creative storytelling, Ever teaches a graduate seminar, titled Ethnography and Design, and facilitates a thesis studio course for MFA design and technology students at Parsons School of Design.

Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. Her work focuses on racial justice, equity, privacy, and consent. She is a 2023-2025 Just Tech Fellow with the Social Science Research Council and a 2024 National LIO Yearlong Fellow with the Rockwood Leadership Institute, and she serves on the CS for Detroit Steering Committee. Petty also serves on the board of Signal-Return.

The Benton Institute for Broadband & Society is a non-profit organization dedicated to ensuring that all people in the U.S. have access to competitive, High-Performance Broadband regardless of where they live or who they are. We believe communication policy - rooted in the values of access, equity, and diversity - has the power to deliver new opportunities and strengthen communities.


© Benton Institute for Broadband & Society 2023. Redistribution of this email publication - both internally and externally - is encouraged if it includes this copyright statement.

