[syndicated profile] eff_feed

Posted by Josh Richman

There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. The most effective tech critics have had transformative, positive online experiences, and now unflinchingly call out the surveilled, commodified, enshittified landscape that exists today because they know there has been – and still can be – something better.


(You can also find this episode on the Internet Archive and on YouTube.)

That’s what drives Molly White’s work. Her criticism of the cryptocurrency and technology industries stems from her conviction that technology should serve human needs rather than mere profits. Whether it’s blockchain or artificial intelligence, she’s interested in making sure the “next big thing” lives up to its hype, and more importantly, to the ideals of participation and democratization that she experienced. She joins EFF’s Cindy Cohn and Jason Kelley to discuss working toward a human-centered internet that gives everyone a sense of control and interaction – open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate. 

In this episode you’ll learn about:

  • Why blockchain technology has built-in incentives for grift and speculation that overwhelm most of its positive uses
  • How protecting open-source developers from legal overreach, including in the blockchain world, remains critical
  • The vast difference between decentralization of power and decentralization of compute
  • How Neopets and Wikipedia represent core internet values of community, collaboration, and creativity
  • Why Wikipedia has been resilient against some of the rhetorical attacks that have bogged down media outlets, but remains vulnerable to certain economic and political pressures
  • How the Fediverse and other decentralization and interoperability mechanisms provide hope for the kind of creative independence, self-expression, and social interactivity that everyone deserves  

Molly White is a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech in her independent publication, Citation Needed. She also runs the websites Web3 is Going Just Great, where she highlights examples of how cryptocurrencies, web3 projects, and the industry surrounding them are failing to live up to their promises, and Follow the Crypto, where she tracks cryptocurrency industry spending in U.S. elections. She has volunteered for more than 15 years with Wikipedia, where she serves as an administrator (under the name GorillaWarfare) and functionary, and previously served three terms on the Arbitration Committee. She's regularly quoted or bylined in news media; speaks at major conferences including South by Southwest and Web Summit; guest lectures at universities including Harvard, MIT, and Stanford; and advises policymakers and regulators around the world.


What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

MOLLY WHITE: I was very young when I started editing Wikipedia. I was like 12 years old, and when it said "the encyclopedia that anyone can edit," "anyone" meant me, and so I just sort of started contributing to articles and quickly discovered that there was this whole world behind Wikipedia that a lot of us really don't see, where very passionate people are contributing to creating a repository of knowledge that anyone can access.
And I thought, I immediately was like, that's brilliant, that's amazing. And you know that motivation has really stuck with me since then, just sort of the belief in open knowledge and open access I think has, you know, it was very early for me to be introduced to those things and I, I sort of stuck with it, because it became, I think, such a formative part of my life.

CINDY COHN: That’s Molly White talking about a moment that is hopefully relatable to lots of folks who think critically about technology – that moment when you first experienced how, sometimes, the internet can feel like magic.
I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN: The idea behind this show is that we're trying to make our digital lives BETTER. A big part of our job at EFF is to envision the ways things can go wrong online -- and to jump into action to help when things then DO go wrong.
But this show is about optimism, hope and solutions – we want to share visions of what it looks like when we get it right.

JASON KELLEY: Our guest today is Molly White. She's a journalist and web engineer, and one of the strongest voices thinking and speaking critically about technology – specifically, she's been an essential voice on cryptocurrency and what people often call Web3, usually a reference to blockchain technologies. She runs an independent online newsletter called Citation Needed, and at her somewhat sarcastically named website "Web3 is Going Just Great" she chronicles the latest, often alarming news: scams and schemes that make those of us working to improve the internet pull our hair out.

CINDY COHN: But she's not a pessimist. She comes from a deep love of the internet, but she's also someone who holds the people building our digital world to account, with clear-eyed explanations of where things are going wrong, but also of the potential that exists if we can do it right. Welcome, Molly. Thanks for being here.

MOLLY WHITE: Thank you for having me.

CINDY COHN: So the theme of our show is what does it look like if we start to get things right in the digital world? Now you recognize, I believe, the value of blockchain technologies, what they could be.
But you bemoan how far we are from that right now. So let's start there. What does the world look like if we start to use the blockchain in a way that really lives up to its potential for making things better online?

MOLLY WHITE: I think that a lot of the early discussions about the potential of the blockchain were very starry-eyed and sort of utopian, much in the way that early discussions of the internet were. You know, they promised that blockchains would somehow democratize everything we do on the internet, you know, make it more available to anyone who wanted to participate.
It would provide financial rails that were more equitable and had fewer rent seekers and intermediaries taking fees along the way. A lot of it was very compelling.
But I think as we've seen the blockchain industry, such as it is now, develop, we've seen that this technology brings with it a set of incentives that are incredibly challenging to grapple with. And it's made me wonder, honestly, whether blockchains can ever live up to the potential that they originally claimed, because those incentives have seemed to be so destructive that much of the time any promises of the technology are completely overshadowed by the negatives.

CINDY COHN: Yeah. So let's talk a little bit about those incentives, 'cause I think about that a lot as well. Where do you see those incentives popping up and what are they?

MOLLY WHITE: Well, any public blockchain has a token associated with it, which is the cryptocurrency that people are trading around, speculating on, you know, purchasing in hopes that the number will go up and they will make a profit. And in order to maintain the blockchain, you know, the actual system of records that is storing information or providing the foundation for some web platform, you need that cryptocurrency token.
But it means that whatever you're trying to do with the blockchain also has this auxiliary factor to it, which is the speculation on the cryptocurrency token.
And so time and time again, watching this industry and following projects, claiming that they will do wonderful, amazing things and use blockchains to accomplish those things, I've seen the goals of the projects get completely warped by the speculation on the token. And often the project's goals become overshadowed by attempts to just pump the price of the token, in often very inauthentic ways or in ways that are completely misaligned with the goals of the project. And that happens over and over and over again in the blockchain world.

JASON KELLEY: Have you seen that not happen with any project? Is there any project where you've said, wow, this is actually going well, it's like a perfect use of this technology? Or, because you focus on sort of the problems, is that just not something you've come across?

MOLLY WHITE: I think where things work well is when those incentives are perfectly aligned, which is to say that if there are projects that are solely focused on speculation, then the blockchain speculation works very well. Um, you know, and so we see people speculating on Bitcoin, for example, and they're not hoping necessarily that the Bitcoin ledger itself will do anything.
The same is true with meme coins. People are speculating on these tokens that have no purpose behind them besides, you know, hoping that the price will go up. And in that case, people sort of know what they're getting into, and it can be lucrative for some people. And for the majority of people it's not, but you know, they sort of understand that going into it, or at least you would hope that they do.

CINDY COHN: I think of the blockchain as, you know, when they say this'll go down on your permanent record, this is the permanent record.

MOLLY WHITE: That’s usually a threat.

CINDY COHN: Yeah.

MOLLY WHITE: I try to point that out as well.

CINDY COHN: Now, you know, look, to be clear, we work with people who do international human rights work. Saving the records before a population gets destroyed, in a way that can't be destroyed by the people in power, is one of the kind of classic cases where you want a secure, permanent place to store stuff. And so there's, you know, there's that piece. So where do you point people when you're thinking about, like, okay, what if you want a real permanent record but you don't want all the dreck of the cryptocurrency blockchain world?

MOLLY WHITE: Well, it really depends on the project. And I really try to emphasize that because I think a lot of people in the tech world come at things somewhat backwards, especially when there is a lot of hype around a technology in the way that there was with blockchains and especially Web3.
And we saw a lot of people essentially saying, I wanna do something with a blockchain. Let me go find some problem I can solve using a blockchain, which is completely backwards to how most technologists are used to addressing problems, right? They're faced with a problem. They consider possible ways to solve it, and then they try to identify a technology that is best suited to solving that problem.
And so, you know, I try to encourage people to reverse the thinking back to the normal way of doing things where, sure, you might not get the marketing boost that Web3 once brought in. And, you know, it certainly was useful to attract investors for a while to have that attached to your project, but you will likely end up with a more sustainable product at the end of the day, because you'll have something that works and is using technology that is well suited to the problem. And so, you know, when it comes to where I would direct people other than blockchains, it very much depends on the problem that they're trying to solve.
For example, if you don't need to worry about having a system that is maintained by a group of people who don't trust each other, which is the blockchain's sort of stated purpose, then there are any number of databases that you can use that work in the more traditional manner, where you rely on perhaps a group of trusted participants or a system like that.
If you're looking for a more distributed or decentralized solution, there are peer-to-peer technologies that are not blockchain based that allow this type of content sharing. And so, you know, like I said, it really just depends on the use case more than anything.

JASON KELLEY: Since you brought up decentralization, this is something we talk about a lot at EFF in different contexts, and I think a lot of people saw blockchain and heard decentralized and said, that sounds good.
We want less centralized power. But where do you see things like decentralization actually helping, if this kind of Web3 tech isn't a place where it's necessarily useful, or where the technology itself doesn't really solve a lot of the problems that people have said it would?

MOLLY WHITE: I think one of the biggest challenges with blockchains and decentralization is that when a lot of people talk about decentralization, they're talking about the decentralization of power, as you've just mentioned, and in the blockchain world, they're often talking about the decentralization of compute, which is not necessarily the same thing, and in some cases is very much different.

JASON KELLEY: If you can do a rug pull, it's not necessarily decentralized. Right?

MOLLY WHITE: Right. Or if you're running a blockchain and you're saying it's decentralized, but you run all of the validators or the miners for that blockchain, then, you know, the computers may be physically located all over the world, or, you know, decentralized in that sort of sense. But you control all the power, and so you do not have a truly decentralized system in that manner of speaking.
And I think a lot of marketing in the crypto world sort of relied on people not considering the difference between those two things, because there are a lot of crypto projects that, you know, use all of the buzzwords around decentralization and democratization and, you know, that type of thing, that are very, very centralized, very similar to the traditional tech companies. All of Facebook's servers may be located physically all around the world, but no one's under the impression that Facebook is a decentralized company, right? And so I think that's really important to remember: there's nothing about blockchain technology specifically that requires a blockchain project to be decentralized in terms of power.
It still requires very intentional decision making on the parts of the people who are running that project to decentralize the power and reduce the degree to which any one entity can control the network. And so I think that there is this issue where people sort of see blockchains and they think decentralized, and in reality you have to dig a lot deeper.

CINDY COHN: Yeah, EFF has participated in a couple of the sanctions cases and the prosecutions of people who have developed pieces of the blockchain world, especially around mixers. Tornado Cash is one that we participated in, and I think this is an area where we have a kind of similar view about the role of the open source community and kind of the average coder, and when their responsibility should create liability and when they should be protected from liability.
And we've tried to continue to weigh in on these cases to make sure the courts don't overstep, right? Because the prosecution gets so mad – you're talking about a lot of money laundering and things like that – that, you know, the prosecution just wants to throw the book at everybody who was ever involved in these kinds of things. And we're trying to create this space where, you know, a coder who just participates in a GitHub repository developing some piece of code doesn't have a liability risk.
And I think you've thought about this as well, and I'm wondering, do you see the government overstepping, and do you think it's right to continue to think about that overstepping?

MOLLY WHITE: Yeah, I mean, I think those are the types of questions that are really important when it comes to tackling problems around blockchains and cryptocurrencies and the financial systems that are developing around these products.
You have to be really cautious that, you know, just because a bad thing is happening, you don't come in with a hammer that is, you know, much too big and start swinging it around and hitting sort of anyone in the vicinity. Because, you know, I think there are some things that should absolutely be protected, like, you know, writing software, for example.
A person who writes software should not necessarily be liable for everything that another person then goes and does with that software. And I think that's something that's been fairly well established through, you know, cryptography cases, for example, where people writing encryption algorithms and software to do strong encryption should not be considered liable for whatever anyone encrypts with that technology. We've seen it with virus writers, you know, it would be incredibly challenging for computer scientists to research and sort of think about new viruses and protect against vulnerabilities if they were not allowed to write that software.
But, you know, if they're not going and deploying this virus on the world or using it to, you know, do a ransomware attack, then they probably shouldn't be held liable for it. And so similar questions are coming up in these cryptocurrency cases or these cases around cryptocurrency mixers that are allowing people to anonymize their transactions in the crypto world more adequately.
And certainly that is heavily used in money laundering and in other criminal activities that are using cryptocurrencies. But simply writing the software to perform that anonymization is not itself, I think, a crime. Especially when there are many reasons you might want to anonymize your financial transactions that are otherwise publicly visible to anyone who wishes to see them, and, you know, can be linked to you if you are not cautious about your cryptocurrency addresses or if you publish them yourself.
And so, you know, I've tried to speak out about that a little bit because I think a lot of people see me as, you know, a critic of the cryptocurrency world and the blockchain world, and I think it should be banned or that anyone trading crypto should be put in jail or something like that, which is a very extreme interpretation of my beliefs and is, you know, absolutely not what I believe. I think that, you know, software engineers should be free to write software and then if someone takes that software and commits a crime with it, you know, that is where law enforcement should begin to investigate. Not at the, you know, the software developer's computer.

CINDY COHN: Yeah, I just think that's a really important point. I think it's easy, especially because there's so much fraud and scam and abuse in this space, to try to make sure that we're paying attention to where are we setting the liability rules because even if you don't like cryptocurrency or any of those kinds of things, like protecting anonymity is really important.
It's kind of a function of our times right now that people are either all one or all the other. And I really have appreciated, as you've kind of gone through this, your thinking about a position that protects the things that we need to protect, even if we don't care about them in this context, because we might in another (and law, of course, is kind of made up of things that get set in one context and then applied in another), while at the same time being, you know, kind of no-holds-barred critical of the awful stuff that's going on in this world.

JASON KELLEY: Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Molly White.

JASON KELLEY: Some of the technologies you're talking about, when sort of separated out from, maybe, the hype or the negatives that have, like, overtaken the story – things like peer-to-peer file sharing, cryptography, I mean, even, let's say, being able to send money to someone, you know, with your phone, if you want to call it that – are pretty incredible at some level, you know?
And you gave a talk in October that was about a time that you felt like the web was magic, and you brought up a website that I'm gonna pretend that I've never heard of, so you can explain it to me, called Neopets. And I just wanna, for the listeners, could you explain a little bit about what Neopets was, and sort of how it helped inform you about the way you want the web to work, and things like that?

MOLLY WHITE: Yeah, so Neopets was a kids' game when I was a kid. You could adopt these little cartoon pets and you could, like, feed them and change their colors and, you know, play little games with them.

JASON KELLEY: Like Tamagotchis a little bit,

MOLLY WHITE: A little bit, yeah. There was also this aspect to the website where you could edit your user page and create little webpages in your account. It was pretty freewheeling, you know, you could edit the CSS and the HTML and you could make your own little website, essentially. And as a kid, that was really my first exposure to the idea that the internet and these websites that I was seeing, you know, sort of for the first time, were not necessarily a read-only operation. You know, I was used to playing maybe little games on the internet, whatever kids were doing on the internet at the time.
And Neopets was really my first realization that I could add things to the internet or change the way they looked or interact with it in a way that was, you know, very participatory. And that later sort of turned into editing Wikipedia and then writing software and then publishing my writing on the web.
And that was really magical for me because it sort of informed me about the platform that was in front of me and how powerful it was to be able to, you know, edit something, create something, and then the whole world could see it.

JASON KELLEY: There's a really common critique right now that young people are sort of learning only bad things online, or, like, overusing the internet. And I mean, first of all, I think that's obviously not true. You know, every circumstance is different. But do you see places where, like, the way you experienced the internet growing up is still happening for young people?

MOLLY WHITE: Yeah, I mean, I think a lot of those, as you mentioned, I think a lot of those critiques are very misguided and they miss a lot of the incredibly powerful and positive aspects of the internet. I mean, the fact that you can go look something up and learn something new in half a second, is revolutionary. But then I think there are participatory examples, much like what I was experiencing when I was younger. You know, people can still edit Wikipedia the way that I was doing as a kid. That is a very powerful thing to do when you're young, to realize that knowledge is not this thing that is handed down from on high from some faceless expert who wrote history, but it's actually something that people are contributing to and improving constantly. And it can always be updated and improved and edited and shared, you know, in this sort of free and open way. I think that is incredibly powerful and is still open to people of any age who are, you know, able to access the site.

JASON KELLEY: I think it's really important to bring up some of these examples, because something I've been thinking about a lot lately, as these critiques and attacks on young people using the internet have sort of grown and even, you know, entered the state and congressional level in terms of bills, is that a lot of the people making these critiques, I feel like, never liked the internet to begin with. They don't see it as magic in the way that I think you do, and that, you know, a lot of our listeners do.
And it's a view that is a problem, specifically because I feel like you have to have loved the internet before you can hate it. You know, like, you need to really be able to critique the specific things rather than just sort of throw out the whole thing. And one of the things, you know, I like about the work that you do is that you clearly have this love for technology and for the internet, and that lets you, I think, find the problems. Other people can't see through into those specific individual issues, and so they just wanna toss the whole thing.

MOLLY WHITE: I think that's really true. I think there is this weird belief, especially around tech critics, that tech critics hate technology. It's so divorced from reality, because you don't see that in other worlds – you know, art critics are never told that they just hate all art. I think most people understand that art critics love art, and that's why they are critics.
But with technology critics, there's sort of this weird perception of us as people who just hate technology and wanna tear it all down, when in reality, you know, I know a lot of tech critics, and most of us, if not all of us that I can think of, come from a, you know, a background of loving technology, often from a very young age. And it is because of that love, and the desire to see technology continue to allow people to have those transformative experiences, that we criticize it.
And that's, for some reason, just a hard thing, I think for some people to wrap their minds around.

JASON KELLEY: I want to talk a little bit more about Wikipedia. The whole sort of organization, and what it stands for and what it does, has been under attack a lot lately as well. Again, I think that, you know, it's a lot of people misunderstanding how it works, and, um, you know, maybe finding some realistic critiques of the fact that, you know, it's individually edited, so there's going to be some bias in some places and things like that, and sort of extrapolating out from a good-faith argument to this other place. So I'm wondering if you've thought about how to protect Wikipedia, how to talk about it – you know, your experience with it has made you understand how it works better than most people.
And also just generally, you know, how it can be used as a model for the way that the internet should be, or the way we can build a better internet.

MOLLY WHITE: I think this ties back a little bit to the decentralization topic where Wikipedia is not decentralized in the sense that, you know, there is one company or one nonprofit organization that controls all the servers. And so there is this sort of centralization of power in that sense. But it is very decentralized in the editing world where there is no editorial board that is vetting every edit to the site.
There are, you know, numerous editors that contribute to any one article and no one person has the final say. There are different language versions of Wikipedia that all operate somewhat independently. And because of that, I think it has been challenging for people to attack it successfully.
Certainly there has been no shortage of those attacks. Um, but you know, it's not a company that someone could buy out and take over, in ways that we've seen, for example, Elon Musk do with Twitter. There is no sort of editorial board that can be targeted and pressured to change the language on the site. And, you know, I think that has helped to make Wikipedia somewhat resilient, in ways that we've seen news organizations or other media publications struggle with recently, where, you know, they have faced pressure from their buyers – the people who own those organizations – to be sure.
They've faced, you know, threats from the government in some cases. And Wikipedia is structured somewhat differently, in ways that I think help it remain more protected from those types of attacks. But, you know, I am cautious to note that there are still vulnerabilities.
The attacks on Wikipedia need to be vociferously opposed. And so we have to be very cautious about this, because the incredible resource that Wikipedia is, is something that doesn't just sort of happen in a vacuum, you know, outside of any individual's actions.
It requires constant support, constant participation, constant editing. And so, you know, it's certainly a model to look to in terms of how communities can organize and contribute to, um, you know, projects on the internet. But it's also something that has to be very carefully maintained.

CINDY COHN: Yeah, I mean, this is just a lesson for our times, right? You know, there isn't a magical tech that can protect against all attacks. And there isn't a magical, you know, 501(c)(3) nonprofit that can be resistant against all the attacks. And we're in a time where they're coming fast and furious against our friends at Wikimedia, along with a lot of other things.
And I think the onus is on communities to show up and, you know, not just let these things slide, or think that, you know, the internet might be magic in some ways, but it's not magic in these ways. Like, we have to show up and fight for them. Um, I wanted to ask you a little bit about kind of big tech's embrace of AI.
Um, you've been critical of it. We've been critical of it as well in many ways. And I wanna hear kind of your concerns about it, and kind of how you see AI's, you know, role in a better world. But, you know, also think about the ways in which it's not working all that well right now.

MOLLY WHITE: I generally don't have this sort of hard and fast view of AI is good or AI is bad; it really comes down to how that technology is being used. And I think the widespread use of AI in ways that exploit workers and creatives and those who have decided to publish something online, for example, and did not expect that publication to be used by big tech companies that are then profiting off of it – that is incredibly concerning. As well as the ways that AI is marketed to people. Again, this sort of mirrors my criticisms surrounding the crypto industry, but a lot of the marketing around AI is incredibly misleading. Um, you know, they're making promises that are not borne out in reality.
They are selling people a product that will lie to you, you know, that will tell you things that are inaccurate. So I have a lot of concerns around AI, especially as we've seen it being used in the broadest ways, and sort of by the largest companies. But you know, I also acknowledge that there are ways in which some of this technology has been incredibly useful. And so, you know, it is one of these things where it has to be viewed with nuance, I think, around the ways it's being developed, the ways it's being deployed, the ways it's being marketed.

CINDY COHN: Yeah, there is a kind of eerie familiarity between the hype around AI and the hype around crypto. It's just kind of weird. It feels like we're going through, you know, Groundhog Day, like we're living through another hype cycle that feels like the last. I think, you know, for us at EFF, we've really tried to focus a lot on governmental use of AI systems, and AI systems that are trying to predict human behavior, right?
The digital equivalent of phrenology, right? You know, let us do sentiment analysis on the things that you said, and that'll tell us whether you're about to be a criminal or, you know, the right person for the job. I think those are the places that we've really identified, um, as, you know, problematic on a number of levels. You know, number one, it doesn't work nearly as well as…

MOLLY WHITE: That is a major problem!

CINDY COHN: It seems like that ought to be number one, right? And this, you know, especially spending your time in Wikipedia, where you're really working hard to get it right, right? And you see the kind of back and forth of the conversation. But the central thing about Wikipedia is that it's trying to actually give you truthful information, and then you're watching the world get washed over with these AI assistants that are really not at all focused on getting it right, you know, or really focused on predicting the next word, or however that works, right? Like, um, it's gotta be kind of strange from where you sit, I suspect, to see this.

MOLLY WHITE: Yeah, it's very frustrating. And, you know, I like to think we lived in a world at one time where people wanted to produce technology that helped people, technology that was accurate, technology that worked in the ways that they said it did. And it's been very weird to watch, especially over the last few years, those goals sort of degrade: well, maybe it's okay if it gets things wrong a lot, you know, or maybe it's okay if it doesn't work the way that we've said it does, or maybe never possibly can.
That's really frustrating to watch as someone who, again, loves technology and loves the possibilities of technology, to then see people just sort of using technology to deliver things that are, you know, making things worse for people in many ways.

CINDY COHN: Yeah, so I wanna flip it around a little bit. You, like EFF, kind of spend a lot of time on all the ways that things are broken. So how do you think about how to get to a place where things are not broken? Or how do you even just keep focusing on a better place that we could get to?

MOLLY WHITE: Well, like I said, you know, a lot of my criticism really comes down to the industries and the sort of exploitative practices of a lot of these companies in the tech world. And so, to the extent possible, separating myself from those companies and from their control has been really powerful, to sort of regain some of that independence that I remember the web once enabling, where, you know, if you had your own website, you could kind of do anything you wanted. You didn't have to stay within the 280 characters if you had an idea, you know, and you could publish, uh, you know, a video that was longer than 10 minutes, or whatever it might be.
So sort of returning to some of those ideals, around creating my own spaces on the web where I have that level of creative freedom, and certainly freedom in other ways, has been very powerful. And then finding communities of people who believe in those same things. I've taken a lot of hope in the Fediverse, and the communities that have emerged around those types of technologies and projects, where, you know, they're saying maybe there is an alternative out there to, you know, highly centralized big tech social media being what everyone thinks of as the web. Maybe we could create different spaces outside of that walled garden, where we all have control over what we do and say, and who we interact with, and we set the terms on which we interact with people.
And sort of push away the belief that, you know, a tech company needs to control an algorithm to show you what it is that you want to see, when in reality, maybe you could make those decisions for yourself, or choose the algorithm, or, you know, design a system for yourself using the technologies that are available to everyone, but have been sort of walled in by many of the large players on the web these days.

CINDY COHN: Thank you, Molly. Thank you very much for coming on and, and spending your time with us. We really appreciate the work that you're doing, um, and, and the way that you're able to boil down some pretty complicated situations into, you know, kind of smart and thoughtful ways of reflecting on them. So thank you.

MOLLY WHITE: Yeah. Thank you.

JASON KELLEY: It was really nice to talk to someone who has that enthusiasm for the internet. You know, I think all of our guests probably do, but when we brought up Neopets, that excitement was palpable, and I hope we can find a way to get more of that enthusiasm back.
That's one of the things I'm taking away from that conversation: more people need to be enthusiastic about using the internet, whatever that takes. What did you take away from chatting with Molly that we need to do differently, Cindy?

CINDY COHN: Well, I think that the thing that made the enthusiasm pop in her voice was the idea that she could control things. That she was participating, and participating not only in Neopets, but on Wikipedia as well, right?
That she could be part of trying to make truth available to people, and recognizing that truth in some ways isn't an endpoint, it's an evolving conversation among people trying to keep getting it right.
That doesn't mean there isn't any truth, but it does mean that there is an open community of people who are working towards that end. And, you know, you hear that enthusiasm as well. It's, you know, the more you put in, the more you get out of the internet. Trying to make that a more common experience of the internet, that things aren't just handed to you or taught to you, but really it's a two-way street: that's where the enthusiasm came from for her, and I suspect for a lot of other people.

JASON KELLEY: Yeah, and what you're saying about truth, I think she sort of applies this in a lot of different ways. Even specific technologies, and I think most people realize this, but you have to say it again and again, aren't necessarily right or wrong for everything. You know, AI isn't right or wrong for every scenario. Things are always evolving, and how we use them is evolving; just because something is correct today doesn't mean it will be correct tomorrow. And there's just a sort of nuance and awareness that she has about how these different things work and when they make sense, that I hope we can continue to see in more people, instead of just a sort of, uh, flat across-the-board dislike of, you know, quote-unquote the internet or quote-unquote social media and things like that.

CINDY COHN: Yeah, or the other way around, like whatever it is, there's a hype cycle and it's just hyped over and over again. And that she's really charting a middle ground in the way she writes and talks about these things that I think is really important. I think the other thing I really liked was her framing of decentralization as thinking about decentralizing power, not decentralizing compute, and that difference being something that is often elided or not made clear.
But that can really help us see, you know, where decentralization is happening in a way that's empowering people, making things better. You have to look for decentralized power, not just decentralized compute. I thought that was a really wise observation.

JASON KELLEY: And I think that could be applied to so many other things, where a term like "decentralized" may be used because it's accessible from everywhere, or something like that, right? And it's just, these terms have to be examined. And it sort of goes to her point about marketing, you know, you can't necessarily trust the way the newest fad is being described by its purveyors.
You have to really understand what it's doing at the deeper level, and that's the only way you can really determine if it's really decentralized, if it's really interoperable, if it's really, you know, whatever the new thing is. AI…

CINDY COHN: Mm-hmm. Yeah, I think that's right. And you know, luckily for us, we have Molly, who digs deep into the details of this for so many technologies, and I think we need to, you know, support and defend all the people who are doing that kind of careful work for us, because we can't do all of it, you know, we're humans.
But having people who will do that for us in different places, who are trusted and, you know, whose agenda is clear and understandable, that's kind of the best we can hope for. And the more of that we build and support and create spaces for on the, you know, uncontrolled open web, as opposed to the controlled tech giants and walled gardens, as she said, I think the better off we'll be.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis.
And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons attribution 4.0 international and includes the following music licensed Creative Commons 3.0 unported by its creators: Drops of H2O, the filtered water treatment, by J. Lang. Additional beds by Gaetan Harris.

Panels needing panelists

May. 20th, 2025 10:40 am
[personal profile] boxofdelights posting in [community profile] wiscon
These #WisCon panels still need panelists:

We've lost some people and are looking for panelists on the below:

May 23 Fri 7:00-7:45pm | Breathe the Pressure: Burnout and Recovery for Creatives

May 24 Sat 10:00-10:45am | Pathways to Publishing

May 24 Sat 4:00-4:45pm | Small Press Publishing

Please email Panels@WisCon.sf3.org if you are able to participate!
[syndicated profile] eff_feed

Posted by India McKinney

Today, the Senate Judiciary Committee is holding a hearing titled “Defending Against Drones: Setting Safeguards for Counter Unmanned Aircraft Systems Authorities.” While the government has a legitimate interest in monitoring and mitigating drone threats, it is critical that those powers are narrowly tailored. Robust checks and oversight mechanisms must exist to prevent misuse and to allow ordinary, law-abiding individuals to exercise their rights. 

Unfortunately, as we and many other civil society advocates have highlighted, past proposals have not addressed those needs. Congress should produce well-balanced rules that address all these priorities, not grant de facto authority to law enforcement to take down drone flights whenever they want. Ultimately, Congress must decide whether drones will be a technology that mainly serves government agencies and big companies, or whether it might also empower individuals. 

To make meaningful progress in stabilizing counter unmanned aerial system (“C-UAS”) authorities and addressing emerging issues, Congress should adopt a more comprehensive approach that considers the full range of risks and implements proper safeguards. Future C-UAS legislation should include the following priorities, which are essential to protecting civil liberties and ensuring accountability:

  • Include strong and explicit safeguards for First Amendment-protected activities
  • Ensure transparency and require detailed reporting
  • Provide due process and recourse for improper counter-drone activities 
  • Require C-UAS mitigation to involve least-invasive methods
  • Maintain reasonable retention limits on data collection
  • Maintain sunset for C-UAS powers as drone uses continue to evolve

Congress can—and should—address public safety concerns without compromising privacy and civil liberties. C-UAS powers should be granted only with the clear limits outlined above, to help ensure that counter-drone authorities are wielded responsibly.

The American Civil Liberties Union (ACLU), Center for Democracy & Technology (CDT), Electronic Frontier Foundation (EFF), and Electronic Privacy Information Center (EPIC) shared these concerns with the Committee in a joint Statement For The Record.

[syndicated profile] aichildlit_feed

Posted by Jean Mendoza

 

Dad, Is It Time to Gather Mint? Celebrating the Seasons
Written by Tyna Legault Taylor (Attawapiskat First Nation)
Illustrated by Michelle Dao (Vietnamese Canadian)
Published in 2025
Published by Portage and Main (Highwater Press)
Reviewer: Jean Mendoza
Review Status: Highly Recommended

blaget hiyt toxwum/Herring to Huckleberries
Written by ošil Betty Wilson (Tla'anim)
Illustrated by Prashant Miranda (not Native)
Published in 2025
Published by Portage and Main (Highwater Press)
Reviewer: Jean Mendoza
Review Status: Highly Recommended 

Are you a teacher who has used a curriculum about Native Americans? Debbie has started analyzing one on AICL. Take a look! Her comments can help you think critically about whatever curriculum you use. Her analysis got me thinking about a phrase I've seen in non-fiction books to describe Indigenous societies before colonization: "hunter-gatherer." It's used to differentiate between groups that mainly "lived off the land" and those that grew crops or kept animals. It's not usually explained very well.

As a kid, I pictured "hunter-gatherers" wandering in the woods hoping to come across something edible to collect or kill. It seemed like an exhausting life, always having to find the next meal. In my mind, people who grew gardens and row crops didn't need to go looking for food. (Though I figured they'd still pick huckleberries if they found some, because who wouldn't?)  Nothing in textbooks or children's literature I saw dismantled those mistaken ideas, and it's embarrassing how long it took me to replace them with a more accurate picture. 

That's one reason it's good to see two 2025 picture books from Portage and Main Press that give a clear, respectful sense of what's involved in collecting food from the land (and water) to feed families and the community. Observation, intergenerational knowledge, ingenuity, and hard work kept the people fed, and continue to do so today. 

Dad, Is It Time to Gather Mint? is for ages 5-9. The protagonist is a contemporary Native child, Joshua. Joshua learns from his dad about when, where, and how to find foods that have sustained his Omushkego Cree and Anishinaabe communities over time. Plants, fish, and mammals couldn't just be taken all year round. What Joshua likes most of all is picking mint so his mom can make mint iced tea, but he has to wait until summer for that. Meanwhile, he and his dad go on walks each season, and dad shows him what to look for. 

I like the way several two-page spreads show what Joshua learns, and what he does to help. Here are the pages about fall:


On the left, Dad explains the peoples' seasonal food sources. On the right, Joshua has hooked a whitefish, one of the foods of fall.

The illustrations show his delighted engagement as he soaks in what Dad teaches him. But what he wants most of all is to be able to gather wild mint for tea. Spoiler alert: that day comes at last.

Herring to Huckleberries is suitable for slightly older children; I would think middle grades and up. It follows the experience of a girl from the Tla'amin Nation, one of the Coast Salish nations in what is currently known as British Columbia. In her author's note, Betty Wilson comments that the book shares her memories of growing up in the 1950s, when, during each season, she would go with her grandparents and other relatives to collect foods that would sustain them all year. 

Throughout the book, text and pictures give a sense of how large the gathering operations could be, and how much knowledge was involved. Yes, the people caught herring when the fish showed up in the bay -- and they used a specially designed rake. They also sank cedar branches in the water so the herring would lay eggs on them. They then gathered the branches full of eggs, peeled the eggs off to eat fresh, or hung the full branches on drying racks so the eggs would dry, to be used later. They dried the whole fish, too.

Children reading these two books will never be burdened with the inadequate ideas I had about what it means to "hunt and gather"! Both Dad and Herring emphasize the science, community effort, and complexity of the knowledge involved. Dad features a map of the region where Joshua's family lives, and a recipe for mint tea. The back matter of Herring includes an eagle's-eye-view map of the Tla'amin homelands, and detailed descriptions of 12 of their important traditional foods.

I also love that both books integrate specific Indigenous languages of the protagonists. In Dad, selected words in Anishinaabemowin and Omushkegomowin are part of the text, with English equivalents in the margins. Herring is a dual language book; the story is told in the author's native Ayahjutham and English on the same 2-page spreads, like this:

Some young readers will be intrigued to see that the orthography of Ayahjutham is very different from English. Here's how the name of that language is written:


At AICL, we're strong believers that Native children benefit from knowledge about their Indigenous language, and that all children benefit from knowing about languages other than what they speak at home! Both Dad and Herring support this by providing vocabulary lists and pronunciation guides in the back matter. 

And teacher guides for both books are available from Portage and Main.

There are so many opportunities for conversations with children here, whether they're Native or not. Younger kids may be very surprised that not everybody has gotten their food from Safeway or Jewel. They may want to talk about foods they would be willing to work for, as Joshua and ošil do. What do they notice about the variety of foods shown in the books -- does it create a balanced diet? Do they or any of their family members fish, hunt, trap, or collect wild foods like mushrooms or berries? Is the activity random, or does a person have to "know what they're doing" in order to have success? What's it like for them -- do they enjoy it, or feel like part of something important? Is it even possible to find wild foods where they live now? How could they find out what wild food resources would have been available in their area 70 years ago, or longer? Betty Wilson comments that many of the foods her community gathered in the 1950s are no longer available. What might have happened to those foods? 

Food sovereignty is an increasingly important topic for Native people. High schoolers may know that US government officials provided or withheld food to keep Native communities "in line". They may have heard that settlers' foods, heavy on sugar and carbohydrates, have created health problems that Native Nations now deal with. Food sovereignty addresses those issues, and more. Picture books like Dad, Is It Time to Gather Mint? and Herring to Huckleberries can help older kids explore the historical and cultural significance of communities being able to think critically about diet and supply themselves with food.

Dad, Is It Time to Gather Mint? and Herring to Huckleberries are strongly recommended! They can be important additions to a curriculum about Native Americans, and useful in teaching about the relationships between human well-being and the foods we eat.



[syndicated profile] eff_feed

Posted by Alexis Hancock

After multiple delays of the REAL ID Act of 2005 and its updated counterpart, the REAL ID Modernization Act, the May 7th deadline for REAL ID enforcement in the United States has finally arrived. Does this move our security forward in the skies? The last 20 years say we got along fine without it. REAL ID does impose burdens on everyday people, such as potential additional costs and rigid documentation requirements, even if you already have a state-issued ID. And while TSA states this is not a national ID or a federal database, but a set of minimum standards required for federal use, we remain watchful of the privacy issues raised as these mechanisms pivot toward the expansion of digital IDs.

But you don’t need a REAL ID just to fly domestically. There are alternatives.

The most common alternatives are passports or passport cards. You can use either instead of a REAL ID, which might save you an immediate trip to the DMV. And the additional money for a passport at least provides you the extra benefit of international travel.

Passports and passport cards are not the only alternatives to REAL ID. The TSA accepts additional documentation as well (this list is subject to change by the TSA):

  • REAL ID-compliant driver's licenses or other state photo identity cards issued by a Department of Motor Vehicles or equivalent (this excludes a temporary driver's license)
  • State-issued Enhanced Driver's License (EDL) or Enhanced ID (EID)
  • U.S. passport
  • U.S. passport card
  • DHS trusted traveler cards (Global Entry, NEXUS, SENTRI, FAST)
  • U.S. Department of Defense ID, including IDs issued to dependents
  • Permanent resident card
  • Border crossing card
  • An acceptable photo ID issued by a federally recognized Tribal Nation/Indian Tribe, including Enhanced Tribal Cards (ETCs)
  • HSPD-12 PIV card
  • Foreign government-issued passport
  • Canadian provincial driver's license or Indian and Northern Affairs Canada card
  • Transportation Worker Identification Credential (TWIC)
  • U.S. Citizenship and Immigration Services Employment Authorization Card (I-766)
  • U.S. Merchant Mariner Credential
  • Veteran Health Identification Card (VHIC)

Foreign government-issued passports are on this list. However, using a foreign government-issued passport may increase your chances of closer scrutiny at the security gate. REAL ID and other federally accepted documents are supposed to be about verifying your identity, not your citizenship status. Realistically, though, secondary screening and interactions with law enforcement are not out of the realm of possibility for non-citizens. The power dynamics of the border have now been brought to flying domestically, thanks to REAL ID. The question of who can and can't fly has become more sensitive.

REAL ID and other federally accepted documents are supposed to be about verifying your identity, not about your citizenship status

Mobile Driver’s Licenses (mDLs)

Many states have rolled out the option of a Mobile Driver's License, which acts as a form of your state-issued ID on your phone and is supposed to come with an exception to REAL ID compliance. This is something we asked for, since mDLs appear to satisfy TSA's fears of forgery and cloning. But the catch is that states had to apply for this waiver:

“The final rule, effective November 25, 2024, allows states to apply to TSA for a temporary waiver of certain REAL ID requirements written in the REAL ID regulations.”

TSA stated it would publish the list of states with this waiver, but we do not see it on the website where it said the list would be. This bureaucratic hurdle appears to have rendered the exception useless, which is disappointing considering the TSA pushed for mDLs to be used first in its own context.

Google ID Pass

Another exception appears to bypass state-issued waivers entirely: Google Wallet's "ID Pass." If a state has partnered with Google to issue mDLs, or if you have a passport, then an ID Pass is acceptable to TSA. This is a large leap in the reach of the mDL ecosystem: expanding past state scrutiny to a direct partnership with a private company to provide TSA-acceptable forms of ID. There is much to be said about our worries with digital IDs and their rapid expansion outside of the airport context. This is another gateway that highlights how ID is being shaped and accepted in the digital sense.

With both ID Pass and mDLs, the presentation flow allows you to tap your phone without unlocking it, which is a bonus. But it is not clear whether TSA has the tech to read these IDs at all airports nationwide, and travelers are still encouraged to bring a physical ID for additional verification.

A lot of the privilege dynamics of flying appear through the type of ID you can obtain, whether your shoes stay on, how long you wait in line, and so on. This is mostly tied to how much you can spend on traveling and how much preliminary information you establish with TSA ahead of time. The end result is that less wealthy people are subjected to the most security mechanisms at the gate. For now, you can technically still fly without a REAL ID, but that means being subject to additional screening to verify who you are.

REAL ID enforcement leaves some leeway for those who do not want or cannot get a REAL ID. But we are keeping watch on the progression of digital ID, which continues to be presented as the solution to fears of fraud and forgery. Governments and private corporations alike are pushing for rapid digital ID deployments and ever more frequent presentation of one's ID attributes. Your government ID is one of the narrowest, most static verifications of who you are as a person. Making sure that information is not used to create a centralized system was as important yesterday with REAL ID as it is today with digital IDs.

[syndicated profile] eff_feed

Posted by Paige Collings

Lawmakers and regulators around the world have been prolific with passing legislation restricting freedom of expression and privacy for LGBTQ+ individuals and fueling offline intolerance. Online platforms are also complicit in this pervasive ecosystem by censoring pro-LGBTQ+ speech, forcing LGBTQ+ individuals to self-censor or turn to VPNs to avoid being profiled, harassed, doxxed, or criminally prosecuted.  

The fight for the safety and rights of LGBTQ+ people is not just a fight for visibility online (and offline)—it’s a fight for survival. This International Day Against Homophobia, Biphobia, and Transphobia, we’re sharing four essential tips for LGBTQ+ people to stay safe online.

Using Secure Messaging Services For Every Communication 

All of us, at least occasionally, need to send a message that’s safe from prying eyes. This is especially true for people who face consequences should their gender or sexual identity be revealed without their consent.

To protect your communications from being seen by others, install an encrypted messenger app such as Signal (for iOS or Android). Turn on disappearing messages, and consider shortening the amount of time messages are kept in the app if you are attending an event like a protest. If you have a burner device with you, be sure to save the numbers for your emergency contacts.

Don’t wait until something sensitive arises: make these apps your default for all communications. As a side benefit, the messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger. 

Consider The Content You Post On Social Media 

Our decision to send messages, take pictures, and interact with online content has a real offline impact. And whilst we cannot control every circumstance, we can think about how our social media behaviour impacts those closest to us and those in our proximity, especially if these people might need extra protection around their identities. 

Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm.

If you are organizing online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible, as some people may not be able to access different applications. In that scenario, think about how you can protect your community on the platform you currently use. For example, if you use Facebook for organizing, work with others to keep your groups as private and secure as possible.

Create Incident Response Plans

Developing a plan for if or when something bad happens is good practice for anyone, but especially for LGBTQ+ people, who face increased risk online. Since many threats are social in nature, such as doxxing or networked harassment, it's important to strategize with your allies about what to do if they materialize. Doing so before an incident occurs is much easier than improvising in the middle of a crisis.

Only you and your allies can decide what belongs in such a plan, but some strategies might be: 

  • Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices
  • Notifying others who may be affected
  • Switching communications to a predetermined more secure alternative
  • Noting behaviors of suspected threats and documenting these 
  • Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility.

Consider Your Safety When Attending Events and Protests 

Given the increase in targeted harassment of and vandalism toward LGBTQ+ people, it's important to consider that counterprotesters may show up at various events. Since the boundaries between events like pride parades and protests can be blurred, precautions are necessary. Our general guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as guidance on how to communicate and share information responsibly. We also have a handy printable version available here.

This includes:

  • Removing biometric device unlock options like fingerprint or Face ID to prevent police officers from physically forcing you to unlock your device with your fingerprint or face. You can password-protect your phone instead.
  • Logging out of accounts and uninstalling apps or disabling app notifications to avoid app activity in precarious legal contexts from being used against you, such as using queer dating apps in places where homosexuality is illegal. 
  • Turning off location services on your devices to prevent your location history from being used to identify your device's comings and goings. For further protection, you can disable GPS, Bluetooth, Wi-Fi, and phone signals when planning to attend a protest.

LGBTQ+ Rights For Every Day 

Consider your digital safety like you would any other aspect of bodily autonomy and self-determination—only you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private. With a bit of care, you can maintain privacy, safety, and pride in doing so. 

And in the meantime, we’re fighting to ensure that the internet can be a safe (and fun!) place for all LGBTQ+ people. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

[syndicated profile] eff_feed


This week, the U.S. House Ways and Means Committee moved forward with a proposal that would allow the Secretary of the Treasury to strip any U.S. nonprofit of its tax-exempt status by unilaterally determining the organization is a “Terrorist Supporting Organization.” This proposal, which places nearly unlimited discretion in the hands of the executive branch to target organizations it disagrees with, poses an existential threat to nonprofits across the U.S. 

This proposal, added to the House’s budget reconciliation bill, is an exact copy of a House-passed bill that EFF and hundreds of nonprofits across the country strongly opposed last fall. Thankfully, the Senate rejected that bill, and we urge the House to do the same when the budget reconciliation bill comes up for a vote on the House floor. 

The goal of this proposal is not to stop the spread of or support for terrorism; the U.S. already has myriad other laws that do that, including existing tax code section 501(p), which allows the government to revoke the tax status of designated “Terrorist Organizations.” Instead, this proposal is designed to inhibit free speech by discouraging nonprofits from working with and advocating on behalf of disadvantaged individuals and groups, like Venezuelans or Palestinians, who may be associated, even completely incidentally, with any group the U.S. deems a terrorist organization. And depending on what future groups this administration decides to label as terrorist organizations, it could also threaten those advocating for racial justice, LGBTQ rights, immigrant communities, climate action, human rights, and other issues opposed by this administration. 

On top of its threats to free speech, the language lacks due process protections for targeted nonprofit organizations. In addition to placing sole authority in the hands of the Treasury Secretary, the bill does not require the Treasury Secretary to disclose the reasons for or evidence supporting a “Terrorist Supporting Organization” designation. This, combined with only providing an after-the-fact administrative or judicial appeals process, would place a nearly insurmountable burden on any nonprofit to prove a negative—that they are not a terrorist supporting organization—instead of placing the burden where it should be, on the government. 

As laid out in a letter led by the ACLU and signed by more than 350 diverse nonprofits, this bill would provide the executive branch with: 

“the authority to target its political opponents and use the fear of crippling legal fees, the stigma of the designation, and donors fleeing controversy to stifle dissent and chill speech and advocacy. And while the broadest applications of this authority may not ultimately hold up in court, the potential reputational and financial cost of fending off an investigation and litigating a wrongful designation could functionally mean the end of a targeted nonprofit before it ever has its day in court.” 

Current tax law makes it a crime for the President and other high-level officials to order IRS investigations over policy disagreements. This proposal creates a loophole to this rule that could chill nonprofits for years to come. 

There is no question that nonprofits and educational institutions – along with many other groups and individuals – are under threat from this administration. If passed, future administrations, regardless of party affiliation, could weaponize the powers in this bill against nonprofits of all kinds. We urge the House to vote down this proposal. 

[syndicated profile] eff_feed


Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.

The Report Bungles Fair Use

Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.

To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.

Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies. 

Courts Should Reject the Copyright Office’s Fair Use Analysis

The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.   

Courts need only accept the Copyright Office’s draft conclusions, however, if they are persuasive. They are not.   

The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.

The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.

The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may ultimately be used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can't be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.

The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works.  But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.

Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider overall effects of the use of the models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.

This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love.  This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.

Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.

We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.

The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.

[syndicated profile] eff_feed

Posted by Cindy Cohn

John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.

John and architect Deborah Natsios, his wife, in 1996 founded Cryptome, an online library which collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”

Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. Cryptome assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free up encryption from government control and undertook litigation, public activism and legislative steps to do so.  Cryptome became required reading for anyone looking for information about that early fight, as well as many others.    

John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks' alleged internal emails. Transparency was the core of everything John stood for.

John was one of the early, under-recognized heroes of the digital age.

John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome with John its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.

The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.

John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.

John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years.  We will miss him and his unswerving commitment to the public’s right to know.

[syndicated profile] eff_feed

Posted by Joe Mullin

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA Will Silence Kids and Adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

When the safest legal option is to delete a forum, platforms will delete the forum.

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can't be enforced based on a user's "viewpoint." But the text of the bill itself privileges certain viewpoints over others. Plus, liability under KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

TAKE ACTION

TELL CONGRESS: OPPOSE KOSA

[syndicated profile] eff_feed

Posted by Hayley Tsukayama

We’ve covered a lot of federal and state proposals that badly miss the mark when attempting to grapple with protecting young people’s safety online. These include bills that threaten to cut young people off from vital information, infringe on their First Amendment rights to speak for themselves, subject them (and adults) to invasive and insecure age verification technology, and expose them to danger by sharing personal information with people they may not want to see it.

Several such bills are moving through the California legislature this year, continuing a troubling years-long trend of lawmakers pushing similarly problematic proposals. This week, EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online.

We’re far from the only ones who have issues with this approach. Many of the laws California has passed attempting to address young people’s online safety have been subsequently challenged in court and stopped from going into effect.

Our letter outlines the legal, technical, and policy problems with proposed “solutions” including age verification mandates, age gating, mandatory parental controls, and proposals that will encourage platforms to take down speech that’s even remotely controversial.

There are better paths that don’t hurt young people’s First Amendment rights.

We also note that the current approach completely ignores what we’ve heard from thousands of young people: the online platforms and communities they frequent can be among the safest spaces for them in the physical or digital world. These responses show the relationship between social media and young people’s mental health is far more nuanced than many lawmakers are willing to believe.

While our letter is addressed to California’s Assembly and Senate, they are not the only state lawmakers taking this approach. All lawmakers should listen to the people they’re trying to protect and find ways to help young people without hurting the spaces that are so important to them.

There are better paths that don’t hurt young people’s First Amendment rights and still help protect them against many of the harms that lawmakers have raised. In fact, elements of such approaches, such as data minimization, are already included in some of these otherwise problematic bills. A well-crafted privacy law that empowers everyone—children and adults—to control how their data is collected and used would be a crucial step in curbing many of these problems.

We recognize that many young people face real harms online, that families are grappling with how to deal with them, and that tech companies are not offering much help.

However, many of the California legislature's proposals—this year, and for several years—miss the root of the problem. We call on lawmakers to work with us to enact better solutions.

[syndicated profile] eff_feed

Posted by Svea Windwehr

This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from the decades of experience we already have thinking through risks that can stem from content online. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what it is exactly we are trying to address, and whether the proposed solution is able to solve the problem.

The approach of analyzing, defining, and mitigating risks is a helpful one in this regard, as it allows us to take a holistic look at possible risks: how likely a given risk is to materialize, how severe it would be, and how it may affect different groups of people very differently. 

In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:

  • Content risks: This refers to the negative implications from the exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Conduct risks involve behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: This includes potential harms stemming from contact with people who might pose a risk to minors, including grooming or being forced to exchange sexually explicit material. 

Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.

Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification depending on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool in this regard as children routinely do not have access to the kind of documentation allowing them to prove their age. Adults with bad intentions, on the other hand, are much more likely to be able to circumvent any measures put in place to keep them out.

Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Differently put: Whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online. 

Finally, mitigating risks related to content deemed inappropriate is often thought of as shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people, and the ways in which it can lead to harm. This is complicated by the fact that although arguments in favour of age checks claim that the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online. 

But it’s clear that banning all access to certain information for an entire age cohort interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Assuming that all age check mandates can and will be circumvented, they seem to do little in the way of protecting children but rather undermine their fundamental rights to privacy, freedom of expression and access to information crucial for their development. 

At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex, societal challenges: Take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms. 

Taking a Holistic Approach to Risk Mitigation 

In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach. 

Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks young people face on platforms' services. Platforms may also come up with their own policies for moderating legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at online platforms' disposal, but it is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.

To counterbalance potential negative implications on users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: Platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Additionally, users have the right to appeal content moderation decisions and can ask platforms to review content moderation decisions they disagree with. Users can also seek resolution through out-of-court dispute settlement bodies, at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports, and give researchers and non-profits access to data to study the impacts of online platforms on society. 

Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.

The DSA also foresees obligations on the largest platforms and search engines – so-called Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs), those with more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA's systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms, and their functionalities.

However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well-advised to use risk assessments responsibly in their regular product and policy reviews when mitigating risks stemming from content, design choices, or features like recommender systems, ways of engaging with content and users, and online ads. Especially when it comes to the possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA. 

The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates. 

Strengthening Users’ Choice 

Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them. 

Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: Allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to alter their children's online experiences according to their needs. 

The DSA takes a first helpful step in this direction by mandating that online platforms give users transparency about the main parameters used to recommend content to users, and to allow users to easily choose between different recommendation systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling users, thereby giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.

Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain. 

A Privacy First Approach to Addressing Online Harms 

While rights-respecting content moderation and tools to strengthen parents' and children's self-determination online are part of the answer, we have long advocated for a privacy-focused approach to fighting online harms. 

We follow this approach for two reasons: On the one hand, privacy risks are complex, and young people cannot be expected to predict risks that may materialize in the future. On the other hand, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data. 

Online services collect enormous amounts of personal data and personalize or target their services – displaying ads or recommender systems – based on that data. While the systems that target and display ads and curate online content are distinct, both are based on the surveillance and profiling of users. In addition to allowing users to choose a recommender system, settings for all users should by default turn off recommender systems based on behavioral data. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections. 

Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent for the processing of their personal data. This data is used by ad tech companies and data brokers to profile users to draw inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used by ad tech companies to target advertisements, including for children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are evils that come with using the web, thereby normalizing being tracked, profiled, and surveilled. 

This is why we have long advocated for a ban of online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. Banning behavioral advertising will be the most effective path to disincentivize the collection and processing of personal data and end the surveillance of all users, including children, online. 

Similarly, pay-for-privacy schemes should be banned, and we welcome the recent decision by the European Commission to fine Meta for breaching the Digital Markets Act by offering its users a binary choice between paying for privacy or having their personal data used for ads targeting. Especially in the face of recent political pressure from the Trump administration to not enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. And especially vulnerable users like children should not be confronted with the choice between paying extra (something that many children will not be able to do) or being surveilled.

[syndicated profile] eff_feed

Posted by Hayley Tsukayama

This week, the U.S. House Energy and Commerce Committee moved forward with a proposal in its budget reconciliation bill to impose a ten-year preemption of state AI regulation—essentially saying only Congress, not state legislatures, can place safeguards on AI for the next decade.

We strongly oppose this. We’ve talked before about why federal preemption of stronger state privacy laws hurts everyone. Many of the same arguments apply here. For one, this would override existing state laws enacted to mitigate against emerging harms from AI use. It would also keep states, which have been more responsive on AI regulatory issues, from reacting to emerging problems.

Finally, it risks freezing any regulation on the issue for the next decade—a considerable problem given the pace at which companies are developing the technology. Congress does not react quickly and, particularly when addressing harms from emerging technologies, has been far slower to act than states. Or, as a number of state lawmakers who are leading on tech policy issues from across the country said in a recent joint open letter, “If Washington wants to pass a comprehensive privacy or AI law with teeth, more power to them, but we all know this is unlikely.”

Even if Congress does nothing on AI for the next ten years, this would still prevent states from stepping into the breach.

Even if Congress does nothing on AI for the next ten years, this would still prevent states from stepping into the breach. Given how different the AI industry looks now from how it looked just three years ago, it’s hard to even conceptualize how different it may look in ten years. State lawmakers must be able to react to emerging issues.

Many state AI proposals struggle to find the right balance between innovation and speech, on the one hand, and consumer protection and equal opportunity, on the other. EFF supports some bills to regulate AI and opposes others. But stopping states from acting at all puts a heavy thumb on the scale in favor of companies.

Stopping states will stop progress. As the big technology companies have done (and continue to do) with privacy legislation, AI companies are currently going all out to slow or roll back legal protections in states.

For example, Colorado passed a broad bill on AI protections last year. While far from perfect, the bill set down basic requirements to give people visibility into how companies use AI to make consequential decisions about them. This year, several AI companies lobbied to delay and weaken the bill. Meanwhile, POLITICO recently reported that this push in Washington, D.C. is in direct response to proposed California rules.

We oppose the AI preemption language in the reconciliation bill and urge Congress not to move forward with this damaging proposal.

[syndicated profile] eff_feed

Posted by Matthew Guariglia

Montana has done something that many states and the United States Congress have debated but failed to do: it has just enacted the first law closing the dreaded, invasive, unconstitutional, but easily fixed "data broker loophole." This is a very good step in the right direction, because right now, across the country, law enforcement routinely purchases information about individuals that it would otherwise need a warrant to obtain.

What does that mean? In every state other than Montana, if police want to know where you have been, rather than presenting evidence and getting a warrant signed by a judge to serve on a company like Verizon or Google for your geolocation data over a particular period of time, they need only buy that same data from data brokers. In other words, all the location data the apps on your phone collect—sometimes recording your exact location every few minutes—is just sitting for sale on the open market. And police routinely take that as an opportunity to skirt your Fourth Amendment rights.

Now, with SB 282, Montana has become the first state to close the data broker loophole. This means the government may not pay for access to information about electronic communications (presumably metadata), the contents of electronic communications, the contents of communications sent by a tracking device, digital information on electronic funds transfers, pseudonymous information, or "sensitive data," which Montana defines as information about a person's private life, personal associations, religious affiliation, health status, citizenship status, biometric data, and precise geolocation. This does not mean information is now fully off limits to police. There are other ways for law enforcement in Montana to gain access to sensitive information: they can get a warrant signed by a judge, they can get the consent of the owner to search a digital device, or they can get an "investigative subpoena," which unfortunately requires far less justification than an actual warrant.

Despite the state’s insistence on honoring lower-threshold subpoena usage, SB 282 is not the first time Montana has been ahead of the curve when it comes to passing privacy-protecting legislation. For the better part of a decade, the Big Sky State has seriously limited the use of face recognition, passed consumer privacy protections, added an amendment to their constitution recognizing digital data as something protected from unwarranted searches and seizures, and passed a landmark law protecting against the disclosure or collection of genetic information and DNA. 

SB 282 is similar in approach to H.R.4639, the Fourth Amendment Is Not for Sale Act, a federal bill EFF has endorsed; Senator Ron Wyden introduced the original Senate version. H.R.4639 passed the House in April 2024 but has not been taken up by the Senate. 

Absent the United States Congress being able to pass important privacy protections into law, states, cities, and towns have taken it upon themselves to pass legislation their residents sorely need in order to protect their civil liberties. Montana, with a population of just over one million people, is showing other states how it’s done. EFF applauds Montana for being the first state to close the data broker loophole and show the country that the Fourth Amendment is not for sale. 

[syndicated profile] eff_feed

Posted by Thorin Klosowski

Encrypted chat apps like Signal and WhatsApp are one of the best ways to keep your digital conversations as private as possible. But if you’re not careful with how those conversations are backed up, you can accidentally undermine your privacy.

When a conversation is properly encrypted end-to-end, it means that the contents of those messages are only viewable by the sender and the recipient. The organization that runs the messaging platform—such as Meta or Signal—does not have access to the contents of the messages. But it does have access to some metadata, like the who, where, and when of a message. Companies have different retention policies around whether they hold onto that information after the message is sent.

What happens after the messages are sent and received is entirely up to the sender and receiver. If you’re having a conversation with someone, you may choose to screenshot that conversation and save that screenshot to your computer’s desktop or phone’s camera roll. You might choose to back up your chat history, either to your personal computer or maybe even to cloud storage (services like Google Drive or iCloud, or to servers run by the application developer).

Those backups do not necessarily have the same encryption protections as the chats themselves, and they may make those conversations—which were sent with strong, privacy-protecting end-to-end encryption—readable by whoever runs the cloud storage platform you're backing up to, which also means they could be handed over at the request of law enforcement.
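
To make that concrete, here's a minimal sketch in Python of the dynamic described above. It uses the `cryptography` package's Fernet recipe purely for illustration; this is not the actual protocol used by Signal, WhatsApp, or any other messenger, and the variable names are ours:

```python
# Minimal illustration, not any real app's protocol: Fernet symmetric
# encryption stands in for the end-to-end layer.
from cryptography.fernet import Fernet

# A key shared only by the sender and recipient; the platform never holds it.
chat_key = Fernet.generate_key()
chat = Fernet(chat_key)

# In transit, the platform sees only this ciphertext (plus metadata).
ciphertext = chat.encrypt(b"meet at the usual place at 7")

# The recipient decrypts locally...
plaintext = chat.decrypt(ciphertext)

# ...but if the app then uploads the decrypted text to cloud storage
# without its own encryption layer, the end-to-end protection is moot:
unprotected_backup = plaintext  # readable by whoever runs the cloud service
print(unprotected_backup.decode())
```

The point of the sketch: end-to-end encryption only covers the message in transit and on the endpoints. What happens to the plaintext afterward, including backups, is a separate decision with its own risks.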

With that in mind, let’s take a look at how several of the most popular chat apps handle backups, and what options you may have to strengthen the security of those backups.

How Signal Handles Backups

The official Signal app doesn't offer any way to back up your messages to a cloud server (some alternate versions of the app may provide this, but we recommend you avoid those, as no alternative offers the same level of security as the official app). Even if you use a device backup, like Apple's iCloud backup, the contents of Signal messages are not included.

Instead, Signal supports a manual backup and restore option. Basically, messages are not backed up to any cloud storage, and Signal cannot access them, so the only way to transfer messages from one device to another is manually through a process that Signal details here. If you lose your phone or it breaks, you will likely not be able to transfer your messages.

How WhatsApp Handles Backups

WhatsApp can optionally back up the contents of chats to either a Google Account on Android, or iCloud on iPhone, and you have a choice to back up with or without end-to-end encryption. Here are directions for enabling end-to-end encryption in those backups. When you do so, you’ll need to create a password or save a 64-digit key.
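
Why does that password or 64-digit key matter? As a rough illustration (this is not WhatsApp's actual key hierarchy, and every name below is ours), an end-to-end encrypted backup scheme derives the backup encryption key from a secret only you hold, so the copy sitting in Google's or Apple's cloud is ciphertext to everyone else:

```python
# Hypothetical sketch of a password-protected backup; WhatsApp's real
# scheme differs in its details.
import base64
import hashlib
import os

from cryptography.fernet import Fernet

passphrase = b"correct horse battery staple"  # stands in for your password
salt = os.urandom(16)                         # stored alongside the backup

# Stretch the passphrase into a 32-byte key; the high iteration count
# makes brute-force guessing of the passphrase expensive.
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)

backup_cipher = Fernet(base64.urlsafe_b64encode(key))
encrypted_backup = backup_cipher.encrypt(b"...entire chat history...")

# The cloud provider stores encrypted_backup and salt, but without the
# passphrase it cannot re-derive the key or read your chats.
```

The flip side of this design is that if you lose the passphrase or 64-digit key, the backup is unrecoverable; that's the trade-off for keeping the provider out.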

How Apple’s iMessages Handles Backups

Communications between people with Apple devices using Apple's iMessage (blue bubbles in the Messages app) are end-to-end encrypted, but the backups of those conversations are not end-to-end encrypted by default. This is a loophole we've routinely demanded Apple close.

The good news is that with the release of the Advanced Data Protection feature, you can optionally turn on end-to-end encryption for almost everything stored in iCloud, including those backups (unless you’re in the U.K., where Apple is currently arguing with the government over demands to access data in the cloud, and has pulled the feature for U.K. users).

How Google Messages Handles Backups

Similar to Apple iMessages, Google Messages conversations are end-to-end encrypted only with other Google Messages users (you’ll know it’s enabled when there’s a small lock icon next to the send button in a chat).

You can optionally back up Google Messages to a Google Account, and as long as you have a passcode or lock screen password, the backup of the text of those conversations is end-to-end encrypted. A feature to turn on end-to-end encrypted backups directly in the Google Messages app, similar to how WhatsApp handles it, was spotted in beta last year but hasn’t been officially announced or released.

Everyone in the Group Chat Needs to Get Encrypted

Note that even if you take the extra step to turn on end-to-end encryption, everyone else you converse with would have to do the same to protect their own backups. If you have particularly sensitive conversations on apps like WhatsApp or Apple Messages, where those encrypted backups are an option but not the default, you may want to ask those participants to either not back up their chats at all, or turn on end-to-end encrypted backups. 

Ask Yourself: Do I Need Backups Of These Conversations?

Of course, there’s a reason people want to back up their conversations. Maybe you want to keep a record of the first time you messaged your partner, or want to be able to look back on chats with friends and family. There should not be a privacy trade-off for those who want to save those conversations, but unfortunately you do need to weigh whether or not it’s worth saving your chats with the potential of them being exposed in your security plan.

But also it’s worth considering that we don’t typically need every conversation we have stored forever. Many chat apps, including WhatsApp and Signal, offer some form of “disappearing messages,” which is a way to delete messages after a certain amount of time. This gets a little tricky with backups in WhatsApp. If you create a backup before a message disappears, it’ll be included in the backup, but deleted when you restore later. Those messages will remain there until you back up again, which may be the next day, or may not be many days, if you don’t connect to Wi-Fi.

You can change these disappearing messaging settings on a per-conversation basis. That means you can choose to set the meme-friendly group chat with your friends to delete after a week, but retain the messages with your kids forever. Google Messages and Apple Messages don’t offer any such feature—but they should, because it’s a simple way to protect our conversations that gives more control over to the people using the app.

End-to-end encrypted chat apps are a wonderful tool for communicating safely and privately, but backups are always going to be a contentious part of how they work. Signal’s approach of not offering cloud storage for backups at all is useful for those who need that level of privacy, but is not going to work for everyone’s needs. Better defaults and end-to-end encrypted backups as the only option when cloud storage is offered would be a step forward, and a much easier solution than going through and asking every one of your contacts how or if they back up their chats.

[syndicated profile] eff_feed


President Trump’s attack on public broadcasting has attracted plenty of deserved attention, but there’s a far more technical, far more insidious policy change in the offing—one that will take away Americans’ right to unencumbered access to our publicly owned airwaves.

The FCC is quietly contemplating a fundamental restructuring of all broadcasting in the United States, via a new DRM-based standard for digital television equipment, enforced by a private “security authority” with control over licensing, encryption, and compliance. This move is confusingly called the “ATSC Transition” (ATSC is the digital TV standard the US switched to in 2009 – the “transition” here is to ATSC 3.0, a new version with built-in DRM).

The “ATSC Transition” is championed by the National Association of Broadcasters, which wants to effectively privatize the public airwaves by allowing broadcasters to encrypt over-the-air programming. That means you will only be able to receive those encrypted shows if you buy a new TV with built-in DRM keys. It’s a tax on American TV viewers, forcing you to buy a new TV so you can continue to access a public resource you already own.

This may not strike you as a big deal. Lots of us have given up on broadcast and get all our TV over the internet. But millions of Americans still rely heavily or exclusively on broadcast television for everything from news to education to simple entertainment. Many of these viewers live in rural or tribal areas, and/or are low-income households who can least afford to “upgrade.” Historically, these viewers have been able to rely on access to broadcast because, by law, broadcasters get extremely valuable spectrum licenses in exchange for making their programming available for free to anyone within range of their broadcast antennas.

If broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them

Adding DRM to over-the-air broadcasts upends this system. The “ATSC Transition” is really a transition from the century-old system of universally accessible programming to a privately controlled web of proprietary technological restrictions. It’s a transition from a system where anyone can come up with innovative new TV hardware to one where a centralized, unaccountable private authority gets a veto right over new devices.

DRM licensing schemes like this are innovation killers. Prime example: DVDs and DVD players, which have been subject to a similar central authority, and haven’t gotten a single new feature since the DVD player was introduced in 1995. 

DRM is also incompatible with fundamental limits on copyright, like fair use. Those limits let you do things like record a daytime baseball game and then watch it after dinner, skipping the ads. Broadcasters would like to prevent that, and DRM helps them do it. Keep in mind that bypassing or breaking a DRM system’s digital keys—even for lawful purposes like time-shifting, ad-skipping, security research, and so on—risks penalties under Section 1201 of the Digital Millennium Copyright Act. That is, unless you have the time and resources to beg the Copyright Office for an exemption (and, if the exemption is granted, to renew your plea every three years).

Broadcasters say they need this change to offer viewers new interactive features that will serve the public interest. But if broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them. The most reliable indicator that a new feature is cool and desirable is that people voluntarily install it. If the only way to get someone to use a new feature is to lock up the keys so they can’t turn it off, that’s a clear sign that the feature is not in the public interest. 

That's why EFF joined Public Knowledge, Consumer Reports, and others in urging the FCC to reject this terrible, horrible, no good, very bad idea and keep our airwaves free for all of us. We hope the agency listens, and puts the interests of millions of Americans above the private interests of a few powerful media cartels.
