tools – Techdirt
Companies Are Simply Ignoring Many New State ‘Right To Repair’ Laws
from the fix-your-own-shit dept
Last March Oregon became the seventh state to pass “right to repair” legislation making it easier, cheaper, and more convenient to repair technology you own. The bill’s passage came on the heels of legislation passed in Massachusetts (in 2012 and 2020), Colorado (in 2022 and 2023), New York (2023), Minnesota, Maine and California. All told, 30 states are considering such bills in 2024.
While the popular reforms are a nice example of U.S. consumer rights headed in the right direction, many of the bills (like New York’s) were watered down almost to the point of uselessness to appease larger tech companies. And in many states, companies simply aren’t complying because enforcement has been largely absent.
A recent report by PIRG examined 21 different mainstream tech devices subject to New York’s recently passed electronics Right to Repair law, then graded them “based on the quality and accessibility of repair manuals, spare parts, and other critical repair materials.” Most fared poorly in terms of easy access to parts and manuals, and New York’s done zero enforcement of its own law so far:
“_The New York Right to Repair Bill has had mixed success. It has gone a long way in pushing companies towards greater repair standards, but it has been surpassed by newer repair bills in other states like the recent passage in Oregon. In order for this bill to remain useful for the people of New York, it should be updated to bring it in line with newer repair standards, as well as provide greater enforcement to move companies towards full compliance in the future._“
NY’s lax enforcement is not particularly surprising, given that NY Governor Kathy Hochul went out of her way to make her state’s law as loophole-filled as possible.
In California, the state legislature recently passed SB-244, the Right to Repair Act, which only just took effect last July. The bill requires that appliance and electronics companies make repair manuals, parts, and tools widely available to consumers, repair shops, and service dealers. But companies in California also seem in no particular rush to come into compliance with the new state law:
“Even though the law’s been in effect for nearly two months now, a couple repair shop owners tell KSBY some manufacturers are not following it…[local independent repair shop owner Eric] Vanderlip told KSBY he reached out to Bose and Polk Audio for schematic diagrams and manuals for certain products, but neither would provide them.
KSBY then reached out to both companies and has not heard back.”
Cool.
Laws are, of course, only worth something if they’re meaningfully enforced, and so far there have not been many indications that major companies are rushing to comply with these new consumer right to repair protections, or that state officials are in any particular rush to make them do so. Granted, many of these laws are new, and it’s going to take a few shots over the bows of major offenders to spark compliance.
The problem is that many of these bills already carve out exemptions for some of the more problematic industries and hardware, including agricultural equipment and medical gear manufacturers. It’s been amazing to see the progress activists have made with these reforms, but it would be a shame if reforms with such widespread bipartisan support wound up being predominantly performative.
Filed Under: california, consumers, drm, hardware, manuals, new york, reform, right to repair, state law, tools
Companies: bose, polk audio
FTC Fires A Warning Shot At Eight Companies Over ‘Right To Repair’ Violations
from the fix-your-own-shit dept
Fri, Jul 12th 2024 05:27am - Karl Bode
Like a growing number of states, the FTC under Lina Khan continues to show it’s somewhat serious about protecting consumers’ rights to repair their own tech. In 2021 the agency issued a useful report busting a lot of lobbying myths about repairability, and over the last few years has been cracking down on companies that claim that using third-party parts or repair shops violates warranty coverage.
In 2022 the agency took action against Harley Davidson for saying the use of third-party repair parts and facilities violated motorcycle warranties. They took similar action against Weber Grills and Westinghouse. Such practices run afoul of the Magnuson-Moss Warranty Act (MMWA) enforced by the FTC.
Now the agency has sent warning letters to eight additional companies. In a new letter, the agency warned four air purifier sellers (aeris Health, Blueair, Medify Air, and Oransi), treadmill maker InMovement, and gaming hardware companies ASRock, Zotac, and Gigabyte that they can’t tell consumers that using third-party parts and repair shops violates product warranties.
“These warning letters put companies on notice that restricting consumers’ right to repair violates the law,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “The Commission will continue our efforts to protect consumers’ right to repair and independent dealers’ right to compete.”
Countless companies across countless sectors work tirelessly to monopolize repair of their own products. Usually this involves claims that using third-party dealerships, repair centers, tools and parts will inherently result in safety issues for consumers (something the FTC’s 2021 study thoroughly debunked). Sometimes it even involves (falsely) claiming right to repair is a boon to scary hackers or sexual predators.
The FTC warned all eight companies to review their box stickers and promotional/warranty materials to ensure they don’t imply that warranty coverage is conditioned on the use of specific parts or services. The agency says it will monitor the companies and circle back with legal action if its demands aren’t met.
The federal government caring about consumer repairability remains a relatively new phenomenon, and comes as Oregon recently became the seventh state to pass state-level right to repair protections. Outside of the FTC, actual enforcement has so far been scattershot at best.
Filed Under: consumer rights, consumers, ftc, independent repair, parts, product warranties, right to repair, tools
Companies: aeris health, asrock, blueair, gigabyte, inmovement, medify air, oransi, zotac
The Writers’ Strike Makes Sense; Their Demands About AI, However, Do Not
from the let-chatgpt-get-rid-of-the-boring-parts dept
The Writers Guild of America (WGA) is on strike again. Given how much writers contribute to the entire entertainment ecosystem—every satisfying cinematic moment begins its life on the written page—the WGA is asking studios to grant professional writers a reasonable slice of Hollywood’s huge profit pie: a higher minimum wage across all media, higher contributions to benefits, more residuals for streaming. Basically, the same story as writers’ strikes from years past. And, let’s face it: the studios can afford it. Nearly all the WGA’s requests seem sensible, and worth striking over. As such, the overall strike seems righteous.
However, the WGA is also asking Hollywood to “regulate use of artificial intelligence on MBA-covered projects: _AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI._“
The studios’ response? Let’s talk about it next year. (Given the exponential growth of AI in just the last two months, an entire year will feel like a century, and the studios know this, but that’s beside the point.)
The WGA speaks for at least a majority of writers in its guild, and that majority is dead wrong about AI. Perhaps it’s a negotiation tactic to walk back from, but even so, it is a disappointingly myopic starting point. Imagine if someone had just invented flight and the response was, “That’s cool, but we like cars; road trips are more fun. So let’s ban planes.”
The first airplanes were flimsy, I’ll give you that. And dangerous as hell. But come on… surely anyone can see the long-term potential. Why would you prevent people from using the latest and greatest tools? Because using better tools for faster results… has less value?
Let’s stop for a moment to catalog what professional screenwriters actually do.
Screenwriters pick a genre, then create a basic plot. They flesh out a brief synopsis (called a “logline”), a list of character names, a longer and more detailed synopsis (called a “treatment”), and a list of story “beats”, i.e., critical story junctures. Maybe a list of “must have” shots, too.
I’ve personally done all these things. It’s hard. It’s grunt work. It takes time—many days and often weeks, and sometimes months—to get all of it right, to make sure it lands, that all the parts work in harmony with each other. You add, you take away, you agonize, you celebrate… In the end, you trade 2-12 weeks of your life but finally, the hard part is over. Now you can shop the story around town hoping someone will pay you to write the actual screenplay.
With ChatGPT (the free version which anyone can access), I can collapse those days/weeks/months of hard work into just three minutes.
From weeks/months… to minutes.
You don’t have to be a studio producer to see the value here. Not only does AI save time, but it saves costs, as well.
Oh, you don’t like my pitch? Bummer. Just give me three minutes and I’ll give you another one. Or another. Or 10 more just like it.
AI is a screenwriter’s superpower which pancakes all that boring grunt work so writers can iterate faster and spend more time on the really fun part—actually writing the scenes. Is something lost in that process? Maybe. What’s gained, though? The part that’s gained is surely far greater than what’s lost. Perhaps this is why Ashton Kutcher is warning companies to embrace AI or “You’re probably Going to Be Out of Business“.
The WGA is concerned their members will be asked to do more and be paid less, and the WGA might be right. Times are changing. Better tools always remove inefficiencies in the market, and removing those inefficiencies makes room for more nimble competitors: writers copiously using AI will be able to do more than those who don’t.
The WGA is not advocating for a better entertainment system—which would mean better stories coming out more quickly—though that would be great for consumers. Instead, the WGA’s directive is to protect its members. By painting AI as the bogeyman, the WGA is only delaying the inevitable. And they make themselves obsolete along the way.
In 1986, British print unions went on strike. Their members used hot-metal Linotype to lay out newspapers, and the strike was to protest the newly installed desktop computers which let journalists type in their own articles, thereby rendering expensive Linotype printers obsolete. In retrospect, desktop computers were obviously more efficient than Linotype printers, but the print unions fought against computers to keep their members from losing their (now obsolete) jobs. Although this strike lasted a whole year, nothing changed. If anything, desktop computers got even better. The British print unions weren’t trying to make printing more efficient, even though that would have been better for consumers; unions protect their members, often at the expense of a more efficient business.
Innovative disruption ends with obsolete jobs being removed from the industry. Will some writers lose their jobs to AI bots? Perhaps. Should they lose their jobs? Yes—if their job can be done better by AI. This is the bitter truth nobody in the WGA dare speak aloud.
There is some good news. AI, as it currently exists, is still not perfect; users still need to understand how to craft a prompt to coax decent results out of ChatGPT. The best writers probably wouldn’t even use AI to write entire scripts (yet)—instead, they’ll use AI as an expeditious collaborative partner to iterate routine actions on the fly:
“Give me five versions of this scene.”
“Give me 10 alternate endings.”
“Give me an iconic character like Keyser Söze with a 3 page background story.”
“Give me something that’s never been done before.”
Ironically, the one legal question that remains unsettled about AI-generated content is the very thing that will protect WGA members the most: AI-generated content isn’t currently protected by copyright. What studio would invest millions of dollars into an intellectual property they couldn’t lock down with copyright protection?
We haven’t even covered the question of what copyrighted material the AI has been trained on—given the explosion of new content from AI bots, how could anyone ferret out potential copyright violations? Obviously, you’d need to have AI bots scouring for copyright infringement… oh, the irony. All told, human writers might be the safer bet simply because they’re a known risk, legally speaking.
The WGA needs to think of AI like the early days of aviation. At the start of the 20th century, the airplane, as it then existed, was highly unreliable, untested, and unsafe. It might take you from New York to New Jersey, but you might also crash along the way. Yet just two decades later, planes would turn the tide of world wars.
Moreover, in the 19th century, it took six weeks to sail across the Atlantic Ocean. In bad weather, it could take fourteen weeks. By 1919, it took a single day to cross the Atlantic Ocean by airplane. Certainly, flying by plane wasn’t the same experience as boarding a ship for weeks on end—that luxurious time aboard a cruise ship was lost. Yet on that single day in 1919, life got faster. We could still take a six week cruise ship if we wanted, but faster options were now available, at last.
Professional screenwriters finally have a superpower to collapse their time doing boring grunt work from months into mere minutes.
And their own union won’t let them use it.
Filed Under: ai, screenwriting, tools, writer's strike
Companies: wga
Elon Musk Is Running Scared From Mastodon; Cuts Off The Best Tool For Finding Your Twitter Followers There
from the trying-to-lock-the-barn-doors dept
People keep claiming that Mastodon isn’t scaring Elon Musk, but it’s pretty clear that he’s worried about the exodus of people from Twitter. After his bizarrely short-sighted decision to end free access to the Twitter API, which is driving developers over to Mastodon, some people realized that the various tools people use to find their Twitter followers on Mastodon are likely to be cut off. It’s unclear if this was part of the motivation for ending free access to the API, but it did create a surge in people checking out those tools. But then, last night, just hours after the API announcement, Elon’s Twitter cut off API access to Movetodon, which was the nicest, easiest to use tool for finding and following your Twitter followers on Mastodon.
As when Musk cut off third party client developers, the company has not said what rule Movetodon’s developer, Tibor, actually broke. And that’s likely because he wasn’t actually breaking any rules at all.
It’s just that Musk is running scared, because he knows people are leaving.
Either way, if you haven’t yet set up a Mastodon account, and you’d like to more easily find your Twitter follows and followers who have already moved over (and found it much better than Twitter), you should probably do so soon before Musk cuts off those other services as well.
The two other popular ones after Movetodon were Debirdify and Fedifinder. They seem to be working right now, but I’m assuming not for long. Almost certainly not after Musk institutes his fees for API access. So, even if you don’t plan on using Mastodon for now, it might make sense to set up an account before these tools disappear.
Filed Under: api, debirdify, elon musk, fedifinder, followers, movetodon, tools
Companies: twitter
Some Tricks To Making Mastodon Way More Useful
from the a-non-beginners-guide dept
It’s been interesting to watch over the last few months as tons of people have migrated from Twitter to Mastodon (or similar compatible ActivityPub-based social media platforms). I’ve noticed, however, that some people keep running into the same issues and challenges as they discover that Mastodon is different than what they’re used to with Twitter. There are a few tips and tricks I’ve been sharing with various people that seemed pretty broadly applicable, so I figured it was worth doing a post laying them out.
A couple of quick things to note: these are unlikely to be universal. It’s just a few of the things that I’ve found that take the Mastodon experience to a new, better, more useful level. In other words, yes, this is highly subjective. Also, some of the tools I’m discussing are relatively new, often developed by users who saw the need and decided to build something (again, this is something that’s nice about the open platform that enables anyone to see something that they feel can be improved… and improve it). This also means that it’s highly likely that there will be even more of these kinds of tools and add-ons from others in the near future, and they may surpass most of the suggestions here. This isn’t meant to be a comprehensive list.
Separately, there are a million “how to get started with Mastodon” posts and articles out there. If you’re brand new to Mastodon, I highly recommend checking those out first to get the basics down. This post is more about taking your Mastodoning to a new level. Perhaps the most comprehensive guide is found at Fedi.tips. A few other good beginner posts are Adam Field’s post on Medium, Dell Cameron’s guide at Gizmodo, Tamilore Oladipo’s guide at Buffer, Amanda Silberling’s guide at TechCrunch, and, finally, Noelle’s wonderful GuideToMastodon.com, which kicks off with the same advice I’ve given tons of people: DON’T PANIC. You’ll figure it out. Lots of people have and so will you.
All of those should give you a pretty good basis for understanding Mastodon, and (in particular) some of its differences from Twitter, which seem to be the things that trip people up the most.
Finding people to follow
My biggest “beginner” suggestion is to find and follow a few fairly active accounts, and then when they “boost” someone interesting, follow those people as well. If you’re trying to “migrate” from Twitter, there are a bunch of tools to try to find the people you follow there, including Fedifinder and Debirdify, but the one I found to have the cleanest interface, and the most useful (and allows one-click following) is Movetodon.
If you’re looking for new people to follow around a particular subject, there are a variety of lists out there, including Trunk, Fediverse.info, Fedi.Directory, and PressCheck.org (which verifies journalists, specifically).
A very cool tool I only recently discovered is Followgraph. You put in your Mastodon handle, it looks up all the people you follow and all the people they follow, and then recommends the accounts that lots of the people you follow also follow, but you don’t… It’s pretty useful in surfacing people I might want to follow (though it also surfaces some people I know about but deliberately don’t want to follow).
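If you’re curious what a tool like Followgraph is doing under the hood, here’s a minimal sketch of the same friend-of-friend idea using Mastodon’s standard REST API. The instance URL, token, and the single-page pagination are simplifying assumptions for illustration; Followgraph’s actual implementation may differ.

```python
# Rough sketch of the Followgraph idea: suggest accounts that many of the
# people you follow also follow. Instance and token are placeholders, and
# only the first page of each "following" list is fetched, for brevity.
from collections import Counter
import requests

INSTANCE = "https://mastodon.social"   # your home instance (placeholder)
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # needs read:follows

def following(account_id, limit=80):
    """Return accounts that `account_id` follows (first page only)."""
    r = requests.get(f"{INSTANCE}/api/v1/accounts/{account_id}/following",
                     headers=HEADERS, params={"limit": limit}, timeout=30)
    r.raise_for_status()
    return r.json()

me = requests.get(f"{INSTANCE}/api/v1/accounts/verify_credentials",
                  headers=HEADERS, timeout=30).json()
my_follows = following(me["id"])
already = {a["acct"] for a in my_follows}

# Count how many of my follows also follow each candidate account.
counts = Counter()
for account in my_follows:
    for candidate in following(account["id"]):
        if candidate["acct"] not in already and candidate["acct"] != me["acct"]:
            counts[candidate["acct"]] += 1

for acct, n in counts.most_common(20):
    print(f"{n:3d} of your follows follow @{acct}")
```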
I also, generally, recommend not cross posting between Twitter and Mastodon, but there are perfectly good reasons to ignore this suggestion. My thinking on it is that this is somewhere different, and you should learn to use it “natively.” Also, it feels like many people set up a cross-poster and then go off and ignore Mastodon, so their accounts are sort of zombie accounts.
Advanced view
This is probably the tip that is most well known and most commonly suggested for going from Mastodon beginner to expert. If you go into settings and click the box to “enable advanced web interface” then you end up with a multi-column interface.
If you’re familiar with Tweetdeck (the unfortunately long-neglected, multi-column Twitter app that initially made Twitter super useful, was purchased by Twitter, and then basically languished), the advanced web interface effectively gives you the same thing.
There are a few tricks to making this interface more useful as well. The leftmost column is for search (more on that in a bit) and posting. The rightmost column is basically the “active” column. This takes a little getting used to, but once you figure it out it makes sense. It can be the “getting started” menu (this is what it is when you first log in):
However, if you click on a particular post to see a thread or replies or whatnot, the post you click on takes over this column. This is a bit different from Twitter/Tweetdeck, but kinda makes sense once you get used to it, as it leaves your other columns in place. To get back to the menu, you can click the “hamburger” menu button that is in the left-most column. It may be a little confusing to have to click something in the left-most column to get the right-most column to go back to the menu, but (again) if you think of the right-most column as the “active” column, it makes sense.
Make use of lists
This is a useful feature whether or not you use the advanced view on Mastodon. If you follow enough people that there is a relatively active flow of new posts, I’ve found that lists are a super useful way to focus in on more interesting stuff, without it becoming overwhelming. This is the same thing that I did with Twitter in the early days, creating a series of “lists” of users, so I could narrow down what I’m following for specific purposes.
In my case, I’ve created four lists: “must read,” “journalism,” “law,” and “tech.” These should be somewhat self-explanatory, but I put the accounts I want to make sure I don’t miss into “must read” and those are usually the first thing I’ll check when checking in on Mastodon. Then I’ll bounce between the other lists and the home feed (of everyone I follow). I do not use either the federated feed or the local feed, as they are (for me) firehoses of noise. On some smaller, more focused, servers, I think the local feed can be quite useful, but for most major servers, it’s mostly useless.
I have seen some new Mastodon users focus on the local and federated feeds, and then get frustrated. I think it’s generally best to ignore the federated feed entirely, and only use the local feed on more tight-knit focused servers.
In the advanced web view, lists are even more powerful, as you can pin them and see all of them next to each other. This is also a little confusing at first, but if you create a list, and then access it (via the “getting started menu” where you click on “lists” and then the list of your choice), you then need to “pin” the list to have it show permanently in the advanced web view. You do this by clicking the slider settings button, followed by the “pin” button:
Once “pinned” you can then move the column left or right in the advanced view with the arrow buttons:
The list interface in Mastodon isn’t the best, and I highly recommend the Mastodon List Manager app, written by Andrew Beers. It has a somewhat simple interface, but it works so much better than the built in list interface. Beers’ app shows all of the people you follow in a giant list, and then puts any list (and you can create new ones directly in the interface) as a kind of grid next to the names of those you follow. You can then check off what lists (if any) you want to put the people you follow onto. It’s very simple, and it just works (for what it’s worth, I ran into a few bugs with it, and Andrew was quite helpful in getting them sorted out and fixed).
This setup makes it super easy to create lists and assign people you follow to various lists. It’s way easier than Mastodon’s built in setup.
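For what it’s worth, the underlying list operations are all exposed by Mastodon’s API, so you can also script this sort of bulk assignment yourself. Here’s a minimal sketch; the list name and handles are made up, and Beers’ app presumably does something more robust:

```python
# Minimal sketch: create a list and add accounts you already follow to it.
# Instance, token, list title, and handles are placeholders.
import requests

INSTANCE = "https://mastodon.social"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # needs read + write:lists

def lookup(acct):
    """Resolve a handle like 'user@example.social' to a local account ID."""
    r = requests.get(f"{INSTANCE}/api/v1/accounts/lookup",
                     headers=HEADERS, params={"acct": acct}, timeout=30)
    r.raise_for_status()
    return r.json()["id"]

# Create the list (POST /api/v1/lists).
r = requests.post(f"{INSTANCE}/api/v1/lists", headers=HEADERS,
                  data={"title": "law"}, timeout=30)
r.raise_for_status()
law_list = r.json()

# Add accounts to it; they must already be followed, or the API returns an error.
ids = [lookup(h) for h in ["someone@example.social", "someoneelse@example.social"]]
requests.post(f"{INSTANCE}/api/v1/lists/{law_list['id']}/accounts",
              headers=HEADERS, data={"account_ids[]": ids}, timeout=30).raise_for_status()
```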
There are some limitations to lists. Currently, (unlike Twitter) there really isn’t a way to make your list “public” or to share it. You can export the list as a CSV and in theory share that, but it’s much more complicated than Twitter’s ability to make a list public and have other people follow it. Also, I’ve seen a number of people complain that (again, unlike Twitter) you can’t add users to lists who you don’t follow. I’ve never used that feature on Twitter myself as the people I put on lists are always people I already follow, but some people like to do that to keep tabs on certain people/topics without having to “follow” them in their main feed.
Better UI options
Even as useful and helpful as the advanced web UI is, there are alternative interfaces as well. Most of the really unique efforts are on mobile, and not with the “official” Mastodon apps. I highly recommend checking out a few such apps to figure out what works for you. I use Tusky on Android and find that it works for me, but I hear good things about many other options. And, it sounds as though a bunch of developers are working on even nicer iOS apps as well (the folks who made the popular Tweetbot for Twitter are working on one called Ivory that lots of people are talking about).
However, for regular desktop use there are some additional options as well. I’ve played around with Sengi, Whalebird, TheDesk, and Hyperspace, and none of them really did much for me, to be honest. The advanced web interface struck me as better for me, personally, than any of those apps.
However, there is one other interface that I really like: Pinafore.social. It is not a downloadable desktop client like those above; rather, it’s simply an alternative web interface for your existing Mastodon account that is very clean and very simple. It has a Twitter-like feel to it, and the site is quick and responsive. If you like a very clean interface better than a more cluttered one, you may like Pinafore quite a lot. Here’s a screenshot of what it looks like on my account:
You can access your notifications or your lists (via the “Community” tab) and it’s all quite nice. I use it probably 30% of the time, though I still use the advanced web interface more of the time. However, when that gets overwhelming, sometimes it’s nice to just switch over to Pinafore and have the cleaner interface.
In an ideal world, I’d love to see what Pinafore’s developer, Nolan Lawson, would do if he created an “advanced web view” version of Pinafore, but on the site he claims it’s not on the roadmap to create a multi-column view version (though I still wish someone else might take the idea and run with it).
This is another area that I’m hoping we’ll see a lot more development in over the next few months, as it’s a wide open space, and the nice thing about such an open system is that anyone can design an interface or app for it.
Extensions
There are some really useful browser extensions that make Mastodon much more useful. I know that some people shy away from browser extensions, especially as they may represent a security risk. But if you’re okay with it (and the main one I’m recommending makes its source code available for people to review), they make things quite useful.
The main extension I recommend is FediAct. One complaint I’ve seen from some users is that if you end up on a Mastodon post on a different server, it’s a little bit complicated to interact with it. This is where the nature of federation feels a little complicated, though it’s not that difficult once you understand it. If you view content from other servers through your own server, you can easily interact with it, because that content has effectively been copied over to your server, and your interactions link back with the original.
However, if you end up on a different server entirely, that server doesn’t know you’re logged into a different federated server, and therefore can’t interact directly. Instead, you have a couple of choices on how to interact, with the most basic one being that when you click to do something, it will ask you to indicate your own Mastodon instance address before effectively moving you over to interact with it on your own server. It’s clunky and a little bit of a nuisance.
Apparently, there was a period of time where Mastodon had built in tools to get around that, but people quickly realized that’s a pretty big security problem, as you’re effectively opening up a cross site scripting hole.
FediAct, however, allows you to do this while controlling it directly in your browser, and making Mastodon work the way most people think it should work. You plug your own instance into the extension, and then if you end up on a different server, you can still like and boost posts just like you could on your own server. It works and is nice and solves one of the bigger headaches people have with Mastodon’s federated setup.
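Under the hood, this kind of extension leans on the fact that your home server can “resolve” a remote post by URL and hand you a local copy to act on. Here’s a rough sketch of that flow via the API; the post URL and token are placeholders, and FediAct’s actual mechanics may differ:

```python
# Sketch of the "interact from your own server" flow: resolve a remote post's
# URL on your home instance, then favourite the resulting local copy.
import requests

INSTANCE = "https://your.home.instance"   # placeholder
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}
remote_post_url = "https://some.other.server/@someone/109000000000000000"  # placeholder

# Search with resolve=true asks your server to fetch and index the remote post.
r = requests.get(f"{INSTANCE}/api/v2/search", headers=HEADERS,
                 params={"q": remote_post_url, "resolve": "true", "type": "statuses"},
                 timeout=30)
r.raise_for_status()
statuses = r.json().get("statuses", [])
if statuses:
    local_id = statuses[0]["id"]   # the post's ID *on your server*
    requests.post(f"{INSTANCE}/api/v1/statuses/{local_id}/favourite",
                  headers=HEADERS, timeout=30).raise_for_status()
```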
There’s a separate extension called Roam that some people have recommended, which does some of the same things as FediAct regarding interacting with people on other servers. It also has a bunch of other features, including making it easier to post to Mastodon from anywhere, and to schedule posts to show up at a later date. It’s got a very clean interface and looks nice, but I haven’t really done much with it so far.
Hashtags
One of the things people often remind newbies on Mastodon about is that there is no text search: just users and hashtags. Some people find this frustrating (perhaps for good reason), but it does encourage people to make better use of hashtags (something I often still forget to do). That said, there is a nice (relatively new) feature on Mastodon: the ability to follow hashtags. If you find a hashtag that you want to follow, you can follow it just like you would follow a person:
This can be useful if you want to follow a particular topic more than just a few individuals who tweet about that topic. Unfortunately, it appears you cannot yet add hashtags to lists, which would be really helpful and hopefully will be an upgrade at some point soon.
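Hashtag follows can also be managed over the API if you prefer scripting it. A quick sketch, assuming a Mastodon version recent enough to support followed hashtags (the instance, token, and tag are placeholders):

```python
# Sketch: follow a hashtag programmatically. Requires a Mastodon version new
# enough to support followed hashtags; instance, token, and tag are placeholders.
import requests

INSTANCE = "https://mastodon.social"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # needs write:follows

tag = "rightToRepair"
r = requests.post(f"{INSTANCE}/api/v1/tags/{tag}/follow", headers=HEADERS, timeout=30)
r.raise_for_status()
print(r.json())  # the returned tag entity should report "following": true
```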
Alternative platforms
As lots of people will remind you, Mastodon is just one implementation for ActivityPub, and there are lots of others. Some of those are designed to create totally different services (like PeerTube and PixelFed), but some of them are just alternative, but usually compatible, takes on creating a microblogging setup. Some of these are forks of Mastodon’s open source code, whereas others appear to be built separately from the ground up, but still made compatible (somewhat) with Mastodon, so you can still follow and communicate with the folks rushing to Mastodon while potentially actually not using Mastodon at all.
There are some forks that are more minor changes to Mastodon, like Hometown and Glitch. Hometown makes very minor changes to Mastodon, with things like better list management and better rendering of rich text. Glitch adds a lot more, like better formatting tools, hiding follower counts, a better threaded mode, and more.
Then there are just generally alternative takes on microblogging that either are built on or cooperate with ActivityPub. Some of these are more lightweight than Mastodon, and many have more features. This includes things like Pleroma, friendi.ca, and Misskey (which also has forks like Calckey and FoundKey). There are a bunch of other ones as well, and each has some different features, including some features or UI options that people feel are missing from Mastodon.
If you’re finding that Mastodon just isn’t doing it for you, it might be worth looking at the feature sets and UIs of these other platforms to see if they’re more your speed. For the most part, you’ll still be able to communicate with everyone on Mastodon… just via a non-Mastodon server (though sometimes they still call themselves Mastodon, just because).
There are, also, instances that have changed the feature set directly. For example, while the default Mastodon post is limited to 500 characters, there are a bunch of servers that have expanded that. For example, I’m pretty sure that infosec.exchange (a popular instance for the infosec crowd, obviously, that I believe is running the Glitch fork) allows for posts up to 11,000 characters. Or there’s qoto.org, which basically would let you post a novella with a limit of 65,535 characters. It has also implemented quote tweet functionality (all of the “key” forks have this as well), rich text, and actual full text search.
In short, even if there are features you think are missing from Mastodon itself, there may be other instances that have already implemented them, or if you’re technically proficient, you may explore setting up your own alternative instance.
One thing to note: there are (reasonable) complaints from people on smaller instances that some of those may not function as well, as the federated nature of Mastodon means that certain content is effectively excluded from those servers. This creates some problems, and while there are some attempts to solve them (with things like relays) there definitely are some downsides to joining a tiny instance. Of course there are some downsides to joining a giant instance as well. Once again, hopefully these are solvable problems, but did want to flag it for people rushing off to join different instances.
Conclusion
Again, this is not intended to be a comprehensive list, but it does show a bunch of tools, features, and services that I’ve found useful in getting around some of the limitations of Mastodon that seem to frustrate some users, and to make this open, federated social network much more useful.
Filed Under: activitypub, guide, mastodon, tips and tricks, tools
Content Moderation Case Studies: Using AI To Detect Problematic Edits On Wikipedia (2015)
from the ai-to-the-rescue? dept
Summary: Wikipedia is well known as an online encyclopedia that anyone can edit. This has enabled a massive corpus of knowledge to be created, one that has achieved high marks for accuracy, while also recognizing that at any one moment some content may not be accurate, since anyone may have recently changed it. Indeed, one of the key struggles that Wikipedia has dealt with over the years is with so-called “vandals” who change a page not to improve the quality of an entry, but to deliberately decrease the quality.
In late 2015, the Wikimedia Foundation, which runs Wikipedia, announced an artificial intelligence tool, called ORES (Objective Revision Evaluation Service) which they hoped might be useful to effectively pre-score edits for the various volunteer editors so they could catch vandalism quicker.
ORES brings automated edit and article quality classification to everyone via a set of open Application Programming Interfaces (APIs). The system works by training models against edit- and article-quality assessments made by Wikipedians and generating automated scores for every single edit and article.
What’s the predicted probability that a specific edit will be damaging? You can now get a quick answer to this question. ORES allows you to specify a project (e.g. English Wikipedia), a model (e.g. the damage detection model), and one or more revisions. The API returns an easily consumable response in JSON format:
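The original announcement included a sample request and JSON response at this point. As a stand-in, here is a rough sketch of what querying ORES looked like; the revision ID is made up, the response shape shown in the comments is approximate, and the public endpoint may no longer be available (ORES has since been deprecated).

```python
# Rough sketch of an ORES query for the English Wikipedia "damaging" and
# "goodfaith" models. The revision ID is made up and the response structure
# in the comments is approximate.
import requests

ORES = "https://ores.wikimedia.org/v3/scores"
rev_id = 123456789  # hypothetical revision ID

r = requests.get(f"{ORES}/enwiki/", params={
    "models": "damaging|goodfaith",
    "revids": rev_id,
}, timeout=30)
r.raise_for_status()
scores = r.json()["enwiki"]["scores"][str(rev_id)]

# Each model returns a prediction plus class probabilities, roughly:
#   {"damaging": {"score": {"prediction": false,
#                           "probability": {"true": 0.03, "false": 0.97}}}, ...}
for model in ("damaging", "goodfaith"):
    score = scores[model]["score"]
    print(model, score["prediction"], score["probability"])
```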
The system was not designed, necessarily, to be user-facing, but rather as a system that others could build tools on top of to help with the editing process. Thus it was designed to feed some of its output into other existing and future tools.
Part of the goal of the system, according to the person who created it, Aaron Halfaker, was to hopefully make it easier to teach new editors how to be productive editors on Wikipedia. There was a concern that more and more of the site was controlled by an increasingly small number of volunteers, and new entrants were scared off, sometimes by the various arcane rules. Thus, rather than seeing ORES as a tool for automating content moderation, or as a tool for ?quality control? over edits, Halfaker saw it more as a tool to help experienced editors better guide new, well-meaning, but perhaps inexperienced editors in ways to improve.
The motivation for Mr. Halfaker and the Wikimedia Foundation wasn’t to smack contributors on the wrist for getting things wrong. “I think we who engineer tools for social communities, have a responsibility to the communities we are working with to empower them,” Mr. Halfaker said. After all, Wikipedia already has three AI systems working well on the site’s quality control, Huggle, STiki and ClueBot NG.
“I don’t want to build the next quality control tool. What I’d rather do is give people the signal and let them work with it,” Mr. Halfaker said.
The artificial intelligence essentially works on two axes. It gives edits two scores: first, the likelihood that it’s a damaging edit, and, second, the odds that it was an edit made in good faith or not. If contributors make bad edits in good faith, the hope is that someone more experienced in the community will reach out to them to help them understand the mistake.
“If you have a sequence of bad scores, then you’re probably a vandal,” Mr. Halfaker said. “If you have a sequence of good scores with a couple of bad ones, you’re probably a good faith contributor.”
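To make that two-axis idea concrete, here is a small illustrative sketch of the kind of heuristic Halfaker describes, where a run of scores, rather than any single edit, is what distinguishes a likely vandal from a good-faith newcomer. The thresholds and wording are invented for illustration, not taken from ORES or any Wikipedia tool.

```python
# Illustrative only: classify an editor from a sequence of per-edit scores,
# each a (damaging_probability, goodfaith_probability) pair. Thresholds are
# made up for the example, not taken from ORES or any Wikipedia tool.
def triage(edit_scores, damaging_cutoff=0.7, goodfaith_cutoff=0.5):
    bad = sum(1 for d, _ in edit_scores if d >= damaging_cutoff)
    good_faith = sum(1 for _, g in edit_scores if g >= goodfaith_cutoff)
    if bad == len(edit_scores) and bad > 1:
        return "likely vandal: every edit scored as damaging"
    if bad and good_faith >= bad:
        return "likely good-faith newcomer: mentoring, not reverting, may help"
    return "no action suggested"

print(triage([(0.9, 0.1), (0.85, 0.2), (0.95, 0.05)]))  # likely vandal
print(triage([(0.8, 0.9), (0.1, 0.95), (0.2, 0.9)]))    # good-faith newcomer
```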
Decisions to be made by Wikipedia:
- How useful is artificial intelligence in helping to determine the quality of edits?
- How best to implement a tool like ORES?
- Should it automatically revert likely “bad” edits?
- Should it be used for quality control?
- Should it be a tool to just highlight edits for volunteers to review?
- What is likely to encourage more editors to help keep Wikipedia as up to date and clean of vandalism?
- What data do you train ORES on? How do you validate the accuracy of the training data?
Questions and policy implications to consider:
- Are there issues when, because the AI has scored something, the tendency is to assume the AI must be “correct”? How do you make sure the AI is accurate?
- Does AI help bring on new editors or does it scare away new editors?
- Are there ways to prevent inherent bias from being baked into any AI moderation system, especially one trained by existing moderators?
Resolution: Halfaker, who later left Wikimedia to go to Microsoft Research, has published a few papers about ORES since it launched. In 2017, a paper by Halfaker and a few others noted that the tool was increasingly used over the previous three years.
The ORES service has been online since July 2015. Since then, usage has steadily risen as we’ve developed and deployed new models and additional integrations are made by tool developers and researchers. Currently, ORES supports 78 different models and 37 different language-specific wikis.
Generally, we see 50 to 125 requests per minute from external tools that are using ORES’ predictions (excluding the MediaWiki extension that is more difficult to track). Sometimes these external requests will burst up to 400-500 requests per second.
One thing they noticed was that those using the ORES output often wanted to search through the metrics and set their own thresholds, rather than accepting the hard coded ones in ORES:
Originally, when we developed ORES, we defined these threshold optimizations in our deployment configuration. But eventually, it became apparent that our users wanted to be able to search through fitness metrics to choose thresholds that matched their own operational concerns. Adding new optimizations and redeploying quickly became a burden on us and a delay for our users. In response, we developed a syntax for requesting an optimization from ORES in realtime using fitness statistics from the models’ tests.
The project also appeared to be successful in getting built into various editing tools, and possibly inspiring ideas for new editing quality tools:
Many tools for counter-vandalism in Wikipedia were already available when we developed ORES. Some of them made use of machine prediction (e.g. Huggle, STiki, ClueBot NG), but most did not. Soon after we deployed ORES, many developers that had not previously included their own prediction models in their tools were quick to adopt ORES. For example, RealTime Recent Changes includes ORES predictions alongside their realtime interface and FastButtons, a Portuguese Wikipedia gadget, began displaying ORES predictions next to their buttons for quick reviewing and reverting damaging edits. Other tools that were not targeted at counter-vandalism also found ORES predictions useful (specifically, that of article quality, wp10). For example, RATER, a gadget for supporting the assessment of article quality, began to include ORES predictions to help their users assess the quality of articles, and SuggestBot, a robot for suggesting articles to an editor, began including ORES predictions in their tables of recommendations.
Many new tools have been developed since ORES was released that may not have been developed at all otherwise. For example, the Wikimedia Foundation product department developed a complete redesign on MediaWiki’s Special:RecentChanges interface that implements a set of powerful filters and highlighting. They took the ORES Review Tool to its logical conclusion with an initiative that they referred to as Edit Review Filters. In this interface, ORES scores are prominently featured at the top of the list of available features, and they have been highlighted as one of the main benefits of the new interface to the editing community.
In a later paper, Halfaker explored, among other things, concerns about how AI systems like ORES might reinforce inherent bias.
A 2016 ProPublica investigation [4] raised serious allegations of racial biases in a ML-based tool sold to criminal courts across the US. The COMPAS system by Northpointe, Inc. produced risk scores for defendants charged with a crime, to be used to assist judges in determining if defendants should be released on bail or held in jail until their trial. This exposé began a wave of academic research, legal challenges, journalism, and organizing about a range of similar commercial software tools that have saturated the criminal justice system. Academic debates followed over what it meant for such a system to be “fair” or “biased”. As Mulligan et al. discuss, debates over these “essentially contested concepts” often focused on competing mathematically-defined criteria, like equality of false positives between groups, etc.
When we examine COMPAS, we must admit that we feel an uneasy comparison between how it operates and how ORES is used for content moderation in Wikipedia. Of course, decisions about what is kept or removed from Wikipedia are of a different kind of social consequence than decisions about who is jailed by the state. However, just as ORES gives Wikipedia’s human patrollers a score intended to influence their gatekeeping decisions, so does COMPAS give judges a similarly functioning score. Both are trained on data that assumes a knowable ground truth for the question to be answered by the classifier. Often this data is taken from prior decisions, heavily relying on found traces produced by a multitude of different individuals, who brought quite different assumptions and frameworks to bear when originally making those decisions.
Filed Under: ai, content moderation, ores, tools, wikipedia
Companies: wikimedia
Cloudflare Makes It Easier For All Its Users To Help Stop Child Porn Distribution
from the this-is-good dept
We recently wrote about how Senators Lindsey Graham and Richard Blumenthal are preparing for FOSTA 2.0, this time focused on child porn — which is now being renamed as “Child Sexual Abuse Material” or “CSAM.” As part of that story, we highlighted that these two Senators and some of their colleagues had begun grandstanding against tech companies in response to a misleading NY Times article that seemed to blame internet companies for the rising number of reports to NCMEC of CSAM found on the internet, when that should be seen as more evidence of how much the companies are doing to try to stop CSAM.
Of course, working with NCMEC and other such organizations takes a lot of effort. Being able to scan for shared hashes of CSAM isn’t something that every internet site can do. It’s mostly just done by the larger companies. But last week Cloudflare (one of the companies that Senators are demanding “answers” from) did something quite fascinating: it enabled all Cloudflare users, no matter what level of service, to start using Cloudflare’s CSAM scanning tools for free, even allowing them to set their own rules and preferences (something that might become very, very important if the Graham/Blumenthal bill becomes law).
I highly recommend reading the entire article, because it’s quite a clear, interesting, and easy to read article about how fuzzy hashing works (including pictures of dogs and bicycles). As the Cloudflare post notes, those who use such fuzzy hashing tools have intentionally kept at least some of the details secret — because being too public about it would allow those who are producing and distributing CSAM to make changes that “dodge” the various tools and filters, which would obviously be a problem. However, that also results in two potential issues: (1) a lack of transparency in how these filtering systems really operate and (2) an inability for all but the largest players to make use of these tools — which would be disastrous for smaller companies if they were required to make use of such things.
And that’s where Cloudflare’s move is quite interesting. In providing the tool for free to all of its users, it keeps the proprietary nature of the tool secret, but it’s also letting them set the thresholds.
If the threshold is too strict, meaning that it’s closer to a traditional hash and two images need to be virtually identical to trigger a match, then you’re more likely to have many false negatives (i.e., CSAM that isn’t flagged). If the threshold is too loose, then it’s possible to have many false positives. False positives may seem like the lesser evil, but there are legitimate concerns that increasing the possibility of false positives at scale could waste limited resources and further overwhelm the existing ecosystem. We will work to iterate the CSAM Scanning Tool to provide more granular control to the website owner while supporting the ongoing effectiveness of the ecosystem. Today, we believe we can offer a good first set of options for our customers that will allow us to more quickly flag CSAM without overwhelming the resources of the ecosystem.
Different Thresholds for Different Customers
The same desire for a granular approach was reflected in our conversations with our customers. When we asked what was appropriate for them, the answer varied radically based on the type of business, how sophisticated its existing abuse process was, and its likely exposure level and tolerance for the risk of CSAM being posted on their site.
For instance, a mature social network using Cloudflare with a sophisticated abuse team may want the threshold set quite loose, but not want the material to be automatically blocked because they have the resources to manually review whatever is flagged.
A new startup dedicated to providing a forum to new parents may want the threshold set quite loose and want any hits automatically blocked because they haven’t yet built a sophisticated abuse team and the risk to their brand is so high if CSAM material is posted — even if that will result in some false positives.
A commercial financial institution may want to set the threshold quite strict because they’re less likely to have user generated content and would have a low tolerance for false positives, but then automatically block anything that’s detected because if somehow their systems are compromised to host known CSAM they want to stop it immediately.
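To make the false-positive/false-negative tradeoff they’re describing concrete, here is a toy sketch of threshold-based matching against a perceptual (“fuzzy”) hash. This is not Cloudflare’s algorithm (those details are deliberately kept private, as noted above); the hashes and threshold values are invented for illustration.

```python
# Toy illustration of fuzzy-hash matching with a tunable threshold. This is
# NOT Cloudflare's actual tool or hash; values and thresholds are invented.
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def is_match(image_hash: int, known_hashes, threshold: int) -> bool:
    """A lower threshold behaves like a traditional exact hash (fewer false
    positives, more false negatives); a higher threshold is looser."""
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)

known = [0xF0F0F0F0F0F0F0F0]          # stand-in for a list of flagged hashes
candidate = 0xF0F0F0F0F0F0F0F1        # near-duplicate: differs by one bit

print(is_match(candidate, known, threshold=0))   # False: strict, exact matches only
print(is_match(candidate, known, threshold=8))   # True: loose, catches near-duplicates
```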
This is an incredibly thoughtful and nuanced approach, recognizing that when it comes to any sort of moderation, one size can never fit all. And, by allowing sites to set their own thresholds, it actually does add in a level of useful transparency, without exposing the inner workings that would allow bad actors to game the system.
That said, I can almost guarantee that someone (or perhaps multiple someones) will come along before too long and Cloudflare’s efforts to help all of its users combat CSAM will somehow be incorrectly or misleadingly spun to claim that Cloudflare is somehow helping sites to hide or enable CSAM. No good deed goes unpunished.
However if you want to support actual solutions — not grandstanding nonsense — to try to deal with CSAM, approaches like Cloudflare’s are ones worth paying attention to. This is especially true if Graham/Blumenthal and others get their way. Under proposals like the one they’re suggesting, it will become virtually impossible for smaller companies to take the actions necessary to meet the standards to avoid legal liability. And that means that (once again) the big internet companies will end up getting bigger. They all have access to NCMEC and the necessary tools to scan and submit CSAM. Smaller companies don’t. Cloudflare offering up its scan tool for everyone helps level the playing field in a really important way.
Filed Under: child porn, csam, fuzzy hashing, infrastructure, tools
Companies: cloudflare
Facebook Asked To Change Terms Of Service To Protect Journalists
from the a-chance-to-fix-things dept
There are plenty of things to be concerned about regarding Facebook these days, and I’m sure we’ll be discussing them for years to come, but the Knight First Amendment Center is asking Facebook to make a very important change as soon as possible: creating a safe harbor for journalists who are researching public interest stories on the platform. Specifically, the concern is that basic tools used for reporting likely violate Facebook’s terms of service, and could lead to Facebook being able to go after reporters for CFAA violations for violating its terms. From the letter:
Digital journalism and research are crucial to the public’s understanding of Facebook’s platform and its influence on our society. Many of the most important stories written about Facebook and other social media platforms in recent months have relied on basic tools of digital investigation. For example, research published by an analyst with the Tow Center for Digital Journalism, and reported in The Washington Post, uncovered the true reach of the Russian disinformation campaign on Facebook. An investigation by Gizmodo showed how Facebook’s “People You May Know” feature problematically exploits “shadow” profile data in order to recommend friends to users. A story published by ProPublica revealed that Facebook’s self-service ad platform had enabled advertisers of rental housing to discriminate against tenants based on race, disability, gender, and other protected characteristics. And a story published by the New York Times exposed a vast trade in fake Twitter followers, some of which impersonated real users.
Facebook’s terms of service limit this kind of journalism and research because they ban tools that are often necessary to it: specifically, the automated collection of public information and the creation of temporary research accounts. Automated collection allows journalists and researchers to generate statistical insights into patterns, trends, and information flows on Facebook’s platform. Temporary research accounts allow journalists and researchers to assess how the platform responds to different profiles and prompts.
Journalists and researchers who use tools in violation of Facebook’s terms of service risk serious consequences. Their accounts may be suspended or disabled. They risk legal liability for breach of contract. The Department of Justice and Facebook have both at times interpreted the Computer Fraud and Abuse Act to prohibit violations of a website’s terms of service. We are unaware of any case in which Facebook has brought legal action against a journalist or researcher for a violation of its terms of service. In multiple instances, however, Facebook has instructed journalists or researchers to discontinue important investigative projects, claiming that the projects violate Facebook’s terms of service. As you undoubtedly appreciate, the mere possibility of legal action has a significant chilling effect. We have spoken to a number of journalists and researchers who have modified their investigations to avoid violating Facebook’s terms of service, even though doing so made their work less valuable to the public. In some cases, the fear of liability led them to abandon projects altogether.
This is a big deal, as succinctly described above. We’ve talked in the past about how Facebook has used the CFAA to sue useful services and how damaging that is. But the issues here have to do with actual reporters trying to better understand aspects of Facebook, for which there is tremendous and urgent public interest, as the letter lays out. Also, over at Gizmodo, Kash Hill has a story about how Facebook threatened them over their story investigating Facebook’s “People You May Know” feature, showing that this is not just a theoretical concern:
In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on Github for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)
Facebook wasn’t happy about the tool.
The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).
The proposal in the letter is that Facebook amend its terms of service to create a “safe harbor” for journalism. While Facebook recently agreed to open up lots of data to third party academics, it’s important to note that journalists and academics are not the same thing.
The safe harbor we envision would permit journalists and researchers to conduct public-interest investigations while protecting the privacy of Facebook’s users and the integrity of Facebook’s platform. Specifically, it would provide that an individual does not violate Facebook’s terms of service by collecting publicly available data by automated means, or by creating and using temporary research accounts, as part of a news-gathering or research project, so long as the project meets certain conditions.
First, the purpose of the project must be to inform the general public about matters of public concern. Projects designed to inform the public about issues like echo chambers, misinformation, and discrimination would satisfy this condition. Projects designed to facilitate commercial data aggregation and targeted advertising would not.
Second, the project must protect Facebook’s users. Those who wish to take advantage of the safe harbor must take reasonable measures to protect user privacy. They must store data obtained from the platform securely. They must not use it for any purpose other than to inform the general public about matters of public concern. They must not sell it, license it, or transfer it to, for example, a data aggregator. And they must not disclose any information that would readily identify a user without the user’s consent, unless the public interest in disclosure would clearly outweigh the user’s interest in privacy.
There are a few more conditions in the proposal, including not interfering with the proper working of Facebook. The letter includes a draft amendment as well.
While there may be some hesitation among certain people with anything that seems to try to carve out different rules for a special class of people, I appreciate that the approach here is focused on carving out a safe harbor for journalism rather than journalists. That is, as currently structured, anyone could qualify for the safe harbors if they are engaged in acts of journalism, and it does not have any silly requirement about being attached to a well known media organization or anything like that. The entire setup seems quite reasonable, so now we’ll see how Facebook responds.
Filed Under: cfaa, data collection, journalism, reporting, safe harbor, tools
Companies: facebook, knight 1st amendment center
Valve Decides To Get Out Of The Curation Business When It Comes To 'Offensive' Games
from the the-good-and-the-bad dept
As we’ve said in the past, Valve has always had a tricky line to walk with its Steam platform, having to straddle the needs of both the gamers that use the service and the game developers that make it worthwhile. Frankly, it’s walked this line fairly well for the most part. The platform, which was always popular, has exploded as the place to release a new game title online. As we noted way back in ye olde 2016, this popularity has also presented a problem for Steam: saturation. There are now simply so many games available on the platform that blindly wading into it and expecting to find new content you didn’t know you wanted is a dicey proposition at best. More content is an undeniably good thing, but it would be silly to suggest that the deluge of new games released in the past few years hasn’t also had a deleterious effect on the usability of the platform.
Our solution? It won’t surprise you. We advocated that Steam empower the gamers that use it to act as curators. If done properly, this would allow an ecosystem of trusted advisers among gamers that share interests to tell them which titles they should be looking at. To that end, Steam subsequently employed a curators program within the platform that attempted to build exactly this ecosystem. To date, it’s been mediocre at best.
But this isn’t the only publicized problem Steam has had in recent days. In addition, the platform has been in the news for its wishy-washy but ultimately heavy-handed approach to games that either have mature sexual content or are offensive to large swaths of people. Combinations of so-called sex games and games that make such topics as school shootings central to gameplay have been banned, or not, often drawing heavy criticism from gamers on every side.
The concept of empowering its community to serve as its own filters and the no-win situation when it comes to offensive games have now collided, causing Valve to announce that it’s getting out of the game content moderating business entirely.
In a blog post musing on the difficulty of deciding on a case-by-case basis what should and should not be allowed on Steam, Valve’s Erik Johnson explained that the company does, in fact, have a team of humans that looks at “every controversial title submitted to us,” and employees frequently disagree like Steam users do. “The harsh reality of this space, that lies at the root of our dilemma, is that there is absolutely no way we can navigate it without making some of our players really mad,” Johnson wrote.
“We’ve decided that the right approach is to allow everything onto the Steam Store, except for things that we decide are illegal, or straight up trolling,” said Johnson. “Taking this approach allows us to focus less on trying to police what should be on Steam, and more on building those tools to give people control over what kinds of content they see.”
There are two reactions that leap immediately to mind. First and foremost, this will be a good and useful experiment by Valve. Empowering customers and communities is almost always the right approach. Acting as a gatekeeper or the warden managing the walled garden is not an approach we believe in. More to the point, an approach by a company that puts its trust in the everyday customer is typically an inherently consumer-friendly one. The ideals behind this kind of move are good ones. Censorship sucks, choice is better.
On the other hand, the other immediate reaction has to be that Valve had damned well better have its user tools in order when it rolls this out en masse, because two things will happen otherwise. Most directly, gamers who are being inundated with games and content they find horrifying, offensive, or otherwise view negatively are going to be fully up in arms. It’s easy to imagine families that game together, between parents and young children, losing their shit if the Steam homepage is suddenly full of games laden with overt sexual content or school shootings.
Even more so, if you thought the floodgates had been open when it came to the sheer volume of titles on Steam previously, this is going to introduce a potential tidal wave of new games onto the marketplace. If Valve isn’t supremely prepared to empower users now with far better curating tools than it already has, the platform is likely going to take a severe dip in its usability as a place to discover games.
In other words: decent idea, assuming Valve has put a ton of thought into how this will impact its platform.
Filed Under: content moderation, steam, tools, users
Companies: valve
DailyDirt: Helping The Blind With Technology
from the urls-we-dig-up dept
We’ve seen some early-stage advances for ways that might help restore sight to people with low vision (or no vision), but it will take many more years before the clinical trials and safety approvals are complete. And not everyone will want to undergo eye surgery to try to regain some vision, either. Fortunately, robots and wearable technology continue to improve, and these gadgets could become very useful for the blind (and the rest of us, too). Maybe we won’t just see telecommuting iPads for remote workers, but also robot assistants for casual and everyday uses.
- Can robots become better than guide dogs at helping the blind? Given that some of the most advanced robots still have trouble navigating the world by themselves, robots helping the blind might not happen for a long time — but progress will undoubtedly be welcomed by both the sighted and the blind. [url]
- A wearable device could help blind users by providing tactile or audio feedback based on sensors embedded in a ring. A “smart ring” could have cameras and haptic feedback to allow a user to point it at something and have it read text or recognize objects…. But maybe a smart watch app might be a better way to start this kind of assistive tool? [url]
- Tactile Navigation Tools is a company founded by a visually-impaired doctor, making a sensor-equipped vest and “smart cane” to help the blind. The vest and cane can work together to help a user identify dangerous obstacles — and could also be useful for fire-fighting or military personnel to navigate in low-visibility environments. [url]
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: baxter, blind, haptics, low vision, robots, smartcane, smartring, tools, visually impaired, wearables
Companies: tactile navigation tools