archiving – Techdirt
Yes, Digital Books Do Wear Out; Stop Accepting Publishers’ Claims That They Don’t
from the digital-data-wears-out dept
There’s a great post by Brewster Kahle on the Internet Archive blog with the title “Digital Books wear out faster than Physical Books”. He makes an important point about the work involved in providing and preserving digital books:
The Internet Archive processes and reprocesses the books it has digitized as new optical character recognition technologies come around, as new text understanding technologies open new analysis, as formats change from djvu to daisy to epub1 to epub2 to epub3 to pdf-a and on and on. This takes thousands of computer-months and programmer-years to do this work. This is what libraries have signed up for—our long-term custodial roles.
Also, the digital media they reside on changes, too—from Digital Linear Tape to PATA hard drives to SATA hard drives to SSDs. If we do not actively tend our digital books they become unreadable very quickly.
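To give a concrete (and deliberately simplified) sense of what that active tending involves, here is a minimal, hypothetical Python sketch of a fixity check, one of the basic chores of digital preservation: record a checksum for every file at ingest, then re-verify the stored copies on a schedule so that silent corruption or loss is caught while good copies still exist. This is only an illustration of the general technique, not a description of the Internet Archive’s actual systems, which also track formats, migrations and multiple replicas.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute a file's SHA-256 checksum, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_fixity(collection: Path, manifest: Path) -> None:
    """At ingest: record a checksum for every file in the collection."""
    checksums = {
        str(p): sha256_of(p) for p in sorted(collection.rglob("*")) if p.is_file()
    }
    manifest.write_text(json.dumps(checksums, indent=2))


def verify_fixity(manifest: Path) -> list[str]:
    """On each later pass: report files that have gone missing or changed."""
    checksums = json.loads(manifest.read_text())
    return [
        name
        for name, expected in checksums.items()
        if not Path(name).exists() or sha256_of(Path(name)) != expected
    ]
```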
The issue is particularly acute for this sector because ebooks potentially offer huge advantages over physical ones, which encourages libraries and archives to adopt that format. Unfortunately, they are then faced with two sets of problems: the one mentioned above, and the fact that publishers are making digital books less useful than analogue ones in order to boost their profits, as I detailed in Walled Culture the book.
Of course, ebooks are not the only digital artefacts subject to the problems pointed out by Brewster. Digital music and digital films also wear out in the sense that formats change and the media they are stored on must be replaced as technology progresses. It also applies to the world of video games – a cultural area often overlooked. Moreover, video games – like ebooks – are typically locked up using Digital Rights Management (DRM), which adds a further challenge to preserving them: it’s generally against the law to circumvent that DRM, even for purposes of making backups or changing its formatting.
In other words, the problem of archiving digital creations is hard enough, but thanks to copyright, it’s often impossible. So much for copyright supporting creativity…
Originally posted to the Walled Culture blog.
Filed Under: archiving, digital, ebooks, physical, wearing out
If Twitter Goes Down In Flames, What Happens To Its Huge And Historically Important Collection Of Tweets?
from the something-to-start-thinking-about dept
This blog has just written about the likely loss of a very particular kind of culture – K-pop live streams. Culture is culture, and a loss is a loss. But potentially we are facing the disappearance of a cultural resource that is indisputably more important. I’m talking about Twitter, and its vast store of tweets that have been written over the last 16 years of its existence.
We have rather taken Twitter and its key role in modern culture and public discourse for granted. But the recent purchase of the company by Elon Musk, and his idiosyncratic decisions since doing so, have (a) raised the possibility that Twitter will go bankrupt, as Musk himself has allegedly said, and (b) made people realize how much of value would be lost if that happens.
There is no ongoing independent backup of Twitter. There was to begin with: the US Library of Congress (LoC) signed an agreement allowing it to create a complete Twitter Archive for a while. That ran for 12 years, during which time billions of tweets were collected. As an update on the Twitter Archive explained in 2017, the decision not to collect everything thereafter was taken because of the dramatic increase in the number of tweets; the fact that the Library of Congress only received text, but many tweets were more visual than textual; and the increase in potential tweet length from 140 to 280 characters. The LoC also noted that its partial collection already “documents the rise of an important social media platform”, and that in any case, it does not aim to “collect comprehensively”. As a result, it started adding tweets on a more selective basis. It concluded:
The Twitter Archive may prove to be one of this generation’s most significant legacies to future generations. Future generations will learn much about this rich period in our history, the information flows, and social and political forces that help define the current generation.
I would argue that this was still true after the archive was halted; whether it will be in the future remains to be seen. Nonetheless, at the very least we are faced with losing many, perhaps most, tweets from the years 2017 to 2022. That’s because, as far as I am aware, no one else is receiving a full feed of tweets in the way the Library of Congress was. The indispensable Internet Archive holds snapshots, but there is no guarantee it has a particular tweet.
Downloading and storing all tweets directly from the public Twitter service is not possible. That’s not so much for technical reasons – it would be a challenge but surely not beyond today’s advanced systems – but because of copyright. Twitter’s Terms of Service state:
You retain your rights to any Content you submit, post or display on or through the Services. What’s yours is yours — you own your Content (and your incorporated audio, photos and videos are considered part of the Content).
Making copies of billions of tweets without permission would be too risky for any organization to contemplate, given the huge costs involved in such a project. Obtaining permission from hundreds of millions of Twitter users to make copies of their tweets would be a licensing nightmare. Whatever happens as a result of Elon Musk’s changes to the service, that copyright problem is not going to disappear. As a result, what the Library of Congress rightly called “one of this generation’s most significant legacies to future generations” will always be at risk of disappearing forever, leaving only the valuable but incomplete archive that the LoC holds but does not make publicly available.
Follow me @glynmoody on Twitter, or Mastodon. Originally posted to the Walled Culture blog.
Filed Under: archiving, elon musk, history
Companies: twitter
Hobbyists Once Again Do Preservation: Every ‘Nintendo Power’ Mag Digitized Online
from the power-to-the-people dept
One of the wonders of a digital world is that art preservation in many forms suddenly gets much, much easier. For all kinds of art, be it video games, music, drawings/paintings, etc., at the very least an uploaded digital simulacrum of the art means that it can’t be easily lost due to the pernicious lack of care by the creators of the art itself.
I’ve spent quite a bit of ink and time discussing how this applies to video games. And that goes beyond just the games themselves, which are obviously digital in nature, to the peripheral art and culture that surrounds those games, such as game manuals. The truly frustrating part of those otherwise very cool stories is that it really shouldn’t be left to fans and hobbyists to do this kind of preservation and archiving. Why don’t gaming companies want to preserve their own cultural output somewhere? Publishers? Developers? It’s almost never them that does the hard work. That is typically done by a small number of fans in the public, who then risk being slapped around over intellectual property concerns by those whose job they’re doing.
That could certainly be the case for whoever uploaded every Nintendo Power magazine to the internet, as was done recently.
Uploaded to Archive.org today by Gumball, all 285 issues of Nintendo Power are now unofficially available in .cbr format. At just over 40 gigabytes for the whole shebang, the vast majority of the collection comes courtesy of Retromags, a community-run project dedicated to archiving classic video game magazines. A couple of remaining issues were sourced via Reddit by Gumball. Scanned in full color, the collection is a wonderful way to browse through gaming and media history.
The escalating Reddit post is gaining a lot of attention and appreciation from gamers who have either been looking to complete their own collections or to find the couple of missing issues that weren’t in the Retromags collection. “I just wanted to get every issue in one place,” Gumball says in another Reddit reply. “The ones that I could not find were issues 208 and 285. Retromags did not have them [but] a dude over in the r/DHexchange happened to have both of these [and] allowed me to complete the set.”
If you’re a gamer of a certain age, Nintendo Power magazines were the absolute best. And even if you aren’t, or if you happen to think that the magazine is pointless trash, that doesn’t really matter. Those magazines are still cultural output that is absolutely worth preserving. I plan to go through them myself and just drink in the nostalgia, thinking back to when I was a child diving into these magazines.
But this is Nintendo we’re talking about. And Nintendo has never been shy about attacking anyone who remotely comes close to stepping on their IP, even if, as in this case, the company can’t be bothered to do any of this archiving or preservation itself.
Unfortunately, Nintendo’s history with these sorts of efforts isn’t exactly comforting. But as physical media, especially printed manuals and magazines like Nintendo Power, become harder to find, having access to archives like this is an essential way to preserve this history.
Hopefully Nintendo can manage to see that as well. Somehow, though, I suspect the lawyers already have pen to paper.
Filed Under: archiving, nintendo power, retro mags, video games
Companies: internet archive, nintendo
The Decentralized Web Could Help Preserve The Internet's Data For 1,000 Years. Here's Why We Need IPFS To Build It.
from the protocols-not-platforms dept
The internet economy runs on data. As of 2019, there were over 4.13 billion internet users generating more than 2.5 quintillion bytes of data per day. By the end of 2020, there will be 40 times more bytes of data than there are stars to observe in space. And all of this data is powering a digital revolution, with the data-driven internet economy already accounting for 6.9% of U.S. GDP in 2017. The internet data ecosystem supports a bustling economy ripe with opportunity for growth, innovation, and profit.
There’s just one problem: While user-generated data is the web’s most valuable asset, internet users themselves have almost no control over it. Data storage, data ownership, and data use are all highly centralized under the control of a few dominant corporate entities on the web, like Facebook, Google, and Amazon. And all that data centralization comes at a steep cost to the ordinary internet user. Today’s internet ecosystem, while highly profitable for a few corporations, creates incentives for major platforms to exercise content censorship over end-users who have nowhere else to go. It is also incompatible with data privacy, insecure against cybercrime, and extremely fragile.
The web’s fragility in particular presents a big problem for the long-term sustainability of the web: we’re creating datasets that will be important for humanity 1000 years from now, but we aren’t safeguarding that data in a way that is future-proof. Link rot plagues the web today, with one study finding that over 98% of web links decay within 20 years. We are exiting the plastic era, and entering the data era, but at this rate our data won’t outlast our disposable straws.
To build a stronger, more resilient and more private internet, we need to decentralize the web by putting users back in control of their data. The web that we deserve isn’t the centralized web of today, but the decentralized web of tomorrow. And the decentralized web of tomorrow will need to last the next 1,000 years, or more.
Our team has been working for several years to make this vision of a decentralized web a reality by changing the way that apps, developers, and ordinary internet users make and share data. We couldn’t be doing this today without the InterPlanetary File System (IPFS)—a crucial tool in our toolbox that’s helping us tear down the major technological hurdles to building a decentralized web. To see why, we need to understand both the factors driving centralization on the web today, and how IPFS changes the game.
In fact, I want to make a bold prediction: in the next one to two years, we’re going to see every major web browser shipping with an IPFS peer, by default. This has already started with the recent announcement that Opera for Android will now support IPFS out of the box. This type of deep integration is going to catalyze a whole range of new user and developer experiences in both mobile and desktop browsers. Perhaps more importantly, it is going to help us all safeguard our data for future netizens.
Here’s how:
With the way the web works now, if I want to access a piece of data, I have to go to a specific server location. Content on the internet today is indexed and browsed based on where it is. Obviously, this method of distributing data puts a lot of power into the hands of whoever owns the location where data is stored, just as it takes power out of the hands of whoever generates data. Major companies like Google and Amazon became as big as they are by assuming the role of trusted data intermediaries, routing all our internet traffic to and through their own central servers where our data is stored.
Yet, however much we may not like “big data” collecting and controlling the internet’s information, the current internet ecosystem incentivizes this kind of centralization. We may want a freer, more private and more democratic internet, but as long as we continue to build our data economy around trusted third-party intermediaries who assume all the responsibilities of data storage and maintenance, we simply can’t escape the gravitational pull of centralization. Like it or not, our current internet incentives rely on proprietary platforms that disempower ordinary end users. And as Mike Masnick has argued in his essay “Protocols, Not Platforms: A Technological Approach to Free Speech”, if we want to fix the problems with this web model, we’ll have to rebuild the internet from the protocol layer up.
That’s where IPFS comes in.
IPFS uses “content-addressing,” an alternative way of indexing and browsing data that is based, not on where that data is, but on what it is. On a content-addressable network, I don’t have to ask a central server for data. Instead, the distributed network of users itself can answer my data requests by providing precisely the piece of data requested, with no need to reference any specific storage location. Through IPFS, we can cut out the data intermediaries and establish a data sharing network where information can be owned by anyone and everyone.
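To make the contrast with location-based addressing concrete, here is a minimal, hypothetical Python sketch of a content-addressed store. Real IPFS adds multihash-based CIDs, chunked Merkle DAGs and a peer-to-peer distributed hash table on top of this idea; the sketch only shows the core property: the address is derived from the data itself, so the same content resolves to the same address no matter which peer happens to serve it.

```python
import hashlib


class ContentAddressedStore:
    """Toy content-addressed store: data is looked up by *what* it is
    (its hash), not by *where* it lives."""

    def __init__(self):
        self._blocks = {}  # address (hex digest) -> bytes

    def put(self, data: bytes) -> str:
        # The address is derived from the content itself, so identical
        # content always gets the same address, wherever it is stored.
        address = hashlib.sha256(data).hexdigest()
        self._blocks[address] = data
        return address

    def get(self, address: str) -> bytes:
        data = self._blocks[address]
        # The requester can verify that what it received really matches
        # the address it asked for, so any peer can serve the block.
        assert hashlib.sha256(data).hexdigest() == address
        return data


store = ContentAddressedStore()
cid = store.put(b"an archived web page")
print(cid)             # stable for this exact content
print(store.get(cid))  # b'an archived web page'
```

Because the address commits to the content, a reader can fetch a block from any untrusted peer and still be sure it received exactly what it asked for, which is what makes it possible to cut out the central server.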
This kind of distributed data economy undermines the big data business model by reinventing the incentive structures of web and app development. IPFS makes decentralization workable, scalable and profitable by putting power in the hands of end users instead of platforms. Widespread adoption of IPFS would represent the major upgrade to the web that we need to protect free speech, resist surveillance and network failure, promote innovation, and empower the ordinary internet user.
Of course, the decentralized web still needs a lot of work before it is as user-friendly and accessible as the centralized web of today. But already we’re seeing exciting use cases for technology built on IPFS.
To get us to this exciting future faster, Textile makes it easier for developers to utilize IPFS to its full potential. Some of our partners are harnessing the data permanence that IPFS enables to build immutable digital archives that could withstand server failure and web decay. Others are using our products (e.g., Buckets) to deploy amazing websites, limiting their reliance on centralized servers and allowing them to store data more efficiently.
Textile has been building on IPFS for over three years, and the future of our collaboration on the decentralized web is bright. To escape the big data economy, we need the decentralized web. The improvements brought by IPFS, release after release, will help make the decentralized web a reality by making it easier to onboard new developers and users. As IPFS continues to get more efficient and resilient, its contribution to empowering the free and open web we all deserve will only grow. I can’t wait for the exponential growth we’ll see as this technology continues to become more and more ubiquitous across all our devices and platforms.
Carson Farmer is a researcher and developer with Textile.io. Twitter: @carsonfarmer
Filed Under: archiving, decentralized web, ipfs, platforms, preserving data, protocols
The 2nd Circuit Contributes To Fair Use Week With An Odd And Problematic Ruling On TVEyes
from the fair-use-has-no-predictive-value dept
For years, we’ve quoted a copyright lawyer/law professor who once noted that the standards for fair use are an almost total crapshoot: nearly any case can have almost any result, depending on the judge (and sometimes jury) in the case. Even though there are “four factors” that must be evaluated, judges will often bend over backwards to twist those four factors to get to their desired result. Some might argue that this is a good thing in giving judges discretion in coming up with the “right” solution. But, it also means that there’s little real “guidance” on fair use for people who wish to make use of it. And that’s a huge problem, as it discourages and suppresses many innovations that might otherwise be quite useful.
Case in point: earlier this week the 2nd Circuit rejected a lower court decision in the Fox News v. TVEyes case. If you don’t recall, TVEyes provides a useful media monitoring service that records basically all TV and radio, and makes the collections searchable and accessible. It’s a useful tool for other media companies (which want to use clips), for large PR firms tracking mentions, and for a variety of other uses as well. The initial ruling was a big win for fair use (even when done for profit) and against Fox News’ assertion of the obsolete doctrine of “Hot News” misappropriation. That was good. However, that initial ruling only covered some aspects of TVEyes’ operations — mainly the searching and indexing. A second ruling was more of a mixed bag, saying that archiving the content was fair use, but allowing downloading the content and “date and time search” (as opposed to content search) was not fair use.
Some of this was appealed up to the 2nd Circuit — specifically that second ruling saying parts of the service were not fair use. Thankfully, Fox didn’t even bother appealing the “hot news” ruling or the “fair use on index search” ruling. As you’d expect, the court runs through a four factors test, and as noted above, the analysis is… weird. Once again, it seems clear that the court decided Fox should win and then bent its four factors analysis to make that happen. The court separates out TVEyes’ operations into two things: “Search” and “Watch.” Whereas the lower court separated out “Watch” into various components, here the court decides that the entire “Watch” part is not fair use, and thus there’s no need to examine the components (the “Search” part remains covered by fair use — which, again, Fox did not challenge).
First, the court explores “the purpose and character” of the use, and whether or not it’s transformative, which would lean towards fair use. Much of the discussion focuses on the Google Books case, in which the same court found that Google scanning books and making them searchable was transformative and thus, fair use. Here, the court notes the similarities that make TVEyes transformative, which is a good start:
TVEyes’s copying of Fox’s content for use in the Watch function is similarly transformative insofar as it enables users to isolate, from an ocean of programming, material that is responsive to their interests and needs, and to access that material with targeted precision. It enables nearly instant access to a subset of material—and to information about the material—that would otherwise be irretrievable, or else retrievable only through prohibitively inconvenient or inefficient means.
Sony Corporation of America v. Universal City Studios, Inc. is instructive. See 464 U.S. 417 (1984). In Sony, a television customer, who (by virtue of owning a television set) had acquired authorization to watch a program when it was broadcast, recorded it in order to watch it instead at a later, more convenient time. That was held to be a fair use. While Sony was decided before “transformative” became a term of art, the apparent reasoning was that a secondary use may be a fair use if it utilizes technology to achieve the transformative purpose of improving the efficiency of delivering content without unreasonably encroaching on the commercial entitlements of the rights holder.
The Watch function certainly qualifies as technology that achieves the transformative purpose of enhancing efficiency: it enables TVEyes’s clients to view all of the Fox programming that (over the prior thirty-two days) discussed a particular topic of interest to them, without having to monitor thirty-two days of programming in order to catch each relevant discussion; and it eliminates the clients’ need even to view entire programs, because the ten most relevant minutes are presented to them. Much like the television customer in Sony, TVEyes clients can view the Fox programming they want at a time and place that is convenient to them, rather than at the time and place of broadcast. For these reasons, TVEyes’s Watch function is at least somewhat transformative.
Of course the “at least somewhat” qualifier on “transformative” should be a foreshadowing of what comes next. First, the court notes that the commercial nature of TVEyes walks back at least some of its fair use argument, but concludes the first factor goes into TVEyes’ corner “albeit slightly.”
On the 2nd factor, “the nature of the copyrighted work,” the court correctly notes that this is kind of a superfluous factor that almost never matters in any copyright lawsuit. The 3rd factor is a big one: “the amount and substantiality of the portion used.” In the Google Books ruling, this same court correctly and usefully pointed out that this is not about the “percentage of the overall” that is used, but rather if the user was using more than is necessary for the use at hand. Under that understanding, it would seem that this should lean towards TVEyes’ position, since it would need to offer up all the content as part of its service. But the court feels otherwise.
This factor clearly favors Fox because TVEyes makes available virtually the entirety of the Fox programming that TVEyes users want to see and hear. While “courts have rejected any categorical rule that a copying of the entirety cannot be a fair use,” “a finding of fair use is [less] likely . . . when the copying is extensive, or encompasses the most important parts of the original.” Id. at 221. In this respect, the TVEyes Watch function is radically dissimilar to the service at issue in Google Books.
What kills TVEyes here in the Google Books comparison is that Google Books had a “snippet” function that only showed parts of the book, rather than the whole thing:
Google’s snippet function was designed to ensure that users could see only a very small piece of a book’s contents. Each snippet was three lines of text, constituting approximately one-eighth of a page; a viewer could see at most three snippets per book for any searched term, and no more than one per page. Users were prevented from performing repeated searches to find multiple snippets that could be compiled into a coherent block of text. Approximately 22% of a book’s text was “blacklist[ed]”: no snippet could be shown from those pages. Id. at 222. And snippets were not available at all for such books as dictionaries or cookbooks, in which a snippet might convey all the information that a searcher was likely to need. While the snippets allowed a user to judge whether a book was responsive to the user’s needs, they were abbreviated to ensure that it would be nearly impossible for a user to see a meaningful exposition of what the author originally intended to convey to readers.
TVEyes redistributes Fox’s news programming in ten-minute clips, which—given the brevity of the average news segment on a particular topic—likely provide TVEyes’s users with all of the Fox programming that they seek and the entirety of the message conveyed by Fox to authorized viewers of the original. Cf. Harper & Row Publishers, Inc. v. Nation Enterprises, 471 U.S. 539, 564–65 (1985) (finding no fair use when the copying involved only about 300 words, but the portion copied was “the heart of the book”). TVEyes’s use of Fox’s content is therefore both “extensive” and inclusive of all that is “important” from the copyrighted work.
TVEyes was hoping that by cutting things down to 10 minute clips that would support the fair use snippets argument, but it didn’t really fly.
The fourth factor is where things get really odd and potentially dangerous to fair use. This is “the effect on the market,” which is often the determining factor on fair use. For many (though, thankfully not all) courts, if they see that you’re somehow diminishing the original market, you lose on fair use. Many copyright holders have tried to obliterate all fair use cases by basically arguing that the fact that a for-profit entity is making use of their work proves there’s a market for the works, and thus only the copyright holder should be entitled to that market, or else the user is clearly diminishing the market (even if the copyright holder is not even in that market). In short “if there’s a market anywhere related to this content, the copyright holder should own that entire market.” Many courts have rejected that line of thinking as it effectively obliterates fair use. But… not this court.
The success of the TVEyes business model demonstrates that deep-pocketed consumers are willing to pay well for a service that allows them to search for and view selected television clips, and that this market is worth millions of dollars in the aggregate. Consequently, there is a plausibly exploitable market for such access to televised content, and it is proper to consider whether TVEyes displaces potential Fox revenues when TVEyes allows its clients to watch Fox’s copyrighted content without Fox’s permission.
Such displacement does occur. Since the ability to re-distribute Fox’s content in the manner that TVEyes does is clearly of value to TVEyes, it (or a similar service) should be willing to pay Fox for the right to offer the content. By providing Fox’s content to TVEyes clients without payment to Fox, TVEyes is in effect depriving Fox of licensing revenues from TVEyes or from similar entities. And Fox itself might wish to exploit the market for such a service rather than license it to others. TVEyes has thus “usurp[ed] a market that properly belongs to the copyright-holder.” Kirkwood, 150 F.3d at 110. It is of no moment that TVEyes allegedly approached Fox for a license but was rebuffed: the failure to strike a deal satisfactory to both parties does not give TVEyes the right to copy Fox’s copyrighted material without payment.
In short, by selling access to Fox’s audiovisual content without a license, TVEyes deprives Fox of revenues to which Fox is entitled as the copyright holder. Therefore, the fourth factor favors Fox.
That’s… bad. This will undoubtedly be quoted in lots of other copyright/fair use cases, and used to argue that any successful market involving the fair use of a copyright-covered work will deprive the copyright holder of license revenue. As EFF notes in its analysis of the ruling, this appears to completely ignore a fundamental principle of how fair use works:
If use of someone’s words was contingent on the permission of the person who said them, you would never be able to critique what was being said. Fair use allows the use of copyrighted material without permission for this very reason. It’s not in the interest of anyone to license out clips of their material for the purpose of it being debunked, which is why the service provided by TVEyes is so valuable.
Jonathan Band, over at the Disruptive Competition Project, finds more to like in the ruling — specifically citing the fact that much of the ruling upholds the important fair use parameters set forth in the Google Books ruling and doesn’t really mess with those. He also doesn’t seem as bothered by that fourth factor analysis, but does worry about one aspect of the transformative analysis: the part that cites the Sony Betamax ruling:
In support of its transformativeness conclusion, the panel cited the Supreme Court’s Betamax decision, which found that consumers’ time shifting of television programming was a fair use. The panel stated that
Betamax’s “apparent reasoning was that a secondary use may be a fair use if it utilizes technology to achieve the transformative purpose of improving the efficiency of delivering content without unreasonably encroaching on the commercial entitlements of the rights holder.” However, most observers don’t view Betamax as a transformative use. And there is no reason to treat Betamax as a transformative use case; transformativeness is not a requirement for fair use. The panel’s odd reading of Betamax compelled a concurring decision by Judge Kaplan (a district court judge sitting by designation) that strongly disagreed with this interpretation. Indeed, the panel itself was not that convinced by its reasoning. Later in its first factor discussion, it acknowledged that “the Watch function has only a modest transformative character because, notwithstanding the transformative manner in which it delivers content, it essentially republishes that content unaltered from its original form, with no ‘new expression, meaning or message.’”
The other issue that the court reviewed was the incredibly broad permanent injunction that the district court had issued on TVEyes after finding some of its service not to be fair use. Without much discussion, the 2nd Circuit notes that since that injunction was based on mistakes about what was fair use, it’s sending that back to the lower court to review.
It is possible that this will all get appealed to the Supreme Court, though it’s not at all clear that this is a case the Supreme Court would actually take (and, there’s an argument that a majority of the Supreme Court may be fans of Fox News, in which case, Fox may get something of an edge…). However, this does seem like yet another in a long list of copyright cases, where we see useful innovations likely killed off by copyright. Having a system for professionals to monitor the media and make use of it is incredibly useful. And yet, with this ruling such things can be massively restricted. And, on top of that, with the language in the 4th factor above, we should all be worried about what other innovations will now be shut down (or never even started) going forward.
Filed Under: archiving, commercial use, copyright, fair use, search, watch
Companies: fox, tveyes
The Museum Of Art And Digital Entertainment Calls For Anti-Circumvention Exemptions To Be Extended To Online Game Archives
from the preserve-and-protect dept
Now that we’ve covered a couple of stories about game companies, notably Blizzard, bullying the fans that run antiquated versions of MMO games on their own servers into shutting them down, it’s as good a time as any to discuss a recent call for the DMCA anti-circumvention exemptions to include the curation of abandoned MMO games. A few weeks back, during the triennial public consultation period in which the U.S. Copyright Office gathers public commentary on potential exemptions to the DMCA’s anti-circumvention provisions, a bunch of public comments came in on the topic of abandoned video games. Importantly, the Librarian of Congress has already granted exemptions for the purpose of preserving the art of video games so that libraries and museums can use emulators to revive classic games for the public.
But what do you do if you’re looking to preserve a massive multiplayer online game, or even single-player games, that rely on server connections with the company that made those games in order to operate? Those servers don’t last forever, obviously. Hundreds of such games have been shut down in recent years, lost forever as the companies behind them no longer support the games or those that play them.
Well, one non-profit in California, The Museum of Art and Digital Entertainment, wants anti-circumvention exemptions for running servers for these games to keep them alive as well.
“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE writes in its comment to the Copyright Office. “Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”
“Today, however, local multiplayer options are increasingly rare, and many games no longer support LAN connected multiplayer capability,” MADE counters, adding that nowadays even some single-player games require an online connection. “More troubling still to archivists, many video games rely on server connectivity to function in single-player mode and become unplayable when servers shut down.”
Due to that, MADE is asking the Copyright Office (and the Librarian of Congress) to allow libraries and museums exemptions to run their own servers to display these games as well. Frankly, it’s difficult to conjure an argument against the request. If games are art, and they are, then they ought to be preserved. The Copyright Office has already agreed with this line of thinking for the category of games that don’t require an online connection, so it’s difficult to see how it could punt on the issue of online games.
And, yet, we have examples of fan-run servers of abandoned games, or versions of games, getting bullied by companies like Blizzard. These fan-servers are essentially filling the same role that groups like MADE would like to fill: preserving old gaming content that has been made otherwise unavailable by companies that have turned off their online game servers.
It’s enough to make one wonder why a group of fans of a game shouldn’t get the same protections afforded to a library or museum, if the end result is nearly identical.
Filed Under: anti-circumvention, archiving, copyright office, dmca, dmca 1201, drm, librarian of congress, museums, triennial review, video games
Companies: made, museum of art and digital entertainment
Techdirt Reading List: When We Are No More: How Digital Memory Is Shaping Our Future
from the memory-is-about-the-future dept
We’re back again with another in our weekly reading list posts of books we think our community will find interesting and thought provoking. Once again, buying the book via the Amazon links in this story also helps support Techdirt.
Earlier this week, I heard Abby Smith Rumsey do a wonderful and fascinating interview with Russ Roberts on his Econtalk podcast. Rumsey is a writer and historian who spent many years at or with the Library of Congress working on archiving and preserving cultural works — including many years focusing on digital preservation. Out of that comes her book, When We Are No More: How Digital Memory Is Shaping Our Future.
I haven’t yet read the whole thing, but from what I have read, it’s a wonderfully written book that delves into a number of issues not just around archiving and preserving digital content — but thinking about what is memory itself, including about how important it is to the future. The book (not surprisingly) does touch a bit on questions we often discuss here, such as copyright, but also provokes a lot of thought around the nature of digital content, and whether or not we’ll really be able to preserve it going into the future. Of course, it also talks about why it’s so important to preserve information and looks at some historical issues around culture preservation and memory. All in all, it’s a really fascinating and thought-provoking work.
Filed Under: abby smith rumsey, archiving, digital archives, memory, reading list, techdirt reading list
TVEyes Hit With Incredibly Restrictive Permanent Injunction By Court
from the so-much-for-the-fair-use-'win' dept
The last time we checked in with the long-running TVEyes case, the TV monitoring company had scored another partial victory for fair use. The company packages clips of news stories from TV broadcasts and makes them available to paying subscribers — which include journalists and government officials.
It had scored a much larger fair use win earlier, when the court found that even the storage of clips by TVEyes fell under fair use, despite Fox News’ protests to the contrary. A year later, some of TVEyes’ fair use victory was scaled back. The court took another look at the end users’ ability to download and store clips and found these actions weren’t covered under the fair use ruling. Users could privately share clips and create archives only they could access. What wasn’t covered was public sharing and downloading of clips.
In order to comply with the court’s decision, TVEyes would need to additionally restrict access to its compiled content. The court didn’t say specifically what TVEyes would have to change to comply with the ruling at that point. Those instructions appear to have arrived.
An injunction issued by the court contains all sorts of new restrictions, as Eriq Gardner reports.
Here’s a list of things that are now forbidden:
Enabling users to download to their own computers video clips of content telecast on the Fox News Channel or Fox Business Network.
Enabling users to view FNC or FBN content by searching by date, time, and channel.
Enabling users to share video clips of FNC or FBN content on social media websites rather than by personally directed emails, with further limitations.
Those further limitations?
If a TVEyes client wants to email a clip, he or she can only do so to five or less recipients. The client also has to register their work email with TVEyes instead of using Gmail or another free web email service. Those being sent the clip will also have to submit their own email address to ensure they are the intended recipients.
These new restrictions could do some serious damage to TVEyes, which charges subscribers ~$500/month for access to a wide variety of news clips. Where high-profile subscribers like Reuters, Bloomberg, the White House (yes, THAT one), the Dept. of Defense and others will go if they find the new restrictions unworkable isn’t exactly clear, but it’s a safe bet that Fox’s litigious efforts will see a few of these entities finding the service no longer worth the investment.
And that’s not the full extent of the restrictions in the permanent injunction. TVEyes will also be required to create and implement a social media blockade solely for Fox News content to prevent the public sharing of its clips. Any Fox content circulated by TVEyes will also have to carry a warning that the content has not been purchased or licensed by the company and that unauthorized sharing is considered copyright infringement. So, for $500/month, TVEyes’ subscribers will now have the privilege of being blasted with anti-piracy warnings as if they were lowly, DVD-purchasing peasants.
This order can be appealed and most certainly will be, as it imposes a ton of restrictions on content originating from a single source. Fox News gets its own new set of rules and everyone else plays by the old ones. The court’s decisions haven’t exactly added up to a fair use win, because a real fair use ruling would apply across the board, not just to everything but this one particular litigant’s content.
Filed Under: archiving, copyright, downloading, fair use, search, tv, tv news
Companies: fox, tveyes
Latest TVEyes Ruling A Mixed Bag: Archiving & Sharing Privately Is Fair Use; Downloading & Sharing Publicly Is Not
from the some-good,-some-bad dept
Last year, we wrote about a big fair use win by TV monitoring company TVEyes — a service used by governments, news companies and more to record, index and store TV broadcasts and make them searchable. Fox, a company that sometimes relies on fair use itself, sued TVEyes, alleging infringement and a violation of the infamous hot news doctrine. The court ruled pretty unambiguously in favor of fair use (yes, even as TVEyes is storing everything) for most of TVEyes’ basic operation (searching and indexing), and completely rejected the hot news claim. However, it did leave aside one area for further investigation: the features provided by TVEyes that allow users to save, archive, download, email and share clips, as well as the feature for doing a “date-time search” (allowing users to retrieve video from a specific network based on the date and time of the broadcast). For those, the court wanted more evidence before deciding.
It has now ruled on that aspect and it’s a partial win for fair use and a partial loss, which may be troubling. The court declared the archiving function to be fair use. But the downloading and “date time search” functions are not fair use. The emailing feature could be fair use, “but only if TVEyes develops and implements adequate protective measures.”
Let’s look at the details. First, the court decides that the archiving function is fair use because it is integral to TVEyes’ overall service:
Democracy works best when public discourse is vibrant and debate thriving. But debate cannot thrive when the message itself (in this case, the broadcast) disappears after airing into an abyss. TVEyes’ service allows researchers to study Fox News’ coverage of an issue and compare it to other news stations; it allows targets of Fox News commentators to learn what is said about them on the network and respond; it allows other media networks to monitor Fox’s coverage in order to criticize it. TVEyes helps promote the free exchange of ideas, and its archiving feature aids that purpose.
Archiving video clips to remain stored beyond 32 days and to facilitate successive reference is integral to TVEyes’ service and its transformational purpose of media monitoring. And Fox has not identified any actual or potential market harm arising from archiving. I hold that the archiving function is fair use, complementing TVEyes’ searching and indexing functions.
As for emailing and sharing, there the court says it is fair use… if TVEyes includes a few protections:
I agree that to prohibit e-mail sharing would prevent TVEyes users from realizing much of the benefit of its transformative service. For example, members of Congress rely on TVEyes to be made aware of what the media has to say about the issues of the day and about them. But their interns and staffers, not they, sit at computers querying keywords of interest through the TVEyes portal, and then e-mail the results up the chain of command. Without e-mail, the Congressman would be limited to either sharing a computer with his staffer or else having the staffer describe the contents of the clip to the Congressman without showing him the clip. In practice, the former is unrealistic and the latter fails to deliver “the full spectrum of information . . . [including] what was said, [and] how it was said with subtext body language, tone of voice, and facial expression-all crucial aspects of the presentation of, and commentary on, the news.”
[….]
However, there is also substantial potential for abuse. In its current incarnation, TVEyes’ e-mailing feature cannot discriminate between sharing with a boss and sharing with a friend, nor between sharing for inclusion in a study and sharing a clip for inclusion in a client sales pitch. Fair use cannot be found unless TVEyes develops necessary protections. What limits should be placed on subscribers who share links through social media? What can prevent subscribers from sharing for purposes not protected by § 107? If TVEyes cannot prevent indiscriminate sharing, it risks becoming a substitute for Fox’s own website, thereby depriving Fox of advertising revenue.
This seems a bit strange to me, frankly. You still have to be a subscriber to make use of TVEyes, but then you can share clips freely online, which would seem to be a part of a reasonable news function, which should support fair use. But the court seems to think it’s only fair use if it’s kept “internally” via email.
Moving on to downloading, here, the court is not convinced that this is “integral” to the purpose of the product, citing a bunch of famed copyright cases, including the cases against Napster, ReDigi and MP3.com. Basically “downloading,” according to the court, must be infringing, and thus not fair use.
I believe that TVEyes’ downloading function goes well beyond TVEyes’ transformative services of searching and indexing…. TVEyes is transformative because it allows users to search and monitor television news. Allowing them also to download unlimited clips to keep forever and distribute freely may be an attractive feature but it is not essential. Downloading also is not sufficiently related to the functions that make TVEyes valuable to the public, and poses undue danger to content-owners’ copyrights.
The court completely rejects TVEyes’ argument that downloading is essential for offline use, because the court insists that broadband is basically available anywhere, so it’s unlikely anyone will really need the service offline.
Finally, there’s the “date-time search” feature, which apparently is used in nearly 6% of all TVEyes’ searches. Again, the court doesn’t buy the fair use argument, saying that the date-time search isn’t so much a “search” as it is a way for people to find something they already know is there, and that makes it much closer to the original programming and thus less “transformative.”
The feature is not as much a “search” tool as a content delivery tool for users who already know what they seek. In such cases, TVEyes is not so transformational, since users should be able to procure the desired clip from Fox News or its licensing agents, albeit for a fee. Put simply, if a user wants to watch the first half of last Thursday’s O’Reilly Factor, the Court sees no reason why he should not be asked to buy the DVD.
Unlike TVEyes’ core business, its “Date-Time search” function duplicates Fox’s existing functionality. Fox’s contention that TVEyes’ Date-Time search is likely to cannibalize Fox News website traffic and sales by its licensing agents is persuasive.
It does seem a bit worrying when courts get to decide which features of your service are okay and which are not. We generally want markets determining innovative features, rather than judges. And this ruling seems… particularly subjective on a number of points. There is no four factors test being done in any of these. It basically just takes the original ruling that the search and indexing is fair use, and then just focuses on whether these features are “essential” to that service to determine if they, too, are fair use. Again, it’s troubling when a court is deciding if a feature that customers clearly like is “essential.” That’s not how innovation is supposed to work.
This case is still early and I expect that there will be appeals on both sides, so this ruling, by itself, isn’t that important yet. What happens next, in terms of how the appeals court rules, is where things will get really interesting.
Filed Under: archiving, downloading, fair use, fox news, indexing, searching, tv
Companies: fox, news corp, tveyes
Cerf Warns Of A 'Lost Century' Caused By Bit Rot; Patents And Copyright Largely To Blame
from the and-he-should-know dept
According to his online biography, Vint Cerf is:
> Vice president and Chief Internet Evangelist for Google. He is responsible for identifying new enabling technologies and applications on the Internet and other platforms for the company.
That suggests someone whose main job is to look forward, rather than back, and with a certain optimism too. But an article in the Guardian reports on a speech he gave in which he is not only concerned with the past of online technologies, rather than their future, but is also issuing an important warning about their fatal flaws:
> Humanity’s first steps into the digital world could be lost to future historians, Vint Cerf told the American Association for the Advancement of Science’s annual meeting in San Jose, California, warning that we faced a “forgotten generation, or even a forgotten century” through what he called “bit rot”, where old computer files become useless junk.
Of course, he’s not the first person to raise that issue — Techdirt wrote about this recently — but Cerf’s important contributions to the creation of the Internet, and his current role at Google, lend particular weight to his warning. That said, the Guardian article seems to miss the central reason all this is happening. It’s not that it’s really hard to create emulators to run old programs or open old files. The real issue is tucked away right at the end of the article, which quotes Cerf as saying:
> “the rights of preservation might need to be incorporated into our thinking about things like copyright and patents and licensing. We’re talking about preserving them for hundreds to thousands of years,” said Cerf.
The main obstacles to creating software that can run old programs, read old file formats, or preserve old webpages, are patents and copyright. Patents stop people creating emulators, because clean-room implementations that avoid legal problems are just too difficult and expensive to carry out for academic archives to contemplate. At least patents expire relatively quickly, freeing up obsolete technology for reimplementation. Copyright, by contrast, keeps getting extended around the world, which means that libraries would probably be unwilling to make backup copies of digital artefacts unless the law was quite clear that they could — and in many countries, it isn’t.
Once again, we see that far from promoting and preserving culture, intellectual monopolies like patents and copyright represent massive impediments that may, as Cerf warns, result in vast swathes of our digital culture simply being lost forever.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: archiving, copyright, digital, history, intellectual property, losing history, patents, vint cerf