simulation – Techdirt

We Brought Our Election Simulation Game To Chicago… And Learned The Chicago Way

from the russians-in-a-dimly-lit-shipping-container dept

You may recall that, back in June, I wrote about a bizarre situation in which the rules sheets for an election simulation game I helped co-design, called “Machine Learning President,” somehow got leaked to Rebekah Mercer, and from her to Jane Mayer at the New Yorker, who wrote up an article there without knowing the provenance of the game. This caused many, many people to assume that the Mercers had somehow made up the game to “relive” the success of the 2016 election, which resulted in a ton of angry headlines and tweets, including from Peter Sagal, host of NPR’s comedic news-based “game show” Wait Wait Don’t Tell Me, who alerted his friend, Cards Against Humanity designer Max Temkin, who tweeted angrily about the game.

The next day, when I wrote up my post explaining what the game really was about, a lot of people reached out to ask if they, too, could play. Unfortunately, it’s a ton of work to put on, and the crew who designed the game, led by Berit Anderson and Brett Horvath from Scout.ai and Guardians.ai, who initially conceived of it, along with Randy Lubin (who is our partner in our CIA game project) and science fiction writer Eliot Peper, are all super busy. However, by far the most aggressive in getting us to bring the game to them were Max Temkin and Peter Sagal.

It finally happened two weeks ago in Chicago, and Charlie Hall at Polygon has a brilliant write-up about how the game went:

Inside a warehouse on Chicago’s North Side, within the thin strip of industrial property between Bucktown and Lincoln Park, Sen. Elizabeth Warren is pandering to a small group of powerful evangelicals. Just a few feet away, near a bowl brimming with 20-sided dice, Vice President Mike Pence is honing his next batch of TV and radio ads. Meanwhile, within a dimly lit shipping container, Russian oligarchs are desperately trying to funnel money to Sen. Kamala Harris through Black Lives Matter.

No, it’s not the fever dream of some political wonk stranded here along the nation’s third coast. It’s a 40-person live-action role-play of the 2020 presidential election. Formally, it’s called a “scenario planning game.” In motion, it’s a vehicle for some of the most engaging political theater that I’ve ever seen.

This was really only the second time we’ve run the full game (there have been a few playtests of parts of it); the first was back in San Francisco in February. The San Francisco game was incredible, but the crew that Max and Peter brought out to play in Chicago took it to the next level in terms of truly inhabiting their roles… and learning from the experience:

For Sagal, who every week tells jokes about political figures set against the backdrop of real NPR News stories, Machine Learning President was an educational experience.

“I make my living reading the news,” Sagal said. “All that shit is real, but it’s not important. The important shit we never find out about, and I honestly think this game illustrates that.

“Look at it this way: All of the candidates tonight got to make speeches, and these speeches were important. […] But what was also interesting was that there was no attempt on anybody’s part to use those speeches to convince anybody of anything. That all happened during the 15-minute rounds. The speech was just about signaling. The speech was not making the deal or convincing anybody to do anything. It was just about delivering on something and positioning yourself to confirm a deal you’ve already made. There was no persuasive aspect to any of the things that any of us said, because the persuading had already been done. Or, as in our case with the Evangelicals, not done.

“What the game teaches you,” Sagal continued, “is that the shit that we get to see, as citizens who watch the speeches and get the emails […] is nonsense and not important. The stuff that’s really important is happening behind the scenes.”

That may be the best endorsement we’ve seen of the game yet, and the article only barely touches on some of the crazy alliances, use of technology, dealmaking, events, and backstabbing that played out over the course of a truly frantic evening. As the article notes, a big part of the point of the game is to get people to better understand how politics, tech, and money intersect, and multiple people who played in Chicago told us they couldn’t stop thinking about it afterwards (we also had a few veterans of actual political campaigns note that it hit a little too close to home). Chicago has a reputation in politics, and I will say that the folks who showed up to play demonstrated it quite effectively.

Either way, we’re still hoping to set up additional events for the game, even though it’s quite a bit of work to run, and all of us are pretty swamped with other stuff. However, since people keep asking for it, we’re trying to figure out ways to run it perhaps a bit more often.

Filed Under: chicago, election, election game, elections, games, machine learning president, max temkin, peter sagal, politics, scenario planning, simulation

Real Estate Developer Found Using Video Game Footage In Marketing Material… Which Is Pretty Cool!

from the just-real-enough dept

We’ve featured a number of stories here about entities attempting to pass off video game footage as something from real life. On the one hand, since these stories usually feature governments engaging in fairly bald-faced trickery, typically in the realm of war, it’s easy to take a negative view of the whole thing. On the other hand, it’s hard to escape the notion that our video games have gotten so realistic that they can fool large swaths of people into thinking they are depictions of the real thing, which is pretty damned cool.

And, yet, even when the use of game footage is more innocuous, it still seems to get people’s fur up. In the UK, one housing developer was caught using a screenshot from Cities: Skylines, a city-building game, in its pitch material for a housing project.

The eight-page brochure, which was published by this newspaper earlier this week, outlines Norwich-based firm Lanpro’s vision for a town the size of Thetford, just off the A1067. Lifelong gamer, Matt Carding-Woods, recognised an image used on page three of the document as a screenshot from the 2015 city building game, Cities: Skylines, released by studio Colossal Order.

Lanpro defended the image’s use as educational, and said the document was only ever intended to be distributed internally.

A number of things apparently gave it away, including small patches of brown trees rendered around garbage incinerators to represent pollution. While that’s actually quite funny, some residents and internet users criticized the developer for failing to note that the image was a screenshot from the game. Residents in particular seemed to feel that the image’s use demonstrated a cavalier attitude on the developer’s part toward the project as a whole.

But how does that make any sense? Developers typically include renderings of future projects in pitch material. Those renderings are usually created by graphic artists who specialize in that sort of thing. But if Cities: Skylines is simply good enough at depicting residential neighborhoods that one can create a rendering within the game and use that instead, how is that anything other than pretty neat? Now, in this case, it seems that Lanpro used a neighborhood created in the game by another player. But, again, so what? As Lanpro’s Chris Leeming notes, it’s not like this is the first time a developer has used images from the game to pitch a project.

“It is after the detailed technical work and analysis that we will be able to form a masterplan for the proposals and provide an image of what the scheme may look like.”

He added that there have been several examples of the “serious use of this software to model, engage and explain projects,” including by city planners working in Stockholm, Sweden.

And that makes sense. As games become better simulations and have increasingly convincing imagery, I would expect to see more of this rather than less.

Filed Under: cities: skylines, real estate, rendering, simulation, video game
Companies: lanpro

Warner Bros. DMCAs Insanely Awesome Recreation Of Blade Runner By Artificial Intelligence

from the oh-the-irony dept

I’m going to dispense with any introduction here, because the meat of this story is amazing and interesting in many different ways, so we’ll jump right in. Blade Runner, the film based on Philip K. Dick’s classic novel, Do Androids Dream of Electric Sheep?, is a classic in every last sense of the word. If you haven’t seen it, you absolutely should. Also, if you indeed haven’t seen the movie, you’ve watched at least one fewer film than an amazing piece of artificial intelligence software developed by Terrance Broad, a London-based researcher working on his advanced degree in creative computing.

His dissertation, “Autoencoding Video Frames,” sounds straightforwardly boring, until you realize that it’s the key to the weird tangle of remix culture, internet copyright issues, and artificial intelligence that led Warner Bros. to file its takedown notice in the first place. Broad’s goal was to apply “deep learning” — a fundamental piece of artificial intelligence that uses algorithmic machine learning — to video; he wanted to discover what kinds of creations a rudimentary form of AI might be able to generate when it was “taught” to understand real video data.

The practical application of Broad’s research was to instruct an artificial neural network, an AI that is something of a simulacrum of the human brain or thought process, to watch Blade Runner several times and attempt to reconstruct its impression of what it had seen. In other words, the original film is the film as interpreted through human eyes, while Broad’s AI reconstructed what the film essentially looks like through the eyes of an artificial intelligence. And if that hasn’t gotten your heart rate up a bit, then you and I live on entirely different planets.

The AI first had to learn to distinguish footage from Blade Runner from other footage. Once it had done that, Broad had the AI “watch” numerical representations of frames from the film and then attempt to reconstruct them into a visual medium.

Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I’ve included in this story.) Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film.

Broad repeated the “learning” process a total of six times for both films, each time tweaking the algorithm he used to help the machine get smarter about deciding how to read the assembled data. Here’s what selected frames from Blade Runner looked like to the encoder after the sixth training. Below we see two columns of before/after shots. On the left is the original frame; on the right is the encoder’s interpretation of the frame.

Below is video of the original film and the reconstruction side by side.
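For readers curious what that pipeline looks like in practice, here is a minimal, hypothetical sketch of the general technique: a convolutional autoencoder that squeezes each frame down to a 200-number code and then tries to rebuild the frame from that code. To be clear, this is not Broad's model or code (his dissertation has the real architecture); the layer sizes, frame resolution, training loop, and the `frames` tensor are all illustrative assumptions.

```python
# Minimal sketch of a frame autoencoder with a 200-dimensional bottleneck.
# NOT Broad's actual model; sizes and the training loop are illustrative only.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        # Encoder: 3x64x64 frame -> 200-number latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Decoder: 200-number latent code -> reconstructed 3x64x64 frame
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, frames, epochs=6, lr=1e-3):
    """Teach the model to reproduce its own input frames (reconstruction loss)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for batch in frames.split(32):
            loss = loss_fn(model(batch), batch)
            opt.zero_grad()
            loss.backward()
            opt.step()

def reconstruct_film(model, frames):
    """Re-encode and decode every frame, keeping the original order."""
    model.eval()
    with torch.no_grad():
        return torch.stack([model(f.unsqueeze(0)).squeeze(0) for f in frames])

# `frames` would be a tensor of normalized film frames, shape (N, 3, 64, 64),
# loaded and downscaled from the video; that loading step is omitted here.
```

The tiny latent code is the whole trick: everything the decoder produces has to be rebuilt from those few hundred numbers, which is also why the reconstructions come out soft and blurry.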

The blur and image issues are due in part to how heavily the AI compresses each frame before reconstructing it. Regardless, the output is amazingly accurate. The irony of having this AI learn to do this via Blade Runner specifically was intentional, of course. The irony of one unintended response to this project was not.

Last week, Warner Bros. issued a DMCA takedown notice to the video streaming website Vimeo. The notice concerned a pretty standard list of illegally uploaded files from media properties Warner owns the copyright to — including episodes of Friends and Pretty Little Liars, as well as two uploads featuring footage from the Ridley Scott movie Blade Runner.

Just a routine example of copyright infringement, right? Not exactly. Warner Bros. had just made a fascinating mistake. Some of the Blade Runner footage — which Warner has since reinstated — wasn’t actually Blade Runner footage. Or, rather, it was, but not in any form the world had ever seen.

Yes, Warner Bros. DMCA’d the video of this project. To its credit, it later rescinded the request, but the project has fascinating implications for the copyright process and its collision with this kind of work. For instance, if automatic crawlers looking for film footage flagged this on their own, is that essentially punishing Broad’s AI for doing its task so well that its interpretation of the film closely matched the original? And, at a more basic level, is the output of the AI even a reproduction of the original film, subjecting it to the DMCA process, or is it some kind of new “work” entirely? As the Vox post notes:

In other words: Warner had just DMCA’d an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn’t distinguish between the simulation and the real thing.

Others have made the point that if the video is simply the visual interpretation of the “thoughts” of an artificial intelligence, how is that copyrightable? One can’t copyright thoughts, after all, only the expression of those thoughts. And if these are the thoughts of an AI, are they subject to copyright at all, given that the AI is not “human”? I’m going to leave entirely alone the obvious follow-up question of how we’re going to define human, because, hell, that’s the entire point of Dick’s original work.

Broad noted to Vox that the way he used Blade Runner in his AI research doesn’t exactly constitute a cut-and-dried legal case: “No one has ever made a video like this before, so I guess there is no precedent for this and no legal definition of whether these reconstructed videos are an infringement of copyright.”

It’s an as-yet-unanswered question, but one that will need to be tackled. Video encoding and delivery, like many other tasks currently done by humans, is ripe for the kind of AI that Broad is trying to develop. The closer software gets to becoming wetware, the more these questions of copyright will have to be answered, lest they get in the way of progress.

Filed Under: ai, automation, blade runner, dmca, simulation, terrance broad
Companies: warner bros.

DailyDirt: Fooling Your Senses

from the urls-we-dig-up dept

Visual illusions can be fun to observe, and there are countless examples that trick human perception into seeing things that aren’t real. However, other senses can also be fooled. As computer interfaces try to engage more senses (e.g., touch, spatial awareness, etc.), there may be interesting applications for tricking human perception in virtual reality environments. We may also just learn more about how our brains work. Here are just a few illusions that might seem creepy or cool, depending on your point of view.

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

Filed Under: augmented reality, brain, haptics, illusions, perception, rhi, rubber hand illusion, senses, simulation, virtual reality

DailyDirt: Making Money The Old-Fashioned Way… By Algorithms

from the urls-we-dig-up dept

People are changing the way they make decisions now that technology can help them crunch more numbers than ever before. Instead of just going with a gut instinct, decisions can be based on all kinds of random data analysis (for better or worse). Big data is a popular trend, and more and more successful examples of data mining for profit seem to get publicized every day. But are we only looking at the winning combinations and ignoring the losers? Here are just a few examples of algorithms that might be making some money.
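That “winners versus losers” question is basically survivorship bias, and a tiny, purely illustrative simulation (not drawn from any of the stories above; all numbers are made up) shows why it matters: run enough purely random “strategies” and the best one will look like genius.

```python
# Purely illustrative: 1,000 strategies that win or lose 1% per trade at random.
# Every strategy has zero edge, yet the best performer looks impressively skilled.
import random

def simulate_strategy(n_trades=250, rng=random):
    value = 1.0
    for _ in range(n_trades):
        value *= 1.01 if rng.random() < 0.5 else 0.99
    return value

rng = random.Random(42)
results = sorted(simulate_strategy(rng=rng) for _ in range(1000))
print(f"Best of 1000 random strategies: {(results[-1] - 1) * 100:+.1f}%")
print(f"Median strategy:                {(results[500] - 1) * 100:+.1f}%")
```

If only the top few results ever get publicized, the “algorithm” looks a lot smarter than it actually is.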

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

Filed Under: algorithm, artificial intelligence, big data, data mining, gigo, poker, simulation, venture capital

DailyDirt: Simulations For Living On Mars

from the urls-we-dig-up dept

Manned missions to Mars aren’t going to happen for decades (if ever?), but in the meantime, we have awesome robots roaming the surface of Mars for us. We also have some simulations of living on Mars — like the Mars500 project — and the unforgettable original Total Recall movie. Here are just a few more Martian simulations if you need some help escaping from the realities of Earth.

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post.

Filed Under: colony, fmars, mars, mars500, settlement, simcity, simulation, space

What If You Could Recreate Live Performances By Dead Artists On A Computer?

Via Shocklee comes this story of a company that claims to have created software that can recreate live performances by famous musicians (even dead ones). Basically, the software learns (or so its creators claim) exactly how certain musicians played, and can then mimic that style. Here’s how Pocket Lint describes it:

Zenph Studio’s approach is to work out how the musician and the instrument acts and responds, then get a computer to play that track again as a real-time, real-life performance, which in turn can be recorded using modern techniques. The new track isn’t a re-mastering, but a re-performance, as if the musician was actually playing it even though the artist may or may not be dead.

The technology works by ascertaining how an artist strikes a note and then recreating that note again. For the piano, the company takes into account everything from how an artist strikes a note to their hand movement, how they play when tired (yes, it can recreate fatigue) and even, as for the case of Jerry Lee Lewis, how they play with their feet. For the guitar there is even more to take into account, like pad placement, fingernails, and bending of the strings, the list goes on.

The result is that songs recorded 100 years ago can and will be able to be re-recorded with modern recording equipment, allowing old songs to be revitalised and enjoyed once more “in surround sound or headphone listening”.
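To make the “re-performance, not re-mastering” distinction concrete, here is a toy sketch of the underlying idea, entirely hypothetical and in no way Zenph's actual technology: capture a performance as note events, then re-render the same score through a crude model of how a particular performer strikes notes, drifts in timing, and tires over the course of a piece.

```python
# Toy illustration of a "re-performance" (NOT Zenph's technology; all numbers
# and field names are made up): the same score is re-rendered through a crude
# model of a performer's touch, timing, and fatigue.
from dataclasses import dataclass, replace
from typing import List
import random

@dataclass
class NoteEvent:
    pitch: int        # MIDI note number
    onset: float      # seconds from the start of the piece
    duration: float   # seconds
    velocity: int     # 1-127, roughly how hard the key is struck

@dataclass
class StyleModel:
    """Caricature of a learned performer profile."""
    timing_jitter: float = 0.02   # seconds of human-like timing variation
    velocity_bias: int = 8        # this performer plays a touch louder than written
    fatigue_per_min: int = 2      # velocity lost per minute of playing

    def reperform(self, score: List[NoteEvent], seed: int = 0) -> List[NoteEvent]:
        rng = random.Random(seed)
        out = []
        for note in score:
            fatigue = int(self.fatigue_per_min * note.onset / 60.0)
            out.append(replace(
                note,
                onset=max(0.0, note.onset + rng.gauss(0.0, self.timing_jitter)),
                velocity=max(1, min(127, note.velocity + self.velocity_bias - fatigue)),
            ))
        return out

# Re-render a (tiny) C major arpeggio "in the style of" the modeled performer;
# the output events could then drive a player piano and be recorded anew.
score = [NoteEvent(60, 0.0, 0.5, 70), NoteEvent(64, 0.5, 0.5, 70), NoteEvent(67, 1.0, 1.0, 70)]
performance = StyleModel().reperform(score)
```

The output is a new performance of the old score rather than a cleaned-up copy of the old recording, which is exactly what makes the copyright questions below so messy.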

And, of course, the technology goes well beyond just remastering. In theory, you could create entirely new recordings by long-dead artists, matching their exact styles. As the article suggests, you could toss John Lennon into a Rolling Stones song.

Of course, if this sounds sorta familiar, that’s because we were just talking about the legal mess associated with Bluebeat.com’s claims that the music it offers for sale is not the original recordings by bands like the Beatles, but entirely new recordings created through a “psycho-acoustic simulation.”

So, now, take this software that supposedly can perfectly mimic a certain musician’s playing, and have it record a song. Say it’s a new song. Who owns the copyright? What if it’s adding John Lennon to a Rolling Stones song? Who owns the copyright? What if it’s an old song, updated in some slight way? Who owns the copyright? What if it’s just the same song but “remastered”? Who owns the copyright? The legal questions raised by this kind of software are going to keep copyright lawyers busy for a long, long time.

Filed Under: copyright, music, recreation, simulation
Companies: zenph studio