game algorithms – Techdirt
Stories filed under: "game algorithms"
DailyDirt: Badminton Robots FTW
from the urls-we-dig-up dept
We have computers that can beat us at games like chess and Go (and Jeopardy!), but we haven’t seen too many robots that can beat humans at more physical sports like soccer or tennis. We’ve seen some air hockey robots that are nearly unbeatable, so it’s really only a matter of time before robots learn how to play sports with a few more dimensions. Here are some badminton robots that are inching toward playing better than some of us.
- Badminton robots are slowly getting better. This robot has binocular vision from two cameras and was built by students at the University of Electronic Science and Technology of China. However, it cheats a little bit by using two rackets….
- Just last year, Dong Jiong (a retired badminton professional) played against a couple of these badminton robots on a regulation court. The robots didn’t play very well, but they actually played — and presumably, upgrades will give them superhuman badminton skills someday.
- Badminton robots from the Flanders’ Mechatronics Technology Centre (FMTC) in Belgium were relatively primitive in 2013. However, the developers were learning about optimizing the efficiency of robots — not actually trying to create a robot that could beat people at badminton.
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: ai, air hockey, artificial intelligence, badminton, dong jiong, flanders' mechatronics technology centre, game algorithms, games, machine learning, robots, university of electronic science and technology of china
DailyDirt: Thinking Machines
from the urls-we-dig-up dept
It’s a source of wonder and excitement for some, panic and concern for others, and a whole lot of cutting edge work for the people actually making it happen: artificial intelligence, the end-game for computing (and, as some would have you believe, humanity). But when you set aside the sci-fi predictions, doomsday warnings and hypothetical extremes, AI is a real thing happening all around us right now — and achieving some pretty impressive feats:
- AlphaGo has already conquered the game many thought a computer never could, but can it master the more “human” game of poker? The statistics it can handle, but what about the behavioural cues? [url]
- Can machines be as creative as humans? Or more so? There’s plenty of reason to believe they are already well on the way. [url]
- This cluster of IBM chips mimics 16 million neurons and 4 billion synapses to provide an insanely fast platform for neural network applications. The Lawrence Livermore National Lab is testing whether it can replace more traditional supercomputers. [url]
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: ai, algorithms, alphago, artificial intelligence, creativity, game algorithms, games, lawrence livermore national lab, machine learning, neural network, poker, supercomputers
Companies: google, ibm
DailyDirt: AlphaGo Plays Better Go Than Puny Humans…
from the urls-we-dig-up dept
In case you missed it, humanity has been dealt a decisive intellectual blow by a Go-playing computer program called AlphaGo. We mentioned AlphaGo back in January when Google announced that it had defeated European Go champion Fan Hui and was challenging Lee Sedol next. So now that the results are in, AlphaGo has shown the world that artificial intelligence can best the best of humanity at our most difficult games. We’ve seen this already with chess, and if you don’t remember, people even created a chess variant called Arimaa that humans could hold up as a game where people could still beat computers (ahem, that didn’t work). We still have Calvinball, Diplomacy and certain forms of poker….
- AlphaGo won the match 4-1, beating the 18-time world champion soundly and winning a $1 million prize. This is a major milestone for AI, and the win and its coverage will spur more AI research (and help ward off another AI winter like the one in the 1970s). [url]
- In the second game of the five-game match, AlphaGo made a very non-human move. This unorthodox move was described as “beautiful” — and could lead to more non-intuitive strategies for human go players. [url]
- AlphaGo was defeated in game 4 by Lee Sedol, who found a weakness in the algorithm to exploit. Expert Go players will probably discuss and analyze that game for years to come, but it’s likely that Go-playing software will improve — whereas humans won’t be able to upgrade their skills so readily. [url]
- Plenty of naysayers predicted that computers would never reach this level of play and beat a top Go champion. However, at least one person (computer scientist Feng-Hsiung Hsu) predicted in writing in 2007 that Go software would dethrone top human players before 2017. [url]
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: ai, algorithms, alphago, arimaa, artificial intelligence, chess, fan hui, feng-hsiung hsu, game algorithms, games, go, lee sedol
Companies: deepmind, google
DailyDirt: Winning Isn't Everything
from the urls-we-dig-up dept
Not that long ago, we mentioned that progress towards an algorithm that could play the game of Go better than humans was on the horizon. It looks like our wetware shouldn’t be too smug about being able to play Go now, but we can still have fun playing, right? And it’ll still take a while before robots are any good at (non-contact) sports. Ping pong, FTW!
- Google has announced that its machine learning system AlphaGo has beaten a human expert in five straight games. That’s impressive, considering that previous attempts couldn’t reliably defeat amateur players. In March, AlphaGo is scheduled to play Lee Sedol, one of the top players in the world, to really test its abilities. And hopefully, this time the human player will lose more gracefully, without accusing the computer’s side of cheating. [url]
- There are some games that you can play to win, but it’s not a very fun experience. Try playing Monopoly ruthlessly. If you played Monopoly as a kid, you probably didn’t learn the rules correctly. And you probably up-ended the board to end the game. [url]
- Ideally, we humans should take advances in machine learning as an opportunity for more human-machine cooperation to solve the world’s most dire problems. Together, machines and people should be able to solve the “wicked” hard problems of poverty, pollution, diseases and more. Maybe we can convince the robots to help us. [url]
- Oh. And Facebook’s effort to get a computer playing Go well enough to beat humans is “getting close” — but maybe it can get exponentially better by March? The guy sitting a few feet from Mark Zuckerberg has a bit of time to really get his algorithms working. Or maybe it should challenge AlphaGo to a match to see how the two compare. [url]
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: algorithms, alphago, artificial intelligence, deep learning, game algorithms, games, go, machine learning, monopoly
Companies: google
DailyDirt: A Minute To Learn, A Lifetime To… No, Go Is Much Harder To Learn, Actually
from the urls-we-dig-up dept
For many years, the ancient game of Go has been held up as the game that artificial intelligence can’t win. AI has beaten humans quite handily at several games, even ones like poker and Jeopardy! that should give humans a bit of an edge. Still, Go hasn’t been cracked… yet. Any bets on when humans won’t be so smug about Go?
- Apparently, Google is working on a Go algorithm — and it might have a “surprising” announcement in a few months. Well, it won’t be so surprising now, will it? And maybe if you’ve been playing Go online and you’ve noticed a player get really, really, REALLY good in a short amount of time… that’s actually not a person playing. [url]
- Facebook AI researchers are also working on a Go project called Darkforest. This AI already plays decently against many human players, but it isn’t beating the best ones yet. Combining neural networks with other algorithms might crack Go, as well as help improve Facebook’s addictiveness. Yay? Perhaps Elon Musk is right to be worried about advanced AI. [url]
- If you’d like to get an overview of Go and why it’s hard from a software engineer’s perspective, check out this talk. The history of Go is fascinating, and the challenges for AI to play this game are impressive — so predictions for when AI will beat the best humans at Go vary widely. [url]
After you’ve finished checking out those links, take 10% off any $50+ order from our Daily Deals using the promo code DAILYDIRT.
Filed Under: ai, algorithms, artificial intelligence, darkforest, deep learning, game algorithms, games, go, machine learning, neural networks
Companies: facebook, fog creek, google
DailyDirt: Can't We Just Play Games For Fun?
from the urls-we-dig-up dept
We’ve seen plenty of advances in game algorithms that make us humans look pretty weak compared to the best chess (and checkers and poker and RPS and air hockey and Flappy Bird and…) playing computers. Computers aren’t having any fun beating us at all these games, but they do it nonetheless. As always, let’s just hope they quickly figure out that no one wins at thermonuclear war.
- It seems a bit irrational for humans to keep playing a game that a computer can play better than 99.999999% of all humans, but that doesn’t mean we shouldn’t try to create better and better chess playing algorithms. A deep learning program called Giraffe has taught itself how to play chess at an FIDE International Master level in just three days (on a modern mainstream PC, not a supercomputer). It’s not playing at a (super-)Grandmaster level yet, but it’s also not evaluating millions of moves per second like Deep Blue and other chess supercomputers can. [url]
- Google’s DeepMind AI is beating humans at more classic video games — now up to 31 titles, such as Q*Bert and Zaxxon. However, it hasn’t yet mastered games like Ms. Pac-Man or Asteroids. Phew! We’re not obsolete yet…. [url]
- If you think humans are safe by sticking to sports like soccer, basketball or baseball, you might want to see a few robots in development for playing some of these sports. It might take some time for robots to catch up, but I doubt anyone really wants to play any kind of full-contact sport against a robot, anyway. [url]
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: ai, artificial intelligence, chess, deep blue, deepmind, game algorithms, robotics, robots, sports, video games
Companies: google, ibm
DailyDirt: Computers Are Learning How To Play More Video Games, But They'll Never Appreciate A Good Game?
from the urls-we-dig-up dept
Researchers can program computers to play all kinds of games and even beat the best humans at them. So far, we’re not worried about AI that can beat us at chess or Jeopardy!, but maybe we’ll be more worried when a computer can program another computer to play chess at a grandmaster level. Luckily, there’s at least one billionaire willing to chip in a few million bucks to try to keep Terminators from destroying humanity.
- Google DeepMind has created software that can play old Atari games without humans teaching it how to play — and the AI plays 22 out of 49 games better than expert human players. Those Atari games are more difficult than you might think, but it’s not hard to imagine that humans will be no match for AI playing Pac-Man in the near future. [url]
- A Civilization V match pitted 42 computer-controlled players against each other. It wasn’t an endless match (sorry, no WarGames conclusion), and a winner of this “Battle Royale” emerged after 179 turns. [url]
- Flappy Bird is a pretty hard game for humans to play, but a robot can play it without getting tired or frustrated. It’s not exactly a breakthrough in robotics, but this bot demonstrates a big difference in how computers play video games. [url]
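That last point, about how differently computers play video games, comes down to this: a bot can apply one fixed rule with perfect consistency on every single frame. Here's a hypothetical toy sketch of that kind of controller; the physics constants and the flap rule are made up for illustration and have nothing to do with the actual robot in the story:

```python
# Toy constants (invented for illustration, not taken from any real bot).
GRAVITY = -1.0        # velocity change per frame
FLAP_VELOCITY = 4.0   # upward velocity right after a flap

def run_bot(gap_center, frames=200):
    """Fixed-rule controller: flap whenever the bird sinks below the
    center of the pipe gap. Returns the worst distance from the gap
    center observed over the whole run."""
    height, velocity = gap_center, 0.0
    worst_error = 0.0
    for _ in range(frames):
        if height < gap_center:
            velocity = FLAP_VELOCITY  # "tap the screen"
        velocity += GRAVITY
        height += velocity
        worst_error = max(worst_error, abs(height - gap_center))
    return worst_error
```

With these toy numbers the bird just oscillates in a narrow band around the gap, frame after frame, indefinitely, which is exactly the kind of consistency a human thumb can't deliver.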
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: ai, artificial intelligence, atari, civilization v, elon musk, flappy bird, game algorithms, machine learning, robotics, video games, wargames
Companies: deepmind, google
DailyDirt: Computers Like To Sit In Front Of Computers And Play Games All Day, Too
from the urls-we-dig-up dept
Artificial intelligence software has been getting better and better over the years at beating humans at their own games. Games like Connect Four and Checkers are already solved, and while we humans might like to point out that there are games like Othello, Go, Diplomacy and Calvinball that still favor human players, it may only be a matter of time before computers outwit us at those games, too. Check out a few more games that algorithms are learning to play better than human brains.
- A specific version of Texas Hold ’em (heads-up limit) will likely be dominated by computer players now that an algorithm has minimized a “regret” function for playing it. Poker as a whole hasn’t been solved, but humans had better watch out when playing online to make sure their opponents are actually other humans (if it’s even possible to tell). [url]
- A computer simulating ant behavior has found almost half a million novel solutions to the “knight’s tour” problem in chess. This isn’t really a game, but it shows how AI can use some pretty wild strategies to solve game-related challenges. [url]
- The game of Go (aka wei qi) isn’t going to be solved by a computer any time soon, but Go-playing software is getting better against human opponents. To make move decisions, advanced Go AI programs play randomized simulations of entire games to try to pick between move options. That’s not quite how humans play the game, but apparently it’s a somewhat winning strategy to use against human players. [url]
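The "randomized simulations of entire games" in that last item are the core of the Monte Carlo approach those Go programs use. As a minimal sketch (not any actual Go engine's code), here's pure Monte Carlo move selection applied to a far simpler game: single-pile Nim, where players remove 1-3 objects and whoever takes the last one wins:

```python
import random

def random_playout(pile, my_turn):
    """Finish the game with uniformly random moves.
    Returns True if 'my' side takes the last object (and wins)."""
    while True:
        take = random.randint(1, min(3, pile))
        pile -= take
        if pile == 0:
            return my_turn
        my_turn = not my_turn

def best_move(pile, playouts=2000):
    """Score each legal move by its win rate over random playouts
    and return the move with the highest estimated win rate."""
    scores = {}
    for take in range(1, min(3, pile) + 1):
        if take == pile:
            scores[take] = 1.0  # taking the last object wins immediately
        else:
            wins = sum(random_playout(pile - take, my_turn=False)
                       for _ in range(playouts))
            scores[take] = wins / playouts
    return max(scores, key=scores.get)
```

From a pile of 5, taking 1 leaves the opponent a losing pile of 4, and the playout statistics find that move reliably even though nothing in the code encodes Nim theory. Real Go engines layer a search tree and better-than-random rollouts on top, but the evaluate-by-simulation core is the same idea.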
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: ai, algorithms, artificial intelligence, chess, game algorithms, games, go, knight's tour, poker
DailyDirt: I, For One, Welcome Ping Pong Robots
from the urls-we-dig-up dept
Not too long ago, Kuka filmed an ad hinting that its industrial robot arm was fast enough, and had smart enough software, to play ping pong with professional table tennis player Timo Boll. However, that match was pretty disappointing because it never really showed the robot arm returning a tournament-level serve from Boll (or even returning any kind of shot that wasn’t heavily edited to look more dramatic). Here are a few real ping-pong-playing robots — and they are not yet ready to compete with humans.
- Japanese electronics maker Omron demonstrated a ping pong robot at the 2014 Ceatec tech expo, and its 600+kg bot can play nicely with a human for over 100 volleys. This robot isn’t exactly going to beat anyone at a game, but it has reflexes in the sub-millisecond range, and presumably, software/hardware upgrades could make it more intimidating. [url]
- German researchers trained a robot to play ping pong, and it can return some gentle shots and keep its returns on the table (for the most part). Katharina Muelling and her colleagues were learning about how to teach robots physical skills by imitation, so maybe if they’d used a professional table tennis player to train their robot…. [url]
- Chinese humanoid robots have played ping pong against each other in a rally lasting 176 strokes. It’s not the most exciting game, but these bots can do both a forehand and backhand stroke — and play against humans, too. [url]
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: ai, artificial intelligence, game algorithms, katharina muelling, machine learning, ping pong, robotics, robots, table tennis, timo boll
Companies: kuka, omron
DailyDirt: Playing Games With Robots
from the urls-we-dig-up dept
Robots have mastered games like air hockey, and they’re about to dominate other casual tabletop games. It’s getting to the point where humans won’t be able to play any games that computers aren’t better at. Here are just a few other bots that might ruin your amateur winning streak (someday).
- German table tennis champion Timo Boll will play against an industrial robot arm made by Kuka. The match is scheduled for this March 11th, and the teaser trailer for the event makes the single-armed robot look like a formidable opponent. [url]
- Basement entertainment will never be the same because robots are learning how to play foosball, too. These robots can shoot a ball at speeds of 6 meters per second, enough to whizz by a casual foosball player, and pretty soon these bots will consistently beat humans. [url]
- Two-armed robots are ready to play billiards and have already managed to nail easy shots about 80% of the time. This humanoid robot plays pool similarly to a human, judging the difficulty of shots in order to plan a sequence of higher probability shots. [url]
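The shot-sequencing idea in that last item can be sketched in a few lines: if each candidate shot gets a success probability (treated as independent, which is a simplifying assumption), the best plan is the sequence whose probabilities multiply to the highest chance of running the whole thing. The numbers below are made up for illustration:

```python
from math import prod

def best_sequence(candidates):
    """Pick the shot sequence most likely to be run in full,
    assuming each shot's success probability is independent."""
    return max(candidates, key=prod)

# Hypothetical numbers: two easy shots vs. one flashy hard shot.
candidates = [
    [0.95, 0.90],  # whole sequence succeeds with probability ~0.855
    [0.60],        # whole sequence succeeds with probability 0.600
]
```

Under these made-up numbers the planner prefers the two easy shots, which is the same logic the robot applies when it judges shot difficulty before committing to a run.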
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: ai, artificial intelligence, billiards, foosball, game algorithms, games, ping pong, pool, robots, table tennis, timo boll