DailyDirt: Lethal Machines
from the urls-we-dig-up dept
Artificial intelligence is obviously pretty far from gaining sentience or even any kind of disturbingly smart general intelligence, but some of its advances are nonetheless pretty impressive (e.g., beating human chess grandmasters, playing poker, driving cars). Software controls more and more of the stuff that comes in contact with people, so more people are starting to wonder when all of this smart technology might turn on us humans. It’s not a completely idle line of thinking. Self-driving cars and trucks are legitimate safety hazards. Autonomous drones might keep firefighters from doing their jobs. There are plenty of not-entirely-theoretical situations in which robots could unintentionally (and perhaps preventably) harm large numbers of people. Where should we draw the line? Asimov’s three laws of robotics may be insufficient, so what kind of ethical coding should we adopt instead?
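Just to make that question concrete, here’s a minimal, purely hypothetical sketch of what Asimov-style prioritized rules might look like if literally written as code. Everything in it (the Action fields, the rule list, the tie-breaking logic) is made up for illustration; no real vehicle or weapon system is claimed to work this way.

```python
# Purely illustrative toy: "ethical code" as a prioritized rule check, loosely in
# the spirit of Asimov's three laws. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this choice injure a person?
    disobeys_operator: bool  # does it ignore a human instruction?
    damages_machine: bool    # does it risk the machine itself?

# Rules in descending priority; index 0 is the most important.
RULES = [
    ("do not harm humans",      lambda a: not a.harms_human),
    ("obey human instructions", lambda a: not a.disobeys_operator),
    ("preserve the machine",    lambda a: not a.damages_machine),
]

def worst_violation(action: Action) -> int:
    """Index of the highest-priority rule the action violates (len(RULES) if none)."""
    for i, (_, check) in enumerate(RULES):
        if not check(action):
            return i
    return len(RULES)

def choose(candidates: list[Action]) -> Action:
    """Pick the candidate whose worst violation is the least important rule."""
    return max(candidates, key=worst_violation)

if __name__ == "__main__":
    swerve = Action("swerve to avoid the child", False, False, True)
    stay = Action("stay the course", True, False, False)
    print("chosen:", choose([swerve, stay]).name)  # -> swerve to avoid the child
```

Even this toy version shows why the hard part isn’t the rule list, it’s the perception and judgment feeding it: the code only works if something upstream can reliably fill in those boolean fields.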
- An open letter from the Future of Life Institute (FLI) warns against the possibility of an artificial intelligence (AI) arms race that could threaten humanity. Autonomous weapons are close enough to reality that they could hinder beneficial AI research, as well as systematically kill people without “meaningful human control” over the algorithms. [url]
- Autonomous cars with an ethical code in addition to just software code… are getting more attention as self-driving vehicles on public roads grow ever more likely. If a child runs in front of an autonomous car, should the car swerve to avoid the kid? There is an ethical dilemma inherent in making vehicles smart enough to tell a kid from some other moving object, but these questions might be avoided entirely by making smart systems only so smart and no smarter, minimizing liability for the companies building the machines. [url]
- A precursor to an artificial intelligence arms race might be a supercomputer hardware arms race, and we’re already ordering up a National Strategic Computing Initiative (NSCI) to build an exaflop computer to rival China’s Tianhe-2. Sure, artificial intelligence doesn’t need to be developed on super-fast computers, but if fast computers are considered potential weapons, it’s not a huge leap of logic to see a supercomputer arms race as a military threat. [url]
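For a rough sense of scale (numbers from the public TOP500 results, not from the linked article): Tianhe-2’s measured Linpack performance was about 33.86 petaflops, so an exaflop machine would be roughly a thirty-fold jump. A back-of-the-envelope check:

```python
# Back-of-the-envelope comparison; the Tianhe-2 figure is its widely reported
# Linpack (Rmax) result, not something taken from the article above.
tianhe2_pflops = 33.86     # Tianhe-2 Linpack performance, in petaflops
exaflop_pflops = 1000.0    # 1 exaflops = 1,000 petaflops = 10**18 FLOPS

print(f"An exaflop system would be ~{exaflop_pflops / tianhe2_pflops:.0f}x Tianhe-2")
# -> An exaflop system would be ~30x Tianhe-2
```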
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.
Filed Under: ai, algorithms, artificial intelligence, asimov, autonomous vehicles, drones, ethical code, fli, military, national strategic computing initiative, nsci, robotics, supercomputers, tianhe-2, war, weapons
Companies: future of life institute