The Alignment Problem

Property Value
dbo:abstract The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned with human values. (en)
dbo:author dbr:Brian_Christian
dbo:isbn 0393635821
dbo:nonFictionSubject dbr:AI_alignment
dbo:numberOfPages 496 (xsd:positiveInteger)
dbo:oclc 1137850003
dbo:publisher dbr:W._W._Norton_&_Company
dbo:releaseDate 2020-10-06 (xsd:date)
dbo:thumbnail wiki-commons:Special:FilePath/The_Alignment_Problem_book_cover.jpg?width=300
dbo:wikiPageExternalLink https://brianchristian.org/the-alignment-problem/
dbo:wikiPageID 69438830 (xsd:integer)
dbo:wikiPageLength 7440 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID 1106488873 (xsd:integer)
dbo:wikiPageWikiLink dbr:Behaviorism dbr:ProPublica dbr:Publishers_Weekly dbr:Satya_Nadella dbc:Futurology_books dbr:Brian_Christian dbr:DeepMind dbr:Julia_Angwin dbr:Perceptron dbc:English-language_books dbc:English_non-fiction_books dbc:W._W._Norton_&_Company_books dbr:Nature_(journal) dbr:The_New_York_Times dbr:The_Wall_Street_Journal dbr:Machine_learning dbr:Artificial_neural_networks dbr:Actualism dbr:Toby_Ord dbr:W._W._Norton_&_Company dbr:William_MacAskill dbc:Books_about_effective_altruism dbc:Books_about_existential_risk dbr:AI_alignment dbr:AlphaGo dbr:AlphaZero dbr:Ezra_Klein dbr:Fast_Company dbr:Normative dbr:Global_catastrophic_risk dbr:Reinforcement_learning dbr:Artificial_intelligence dbc:2020_non-fiction_books dbc:Existential_risk_from_artificial_general_intelligence dbr:AlexNet dbr:Kate_Crawford dbr:Black_box dbr:Effective_altruism dbr:Dopamine dbr:COMPAS_(software) dbr:Possibilism_(philosophy) dbr:Human_Compatible dbr:Psychological dbr:Kirkus_Reviews dbr:Recidivism dbr:Inverse_reinforcement_learning dbr:Superintelligence:_Paths,_Dangers,_Strategies dbr:Existential_risk
dbp:author dbr:Brian_Christian
dbp:caption Hardcover edition (en)
dbp:country United States (en)
dbp:isbn 0393635821
dbp:language English (en)
dbp:mediaType Print, e-book, audiobook (en)
dbp:name The Alignment Problem: Machine Learning and Human Values (en)
dbp:oclc 1137850003 (xsd:integer)
dbp:pages 496 (xsd:integer)
dbp:publisher dbr:W._W._Norton_&_Company
dbp:releaseDate 2020-10-06 (xsd:date)
dbp:subject dbr:AI_alignment
dbp:website https://brianchristian.org/the-alignment-problem/
dbp:wikiPageUsesTemplate dbt:Effective_altruism dbt:About dbt:Infobox_book dbt:Reflist dbt:Short_description dbt:Use_dmy_dates
dc:publisher W. W. Norton & Company
dct:subject dbc:Futurology_books dbc:English-language_books dbc:English_non-fiction_books dbc:W._W._Norton_&_Company_books dbc:Books_about_effective_altruism dbc:Books_about_existential_risk dbc:2020_non-fiction_books dbc:Existential_risk_from_artificial_general_intelligence
rdf:type owl:Thing bibo:Book schema:Book schema:CreativeWork dbo:Work wikidata:Q234460 wikidata:Q386724 wikidata:Q571 dbo:Book dbo:WrittenWork
rdfs:comment The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned with human values. (en)
</rdfs:comment>
rdfs:label The Alignment Problem (en)
owl:sameAs wikidata:The Alignment Problem https://global.dbpedia.org/id/GK4LM
prov:wasDerivedFrom wikipedia-en:The_Alignment_Problem?oldid=1106488873&ns=0
foaf:depiction wiki-commons:Special:FilePath/The_Alignment_Problem_book_cover.jpg
foaf:homepage https://brianchristian.org/the-alignment-problem/
foaf:isPrimaryTopicOf wikipedia-en:The_Alignment_Problem
foaf:name The Alignment Problem: Machine Learning and Human Values (en)
is dbo:notableWork of dbr:Brian_Christian
is dbo:wikiPageRedirects of dbr:Machine_Learning_and_Human_Values
is dbo:wikiPageWikiLink of dbr:Brian_Christian dbr:Existential_risk_from_artificial_general_intelligence dbr:Atlas_of_AI dbr:Machine_Learning_and_Human_Values
is foaf:primaryTopic of wikipedia-en:The_Alignment_Problem
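The property/value pairs above can be retrieved programmatically from DBpedia's public SPARQL endpoint. The following is a minimal sketch, assuming the standard endpoint URL (`https://dbpedia.org/sparql`) and the conventional `dbr:` prefix for resources; it only builds the request URL and does not perform the network call.

```python
# Minimal sketch: building a SPARQL request against DBpedia's public
# endpoint to fetch all predicate/object pairs for a resource such as
# dbr:The_Alignment_Problem. Endpoint URL and prefix follow standard
# DBpedia conventions; executing the request requires network access.
from urllib.parse import urlencode

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"


def build_query(resource: str) -> str:
    """Return a SPARQL query selecting every predicate/object pair
    for the given DBpedia resource name (e.g. "The_Alignment_Problem")."""
    return (
        "PREFIX dbr: <http://dbpedia.org/resource/>\n"
        f"SELECT ?p ?o WHERE {{ dbr:{resource} ?p ?o }}"
    )


def request_url(resource: str) -> str:
    """Full GET URL asking the endpoint for JSON results (not executed here)."""
    params = urlencode({
        "query": build_query(resource),
        "format": "application/sparql-results+json",
    })
    return f"{DBPEDIA_SPARQL}?{params}"


url = request_url("The_Alignment_Problem")
```

Fetching `url` with any HTTP client would return the triples listed on this page as JSON bindings; the same query pasted into the endpoint's web form yields an HTML table.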