HKSYU Library

    Librarian View

    LEADER 02958cam a2200469 i 4500
    001        991002024439707546
    005        20251103114602.0
    008        211124s2020 nyu b 001 0 eng
    010        a| 2020029036
    020        a| 9780393635829 q| (hardcover)
    020        z| 9780393635836 q| (epub)
    035        a| (HKSYU)b21479756-852hksyu_inst
    035        a| (OCoLC)1137850003 z| (OCoLC)1197966344 z| (OCoLC)1224596061 z| (OCoLC)1228811531
    040        a| DLC b| eng e| rda c| DLC d| OCLCO d| OCLCF d| UAP d| YDX d| CPP d| TCH d| VP@ d| AJB d| CTU d| GYG d| WIQ d| HK-SYU
    042        a| pcc
    050 0 0    a| Q334.7 b| .C47 2020
    050   4    a| Q334.7 b| .C475 2020 9| wsl
    082 0 4    a| 006.3101/9 2| 23
    082 0 0    a| 174/.90063 2| 23
    092 0      a| 174.90063 b| CHR 2020
    100 1      a| Christian, Brian, d| 1984- e| author.
    245 1 4    a| The alignment problem : b| machine learning and human values / c| Brian Christian.
    246 3 0    a| Machine learning and human values
    264   1    a| New York, NY : b| W.W. Norton & Company, c| [2020]
    264   4    c| ©2020
    300        a| xii, 476 pages ; c| 25 cm.
    336        a| text b| txt 2| rdacontent
    337        a| unmediated b| n 2| rdamedia
    338        a| volume b| nc 2| rdacarrier
    504        a| Includes bibliographical references (pages 401-451) and index.
    505 0 0    t| Prophecy. t| Representation -- t| Fairness -- t| Transparency -- t| Agency. t| Reinforcement -- t| Shaping -- t| Curiosity -- t| Normativity. t| Imitation -- t| Inference -- t| Uncertainty.
    520        a| "A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--Provided by publisher.
    650   0    a| Artificial intelligence x| Moral and ethical aspects.
    650   0    a| Artificial intelligence x| Social aspects.
    650   0    a| Machine learning x| Safety measures.
    650   0    a| Software failures.
    650   0    a| Social values.
    907        a| b21479756 b| 28-01-22 c| 24-11-21
    910        a| ykc b| mkl
    935        a| (HK-SYU)501040683 9| ExL
    998        a| book b| 19-01-22 c| m d| a e| - f| eng g| nyu h| 4 i| 0
    945        h| Supplement l| location i| barcode y| id f| bookplate a| callnoa b| callnob n| PSY205