Human in the loop: a design pattern for managing teams working with ML

1. Human in the loop: a design pattern for managing teams working with ML
Paco Nathan @pacoid
R&D Group @ O'Reilly Media
Strata CA, San Jose, 2018-03-08
2. The reality of data rates
"If you only have 10 examples of something, it's going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that's the kind of scale where you should really start thinking about these kinds of techniques."
Jeff Dean, Google, VB Summit (2017-10-23)
venturebeat.com/2017/10/23/google-brain-chief-says-100000-examples-is-enough-data-for-deep-learning/
3. The reality of data rates
Transfer learning aside, most DL use cases require large, carefully labeled data sets, while RL requires much more data than that.
Active learning can yield good results with substantially smaller data rates, while leveraging an organization's expertise to bootstrap toward larger labeled data sets, e.g., as preparation for deep learning, etc.
[chart: data rates (log scale) for active learning, supervised learning, deep learning, reinforcement learning]
4. The reality of data rates
Same chart as the previous slide, with a callout: active learning is indicated for many enterprise use cases.
5. Why are AI programs different?
AI in the software engineering workflow
Peter Norvig, Google, TheAIConf (2017-06-28)
▪ Content: models not programs
▪ Process: training not debugging
▪ Release: retraining not patching
▪ Uncertainty: of objective
▪ Uncertainty: of action/recommendation
▪ Uncertainty: propagates through model
6. Active Learning: case studies and patterns
7. Machine learning
supervised ML:
▪ take a dataset where each element has a label
▪ train models on a portion of the data to predict the labels, then evaluate on the holdout (see the sketch below)
▪ deep learning is a popular example, but only if you have lots of labeled training data available
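As a concrete reference point, a minimal scikit-learn sketch of the train/holdout pattern described above; the dataset and model are illustrative choices, not from the talk:

```python
# Minimal supervised ML sketch: train on a portion of the labeled data,
# evaluate on the holdout. Dataset and model are illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# hold out 25% of the labeled data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```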
8. Machine learning
unsupervised ML:
▪ run lots of unlabeled data through an algorithm to detect "structure" or embedding
▪ for example, clustering algorithms such as K-means (see the sketch below)
▪ unsupervised approaches for AI are an open research question
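For comparison, a minimal K-means sketch; the synthetic blobs are an illustrative stand-in for "lots of unlabeled data":

```python
# Minimal unsupervised ML sketch: K-means finds cluster "structure"
# in unlabeled data. The synthetic blobs here are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
               for c in (0.0, 5.0, 10.0)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(kmeans.cluster_centers_)   # the detected structure: three centers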
9. Active learning
special case of semi-supervised ML:
▪ send difficult decisions/edge cases to experts; let algorithms handle routine decisions (automation) – see the routing sketch below
▪ works well in use cases which have lots of inexpensive, unlabeled data
▪ e.g., abundance of content to be classified, where cost of labeling is a major expense
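The core routing logic fits in a few lines; the confidence threshold and the expert queue here are illustrative assumptions, not from the talk:

```python
# HITL routing sketch: automate confident predictions, defer uncertain
# edge cases to human experts. The threshold is an assumed, tunable value.
import numpy as np

CONFIDENCE_THRESHOLD = 0.9   # illustrative cutoff

def route(model, X_unlabeled):
    """Split items into auto-labeled results and an expert-review queue."""
    proba = model.predict_proba(X_unlabeled)   # any scikit-learn classifier
    confidence = proba.max(axis=1)
    auto_mask = confidence >= CONFIDENCE_THRESHOLD
    auto_labels = proba.argmax(axis=1)[auto_mask]
    expert_queue = np.where(~auto_mask)[0]     # indices for human review
    return auto_labels, expert_queue
```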
10. Who's doing this?
11. Design pattern: Active learning
Real-World Active Learning: Applications and Strategies for Human-in-the-Loop ML
Ted Cuzzillo, O'Reilly Media (2015-02-05)
Active learning and transfer learning
Lukas Biewald, CrowdFlower, The AI Conf, SF (2017-09-17)
breakthroughs lag the invention of methods; they must wait for a "killer data set" to emerge, often a decade or more
12. Design pattern: Weak supervision
Creating large training data sets quickly
Alex Ratner, Stanford, O'Reilly Data Show (2017-06-08)
Snorkel: using weak supervision and data programming as another instance of human-in-the-loop (see the sketch below)
github.com/HazyResearch/snorkel
conferences.oreilly.com/strata/strata-ny/public/schedule/detail/61849
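To make the data programming idea concrete, a plain-Python sketch of noisy labeling functions combined by majority vote. This illustrates the concept only; it is not Snorkel's actual API, and the majority vote stands in for the generative label model Snorkel learns:

```python
# Weak supervision sketch: heuristic "labeling functions" vote on
# unlabeled examples. Plain Python for illustration, not Snorkel's API.
from collections import Counter

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text):
    return SPAM if "http" in text else ABSTAIN

def lf_shouting(text):
    return SPAM if text.isupper() else ABSTAIN

def lf_short_reply(text):
    return HAM if len(text.split()) < 5 else ABSTAIN

LFS = (lf_contains_link, lf_shouting, lf_short_reply)

def weak_label(text):
    """Majority vote over the labeling functions that do not abstain."""
    votes = [v for v in (lf(text) for lf in LFS) if v != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("win big now http://spam.example today only!!!"))  # -> 1
```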
13. Design pattern: Human-in-the-loop
Paul English on Lola's Debut for Business Travelers
Elizabeth West, Business Travel News (2017-10-04)
founded 2015 by Paul English and other Kayak execs: on-demand, personal travel service; uses expert travel agents for HITL
initially criticized by the travel industry as "competing against Siri"; currently displacing OTAs in a reversal of "AI vs. jobs"
can book on Airbnb, Southwest, etc., which aren't available via OTA, because of the human delegation
"The first time you use Lola it's going to be great because it's a conversation. We're not making you think like a computer."
"Instead of showing you 300 choices or 1,000 choices, we think we can show you three choices, kind of good, better, best."
14. Design pattern: Human-in-the-loop
Anand Kulkarni, Crowdbotics
HITL for code+test gen, trained from GitHub, StackOverflow, etc., with JIRA tickets as the granular object in the system
parse specs from JIRA history, reuse what's been done before; generate PRs for popular web stacks: React, Flask, Ruby, etc.
resolve specs into the approach needed and time required, where product managers get cost estimates, then on-demand expert programmers implement for you
have the in-house engineers handle "radically novel" projects
results: 1.5x software dev throughput
15. Design pattern: Human-in-the-loop
Building a business that combines human experts and data science
Eric Colson, StitchFix, O'Reilly Data Show (2016-01-28)
"what machines can't do are things around cognition, things that have to do with ambient information, or appreciation of aesthetics, or even the ability to relate to another human"
16. Design pattern: Human-in-the-loop
EY, Deloitte And PwC Embrace Artificial Intelligence For Tax And Accounting
Adelyn Zhou, Forbes (2017-11-14)
compliance use cases in reviewing lease accounting standards
3x more consistent and 2x more efficient than the previous humans-only teams
break-even ROI within less than a year
17. Design pattern: Human-in-the-loop
Unsupervised fuzzy labeling using deep learning to improve anomaly detection
Adam Gibson, Skymind, Strata Data Conf, Singapore (2017-12-07)
large-scale use case for telecom in Asia
method: overfit variational autoencoders, then send outliers to human analysts (sketched below)
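A rough sketch of that triage pattern: score records by reconstruction error and queue the worst for human review. A plain MLP autoencoder via scikit-learn stands in for the variational autoencoder used in the talk; the data and cutoff are illustrative:

```python
# Anomaly triage sketch: train an autoencoder on traffic records, then
# route the records it reconstructs worst to human analysts. A plain MLP
# autoencoder stands in for the talk's variational autoencoder.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 20))    # mostly "normal" records (illustrative)
X[:10] += 6.0                      # a few injected anomalies

# train the network to reconstruct its own input (X -> X)
ae = MLPRegressor(hidden_layer_sizes=(5,), max_iter=500, random_state=0)
ae.fit(X, X)

errors = ((ae.predict(X) - X) ** 2).mean(axis=1)   # per-record error
to_analysts = np.argsort(errors)[-10:]             # worst reconstructions
print("queued for human review:", sorted(to_analysts))
```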
18. Design pattern: Human-in-the-loop
Strategies for integrating people and machine learning in online systems
Jason Laska, Clara Labs, The AI Conf, NY (2017-06-29)
establishing a two-sided marketplace where machines and people compete on a spectrum of relative expertise and capabilities
19. Design pattern: Human-in-the-loop
Same slide as above, with a callout:
"the trick is to design systems from Day 1 which learn implicitly from the intelligence which is already there"
Michael Akilian, Clara Labs
20. Design pattern: Human-in-the-loop
Building human-assisted AI applications
Adam Marcus, B12, O'Reilly Data Show (2016-08-25)
"Humans where they're best, machines for the rest."
Orchestra: a platform for building human-assisted AI applications, e.g., create/update business websites
https://github.com/b12io/orchestra
example: http://www.coloradopicked.com/
21. Design pattern: Flash teams
Expert Crowdsourcing with Flash Teams
Daniela Retelny, et al., Stanford HCI, UIST (2014-10-05)
computationally-guided teams of crowd experts supported by lightweight, reproducible, scalable team structures
"elastic recruiting": grow and shrink teams on demand, combine teams into larger organizations
http://stanfordhci.github.io/flash-teams/
22. Problem: disambiguating contexts
23. AI in Media
▪ content which can be represented as text can be parsed by NLP, then manipulated by available AI tooling
▪ labeled images get really interesting
▪ text or images within a context have inherent structure
▪ representation of that kind of structure is rare in the Media vertical – so far
24. Disambiguating contexts
Overlapping contexts pose hard problems in natural language understanding. That runs counter to the correlation emphasis of big data. NLP libraries lack features for disambiguation.
25. Disambiguating contexts
Suppose someone publishes a book which uses the term `react`: are they talking about a JavaScript library, or about human behavior during interviews? Our customers ask for both. We handle lots of content about both. Disambiguating those contexts is important for good UX in personalized learning.
In other words, how do machines help people distinguish that content within search?
Potentially a good case for deep learning, except for the lack of labeled data at scale.
26. Active learning through Jupyter
Jupyter notebooks are used to manage ML pipelines for disambiguation, where machines and people collaborate (see the sketch below):
▪ ML based on examples – most all of the feature engineering, model parameters, etc., has been automated
▪ https://github.com/ceteri/nbtransom
▪ based on use of nbformat, pandas, scikit-learn
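One way to picture the machines-as-collaborators idea: a pipeline process appends its results to a shared notebook via nbformat. The file name and cell contents are illustrative assumptions; see nbtransom for the actual implementation:

```python
# Sketch: a pipeline process writes its status into a shared notebook,
# so humans and machines collaborate on the same document. File name
# and report text are illustrative; see nbtransom for the real thing.
import nbformat

NOTEBOOK = "disambiguate_react.ipynb"   # assumed shared notebook

nb = nbformat.read(NOTEBOOK, as_version=4)

report = "**machine update:** ensemble retrained; low-confidence items deferred to experts"
nb.cells.append(nbformat.v4.new_markdown_cell(report))

nbformat.write(nb, NOTEBOOK)
```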
27. Active learning through Jupyter
Same setup as the previous slide, viewing each Jupyter notebook as…
▪ one part configuration file
▪ one part data sample
▪ one part structured log
▪ one part data visualization tool
plus, subsequent data mining of these notebooks helps augment our ontology
28. Active learning through Jupyter
[diagram: browser, connected over an SSH tunnel to a Jupyter kernel, which drives the ML pipelines]
29. Active learning through Jupyter
▪ Notebooks allow the human experts to rapidly access the internals of a mostly automated ML pipeline
▪ Stated another way, both the machines and the people become collaborators on shared documents
▪ Anticipates upcoming collaborative document features in JupyterLab
30. Active learning through Jupyter
1. Experts use notebooks to provide examples of book chapters, video segments, etc., for each key phrase that has overlapping contexts
2. Machines build ensemble ML models based on those examples, updating notebooks with model evaluation
3. Machines attempt to annotate labels for millions of pieces of content, e.g., `AlphaGo`, `Golang`, versus a mundane use of the verb `go`
4. Disambiguation can run mostly automated, in parallel at scale – through integration with Apache Spark
5. In cases where ensembles disagree, ML pipelines defer to human experts who make judgement calls, providing further examples (see the sketch after this list)
6. New examples go into training ML pipelines to build better models
7. Rinse, lather, repeat
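Step 5 is essentially query-by-committee. A minimal sketch with an illustrative three-model ensemble, deferring non-unanimous items to experts (the talk's pipeline runs this at scale via Spark):

```python
# Query-by-committee sketch for step 5: items where the ensemble
# disagrees get deferred to human experts. Models and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=1)
X_train, y_train, X_pool = X[:100], y[:100], X[100:]   # small labeled seed

committee = [
    LogisticRegression(max_iter=1000).fit(X_train, y_train),
    RandomForestClassifier(random_state=1).fit(X_train, y_train),
    GaussianNB().fit(X_train, y_train),
]

votes = np.stack([m.predict(X_pool) for m in committee])   # (3, n_pool)
disagree = (votes != votes[0]).any(axis=0)                 # not unanimous
print("defer to experts:", int(disagree.sum()), "of", len(X_pool), "items")
```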
31. Social Systems: collaboration with machines
32. Product management
The History and Evolution of Product Management
Martin Eriksson, Mind the Product (2015-10-28)
From PM's origins as "Brand Men", on through the success arc of Hewlett-Packard, on to the Agile Manifesto, Lean Enterprise, etc.
Formerly part of Engineering or Marketing, PM is now "taking a seat at the table" under CEOs
33. Conway's Law
How Do Committees Invent?
Melvin Conway, Datamation (1968-04)
Organizations that create systems produce designs which copy their own communication structures.
For each level of delegation, someone's scope of inquiry narrows, design alternatives also narrow – until a system is simple enough to be understood in human terms.
34. Conway's Law illustrated
Organizational Charts, Manu Cornet, Bonkers World
Cognitive biases:
▪ anthropocentrism
▪ system justification
In retrospect, the Agile Manifesto contains examples
See related descriptions: Destruction and Creation, John R. Boyd, USAF (1976-09-03)
35. First-order cybernetics
Cybernetics: Or Control and Communication in the Animal and the Machine
Norbert Wiener, MIT, MIT Press (1948)
early work had been about closed-loop control systems: homeostasis, habituation, adaptation, and other regulatory processes
given a system which has input and output, a controller leveraging a negative feedback loop, and one or more observers outside of the system
related to the early Macy Conferences
36. "the organism was no longer an input/output machine; rather it was part of a loop from perception to action and back again to perception"
Paul Pangaro describing Jerry Lettvin @ MIT cybernetics
37. Second-order cybernetics
1. von Foerster: one can apply the understandings developed in cybernetics to the subject matter itself
2. presence of the observer is inevitable and may be desirable: "What is said is said to an observer"
3. eigen functions: stable, dynamically self-perpetuating states that are self-referential: "We construct our realities" per constructivism
4. autopoiesis: a living entity exists as a network of components, recursively producing itself, realizing its boundaries; it grows and maintains itself by reference to itself
5. feedback loops represent conversations, from which the participants cannot be detached
6. an essentially ethical understanding
7. a productive interaction between theory and practice, in which each supports the other
38. Second-order cybernetics
Same list as the previous slide, with a callout: second-order cybernetics lays a foundation for AI – it's about the semantic relations of conversations within a system; quite apt for leveraging NLP, active learning, etc., when you have semi-structured dialog
39. Second-order cybernetics
Autopoiesis and Cognition: The Realization of the Living
Humberto Maturana, Francisco Varela, Kluwer (1980 / original 1972)
Understanding Computers and Cognition: A New Foundation for Design
Terry Winograd, Fernando Flores, Intellect Books (1986)
Conversations for Action and Collected Essays
Fernando Flores, Createspace (2013)
40. Second-order cybernetics
▪ biology informing computer science
▪ historical context of Project Cybersyn
▪ autopoiesis and cognition
▪ organizational closure: "self-making means stability"
▪ speech acts (e.g., social analysis of open source)
▪ IMO, blueprints for AI systems
Also, the focus on "information as a collection of facts" is yet another form of cognitive bias – instilled through 30+ years of data warehouse practices, where data must fit into dimensions, facts, schema
41. Active Learning: theory, practices, community
42. HITL theory: choosing what to learn
Active Learning Literature Survey
Burr Settles, UW Madison (2010-01-26)
Can machines learn more economically if they ask human "oracles" questions? e.g., task in-house experts with the edge cases?
▪ uncertainty sampling: query about instances which ML is least certain how to label – least confidence / margin / entropy (see the sketch after this list)
▪ query-by-committee: ensemble of ML models votes; query the instance about which they disagree most
▪ expected error reduction: maximize the expected information gain of the query
▪ variance reduction: minimize future generalization error of the model (e.g., loss function)
▪ density-weighted methods: instances which are both uncertain and "representative" of the underlying distribution
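The three uncertainty-sampling measures from the survey are easy to compute from a classifier's predicted class probabilities; a minimal sketch:

```python
# Uncertainty sampling sketch: three measures from Settles' survey,
# computed from any classifier's predicted class probabilities.
import numpy as np

def least_confidence(proba):
    return 1.0 - proba.max(axis=1)        # low top-class confidence

def margin(proba):
    part = np.sort(proba, axis=1)
    return part[:, -1] - part[:, -2]      # query the *smallest* margins

def entropy(proba):
    return -(proba * np.log(proba + 1e-12)).sum(axis=1)

proba = np.array([[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]])
print(least_confidence(proba))   # the 0.5/0.5 instance is most uncertain
```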
43. HITL practices: emerging themes
while ML was mostly about generalization, now we can borrow from Frank Knight (1921): using ML models to explore uncertainty in relationship to profit vs. risk
▪ distinguish forms of uncertainty: aleatoric (noise) vs. epistemic (incomplete model) – see the sketch after this list
▪ see also: meta-learning [1] and [2]
▪ people who aren't ML experts should be able to train and iterate robust models using examples
▪ emphasize use of fitness functions to make decisions, in lieu of objective functions which tend to rely on overly simplified KPIs
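One common heuristic for separating the two forms of uncertainty uses an ensemble: epistemic uncertainty shows up as disagreement between members (it shrinks with more data), aleatoric as uncertainty every member shares. A minimal sketch with made-up probabilities; this decomposition is standard practice, not from the slide itself:

```python
# Sketch: decomposing predictive uncertainty with an ensemble.
# epistemic = variance of P(class=1) across members; aleatoric = the
# residual Bernoulli variance all members share (law of total variance).
import numpy as np

# P(class=1) from 5 ensemble members (rows) for 3 instances (columns)
p = np.array([
    [0.1, 0.5, 0.1],
    [0.9, 0.5, 0.1],
    [0.1, 0.5, 0.1],
    [0.9, 0.5, 0.1],
    [0.1, 0.5, 0.1],
])

epistemic = p.var(axis=0)                        # members disagree
mean_p = p.mean(axis=0)
aleatoric = mean_p * (1 - mean_p) - epistemic    # = mean of p*(1-p)

# instance 1: high epistemic (model incomplete) -> query an expert
# instance 2: high aleatoric (inherent noise)   -> more data won't help
# instance 3: low both                          -> automate
print(epistemic.round(3), aleatoric.round(3))
```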
44. HITL practices: model interpretation
explicability of ML models becomes essential, and must be intuitive for the human experts involved: Skater, and also Anchors, SHAP, STREAK, LIME, etc. (see the sketch below)
The Building Blocks of Interpretability
Chris Olah, et al., Google Brain, Distill (2018-03-06)
Challenges for Transparency
Adrian Weller, WHI (2017-07-29)
The Mythos of Model Interpretability
Zachary Lipton, WHI (2016-03-06)
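For one of the tools named above, a brief hedged sketch: SHAP values for a tree-based model, giving experts per-feature attributions. Dataset and model are illustrative stand-ins; requires the `shap` package:

```python
# Model interpretation sketch using SHAP for a tree-based model.
# Dataset and model are illustrative, not from the talk.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # per-feature attributions
shap.summary_plot(shap_values, X.iloc[:100])       # overview for the experts
```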
45. Interpreting Machine Learning Models
Wed Mar 28 | 10-11 am Pacific
datascience.com/resources/webinars/interpreting-machine-learning-models
live webinar: we'll discuss the need for methods which make the process of explaining machine learning models more intuitive, and also evaluate myths about model interpretability, from both research and business perspectives.
Pramit Choudhary, Lead Data Scientist, DataScience.com
Sameer Singh, CS Assistant Professor, UC Irvine
Paco Nathan, Dir, Learning Group, O'Reilly Media
46. HITL resources: conferences, journals, etc.
HILDA 2018: Workshop on Human-In-the-Loop Data Analytics, co-located with SIGMOD 2018, June in Houston
Collective Intelligence 2018: University of Zurich, Switzerland, co-located with AAAI HCOMP 2018, July in Zurich
HCOMP in Slack: https://hcomp.slack.com/
Human Computation journal: http://hcjournal.org/ojs/index.php?journal=jhc
47. HITL tooling: active learning
Agnostic Active Learning Without Constraints
Alina Beygelzimer, Daniel Hsu, John Langford, Tong Zhang, NIPS (2010-06-14)
The End of the Beginning of Active Learning
Daniel Hsu, John Langford, Hunch.net (2011-04-20)
https://github.com/JohnLangford/vowpal_wabbit/wiki
focused on cases where labeling is expensive; uses importance-weighted active learning; handles "adversarial label noise"; as good or better than supervised ML, wherever supervised ML works
48. HITL tooling: machine teaching
Prodigy: a new tool for radically efficient machine teaching
Matthew Honnibal, Ines Montani, Explosion.ai (2017)
49. Management strategy: before
In general with Big Data, we were considering:
▪ DAG workflow execution – those are typically linear
▪ data-driven organizations
▪ ML based on optimizing for objective functions
▪ general considerations about correlation vs. causation
▪ avoid "garbage in, garbage out"
[diagram: Jarvis workflow]
50. Management strategy: after
HITL introduces circularities:
▪ deprecate linear input/output systems as the "conventional wisdom"
▪ analogous to an OODA loop which incorporates automation/augmentation
▪ recognize multiple feedback loops as conversations for action
▪ recognize opportunity: loops from perception (e.g., DL) to action (e.g., HITL) and back again to perception
▪ design systems to learn implicitly from the intelligence already there
▪ hint: recognize the "verbs" being used, rather than over-emphasizing "nouns"
[diagram: organizational learning loop connecting Customers, ML Models, and Human Experts – customers request sales, marketing, service, training; models act on decisions when possible and explore uncertainty when needed; experts decide about edge cases, providing examples (e.g., weak supervision), gain insights via model explanations, and learn through customer interactions]
51. Management strategy: no-collar workforce
No-collar workforce: Humans and machines in one loop
Anthony Abbatiello, Tim Boehm, Jeff Schwartz, Sharon Chand, Deloitte Insights (2017-12-05)
▪ near-future: human workers and machines complement each other's efforts in a single loop of productivity
▪ 2018-20: expect firms to embrace a "no-collar workforce" trend by redesigning jobs
▪ yet only ~17% are ready to manage a workforce in which people, robots, and AI work side by side – largely due to cultural, tech fluency, and regulatory issues
▪ e.g., what about onboarding or retiring non-human workers? these are no longer theoretical questions
▪ HR orgs must develop strategies and tools for recruiting, managing, and training a hybrid workforce
52. Summary: how this matters
53. Extrapolating trends
Conference summaries, Oct 2017 part 1
PN (2017-10-10)
Themes emerging in AI conferences about the impact of ML on software process, i.e., something's afoot:
2009-ish, data science ran headlong into prod mgmt
2012-ish, data sci leaders moved into prod exec roles
2018-ish, AI apps disrupting prod mgmt
…
54. Flywheel Effect, circa 2018
AI drives features in products and services…
which in turn drives cloud consumption…
which in turn acquires even more data…
particularly for mobile or embedded products
Incumbents now lead in AI + cloud + mobile/embed: Google, Amazon, Microsoft, IBM, Apple, Baidu, etc.
55. Challenge: adoption by industry segment

segment: Google, Amazon, Microsoft, IBM, Apple, Baidu, etc.
assets:
▪ AI + cloud + mobile/embed, leveraging a flywheel effect
▪ had focused business lines well in advance to prepare large-scale labeled data sets
▪ uses AI to explore uncertainty, focusing their core expertise
liabilities:
▪ high capital expenses, long-term R&D as hardware evolves rapidly
▪ potential vulnerabilities by automating too much
▪ potential vulnerabilities by mistaking first-order cybernetics for second-order

segment: < 50%
assets:
▪ HITL provides a vector to compete against top incumbents, with many unexplored areas of opportunity
liabilities:
▪ facing barriers: talent gap, competing investment priorities, security concerns
▪ verticals eroded by horizontal business lines from top incumbents

segment: > 50%
assets: ??
liabilities:
▪ struggling to recognize business use cases
▪ buried in tech debt from digital infrastructure
▪ lacks management support
56. What is changing and why?
Second-order cybernetics began partly as a study of how complex systems fail, and also about what social systems and physical systems had in common
It provides foundations for AI systems of people + machines
Feedback loops represent structured conversations for action, from which the participants cannot be detached
The organization is no longer viewed as an input/output machine; rather it's a pluralistic network of loops from perception to action and back again to perception – e.g., DL augments perception and RL augments actions
57. What is changing and why?
Same slide as above, with a callout: in other words, as the flywheel effect itself is evolving, to stay ahead we must recognize the emerging "verbs", which are entry points into the business use cases
58. What do organizations carry into AI?
Assess the cognitive biases we bring into AI systems of people + machines:
▪ anthropocentrism and system justification, as shown by Conway's Law
▪ DW + BI cultural lens overemphasizes "information as a collection of facts", missing the conversations for action
▪ digitalization sequence "Product", "Service", "Data": overreacting to the nouns (facts), while ignoring the verbs (relations)
▪ delegation + committee: narrowing the scope of inquiry and design alternatives until a system is simple enough to understand in human terms
▪ some incumbents hold tenaciously to ML apps within first-order cybernetics, i.e., bias toward mostly top-down command and control
Instead, we must design systems that learn implicitly from the intelligence already within an organization and its relationships with the customers, channels, etc.: Sales, Customer Support, Professional Services, Marketing
59. What do organizations carry into AI?
Same slide as above, with a callout: could we be encountering early stages of not-only-human cognition attempting to optimize beyond human predispositions and cognitive biases? [ed note: say at least one strange thing]
60. "The future belongs to those who understand at a very deep level how to combine their unique expertise with what algorithms do best."
– Pedro Domingos, The Master Algorithm
61. The AI Conf: CN, Apr 10-13; NY, Apr 29-May 2; SF, Sep 4-7; UK, Oct 8-11
Strata Data: UK, May 21-24; NY, Sep 11-14; SF, Mar 26-28
JupyterCon + events: BOS, Mar 21; ATL, Mar 31; DC, May 15; NY, Aug 21-25
OSCON: PDX, Jul 16-19
62. Get Started with NLP in Python | Just Enough Math | Building Data Science Teams | Hylbert-Speys | How Do You Learn?
articles, online courses, conference summaries…
liber118.com/pxn/
@pacoid