Fixing the program my computer learned: End-user debugging of machine-learned programs

Dr Simone Stumpf
City University London
[email protected]

Bio
• 1996: BSc Computer Science with Cognitive Science, UCL
• 2001: PhD Computer Science, UCL
• 2001–2004: Research Fellow, UCL
• 2004–2007: Research Manager, Oregon State University (OSU)
• 2007–2009: UX Architect, White Horse
• 2008–present: Assistant Professor (Senior Research), OSU
• 2009–present: Lecturer, City University London

What are machine-learned programs?
• Systems that "predict": spam filters, "smart desktops", web page recommendations
• They learn from and adapt to the user after deployment
• They use probabilistic machine learning algorithms
• The resulting behaviour is a program

How do you debug a program that was written by a machine instead of a person? Especially when you don't know much about programming and are working with a program you can't even see?

A quick machine learning detour…
• A "simple" algorithm like Naïve Bayes:
  – has inputs (features) and outputs (labels or classes)
  – from training data it learns a function of the form weight × input → class
  – as it learns further, the weights are changed
• Example: spam filters (bag-of-words approach)
  – take all words appearing in the training data as features
  – throw out stop words (a, the, …)
  – do stemming (walking, walked → walk)
  – learn how prevalent certain words are in spam messages
  – use that function to predict whether a new email message is spam
  (a code sketch of this approach appears at the end of this preview)

Current debugging approach
[Figure: recommender screenshot – "Based on your interest in:" / "We recommend:" – with per-item accept/reject marks]

Problems and opportunities for end users
• End users are not machine learning experts or programmers
• Only they can fix incorrect behaviour when it occurs:
  – they cannot inspect the source code
  – they can only observe results at run time
  – they can usually only give more training examples to influence future behaviour
  – they need to provide lots of training data to change behaviour
• Much richer user knowledge could be exploited
• Doing so could increase usability and trust

How can the program communicate its reasoning to the end user? How could the user talk back?

Formative study
• Enron email dataset, folders from farmer-d: Personal, Resume, Bankrupt, Enron News (122 messages)
• Lo-fi prototypes with explanations:
  – Rule-based
  – Similarity-based
  – Keyword-based
• 13 participants, talk-aloud protocol

Explanations by the ML program
• Explanations should be simplified yet faithful, and concrete
• Rule-based explanations were best understood, but there was no clear overall preference
• Similarity-based explanations had serious understandability problems
• The negative keyword list in the Keyword-based explanation was problematic (negative weights)
• It matters whether users think the reasoning is sound and whether it is communicated clearly; word choices are important

What does the user tell the program?
• Select different features (53%)
  – "It should put email in 'Enron News' if it has the keywords 'changes' and 'policy'."
• Adjust weights (12%)
  – "The second set of words should be given more importance."
• Parse/extract in a different way (10%)
  – "I think that it should look for typos in the punctuation for indicators toward 'Personal'."
• Employ feature combinations (5%)
  – "I think it would be better if it recognized a last and a first name together."
• Use relational features (4%)
  – "This message should be in 'Enron News' since it is from the chairman of the company."
  (a sketch of how such feedback could be folded back into a model appears at the end of this preview)

What knowledge do they use?
• Commonsense (36%)
  – "'Qualifications' would seem like a really good Resume word, I wonder why that's not down here."
• English (30%)
  – "Does the computer know the difference between 'resumé' and 'resume'?"
• Domain (15%)
  – "Different words could have been found in common like … 'Ken Lay'."
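
Appendix: code sketches

To make the "machine learning detour" above concrete, here is a minimal sketch of a bag-of-words Naïve Bayes spam filter in Python. The stop-word list, the crude stemming rule, and the tiny training set are illustrative assumptions, not details taken from the talk.

```python
import math
from collections import Counter, defaultdict

STOP_WORDS = {"a", "the", "is", "to"}   # illustrative stop-word list (assumption)

def tokenize(text):
    """Bag-of-words features: lowercase, drop stop words, crude stemming."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w and w not in STOP_WORDS]
    # Very crude stemming so that "walking"/"walked" both map to "walk" (assumption)
    return [w[:-3] if w.endswith("ing") else w[:-2] if w.endswith("ed") else w
            for w in words]

class NaiveBayes:
    """Multinomial Naive Bayes: the machine-learned 'program' is just
    per-class word counts (weights) plus class frequencies."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)   # class -> word -> count
        self.class_counts = Counter()              # class -> number of messages

    def train(self, text, label):
        self.class_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        vocab = {w for counts in self.word_counts.values() for w in counts}
        scores = {}
        for label in self.class_counts:
            total_words = sum(self.word_counts[label].values())
            # log prior + sum of Laplace-smoothed log word likelihoods
            score = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            for w in tokenize(text):
                p = (self.word_counts[label][w] + 1) / (total_words + len(vocab))
                score += math.log(p)
            scores[label] = score
        return max(scores, key=scores.get)

# Tiny illustrative training set (assumption)
nb = NaiveBayes()
nb.train("Win a free prize now", "spam")
nb.train("Cheap prize offer, click now", "spam")
nb.train("Meeting about the project schedule", "ham")
nb.train("Lunch with the project team", "ham")
print(nb.predict("free prize offer"))   # -> spam
print(nb.predict("project meeting"))    # -> ham
```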
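
Building on the sketch above, the following hypothetical helper illustrates one way the "talk back" feedback observed in the study (selecting keywords, adjusting weights) could be folded back into such a model. The function name apply_user_feedback and the additive count boost are assumptions made for illustration, not the mechanism described in the slides.

```python
def apply_user_feedback(nb, label, keywords, boost=5):
    """Hypothetical feature-level feedback: bump the stored count (and hence the
    learned weight) of user-chosen keywords for an existing class, so the user
    does not have to supply many new training examples."""
    for w in tokenize(" ".join(keywords)):
        nb.word_counts[label][w] += boost

# Analogous to: "It should put email in 'Enron News' if it has the keywords
# 'changes' and 'policy'." Here, applied to the toy spam filter above:
apply_user_feedback(nb, "spam", ["lottery", "winner"])
print(nb.predict("lottery winner announcement"))   # now more likely -> spam
```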