Table of Contents
1. Introduction
2. Why can't we just trust ML models?
2.1. Neither data nor ML algorithms can be bias-free
2.2. There is no single definition of fairness in machine learning
3. What are the requirements for trustworthy ML?
3.1. Prerequisite to trust: Explainability
3.2. Fundamentals to trust: Accountability
4. Conclusion
Body
Research on Fair, Accountable, and Transparent ML systems
(Explaining what learned models predict: in which cases can we trust machine learning models, and when is caution required)
Introduction
Our lives are becoming ever more dependent on machine learning (ML). But because data, the fuel of ML, is a collection of pervasive human biases and societal disparities, ML is also vulnerable to bias that undermines fairness [1]. Since ProPublica reported that COMPAS, a tool used by courts in the US, was biased against African-Americans [2], similar studies have been conducted in other fields, such as YouTube's favoritism toward particular dialects and genders [3], or an A-level grading system that discriminated against marginalized students [4], leaving us with the question: 'Can we still trust ML models?'
This essay will address:
1) The impossibility of bias-free data and ML.
2) The conflict between the concepts of individual and group fairness.
3) How explainability can make ML models trustworthy.
Keywords: machine learning, bias, fairness, explainability, accountability, regulation
Why can't we just trust ML models?
1. Neither data nor ML algorithms can be bias-free.
As Tom Mitchell puts it, ML models learn patterns from historical data and