[In English] Master's Admission Essay for the Technical University of Munich (TUM) – Department of Data Engineering and Analytics

Overview

An admission essay, written in English, for the Master's program in Data Engineering and Analytics at the Technical University of Munich (TUM).

Table of Contents

1. Introduction

2. Why can't we just trust ML models?
2.1. No data nor ML algorithms can be bias-free
2.2. There is no single definition of fairness in machine learning

3. What are the requirements for trustworthy ML?
3.1. Prerequisite to trust: Explainability
3.2. Foundation of trust: Accountability

4. Conclusion

Body Content

Research on Fair, Accountable, and Transparent ML systems
( Explaining what learned models predict: In which cases can we trust machine learning models and when is caution required )

Introduction
Our lives are becoming increasingly dependent on machine learning (ML). But because data, the fuel of ML, is a collection of pervasive human biases and disparate factors in society, ML is also vulnerable to bias that affects fairness [1]. Since ProPublica reported that COMPAS, a risk-assessment tool used by US courts, was biased against African-Americans [2], similar findings have emerged in other fields, such as YouTube's favoritism toward specific dialects and genders [3], or an A-level grading system that discriminated against marginalized students [4], leaving us with the question: can we still trust ML models?
This essay will address:

1) The impossibility of bias-free data and ML.
2) The conflict between the concepts of individual and group fairness.
3) How explainability can build trust in ML models.

keywords: machine learning, bias, fairness, explainability, accountability, regulation
Why can't we just trust ML models?
1. No data nor ML algorithms can be bias-free.
As Tom Mitchell puts it, ML models learn patterns from historical data and
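The point above, that models learn whatever patterns the historical data contains, including its biases, can be sketched with a hypothetical toy example (not from the essay). Here a "model" simply reproduces the historical base rate of positive decisions per group, and we measure the resulting demographic parity gap, P(decision = 1 | group A) − P(decision = 1 | group B); the group names and data are invented for illustration.

```python
# Hypothetical biased historical data: (group, positive decision).
# Group "A" historically received positive outcomes far more often than "B".
historical = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def predict(group):
    """A 'model' that has learned only the historical base rate per group."""
    outcomes = [y for g, y in historical if g == group]
    return sum(outcomes) / len(outcomes)  # probability of a positive decision

# Demographic parity gap: difference in positive-decision rates across groups.
parity_gap = predict("A") - predict("B")
print(f"Demographic parity gap: {parity_gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A model can score perfectly against such labels and still carry the full historical disparity forward, which is exactly why accuracy alone cannot establish trust.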
  • Pages: 4
  • Registered: 2021.08.11
  • Written: 2020.11
  • File format: Acrobat Viewer (PDF)
  • Item number: #1153815