
Binary classification #4

Open
iammiori opened this issue Apr 22, 2019 · 6 comments

Comments

@iammiori (Owner)

  • Binary label encoding -> 0 / 1
  • With linear regression the hypothesis is simple, but the hypothesis can be greater than 1 or less than 0.
  • We want values between 0 and 1, however => logistic
@iammiori (Owner, Author)

sigmoid

: curved in two directions (like S)
: logistic function := sigmoid function
Because of the sigmoid, h(x) is bounded to [0, 1] (a quick check below)
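
A tiny sketch (plain Python, with assumed sample inputs) showing that the sigmoid squashes any real number into (0, 1):

import math

def sigmoid(z):
    # logistic / sigmoid function: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

for z in (-10, -1, 0, 1, 10):
    # outputs approach 0 for large negative z and 1 for large positive z
    print(z, round(sigmoid(z), 4))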

@iammiori (Owner, Author)

hypothesis

H(X) = 1 / (1 + e^(-W^T X))
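
A minimal TF 1.x-style sketch of this hypothesis; the placeholder shapes and the names X, Y, W, b are assumptions for illustration, and a bias term b is added as is conventional:

import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 2])  # 2 input features (assumed)
Y = tf.placeholder(tf.float32, shape=[None, 1])  # binary label 0 / 1
W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# H(X) = 1 / (1 + e^(-(XW + b))), i.e. sigmoid(XW + b)
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)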

@iammiori (Owner, Author)

iammiori commented Apr 22, 2019

cost function

  • Applying the linear regression cost function directly leaves gradient descent stuck in local minima
  • cost function: a measure of how close our prediction is to the correct answer
  • In other words, the closer the prediction is to the answer, the smaller the cost
  • the farther it is from the answer, the larger the cost
  • When Y=1: -log(H(x))
  • When Y=0: -log(1-H(x))
  • cost = -(1/m) Σ { y log(H(x)) + (1-y) log(1-H(x)) } (a quick numeric check follows below)
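
A quick numeric check (assumed values) of the per-example cost: close to the answer the cost is small, far from it the cost blows up.

import math

def logloss(y, h):
    # -{ y log(H(x)) + (1 - y) log(1 - H(x)) } for a single example
    return -(y * math.log(h) + (1 - y) * math.log(1 - h))

print(logloss(1, 0.99))  # ~0.01 : prediction close to Y=1 -> small cost
print(logloss(1, 0.01))  # ~4.61 : prediction far from Y=1 -> large cost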

@iammiori (Owner, Author)

Gradient Descent Algorithm

Expressed in code:

cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))

To minimize it:

a = tf.Variable(0.1)  # learning rate
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)
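
A minimal training-loop sketch (TF 1.x style) that runs this train op; the toy x_data / y_data and the X, Y placeholders from the earlier hypothesis sketch are assumptions:

import numpy as np

x_data = np.array([[1., 2.], [2., 3.], [3., 1.], [4., 3.], [5., 3.], [6., 2.]])
y_data = np.array([[0.], [0.], [0.], [1.], [1.], [1.]])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2001):
        cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
        if step % 500 == 0:
            print(step, cost_val)  # cost should fall as training proceeds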

@iammiori (Owner, Author)

Softmax

All values lie in [0, 1]
The values sum to 1, i.e. they are normalized as probabilities
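
A small NumPy sketch (with assumed example scores) showing both properties, every output in [0, 1] and the outputs summing to 1:

import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift by the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # assumed example scores
probs = softmax(scores)
print(probs)        # roughly [0.659 0.242 0.099], each in [0, 1]
print(probs.sum())  # 1.0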

@iammiori (Owner, Author)

One-hot encoding

Use argmax to turn the probabilities into 1s and 0s

Cross-entropy

D(S, L) = -Σ L_i log(S_i)
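
A NumPy sketch (with an assumed prediction S and one-hot label L) of the argmax one-hot step and the cross-entropy D(S, L):

import numpy as np

S = np.array([0.7, 0.2, 0.1])  # softmax output (assumed)
L = np.array([1.0, 0.0, 0.0])  # one-hot label (assumed)

one_hot_pred = np.zeros_like(S)
one_hot_pred[np.argmax(S)] = 1.0  # argmax turns the probabilities into 1 / 0
print(one_hot_pred)               # [1. 0. 0.]

# D(S, L) = -Σ L_i log(S_i)
print(-np.sum(L * np.log(S)))     # ~0.357, small because S agrees with L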
