1. Cross Entropy
y = 1 vs. y_hat = 1
import numpy as np
y = 1
y_hat = 1
-y * np.log(y_hat)
-0.0
y = 1 vs. y_hat = 0.0001
y = 1
y_hat = 0.0001
-y * np.log(y_hat)
9.210340371976182
y = 0 vs. y_hat = 0
y = 0
y_hat = 0
-(1 - y) * np.log(1 - y_hat)
-0.0
y = 0 vs. y_hat = 0.9999
y = 0
y_hat = 0.9999
-(1 - y) * np.log(1 - y_hat)
9.210340371976294
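The two terms evaluated above are the two halves of the full binary cross-entropy loss, -[y * log(y_hat) + (1 - y) * log(1 - y_hat)]. A minimal sketch combining them is shown below; the function name binary_cross_entropy and the eps clipping value are my own illustrative additions, used only so that np.log never receives an exact 0.

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # keep predictions strictly inside (0, 1) so log(0) cannot occur
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

binary_cross_entropy(1, 0.0001)   # ≈ 9.21, matches the y = 1 vs. y_hat = 0.0001 cell above
binary_cross_entropy(0, 0.9999)   # ≈ 9.21, matches the y = 0 vs. y_hat = 0.9999 cell above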
2. Information Theory
2-1. Events A, B, and C with Different Occurrence Probabilities - Information Gain
A = 0.9
B = 0.5
C = 0.1
print('%.3f' % -np.log(A), '%.3f' % -np.log(B), '%.3f' % -np.log(C))
0.105 0.693 2.303
2-2. Win Probabilities of AlphaGo vs. Apes in a Go Match - Degree of Surprise
Alphago = 0.999
Apes = 0.001
print('%.3f' % -np.log(Alphago), '%.3f' % -np.log(Apes))
0.001 6.908
3. Entropy
3-1. Entropy of Two Teams with Similar Win Probabilities
P1 = 0.5
P2 = 0.5
-P1 * np.log(P1) - P2 * np.log(P2)
0.6931471805599453
3-2. Entropy of Two Teams with Very Different Win Probabilities
P1 = 0.999
P2 = 0.001
-P1 * np.log(P1) - P2 * np.log(P2)
0.007907255112232087
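The same computation generalizes to any discrete distribution: entropy is the expected surprise, i.e. the probability-weighted average of -log(p). A short sketch under that reading; the helper name entropy is mine, not from the original.

def entropy(p):
    # expected surprise: sum over outcomes of p * (-log p)
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p)))

entropy([0.5, 0.5])       # 0.693... = ln 2, the maximum for two outcomes
entropy([0.999, 0.001])   # 0.0079..., a near-certain outcome carries almost no entropy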