
[AIB] High dimensional Data

minzeros 2022. 1. 6. 17:44

💡 Vector transformation

๋ฒกํ„ฐ ๋ณ€ํ™˜์€ ์ž„์˜์˜ ๋‘ ๋ฒกํ„ฐ๋ฅผ ๋”ํ•˜๊ฑฐ๋‚˜ ํ˜น์€ ์Šค์นผ๋ผ ๊ฐ’์„ ๊ณฑํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•œ๋‹ค.

Matrix-vector multiplication as a vector transformation
  • using a transformation f,
  • an arbitrary vector [x1, x2]
  • is mapped to [2x1 + x2, x1 - 3x2].

Here the original vector [x1, x2] can be split up using the unit vectors:

[x1, x2] = x1[1, 0] + x2[0, 1]

Putting each piece through the transformation, we can see that the results must come out as

  • 2x1, x1 (from x1[1, 0]) and
  • x2, -3x2 (from x2[0, 1]).

Collecting these two columns into matrix form gives the transformation matrix T:

T = [[2,  1],
     [1, -3]]

Multiplying this matrix by the original vector performs exactly the transformation we wanted:

T [x1, x2] = [2x1 + x2, x1 - 3x2]

In other words, transforming an arbitrary vector in R2 into another vector in R2 is the same operation as multiplying it by some particular matrix T.

A vector transformation is linear, i.e., built only from multiplying and adding, which is why it can always be expressed as a matrix-vector product.

 

 

💡 Eigenvector

Transformation์€ matrix๋ฅผ ๊ณฑํ•˜๋Š” ๊ฒƒ์„ ํ†ตํ•ด, ๋ฒกํ„ฐ๋ฅผ ๋‹ค๋ฅธ ์œ„์น˜๋กœ ์˜ฎ๊ธด๋‹ค ๋ผ๋Š” ์˜๋ฏธ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค.

์ด๋ฒˆ์—๋Š” R3 ๊ณต๊ฐ„์—์„œ์˜ transformation์„ ์˜ˆ์‹œ๋กœ ๋“ค์–ด๋ณผ ๊ฒƒ์ด๋‹ค.

์ง€๊ตฌ๋ณธ์„ R3 ๊ณต๊ฐ„์œผ๋กœ ๋ณด๋ฉด, R3 ๊ณต๊ฐ„์ด ํšŒ์ „ํ•  ๋•Œ, ์œ„์น˜์— ๋”ฐ๋ผ์„œ ๋ณ€ํ™”ํ•˜๋Š” ์ •๋„๊ฐ€ ๋‹ค๋ฅด๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค.

๊ฐ€๋ น ์ ๋„ ๋ถ€๊ทผ์— ์žˆ๋Š” ์ ์˜ ๋ณ€ํ™”๋˜๋Š” ๊ฑฐ๋ฆฌ์™€ ๊ทน์ง€๋ฐฉ์— ์žˆ๋Š” ์ ์˜ ์œ„์น˜๊ฐ€ ๋ณ€ํ™”๋˜๋Š” ๊ฑฐ๋ฆฌ๋Š” ๋‹ค๋ฅผ ๊ฒƒ์ด๋‹ค.

์ด๋Š” ํšŒ์ „์ถ•์œผ๋กœ ๊ฐ€๊นŒ์ด ๊ฐˆ์ˆ˜๋ก / ๋ฉ€์–ด์งˆ์ˆ˜๋ก ๋”์šฑ ๋ช…ํ™•ํ•ด์ง€๋ฉฐ, ์ •ํ™•ํ•˜๊ฒŒ ํšŒ์ „์ถ•์— ์œ„์น˜ํ•ด์žˆ๋Š” ๊ฒฝ์šฐ, transformation์„ ํ†ตํ•ด ์œ„์น˜๊ฐ€ ๋ณ€ํ•˜์ง€ ์•Š๋Š”๋‹ค.

์ด๋ ‡๊ฒŒ transformation์— ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š๋Š” ํšŒ์ „์ถ•, ํ˜น์€ ๋ฒกํ„ฐ๋ฅผ ๊ณต๊ฐ„์˜ ๊ณ ์œ ๋ฒกํ„ฐ๋ผ๊ณ  ๋ถ€๋ฅธ๋‹ค.
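
A minimal NumPy sketch of the globe example, assuming a rotation about the z-axis (the 45-degree angle is an arbitrary choice):

import numpy as np

theta = np.pi / 4   # rotate the space 45 degrees about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])

v = np.array([0.0, 0.0, 1.0])   # a vector on the rotation axis
print(Rz @ v)                   # [0. 0. 1.] -- unchanged, so v is an eigenvector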

 

An eigenvector, then, is a vector whose direction does not change and only whose magnitude changes when a linear transformation is applied; in PCA that transformation is the one given by the data's covariance matrix. In the PCA process, the eigenvectors found this way are adopted as PC1, PC2, ... in decreasing order of their eigenvalues.

In other words, the eigenvalues decide which directions get chosen as the PC axes.

 

💡 Eigenvalue

As we saw above, an eigenvector is a vector whose magnitude changes but whose direction does not under a given transformation.

That change in magnitude can only be a scalar factor, and this particular scalar is called the eigenvalue.
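
In symbols: if T v = λ v for a nonzero vector v, then v is an eigenvector of T and the scalar λ is its eigenvalue. A minimal sketch with an arbitrary example matrix A:

import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

values, vectors = np.linalg.eig(A)   # eigenvalues and eigenvectors of A
lam, v = values[0], vectors[:, 0]

print(np.allclose(A @ v, lam * v))   # True: A only scales v, never rotates it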

 

Computing eigenvalues and eigenvectors

Computing eigenvalues and eigenvectors by hand involves some of linear algebra's heavier machinery, such as matrix diagonalization and Gaussian elimination, so instead we will look at how they are applied in Principal Component Analysis (PCA).

 

 

💡 Dimension Reduction

๋ฐ์ดํ„ฐ์˜ ์‹œ๊ฐํ™”๋‚˜ ํƒ์ƒ‰์ด ์–ด๋ ค์›Œ์ง€๋Š” ๊ฒƒ ๋ฟ๋งŒ ๋ชจ๋ธ๋ง์—์„œ์˜ overfitting ์ด์Šˆ๋ฅผ ํฌํ•จํ•˜๋Š” ๋“ฑ ๋น…๋ฐ์ดํ„ฐ ๋ฐ์ดํ„ฐ์…‹์˜ feature๊ฐ€ ๋งŽ์œผ๋ฉด ๋งŽ์„์ˆ˜๋ก ์ด๋กœ ์ธํ•ด ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ๋Š” ์ ์  ๋งŽ์•„์งˆ ๊ฒƒ์ด๋‹ค.

๋จธ์‹ ๋Ÿฌ๋‹์—์„œ๋Š” ์ด๋ฅผ ์œ„ํ•œ ๋‹ค์–‘ํ•œ ์ฐจ์›์ถ•์†Œ ๊ธฐ์ˆ ๋“ค์ด ์ด๋ฏธ ์—ฐ๊ตฌ๋˜์–ด ์žˆ๋‹ค.

 

1. Feature Selection

Feature Selection์ด๋ž€ ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋œ ์ค‘์š”ํ•œ feature๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์˜๋ฏธํ•œ๋‹ค.

  • ์„ ํƒ๋œ feature ํ•ด์„์ด ์‰ฝ๋‹ค.
  • feature๋“ค๊ฐ„์˜ ์—ฐ๊ด€์„ฑ์„ ๊ณ ๋ คํ•ด์•ผ ํ•œ๋‹ค.
  • Ex) LASSO, Genetic algorithm ๋“ฑ

 

2. Feature Extraction

Feature extraction means using new features built by combining (feature engineering) the existing ones.

  • Correlations between features are taken into account.
  • The number of features can be reduced substantially.
  • The extracted features are hard to interpret.
  • Ex) PCA, auto-encoders, etc.

 

 

💡 PCA (Principal Component Analysis)

  • A technique for analyzing high-dimensional data effectively
  • Reduces the data to a lower dimension
  • Makes high-dimensional data practical to visualize and cluster
  • Finds the vectors that preserve as much of the original high-dimensional data's information (variance) as possible, then (linearly) projects the data onto them

PCA์˜ ๋ชฉ์ ์€ ์ฐจ์› ์ถ•์†Œ, ์ฆ‰ feature ๊ฐœ์ˆ˜ ๊ฐ์†Œ์ด๋‹ค. 

์ฐจ์›์ด ๋งŽ์œผ๋ฉด (feature๊ฐ€ ๋งŽ์œผ๋ฉด) ๋ถ„์„์— ์˜ํ–ฅ๋ ฅ์ด ์—†๋Š” feature๋ฅผ ๋ชจ๋‘ ๋ฐ˜์˜ํ•˜๊ณ , feature๋“ค๋ผ๋ฆฌ ๋‚˜ํƒ€๋‚ด๋Š” ๊ฐ’์ด ์ค‘๋ณต๋˜๊ธฐ ๋•Œ๋ฌธ์— overfitting์ด ๋ฐœ์ƒํ•˜๊ธฐ ์‰ฝ๋‹ค. PCA๋Š” ์ด๋Ÿฌํ•œ overfitting ๋ฐฉ์ง€์—๋„ ๋„์›€์„ ์ค€๋‹ค.

 

cf.

๋ฐ์ดํ„ฐ๊ฐ€ ์œ ์˜๋ฏธํ•œ ์ •๋ณด๋ฅผ ๋งŽ์ด ๋‹ด๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์€ ๋ฐ์ดํ„ฐ์˜ ๋ถ„์‚ฐ์ด ํฌ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ๋‹ค.

๋ฐ์ดํ„ฐ๋ฅผ ๊ฐ€์ง€๊ณ  ๋ถ„์„์„ ํ•˜๋ ค๋ฉด ์„œ๋กœ ๋‹ค๋ฅธ ๊ฐ’์„ ๊ฐ–๋Š” ๋ถ€๋ถ„(๋ฐ์ดํ„ฐ)๊ฐ€ ํ•„์š”ํ•˜๋‹ค.

์˜ˆ๋ฅผ ๋“ค๋ฉด, ์‚ฌ์ž์™€ ํ˜ธ๋ž‘์ด ๋ถ„๋ฅ˜ ๋ชจ๋ธ์—์„œ ๊ฐˆ๊ธฐ์˜ ์œ ๋ฌด์™€ ๊ฐ™์€ ์ฐจ์ด๋ฅผ ๊ฐ–๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ๋งํ•  ์ˆ˜ ์žˆ๋‹ค.

๋”ฐ๋ผ์„œ ๋ฐ์ดํ„ฐ ๊ฐ’์ด ๋‹ค๋ฅด๋‹ค๋Š” ๊ฒƒ์ด ๊ฒฐ๊ตญ ๋ฐ์ดํ„ฐ์˜ ๋ถ„์‚ฐ์ด ํฌ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•œ๋‹ค.

 

 

🔥 PCA Process

๋‹ค์ฐจ์›์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๊ธฐ ์œ„ํ•ด์„œ ์ •๋ณด ์†์‹ค์ด ์ œ์ผ ์ ์€ 2์ฐจ์›์œผ๋กœ ์ถ•์†Œํ•ด์•ผ ํ•œ๋‹ค.

 

1) ๋ฐ์ดํ„ฐ ์ค€๋น„

import numpy as np

# 5 samples, 3 features
X = np.array([
    [0.2, 5.6, 3.56],
    [0.45, 5.89, 2.4],
    [0.33, 6.37, 1.95],
    [0.54, 7.9, 1.32],
    [0.77, 7.87, 0.98]
])

 

2) Standardize: subtract each column's mean and divide by its standard deviation

# ddof=1: sample standard deviation (see the note at the end of the post)
standardized_data = (X - np.mean(X, axis=0)) / np.std(X, ddof=1, axis=0)
print("\n Standardized Data: \n", standardized_data)

Output: (omitted)
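
A quick sanity check on the result (after standardizing, every column should have mean ~0 and sample standard deviation ~1):

print(standardized_data.mean(axis=0))          # ~[0, 0, 0]
print(standardized_data.std(axis=0, ddof=1))   # [1, 1, 1]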

 

3) Z์˜ ๋ถ„์‚ฐ-๊ณต๋ถ„์‚ฐ ๋งคํŠธ๋ฆญ์Šค๋ฅผ ๊ณ„์‚ฐํ•จ

# np.cov expects variables in rows, hence the transpose; result is 3 x 3
covariance_matrix = np.cov(standardized_data.T)
print("\n Covariance Matrix: \n", covariance_matrix)

Output: (omitted)
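
Because every column was standardized first, this variance-covariance matrix is also the correlation matrix of X, which a quick check confirms:

# covariance of standardized columns == Pearson correlation of the raw columns
print(np.allclose(covariance_matrix, np.corrcoef(X.T)))   # True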

 

4) Compute the eigenvectors and eigenvalues of the variance-covariance matrix

# each column of `vectors` is one eigenvector of the covariance matrix
values, vectors = np.linalg.eig(covariance_matrix)
print("\n Eigenvalues: \n", values)
print("\n Eigenvectors: \n", vectors)

Output: (omitted)
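
One property worth knowing: the eigenvalues sum to the total variance (the trace of the covariance matrix), which is 3.0 here since each of the three standardized columns has variance 1.

print(values.sum())   # 3.0 -- the total variance of the standardized data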

 

5) ๋ฐ์ดํ„ฐ๋ฅผ ๊ณ ์œ ๋ฒกํ„ฐ์— Projection ์‹œํ‚ด (matmul ์‚ฌ์šฉ)

# project onto all eigenvectors; column i of Z holds the scores along eigenvector i
Z = np.matmul(standardized_data, vectors)
print("\n Projected Data: \n", Z)

Output: (omitted)
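
Two checks on the projected data, continuing from above: its columns are mutually uncorrelated, and their variances are exactly the eigenvalues.

# covariance of the projected data is diagonal:
# off-diagonals ~ 0 (uncorrelated), diagonal = the eigenvalues
print(np.round(np.cov(Z.T), 6))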

 

Results

(plots omitted: the original data and the data after PCA, side by side)

 

PCA transforms high-dimensional data onto axes (PCs) chosen to preserve its variance,

and dimensionality reduction happens by keeping only some of those PCs.

So using only PC1 and PC2 out of the Z matrix means we have reduced the data to 2 dimensions, as the sketch below shows.

 

์ฐจ์›์ด ์ถ•์†Œ๋œ ๋ฐ์ดํ„ฐ

 

 

✨ PCA using a library

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

print("Data: \n", X)

# standardize (note: StandardScaler uses the population standard deviation, ddof=0)
scaler = StandardScaler()
Z = scaler.fit_transform(X)
print("\n Standardized Data: \n", Z)

# keep 2 principal components
pca = PCA(2)

pca.fit(Z)

print("\n Eigenvectors: \n", pca.components_)	# eigenvectors (one per row)
print("\n Eigenvalues: \n", pca.explained_variance_)	# eigenvalues

B = pca.transform(Z)
print("\n Projected Data: \n", B)

Output: (omitted)
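
A useful follow-up with the fitted object: explained_variance_ratio_ reports the fraction of the total variance each PC keeps, which is how you judge whether 2 components are enough.

print(pca.explained_variance_ratio_)         # variance fraction per PC
print(pca.explained_variance_ratio_.sum())   # total variance kept by PC1 + PC2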

 

Why the Standardized Data in the middle differs from the earlier result:

In standardized_data = (X - np.mean(X, axis=0)) / np.std(X, ddof=1, axis=0) we used ddof=1, the sample standard deviation (dividing by n - 1), whereas StandardScaler uses the population standard deviation (ddof=0, dividing by n). The only difference is that degree of freedom.
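
A quick sketch of that difference, using the first column of X from above:

import numpy as np

col = np.array([0.2, 0.45, 0.33, 0.54, 0.77])   # first column of X

print(np.std(col, ddof=0))   # population sd -- what StandardScaler uses
print(np.std(col, ddof=1))   # sample sd -- what the manual code above used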

 

 

๐Ÿ’ก PCA์˜ ํŠน์ง•

  • ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ๋…๋ฆฝ์ ์ธ ์ถ•์„ ์ฐพ๋Š”๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Œ
  • ๋ฐ์ดํ„ฐ์˜ ๋ถ„ํฌ๊ฐ€ ์ •๊ทœ์„ฑ์„ ๋„์ง€ ์•Š๋Š” ๊ฒฝ์šฐ ์ ์šฉ์ด ์–ด๋ ค์›€ -> ์ปค๋„ PCA ์‚ฌ์šฉ ๊ฐ€๋Šฅ
  • ๋ถ„๋ฅ˜ / ์˜ˆ์ธก ๋ฌธ์ œ์— ๋Œ€ํ•ด์„œ ๋ฐ์ดํ„ฐ์˜ ๋ผ๋ฒจ์„ ๊ณ ๋ คํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ํšจ๊ณผ์  ๋ถ„๋ฆฌ๊ฐ€ ์–ด๋ ค์›€ -> PLS ์‚ฌ์šฉ ๊ฐ€๋Šฅ
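
A minimal sketch of the kernel PCA alternative, assuming scikit-learn's KernelPCA with an RBF kernel (the kernel choice and gamma value are illustrative, and X is the small array from above):

from sklearn.decomposition import KernelPCA
import numpy as np

X = np.array([
    [0.2, 5.6, 3.56],
    [0.45, 5.89, 2.4],
    [0.33, 6.37, 1.95],
    [0.54, 7.9, 1.32],
    [0.77, 7.87, 0.98]
])

# non-linear dimensionality reduction: PCA in an RBF-kernel feature space
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
X_kpca = kpca.fit_transform(X)
print(X_kpca)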

 
