
[AI/혼공머신] 03-3. Feature Engineering and Regularization

Jaeseo Kim 2022. 12. 7. 10:25

📌 Multiple Regression

Linear regression that uses several features

Below is the post I wrote earlier about linear regression. 😁

 

[AI/혼공머신] 03-2. Linear Regression — avoc-o-d.tistory.com

 

In the linear regression post we trained using only the perch's length, so an underfitting problem still remained.

Training with more features, such as height and thickness, should improve the performance.
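Concretely, the model we are after will take roughly the form weight = a × length + b × height + c × thickness + d, i.e. one coefficient per feature plus an intercept (the letters here are just my own notation for illustration).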

 

📌 Feature Engineering

The work of combining existing features to extract (add, discover) new ones

* Machine learning tends to be strongly affected by feature engineering; deep learning tends to be affected less
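As a tiny illustration of the idea (my own toy numbers, not the perch data yet), combining two existing features into a new one could look like this:

import numpy as np

# two hypothetical existing features
length = np.array([8.4, 13.7, 15.0])
height = np.array([2.1, 3.5, 3.8])

# a new feature made by combining them: their product
length_x_height = length * height
print(length_x_height)  # 17.64, 47.95, 57.0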

 

ํŒ๋‹ค์Šค๋กœ ๋ฐ์ดํ„ฐ ์ค€๋น„

📌 pandas: a data analysis library (its data behaves much like NumPy arrays)
📌 DataFrame: pandas' core object, a multi-dimensional array
import pandas as pd

df = pd.read_csv("https://bit.ly/perch_csv_data") # read the csv file into a pandas DataFrame
perch_full = df.to_numpy() # convert it to a NumPy array

๐Ÿ“๋ฆฌ๋งˆ์ธ๋“œ ! ํ–‰์€ ์ƒ˜ํ”Œ, ์—ด์€ ํŠน์„ฑ

csv ํŒŒ์ผ, https://bit.ly/perch_csv_data ์ผ๋ถ€

์ž˜ ๊ฐ€์ ธ์™”๋Š”์ง€ ํ™•์ธํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.
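A minimal way to peek at what was loaded might look like this (these exact checks aren't in the original post; df and perch_full are the variables created above):

print(df.head())        # first few rows of the DataFrame
print(perch_full.shape) # one row per perch sample, one column per feature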

๋ฐ์ดํ„ฐ ์ผ๋ถ€

์ž˜ ๊ฐ€์ ธ์™€์กŒ๋„ค์š”~!

 

After preparing the perch weight data, which is our target,

we split the inputs and targets into a training set and a test set.

# http://bit.ly/perch_data
import numpy as np

perch_weight = np.array([5.9, 32.0, 40.0, 51.5, 70.0, 100.0, 78.0, 80.0, 85.0, 85.0, 110.0,
       115.0, 125.0, 130.0, 120.0, 120.0, 130.0, 135.0, 110.0, 130.0,
       150.0, 145.0, 150.0, 170.0, 225.0, 145.0, 188.0, 180.0, 197.0,
       218.0, 300.0, 260.0, 265.0, 250.0, 250.0, 300.0, 320.0, 514.0,
       556.0, 840.0, 685.0, 700.0, 700.0, 690.0, 900.0, 650.0, 820.0,
       850.0, 900.0, 1015.0, 820.0, 1100.0, 1000.0, 1100.0, 1000.0,
       1000.0])

from sklearn.model_selection import train_test_split

# split perch_full and perch_weight into training and test sets
train_input, test_input, train_target, test_target = train_test_split(perch_full, perch_weight, random_state=42)

 

์œ„ ๋ฐ์ดํ„ฐ๋ฅผ ์กฐํ•ฉํ•ด์„œ ์ƒˆ๋กœ์šด ํŠน์„ฑ์„ ๋งŒ๋“ค๊ฒ ์Šต๋‹ˆ๋‹ค!

์„ ํ˜•ํšŒ๊ท€ ๋•Œ์ฒ˜๋Ÿผ ์ง์ ‘ ๋งŒ๋“ค์ง€ ์•Š๊ณ , ๋ณ€ํ™˜๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.

 

๐Ÿ“Œ์‚ฌ์ดํ‚ท๋Ÿฐ์˜ ๋ณ€ํ™˜๊ธฐ (Transformer)

  • ํŠน์„ฑ์„ ๋งŒ๋“ค๊ฑฐ๋‚˜ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ํด๋ž˜์Šค
  • fit(), transform() ํ•จ์ˆ˜ ์ œ๊ณต
๐Ÿ“Œ์‚ฌ์ดํ‚ท๋Ÿฐ์˜ PolynomialFeatures ํด๋ž˜์Šค : ๋ณ€ํ™˜๊ธฐ
-
๊ฐ ํŠน์„ฑ์„ ์ œ๊ณฑํ•œ ํ•ญ์„ ์ถ”๊ฐ€, ํŠน์„ฑ๋ผ๋ฆฌ ์„œ๋กœ ๊ณฑํ•œ ํ•ญ์„ ์ถ”๊ฐ€โœจ
- fit() : ์ƒˆ๋กญ๊ฒŒ ๋งŒ๋“ค ํŠน์„ฑ ์กฐํ•ฉ์„ ์ฐพ์Œ
- transform() : ์‹ค์ œ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ™˜

* ๋ณ€ํ™˜๊ธฐ๋Š” ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๋Š” ๋ฐ ํƒ€๊นƒ์ด ํ•„์š”ํ•˜์ง€ ์•Š์Œ.
   โ–ถ๏ธ ๋ชจ๋ธ ํด๋ž˜์Šค์™€ ๋‹ค๋ฅด๊ฒŒ, ํ›ˆ๋ จ ํ•  ๋•Œ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๋งŒ ์ „๋‹ฌ

 

First, let's see how to use a transformer.

📝 You have to fit() before you can transform().

from sklearn.preprocessing import PolynomialFeatures

# 2๊ฐœ์˜ ํŠน์„ฑ(์›์†Œ) 2, 3 ์œผ๋กœ ์ด๋ฃจ์–ด์ง„ ์ƒ˜ํ”Œ ์ ์šฉํ•ด๋ณด๊ธฐ
poly = PolynomialFeatures()

# 1(bias), 2, 3, 2**2, 2*3, 3**2
poly.fit([[2, 3]]) # find the feature combinations to create
print(poly.transform([[2, 3]])) # actually transform the data

Result: [[1. 2. 3. 4. 6. 9.]]

The [2, 3] sample, which had only 2 features, now has 6 features after the transformation! A lot more features~

 

๐Ÿค” ์˜๋ฌธ ? 1์€ ์–ด๋–ป๊ฒŒ ๋งŒ๋“ค์–ด์ง„ ํŠน์„ฑ์ผ๊นŒ์š”?

๐Ÿ’ก ๋Œ€๋‹ต ! 1์€ ์„ ํ˜• ๋ฐฉ์ •์‹์˜ ์ ˆํŽธ์„ ์œ„ํ•œ ํŠน์„ฑ์ž…๋‹ˆ๋‹ค.

๐Ÿค” ์˜๋ฌธ ? ์™œ ์žˆ๋Š” ๊ฑธ๊นŒ์š”?

๐Ÿ’ก ๋Œ€๋‹ต ! y = a*x + b*1 => [a, b] * [x, 1] ์ด๋ ‡๊ฒŒ ๋ฐฐ์—ด๋ผ๋ฆฌ์˜ ์—ฐ์‚ฐ์„ ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ ˆํŽธ ํ•ญ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. (์‚ฌ์ดํ‚ท๋Ÿฐ์˜ ์„ ํ˜• ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ด์ฃผ๋ฏ€๋กœ [1, 2, 3]์ฒ˜๋Ÿผ ํŠน์„ฑ์— 1์„ ๋”ฐ๋กœ ๋„ฃ์–ด์ค„ ํ•„์š” ์—†์Šต๋‹ˆ๋‹ค.)

๐Ÿค” ์˜๋ฌธ ? ์šฐ๋ฆฌ์—๊ฒŒ ํ•„์š”ํ•œ ๊ฑธ๊นŒ?

๐Ÿ’ก ๋Œ€๋‹ต ! ์•„๋‹ˆ์š”. ์–ด์ฐจํ”ผ ์‚ฌ์ดํ‚ท๋Ÿฐ ๋ชจ๋ธ์ด ์ ˆํŽธ ํ•ญ์„ ๋ฌด์‹œํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํ•„์š” ์—†์Šต๋‹ˆ๋‹ค.
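A tiny illustration of the array-operation point above (my own example, not from the post):

import numpy as np

a, b = 2.0, 5.0                  # hypothetical slope and intercept
x = 3.0
print(a * x + b)                 # 11.0, the usual y = a*x + b form
print(np.dot([a, b], [x, 1.0]))  # 11.0, the same value as a dot product thanks to the extra 1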

 

๐Ÿ“์‚ฌ์ดํ‚ท๋Ÿฐ ๋ชจ๋ธ์— include_bias = False ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•ด์ฃผ๋ฉด ํŠน์„ฑ์— ์ถ”๊ฐ€๋œ ์ ˆํŽธ ํ•ญ์„ ๋ฌด์‹œํ•ฉ๋‹ˆ๋‹ค. (๊ธฐ๋ณธ๊ฐ’ True)

# (include_bias=False means: don't add the term for the intercept; even if you leave it out, scikit-learn's models ignore that feature automatically)
poly = PolynomialFeatures(include_bias = False)
poly.fit([[2,3]])
print(poly.transform([[2, 3]]))

Result: [[2. 3. 4. 6. 9.]]

์ ˆํŽธ์„ ์œ„ํ•œ ํ•ญ์ด ์ œ๊ฑฐ๋˜๊ณ  ํŠน์„ฑ์˜ ์ œ๊ณฑ๊ณผ ํŠน์„ฑ๋ผ๋ฆฌ ๊ณฑํ•œ ํ•ญ๋งŒ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค! โœจ

 

Now let's apply this to our data!

📝 Our features are (length, height, thickness).

 

ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ™˜๊ธฐ๋กœ ๋ณ€ํ™˜ํ•œ ๋ฐ์ดํ„ฐ train_poly๋ฅผ ๋งŒ๋“ค์–ด์ค๋‹ˆ๋‹ค.

poly = PolynomialFeatures(include_bias=False)
poly.fit(train_input)
train_poly = poly.transform(train_input) # train_input transformed into the new features

์ž˜ ๋ณ€ํ™˜๋˜์—ˆ๋Š”์ง€ ๋ฐฐ์—ด์˜ ํฌ๊ธฐ๋ฅผ ํ™•์ธํ•ด๋ณด๋ฉด, ์ž˜ ๋œ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค.

train_poly ์˜ ํฌ๊ธฐ

๊ทธ๋Ÿผ 9๊ฐœ์˜ ํŠน์„ฑ์ด ์–ด๋–ป๊ฒŒ ์ƒ๊ฒผ๋Š”์ง€ ํ™•์ธํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

📌 PolynomialFeatures' get_feature_names_out()
Tells you which combination of the inputs each feature was made from
# check how the 9 features were made

print(poly.get_feature_names_out())

Result: ['x0' 'x1' 'x2' 'x0^2' 'x0 x1' 'x0 x2' 'x1^2' 'x1 x2' 'x2^2']

  • 'x0' : the first feature
  • 'x1' : the second feature
  • ...
  • 'x0^2' : the square of the first feature
  • 'x0 x1' : the product of the first and second features
  • ...

Now let's transform the test data.

📝 It's recommended to transform the test set with the transformer that was fitted on the training set.

fit() only prepares the feature combinations to create and doesn't compute any separate statistics, so fitting and transforming the test set on its own would technically work,

but it's good to get into the habit of always transforming the test set based on the training set..!

test_poly = poly.transform(test_input)

 

Training a multiple regression model

📝 Training a multiple regression model is no different from training a linear regression model; it simply performs linear regression using several features.

After training, let's check the scores on the training set and the test set.

from sklearn.linear_model import LinearRegression
lr = LinearRegression()

# train
lr.fit(train_poly, train_target)

# check the scores
print(lr.score(train_poly, train_target))
print(lr.score(test_poly, test_target))

(Output: R² scores for the training set and the test set)

Looks like the underfitting problem is solved! 😁😁

 

So what happens if we add even more features? Cubes, 4th powers, 5th powers, and so on!!

Let's create features all the way up to the 5th power.

📌 PolynomialFeatures' degree parameter: sets the maximum degree of the added terms
# add even more features
# put in cubed and 4th-power terms (and beyond)
# degree sets the maximum degree

poly = PolynomialFeatures(degree = 5, include_bias = False)
poly.fit(train_input)

train_poly = poly.transform(train_input)
test_poly = poly.transform(test_input)

train_poly ํฌ๊ธฐ

ํŠน์„ฑ์ด 55๊ฐœ์”ฉ์ด๋‚˜ ๋งŒ๋“ค์–ด์กŒ์Šต๋‹ˆ๋‹ค!

Now let's train again.

lr.fit(train_poly, train_target)

"""
์ ์ˆ˜ ํ™•์ธ
lr.score(train_poly, train_target)
lr.score(test_poly, test_target)
"""

(Output: the training-set score is nearly perfect, while the test-set score is terrible.)

🤔 Question: Why is the score so terrible?

💡 Answer: When the number of features is increased this much, the linear model becomes extremely powerful and can learn the training set almost perfectly.

=> In other words, it is overfit to the training set, which is why it does so badly on the test set.

💡 Solution: Reduce the overfitting => rein in the features.

 

📌 Regularization

Keeping a machine learning model from overfitting the training set by restraining its coefficient (slope / weight) values

* For a linear regression model, this means making the coefficients (slopes / weights) multiplied by the features smaller

 

1๊ฐœ์˜ ํŠน์„ฑ์œผ๋กœ ํ›ˆ๋ จํ•œ ๋ชจ๋ธ ์˜ˆ์‹œ๋ฅผ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์™€ ๊ฐ™์ด ๊ณผ๋Œ€์ ํ•ฉ๋œ ๋ชจ๋ธ์„ ๊ทœ์ œํ•˜์—ฌ ๊ธฐ์šธ๊ธฐ๋ฅผ ์ค„์˜€๋”๋‹ˆ ๋ณดํŽธ์ ์ธ ํŒจํ„ด์„ ํ•™์Šตํ•˜๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค!

 

Now let's regularize the coefficients of the linear regression model trained on 55 features and fix the overfitting problem.

 

๐Ÿ“๋ฆฌ๋งˆ์ธ๋“œ ! ๊ทœ์ œ ์ ์šฉ ์ „์—, ๊ผญ ํŠน์„ฑ์˜ ์Šค์ผ€์ผ์„ ์ •๊ทœํ™” ํ•ด์ค์‹œ๋‹ค! ์Šค์ผ€์ผ์„ ๋งž์ถฐ์ฃผ์ง€ ์•Š์œผ๋ฉด, ๊ณ„์ˆ˜ ๊ฐ’์˜ ํฌ๊ธฐ๊ฐ€ ์„œ๋กœ ๋งŽ์ด ๋‹ค๋ฅด๊ฒŒ ๋˜์–ด ๊ณต์ •ํ•˜๊ฒŒ ์ œ์–ด๋˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค.

๐Ÿค” ์˜๋ฌธ? ์™œ k-์ตœ๊ทผ์ ‘ ์ด์›ƒ ๋ถ„๋ฅ˜๋Š” ์ •๊ทœํ™” ํ•ด์ฃผ๊ณ , LinearRegression ์€ ์ •๊ทœํ™” ์•ˆ ํ•ด์ฃผ๊ณ  ํ›ˆ๋ จํ–ˆ๋‚˜?

๐Ÿ’ก ๋Œ€๋‹ต ! k-์ตœ๊ทผ์ ‘ ์ด์›ƒ์€ ์Šค์ผ€์ผ์ด ๋‹ค๋ฅด๋ฉด, ํ•œ์ชฝ ํŠน์„ฑ์œผ๋กœ ์ ๋ ค์žˆ๋Š” ๋ฐ์ดํ„ฐ ์ค‘์—์„œ ์ด์›ƒ์„ ์ฐพ๊ธฐ ๋•Œ๋ฌธ์— ์ œ๋Œ€๋กœ ๋œ ์ด์›ƒ์„ ๋ชป ์ฐพ์Œ, ๊ทผ๋ฐ ์‚ฌ์ดํ‚ท๋Ÿฐ ํ˜น์€ LinearRegression ๊ฐ™์€ ๊ฒฝ์šฐ๋Š” ํŠน์„ฑ์˜ ์Šค์ผ€์ผ์— ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š์€ ์•Œ๊ณ ๋ฆฌ์ฆ˜(์ˆ˜์น˜์ ์œผ๋กœ ๊ณ„์‚ฐ)์œผ๋กœ ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ƒ๊ด€ ์—†์Œ

๐Ÿค” ์˜๋ฌธ? ๊ทธ๋Ÿผ ๊ทœ์ œํ•  ๋•?

๐Ÿ’ก ๋Œ€๋‹ต ! ๊ทœ์ œํ•  ๋•Œ์—” ๊ผญ ์ •๊ทœํ™”๋ฅผ ํ•ด์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๊ทœ์ œ๋Š” ๊ณ„์ˆ˜(ํ˜น์€ ๊ธฐ์šธ๊ธฐ, ๊ฐ€์ค‘์น˜) ๊ฐ’์„ ์ž‘๊ฒŒ ๋งŒ๋“œ๋Š” ์ผ์ด๊ธฐ ๋•Œ๋ฌธ์— ํŠน์„ฑ์ด ๋‹ค๋ฅด๋ฉด ๊ฐ๊ฐ ํŠน์„ฑ์— ๊ณฑํ•ด์ง€๋Š” ๊ธฐ์šธ๊ธฐ๋„ ๋‹ฌ๋ผ์ง€๊ธฐ ๋•Œ๋ฌธ์— ์ž˜๋ชป๋œ ๊ฐ’์ด ๋‚˜์˜ค๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. (๊ธฐ์šธ๊ธฐ๊ฐ€ ๋น„์Šทํ•ด์•ผ ๊ฐ ํŠน์„ฑ์— ๋Œ€ํ•ด ๊ณต์ •ํ•˜๊ฒŒ ์ œ์–ด๋˜๋‹ˆ๊นŒ)

 

Rather than computing the mean and standard deviation ourselves to convert the features to standard scores, we'll use a class.

📌 Scikit-learn's StandardScaler class
A standardization transformer. Be sure to transform the test set with the transformer fitted on the training set!
from sklearn.preprocessing import StandardScaler

ss = StandardScaler()
ss.fit(train_poly)
train_scaled = ss.transform(train_poly) # standardize the training set
test_scaled = ss.transform(test_poly) # standardize the test set

 

๋ฐ์ดํ„ฐ์— ์ •๊ทœํ™”๋„ ์ ์šฉํ–ˆ์œผ๋‹ˆ, ์ด์ œ ๊ทœ์ œ๋ฅผ ๊ฐ€ํ•˜์—ฌ ํ›ˆ๋ จํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

 

๐Ÿ“๋ฆฟ์ง€์™€ ๋ผ์˜

์„ ํ˜• ํšŒ๊ท€ ๋ชจ๋ธ์— ๊ทœ์ œ๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ชจ๋ธ์„ ๋ฆฟ์ง€์™€ ๋ผ์˜๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค.

  • Both apply regularization to shrink the coefficients, but they apply it in different ways
  • ✨ The amount of regularization can be tuned freely with the alpha parameter
    • A larger alpha means stronger regularization (fixes overfitting, pushes toward underfitting)
    • A smaller alpha means weaker regularization (fixes underfitting, pushes toward overfitting)
🔸 Ridge: applies the regularization based on the squared coefficients (L2 regularization)
🔸 Lasso: applies the regularization based on the absolute values of the coefficients (L1 regularization)
               (it can shrink coefficients all the way to 0 -> why it's still used despite that is explained further down 😁; a small sketch of the two penalties follows this list)
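As a rough sketch of the difference (my own summary with made-up numbers, not code from the post): both models add a coefficient penalty, scaled by alpha, on top of the usual squared-error loss.

import numpy as np

coef = np.array([0.5, -2.0, 3.0])            # hypothetical coefficients
alpha = 1.0

ridge_penalty = alpha * np.sum(coef ** 2)    # L2: sum of squared coefficients
lasso_penalty = alpha * np.sum(np.abs(coef)) # L1: sum of absolute values
print(ridge_penalty, lasso_penalty)          # 13.25 5.5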

 

📌 Ridge Regression

📌 Scikit-learn's Ridge class

📝 Training with the defaults

from sklearn.linear_model import Ridge

ridge = Ridge()

# train
ridge.fit(train_scaled, train_target)

# check the scores
print(ridge.score(train_scaled, train_target))
print(ridge.score(test_scaled, test_target))

(Output: R² scores for the training set and the test set)

It isn't overfit and performs well!

Now let's choose the alpha value ourselves and try to get even better performance.

 

 

๐Ÿ“alpha ๊ฐ’ ์ฐพ๊ธฐ

์ ์ ˆํ•œ alpha ๊ฐ’์„ ์ฐพ๋Š” ๋ฐฉ๋ฒ• ์ค‘ ํ•˜๋‚˜๋Š”, alpha ๊ฐ’์— ๋Œ€ํ•œ R²(๊ฒฐ์ • ๊ฒŒ์ˆ˜) ๊ฐ’์˜ ๊ทธ๋ž˜ํ”„๋ฅผ ๊ทธ๋ ค๋ณด๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.

์ฆ‰, ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ์˜ ์ ์ˆ˜๊ฐ€ ๊ฐ€์žฅ ๊ฐ€๊นŒ์šด ์ง€์ ์ด ์ตœ์ ์˜ alpha !

alpha ๊ฐ’์„ 0.001 ~ 100 10๋ฐฐ์”ฉ ๋Š˜๋ ค ๊ทธ๋ž˜ํ”„๋ฅผ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค.

import matplotlib.pyplot as plt
train_score=[]
test_score=[]

# train a ridge regression model for each alpha from 0.001 to 100, increasing 10x each step
alpha_list = [0.001, 0.01, 0.1, 1, 10, 100]
for alpha in alpha_list:
  # create the ridge model
  ridge = Ridge(alpha=alpha)
  # train
  ridge.fit(train_scaled, train_target)
  # store the training and test scores
  train_score.append(ridge.score(train_scaled, train_target))
  test_score.append(ridge.score(test_scaled, test_target))


# draw the graph
# because alpha grows 10x each step, a linear axis squeezes the left side together,
# so take log10 and plot the exponent instead => 0.001 -> -3
plt.plot(np.log10(alpha_list), train_score)
plt.plot(np.log10(alpha_list), test_score)
plt.xlabel("alpha")
plt.ylabel("R^2")
plt.show()

Result) overfitting <--- Best ---> underfitting

From the graph, the most suitable point is at -1 on the log10(alpha) axis, i.e. alpha = 10^(-1) = 0.1!
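Instead of reading it off the graph, the value could also be picked in code — a small sketch (my own, assuming the alpha_list and test_score built in the loop above) that simply takes the alpha with the highest test-set R²:

# pick the alpha whose test-set R^2 is highest (should agree with the graph reading)
best_alpha = alpha_list[int(np.argmax(test_score))]
print(best_alpha)  # expected: 0.1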

Let's train again with alpha set to 0.1.

 

๐Ÿ“์ ์ ˆํ•œ alpha ๊ฐ’์œผ๋กœ ํ›ˆ๋ จ

# 0.1 ์ผ ๋•Œ๊ฐ€ ๊ฐ€์žฅ ์„ฑ๋Šฅ ๊ตฟ๐Ÿ‘
ridge = Ridge(alpha = 0.1)

# train
ridge.fit(train_scaled, train_target)

# check the scores
print(ridge.score(train_scaled, train_target))
print(ridge.score(test_scaled, test_target))

(Output: R² scores for the training set and the test set)

Nice! 👍👍

 

 

๐Ÿ“Œ ๋ผ์˜ ํšŒ๊ท€

๐Ÿ“Œ ์‚ฌ์ดํ‚ท๋Ÿฐ์˜ Lasso ํด๋ž˜์Šค

๐Ÿ“๊ทธ๋ƒฅ ํ›ˆ๋ จ

from sklearn.linear_model import Lasso

lasso = Lasso()

# train
lasso.fit(train_scaled, train_target)

# check the scores
print(lasso.score(train_scaled, train_target))
print(lasso.score(test_scaled, test_target))

(Output: R² scores for the training set and the test set)

The scores are good. 😁

📝 Finding the alpha value

import matplotlib.pyplot as plt
train_score=[]
test_score=[]

# train a lasso regression model for each alpha from 0.001 to 100, increasing 10x each step
alpha_list = [0.001, 0.01, 0.1, 1, 10, 100]
for alpha in alpha_list:
  # create the lasso model
  # max_iter=10000: lasso searches for the best coefficients iteratively; if the allowed
  # number of iterations is too small a convergence warning appears, so we raise it
  lasso = Lasso(alpha=alpha, max_iter=10000)
  # train
  lasso.fit(train_scaled, train_target)
  # store the training and test scores
  train_score.append(lasso.score(train_scaled, train_target))
  test_score.append(lasso.score(test_scaled, test_target))


# draw the graph
# because alpha grows 10x each step, a linear axis squeezes the left side together,
# so take log10 and plot the exponent instead => 0.001 -> -3
plt.plot(np.log10(alpha_list), train_score)
plt.plot(np.log10(alpha_list), test_score)
plt.xlabel("alpha")
plt.ylabel("R^2")
plt.show()

Result) overfitting <--- Best ---> underfitting

From the graph, the most suitable point is at 1 on the log10(alpha) axis, i.e. alpha = 10^1 = 10!

Let's train again with alpha set to 10.

 

๐Ÿ“์ ์ ˆํ•œ alpha ๊ฐ’์œผ๋กœ ํ›ˆ๋ จ

# 10 ์ผ ๋•Œ๊ฐ€ ๊ฐ€์žฅ ์ ํ•ฉ!!
lasso = Lasso(alpha = 10)

# train
lasso.fit(train_scaled, train_target)

# check the scores
print(lasso.score(train_scaled, train_target))
print(lasso.score(test_scaled, test_target))

(Output: R² scores for the training set and the test set)

Very nice~!

 

 

✨ Inspecting model attributes

📌 coef_ : the coefficients
# count how many of the coefficients are 0
print(np.sum(lasso.coef_ == 0))

๋ผ์˜๋Š” ๊ณ„์ˆ˜๊ฐ€ 0์ด ๋  ์ˆ˜ ์žˆ๋‹ค๊ณ  ํ–ˆ์Šต๋‹ˆ๋‹ค.

๊ทธ๋Ÿผ 55๊ฐœ ํŠน์„ฑ ์ค‘ ๋ช‡ ๊ฐœ์”ฉ์ด๋‚˜ ๊ณ„์ˆ˜๊ฐ€ 0์ด ๋˜์—ˆ์„์ง€ ํ™•์ธํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

๊ณ„์ˆ˜๊ฐ€ 0์ธ ํŠน์„ฑ ๊ฐœ์ˆ˜

55๊ฐœ ์ค‘์— 40๊ฐœ์”ฉ์ด๋‚˜ ๊ณ„์ˆ˜๊ฐ€ 0์ด ๋˜์—ˆ๋„ค์š”! ์ฆ‰, ๋ผ์˜ ๋ชจ๋ธ์ด ์‚ฌ์šฉํ•œ ํŠน์„ฑ์€ 55๊ฐœ ์ค‘์— 15๊ฐœ๋ฐ–์— ๋˜์ง€ ์•Š๋Š”๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.

์ด๋Ÿฌํ•œ ํŠน์ง•์œผ๋กœ, ๋ผ์˜ ๋ชจ๋ธ์„ ์œ ์šฉํ•œ ํŠน์„ฑ์„ ๊ณจ๋ผ๋‚ด๋Š” ์šฉ๋„๋กœ๋Š” ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
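Building on that, here is a small sketch (my own lines, using the fitted poly and lasso objects from above) of how the features the lasso actually kept could be listed:

# names of the features whose lasso coefficient is not zero,
# i.e. the features the lasso model actually used
selected = np.array(poly.get_feature_names_out())[lasso.coef_ != 0]
print(len(selected))  # should be 15 here
print(selected)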

 

 
