[Tensorflow] Solving the XOR Problem

Author : tmlab / Date : 2017. 2. 17. 14:40 / Category : Analytics

1. Let's solve the XOR problem.

1) Import packages and generate the input data

In [1]:
import numpy as np
import tensorflow as tf

xy = np.loadtxt('train.txt', unpack=True)
x_data = xy[0:-1]
y_data = xy[-1]
In [2]:
x_data
Out[2]:
array([[ 0.,  0.,  1.,  1.],
       [ 0.,  1.,  0.,  1.]])
In [4]:
y_data
Out[4]:
array([ 0.,  1.,  1.,  0.])
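The post does not show train.txt itself. Judging from the x_data and y_data printed above, it presumably holds four space-separated rows of x1 x2 y (the XOR truth table). A minimal sketch that recreates such a file, reconstructed from the outputs above rather than taken from the original post, is:

# assumed contents of train.txt: columns are x1, x2, y (XOR truth table)
rows = "0 0 0\n0 1 1\n1 0 1\n1 1 0\n"
with open('train.txt', 'w') as f:
    f.write(rows)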

2) Define placeholders for X and Y and the weight variable W

In [5]:
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
W = tf.Variable(tf.random_uniform([1,len(x_data)], -1.0, 1.0)) # random initial weights in [-1, 1]

3) Define the hypothesis (sigmoid) and the cost function

In [6]:
h = tf.matmul(W, X)
hypothesis = tf.div(1., 1.+tf.exp(-h))  # sigmoid function
cost = -tf.reduce_mean(Y*tf.log(hypothesis)+(1-Y)*tf.log(1-hypothesis)) # cost (cross-entropy)
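The two lines above compute the logistic sigmoid, p = 1 / (1 + exp(-h)), and the binary cross-entropy, -mean(y*log(p) + (1-y)*log(1-p)). A plain-NumPy sketch of the same math (hypothetical helper names, for intuition only):

import numpy as np

def sigmoid(z):
    # squashes scores into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(y, p):
    # mean of -[y*log(p) + (1-y)*log(1-p)] over the batch
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))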

4) Define the learning algorithm: gradient descent optimizer

In [8]:
a = tf.Variable(0.01)
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)
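optimizer.minimize(cost) repeatedly applies the update W <- W - a * d(cost)/dW. A hand-rolled NumPy sketch of that loop for this logistic model (an illustrative assumption, not what TensorFlow executes internally):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a = 0.01                                      # learning rate
W = np.random.uniform(-1, 1, (1, 2))          # same shape as the TF variable
x = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])              # same layout as x_data (2 features x 4 samples)
y = np.array([0., 1., 1., 0.])

for _ in range(1000):
    p = sigmoid(W.dot(x))                     # hypothesis, shape (1, 4)
    grad = (p - y).dot(x.T) / x.shape[1]      # gradient of the cross-entropy w.r.t. W
    W -= a * grad                             # gradient-descent update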

5) Run the session (result: the accuracy is low!!)

A single-layer model like this draws one linear decision boundary, and the four XOR points are not linearly separable, so no choice of W can do much better than chance (about 0.5 accuracy).

In [ ]:
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    
    for step in xrange(1000):
        sess.run(train, feed_dict={X:x_data, Y:y_data})
        if step % 200 == 0:
            print step, sess.run(cost, feed_dict={X:x_data, Y:y_data}), sess.run(W)
    
    # test the model
    correct_prediction = tf.equal(tf.floor(hypothesis+0.5), Y) # round predictions to 0 or 1
    
    # compute accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print sess.run([hypothesis, tf.floor(hypothesis+0.5), correct_prediction, accuracy], feed_dict = {X:x_data, Y:y_data})
    print "Accuracy:" , accuracy.eval({X:x_data, Y:y_data})

2. Let's solve the XOR problem with a deep network.

1) Solve it with two layers

In [10]:
import numpy as np
import tensorflow as tf

xy = np.loadtxt('train.txt', unpack=True)
x_data = np.transpose(xy[0:-1])     # transpose so rows are samples, for the matrix multiplication
y_data = np.reshape(xy[-1], (4, 1))

X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])

W1 = tf.Variable(tf.random_uniform([2,2], -1.0, 1.0))
W2 = tf.Variable(tf.random_uniform([2,1], -1.0, 1.0))

b1 = tf.Variable(tf.zeros([2]), name="Bias1")
b2 = tf.Variable(tf.zeros([1]), name="Bias2")

L2 = tf.sigmoid(tf.matmul(X, W1) + b1)  # hidden layer; tf.sigmoid replaces the manual 1/(1+exp(-h)) formula
hypothesis = tf.sigmoid(tf.matmul(L2, W2) + b2) # output layer takes L2 as its input
cost = -tf.reduce_mean(Y*tf.log(hypothesis)+(1-Y)*tf.log(1-hypothesis))

a = tf.Variable(0.1)
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)


init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    
    for step in xrange(10000):
        sess.run(train, feed_dict={X:x_data, Y:y_data})
        if step % 1000 == 0:
            print step, sess.run(cost, feed_dict={X:x_data, Y:y_data})
    
    # test the model
    correct_prediction = tf.equal(tf.floor(hypothesis+0.5), Y) # round predictions to 0 or 1
    
    # compute accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print sess.run([hypothesis, tf.floor(hypothesis+0.5), correct_prediction, accuracy], feed_dict = {X:x_data, Y:y_data})
    print "Accuracy:" , accuracy.eval({X:x_data, Y:y_data})
0 0.758463
1000 0.693013
2000 0.692468
3000 0.690366
4000 0.669287
5000 0.50583
6000 0.156117
7000 0.0684559
8000 0.0419283
9000 0.029819
[array([[ 0.02683066],
       [ 0.97878003],
       [ 0.98000884],
       [ 0.02293299]], dtype=float32), array([[ 0.],
       [ 1.],
       [ 1.],
       [ 0.]], dtype=float32), array([[ True],
       [ True],
       [ True],
       [ True]], dtype=bool), 1.0]
Accuracy: 1.0
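Why do two layers succeed where one fails? The hidden layer can carve out two intermediate features (for example, one OR-like unit and one NAND-like unit) whose AND is exactly XOR. A hand-built 2-2-1 network with weights of that kind (one of many solutions the training above can converge to; the specific numbers are illustrative assumptions) shows this:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hidden unit 1 ~ OR(x1, x2), hidden unit 2 ~ NAND(x1, x2)
W1 = np.array([[ 6., -6.],
               [ 6., -6.]])
b1 = np.array([-3.,  9.])
# output unit ~ AND of the two hidden units
W2 = np.array([[8.], [8.]])
b2 = np.array([-12.])

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
L2 = sigmoid(X.dot(W1) + b1)
out = sigmoid(L2.dot(W2) + b2)
print(np.round(out, 2))   # approximately [[0.03], [0.96], [0.96], [0.03]] -> thresholds to XOR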

2) A wider network (2 inputs, 10 hidden units)

With 10 hidden units instead of 2, the cost drops faster: XOR is already solved after about 5,000 training steps instead of 10,000.

In [12]:
xy = np.loadtxt('train.txt', unpack=True)
x_data = np.transpose(xy[0:-1])     # transpose so rows are samples, for the matrix multiplication
y_data = np.reshape(xy[-1], (4, 1))

X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])

W1 = tf.Variable(tf.random_uniform([2,10], -1.0, 1.0))
W2 = tf.Variable(tf.random_uniform([10,1], -1.0, 1.0))

b1 = tf.Variable(tf.zeros([10]), name="Bias1")
b2 = tf.Variable(tf.zeros([1]), name="Bias2")

L2 = tf.sigmoid(tf.matmul(X, W1) + b1)  # hidden layer with 10 units
hypothesis = tf.sigmoid(tf.matmul(L2, W2) + b2) # output layer takes L2 as its input
cost = -tf.reduce_mean(Y*tf.log(hypothesis)+(1-Y)*tf.log(1-hypothesis))

a = tf.Variable(0.1)
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)


init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    
    for step in xrange(5000):
        sess.run(train, feed_dict={X:x_data, Y:y_data})
        if step % 1000 == 0:
            print step, sess.run(cost, feed_dict={X:x_data, Y:y_data})
    
    # test the model
    correct_prediction = tf.equal(tf.floor(hypothesis+0.5), Y) # round predictions to 0 or 1
    
    # compute accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print sess.run([hypothesis, tf.floor(hypothesis+0.5), correct_prediction, accuracy], feed_dict = {X:x_data, Y:y_data})
    print "Accuracy:" , accuracy.eval({X:x_data, Y:y_data})
0 0.772693
1000 0.588382
2000 0.274722
3000 0.091143
4000 0.0437342
[array([[ 0.01572295],
       [ 0.97234547],
       [ 0.97402602],
       [ 0.03642083]], dtype=float32), array([[ 0.],
       [ 1.],
       [ 1.],
       [ 0.]], dtype=float32), array([[ True],
       [ True],
       [ True],
       [ True]], dtype=bool), 1.0]
Accuracy: 1.0

3) Deep learning!! Let's build a deeper network!

In [13]:
W1 = tf.Variable(tf.random_uniform([2,5], -1.0, 1.0))
W2 = tf.Variable(tf.random_uniform([5,4], -1.0, 1.0))
W3 = tf.Variable(tf.random_uniform([4,1], -1.0, 1.0))


b1 = tf.Variable(tf.zeros([5]), name="Bias1")
b2 = tf.Variable(tf.zeros([4]), name="Bias2")
b3 = tf.Variable(tf.zeros([1]), name="Bias3")


L2 = tf.sigmoid(tf.matmul(X, W1) + b1)   # first hidden layer (5 units)
L3 = tf.sigmoid(tf.matmul(L2, W2) + b2)  # second hidden layer (4 units)
hypothesis = tf.sigmoid(tf.matmul(L3, W3) + b3)  # output layer takes L3 as its input

# NOTE: cost and train are not rebuilt in this cell, so they still refer to
# the 2-layer graph from the previous cell; the three new weight matrices
# above are therefore never updated by the training loop below.


init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    
    for step in xrange(10000):
        sess.run(train, feed_dict={X:x_data, Y:y_data})
        if step % 1000 == 0:
            print step, sess.run(cost, feed_dict={X:x_data, Y:y_data})
    
    # test the model
    correct_prediction = tf.equal(tf.floor(hypothesis+0.5), Y) # round predictions to 0 or 1
    
    # compute accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print sess.run([hypothesis, tf.floor(hypothesis+0.5), correct_prediction, accuracy], feed_dict = {X:x_data, Y:y_data})
    print "Accuracy:" , accuracy.eval({X:x_data, Y:y_data})
0 0.694334
1000 0.601415
2000 0.290903
3000 0.0938289
4000 0.0439696
5000 0.0265478
6000 0.0183699
7000 0.0137877
8000 0.010913
9000 0.00896451
[array([[ 0.68195963],
       [ 0.68778819],
       [ 0.69064909],
       [ 0.69612348]], dtype=float32), array([[ 1.],
       [ 1.],
       [ 1.],
       [ 1.]], dtype=float32), array([[False],
       [ True],
       [ True],
       [False]], dtype=bool), 0.5]
Accuracy: 0.5

The cost values printed above keep falling only because cost still belongs to the previous cell's 2-layer graph; the new 3-layer hypothesis was never trained, which is why it outputs roughly 0.69 for every input and the accuracy is stuck at 0.5.
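A minimal sketch of the missing lines, assuming the rest of the cell stays as written: rebuild the cost and the training op on the new hypothesis before running the session.

# redefine the loss and the training op against the NEW 3-layer hypothesis
cost = -tf.reduce_mean(Y*tf.log(hypothesis) + (1-Y)*tf.log(1-hypothesis))
train = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

With cost and train rebuilt this way (and enough training steps), the deeper network can also fit XOR, although stacking more sigmoid layers generally slows training down.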

