CNTK - Neural Network Binary Classification
In this chapter, let us understand what neural network binary classification using CNTK is.
Binary classification using a NN is just like multi-class classification, the only difference being that there are just two output nodes instead of three or more. Here, we are going to perform binary classification using a neural network with the help of two techniques, namely the one-node and the two-node technique. The one-node technique is more common than the two-node technique.
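To make the difference concrete, the following minimal sketch (assuming the cntk and numpy packages are installed) contrasts the two output-layer setups; the layer sizes match the models built later in this chapter −

import numpy as np
import cntk as C

X = C.ops.input_variable(4, np.float32)   # the four banknote features
with C.layers.default_options(init=C.initializer.uniform(scale=0.01, seed=1)):
   h = C.layers.Dense(10, activation=C.ops.tanh)(X)
   two_node = C.layers.Dense(2, activation=None)(h)   # two-node: one output per class
   one_node = C.layers.Dense(1, activation=None)(h)   # one-node: a single output thresholded at 0.5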
Loading Dataset
For implementing both these techniques with a NN, we will be using the banknote authentication dataset. The dataset can be downloaded from the UCI Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/banknote+authentication.
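If you prefer to fetch the raw file programmatically, a minimal sketch follows; the exact file path on the UCI server is an assumption and may change, so verify it against the dataset page above −

# Sketch only: the machine-learning-databases path is an assumption; check the UCI page.
import urllib.request

url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
   "00267/data_banknote_authentication.txt")
urllib.request.urlretrieve(url, "banknote_raw.txt")   # saves the raw CSV lines locally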
For our example, we will be using 50 authentic data items having class forgery = 0, and the first 50 fake data items having class forgery = 1.
Preparing training and test files
There are 1372 data items in the full dataset. The raw dataset looks as follows −
3.6216, 8.6661, -2.8076, -0.44699, 0
4.5459, 8.1674, -2.4586, -1.4621, 0
…
-1.3971, 3.3191, -1.3927, -1.9948, 1
0.39012, -0.14279, -0.031994, 0.35084, 1
Now, first we need to convert this raw data into the two-node CNTK format, which would be as follows −
|stats 3.62160000 8.66610000 -2.80760000 -0.44699000 |forgery 0 1 |# authentic
|stats 4.54590000 8.16740000 -2.45860000 -1.46210000 |forgery 0 1 |# authentic
. . .
|stats -1.39710000 3.31910000 -1.39270000 -1.99480000 |forgery 1 0 |# fake
|stats 0.39012000 -0.14279000 -0.03199400 0.35084000 |forgery 1 0 |# fake
You can use the following Python program to create CNTK-format data from the raw data −
fin = open(".\...", "r") # provide the location of the saved dataset text file
for line in fin:
   line = line.strip()
   tokens = line.split(",")
   if tokens[4] == "0":
      print("|stats %12.8f %12.8f %12.8f %12.8f |forgery 0 1 |# authentic" % \
         (float(tokens[0]), float(tokens[1]), float(tokens[2]), float(tokens[3])))
   else:
      print("|stats %12.8f %12.8f %12.8f %12.8f |forgery 1 0 |# fake" % \
         (float(tokens[0]), float(tokens[1]), float(tokens[2]), float(tokens[3])))
fin.close()
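The script above converts every row; to obtain the 50-plus-50 subset described earlier and split it into an 80-item training file and a 20-item test file (matching the num_test = 20 used later), a sketch like the following could be used. All file names here are hypothetical −

# Hypothetical file names; assumes the converted lines were saved to banknote_cntk.txt.
authentic, fake = [], []
with open("banknote_cntk.txt", "r") as fin:
   for line in fin:
      (fake if "|# fake" in line else authentic).append(line)

with open("banknote_train_cntk.txt", "w") as ftrain, \
   open("banknote_test_cntk.txt", "w") as ftest:
   ftrain.writelines(authentic[:40] + fake[:40])      # 80 training items
   ftest.writelines(authentic[40:50] + fake[40:50])   # 20 test items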
Two-node binary classification model
There is very little difference between two-node classification and multi-class classification. Here we first need to process the data files in CNTK format, and for that we are going to use a helper function named create_reader as follows −
def create_reader(path, input_dim, output_dim, rnd_order, sweeps):
   x_strm = C.io.StreamDef(field='stats', shape=input_dim, is_sparse=False)
   y_strm = C.io.StreamDef(field='forgery', shape=output_dim, is_sparse=False)
   streams = C.io.StreamDefs(x_src=x_strm, y_src=y_strm)
   deserial = C.io.CTFDeserializer(path, streams)
   mb_src = C.io.MinibatchSource(deserial, randomize=rnd_order, max_sweeps=sweeps)
   return mb_src
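Note that x_src and y_src, defined in StreamDefs, are the stream names the training code refers to later via rdr.streams.x_src and rdr.streams.y_src when wiring file fields to the network's input variables.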
Now, we need to set the architecture arguments for our NN and also provide the location of the data files. It can be done with the help of the following Python code −
def main():
   print("Using CNTK version = " + str(C.__version__) + "\n")
   input_dim = 4
   hidden_dim = 10
   output_dim = 2
   train_file = ".\\...\\" # provide the name of the training file
   test_file = ".\\...\\" # provide the name of the test file
Now, with the help of the following lines of code, our program will create the untrained NN −
X = C.ops.input_variable(input_dim, np.float32)
Y = C.ops.input_variable(output_dim, np.float32)
with C.layers.default_options(init=C.initializer.uniform(scale=0.01, seed=1)):
   hLayer = C.layers.Dense(hidden_dim, activation=C.ops.tanh, name='hidLayer')(X)
   oLayer = C.layers.Dense(output_dim, activation=None, name='outLayer')(hLayer)
nnet = oLayer
model = C.ops.softmax(nnet)
Now, once we have created the two-node untrained model, we need to set up a Learner algorithm object and afterwards use it to create a Trainer training object. We are going to use the SGD learner and the cross_entropy_with_softmax loss function −
tr_loss = C.cross_entropy_with_softmax(nnet, Y)
tr_clas = C.classification_error(nnet, Y)
max_iter = 500
batch_size = 10
learn_rate = 0.01
learner = C.sgd(nnet.parameters, learn_rate)
trainer = C.Trainer(nnet, (tr_loss, tr_clas), [learner])
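Note that training works on the raw output nodes in nnet: cross_entropy_with_softmax applies softmax internally, which is why the separate softmax-wrapped model object is kept only for making predictions later and is not passed to the Trainer.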
Now, once we are done with the Trainer object, we need to create a reader function to read the training data −
rdr = create_reader(train_file, input_dim, output_dim,
   rnd_order=True, sweeps=C.io.INFINITELY_REPEAT)
banknote_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src }
Now, it is time to train our NN model −
for i in range(0, max_iter):
   curr_batch = rdr.next_minibatch(batch_size, input_map=banknote_input_map)
   trainer.train_minibatch(curr_batch)
   if i % 50 == 0:
      mcee = trainer.previous_minibatch_loss_average
      macc = (1.0 - trainer.previous_minibatch_evaluation_average) * 100
      print("batch %4d: mean loss = %0.4f, accuracy = %0.2f%%" \
         % (i, mcee, macc))
Once training is completed, let us evaluate the model using test data items −
print(" Evaluating test data ") rdr = create_reader(test_file, input_dim, output_dim, rnd_order=False, sweeps=1) banknote_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src } num_test = 20 all_test = rdr.next_minibatch(num_test, input_map=iris_input_map) acc = (1.0 - trainer.test_minibatch(all_test)) * 100 print("Classification accuracy = %0.2f%%" % acc)
After evaluating the accuracy of our trained NN model, we will be using it for making a prediction on unseen data −
np.set_printoptions(precision = 1, suppress=True)
unknown = np.array([[0.6, 1.9, -3.3, -0.3]], dtype=np.float32)
print("\nPredicting banknote authenticity for input features: ")
print(unknown[0])
pred_prob = model.eval(unknown)
np.set_printoptions(precision = 4, suppress=True)
print("Prediction probabilities are: ")
print(pred_prob[0])
if pred_prob[0,0] < pred_prob[0,1]:
   print("Prediction: authentic")
else:
   print("Prediction: fake")
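Because authentic items were encoded as |forgery 0 1 and fake items as |forgery 1 0, index 0 of the softmax output holds the pseudo-probability of the fake class and index 1 that of the authentic class. As a quick sanity check, the same decision can be read off with argmax (a sketch using only names defined above) −

labels = ["fake", "authentic"]   # index 0 = fake, index 1 = authentic, per the |forgery encoding
print("Prediction:", labels[np.argmax(pred_prob[0])])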
Complete two-node classification model
import numpy as np
import cntk as C

def create_reader(path, input_dim, output_dim, rnd_order, sweeps):
   x_strm = C.io.StreamDef(field='stats', shape=input_dim, is_sparse=False)
   y_strm = C.io.StreamDef(field='forgery', shape=output_dim, is_sparse=False)
   streams = C.io.StreamDefs(x_src=x_strm, y_src=y_strm)
   deserial = C.io.CTFDeserializer(path, streams)
   mb_src = C.io.MinibatchSource(deserial, randomize=rnd_order, max_sweeps=sweeps)
   return mb_src

def main():
   print("Using CNTK version = " + str(C.__version__) + "\n")
   input_dim = 4
   hidden_dim = 10
   output_dim = 2
   train_file = ".\\...\\" # provide the name of the training file
   test_file = ".\\...\\" # provide the name of the test file
   X = C.ops.input_variable(input_dim, np.float32)
   Y = C.ops.input_variable(output_dim, np.float32)
   with C.layers.default_options(init=C.initializer.uniform(scale=0.01, seed=1)):
      hLayer = C.layers.Dense(hidden_dim, activation=C.ops.tanh, name='hidLayer')(X)
      oLayer = C.layers.Dense(output_dim, activation=None, name='outLayer')(hLayer)
   nnet = oLayer
   model = C.ops.softmax(nnet)
   tr_loss = C.cross_entropy_with_softmax(nnet, Y)
   tr_clas = C.classification_error(nnet, Y)
   max_iter = 500
   batch_size = 10
   learn_rate = 0.01
   learner = C.sgd(nnet.parameters, learn_rate)
   trainer = C.Trainer(nnet, (tr_loss, tr_clas), [learner])
   rdr = create_reader(train_file, input_dim, output_dim,
      rnd_order=True, sweeps=C.io.INFINITELY_REPEAT)
   banknote_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src }
   for i in range(0, max_iter):
      curr_batch = rdr.next_minibatch(batch_size, input_map=banknote_input_map)
      trainer.train_minibatch(curr_batch)
      if i % 50 == 0:
         mcee = trainer.previous_minibatch_loss_average
         macc = (1.0 - trainer.previous_minibatch_evaluation_average) * 100
         print("batch %4d: mean loss = %0.4f, accuracy = %0.2f%%" \
            % (i, mcee, macc))
   print("\nEvaluating test data\n")
   rdr = create_reader(test_file, input_dim, output_dim,
      rnd_order=False, sweeps=1)
   banknote_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src }
   num_test = 20
   all_test = rdr.next_minibatch(num_test, input_map=banknote_input_map)
   acc = (1.0 - trainer.test_minibatch(all_test)) * 100
   print("Classification accuracy = %0.2f%%" % acc)
   np.set_printoptions(precision = 1, suppress=True)
   unknown = np.array([[0.6, 1.9, -3.3, -0.3]], dtype=np.float32)
   print("\nPredicting banknote authenticity for input features: ")
   print(unknown[0])
   pred_prob = model.eval(unknown)
   np.set_printoptions(precision = 4, suppress=True)
   print("Prediction probabilities are: ")
   print(pred_prob[0])
   if pred_prob[0,0] < pred_prob[0,1]:
      print("Prediction: authentic")
   else:
      print("Prediction: fake")

if __name__ == "__main__":
   main()
Output
Using CNTK version = 2.7
batch 0: mean loss = 0.6928, accuracy = 80.00%
batch 50: mean loss = 0.6877, accuracy = 70.00%
batch 100: mean loss = 0.6432, accuracy = 80.00%
batch 150: mean loss = 0.4978, accuracy = 80.00%
batch 200: mean loss = 0.4551, accuracy = 90.00%
batch 250: mean loss = 0.3755, accuracy = 90.00%
batch 300: mean loss = 0.2295, accuracy = 100.00%
batch 350: mean loss = 0.1542, accuracy = 100.00%
batch 400: mean loss = 0.1581, accuracy = 100.00%
batch 450: mean loss = 0.1499, accuracy = 100.00%
Evaluating test data
Classification accuracy = 84.58%
Predicting banknote authenticity for input features:
[0.6 1.9 -3.3 -0.3]
Prediction probabilities are:
[0.7847 0.2536]
Prediction: fake
One-node binary classification model
The implementation program is almost the same as the one we did above for two-node classification. The main change concerns how classification accuracy is computed.

With the two-node technique we can use CNTK's built-in classification_error() function, but in the case of one-node classification CNTK does not support classification_error(). That is the reason we need to implement a program-defined function, as follows −
def class_acc(mb, x_var, y_var, model):
   num_correct = 0; num_wrong = 0
   x_mat = mb[x_var].asarray()
   y_mat = mb[y_var].asarray()
   for i in range(mb[x_var].shape[0]):
      p = model.eval(x_mat[i])
      y = y_mat[i]
      if p[0,0] < 0.5 and y[0,0] == 0.0 or p[0,0] >= 0.5 and y[0,0] == 1.0:
         num_correct += 1
      else:
         num_wrong += 1
   return (num_correct * 100.0) / (num_correct + num_wrong)
With this change, let us see the complete one-node classification example −
Complete one-node classification model
import numpy as np
import cntk as C

def create_reader(path, input_dim, output_dim, rnd_order, sweeps):
   x_strm = C.io.StreamDef(field='stats', shape=input_dim, is_sparse=False)
   y_strm = C.io.StreamDef(field='forgery', shape=output_dim, is_sparse=False)
   streams = C.io.StreamDefs(x_src=x_strm, y_src=y_strm)
   deserial = C.io.CTFDeserializer(path, streams)
   mb_src = C.io.MinibatchSource(deserial, randomize=rnd_order, max_sweeps=sweeps)
   return mb_src

def class_acc(mb, x_var, y_var, model):
   num_correct = 0; num_wrong = 0
   x_mat = mb[x_var].asarray()
   y_mat = mb[y_var].asarray()
   for i in range(mb[x_var].shape[0]):
      p = model.eval(x_mat[i])
      y = y_mat[i]
      if p[0,0] < 0.5 and y[0,0] == 0.0 or p[0,0] >= 0.5 and y[0,0] == 1.0:
         num_correct += 1
      else:
         num_wrong += 1
   return (num_correct * 100.0) / (num_correct + num_wrong)

def main():
   print("Using CNTK version = " + str(C.__version__) + "\n")
   input_dim = 4
   hidden_dim = 10
   output_dim = 1
   train_file = ".\\...\\" # provide the name of the training file
   test_file = ".\\...\\" # provide the name of the test file
   X = C.ops.input_variable(input_dim, np.float32)
   Y = C.ops.input_variable(output_dim, np.float32)
   with C.layers.default_options(init=C.initializer.uniform(scale=0.01, seed=1)):
      hLayer = C.layers.Dense(hidden_dim, activation=C.ops.tanh, name='hidLayer')(X)
      oLayer = C.layers.Dense(output_dim, activation=None, name='outLayer')(hLayer)
   model = oLayer
   tr_loss = C.cross_entropy_with_softmax(model, Y)
   max_iter = 1000
   batch_size = 10
   learn_rate = 0.01
   learner = C.sgd(model.parameters, learn_rate)
   trainer = C.Trainer(model, (tr_loss), [learner])
   rdr = create_reader(train_file, input_dim, output_dim,
      rnd_order=True, sweeps=C.io.INFINITELY_REPEAT)
   banknote_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src }
   for i in range(0, max_iter):
      curr_batch = rdr.next_minibatch(batch_size, input_map=banknote_input_map)
      trainer.train_minibatch(curr_batch)
      if i % 100 == 0:
         mcee = trainer.previous_minibatch_loss_average
         ca = class_acc(curr_batch, X, Y, model)
         print("batch %4d: mean loss = %0.4f, accuracy = %0.2f%%" \
            % (i, mcee, ca))
   print("\nEvaluating test data\n")
   rdr = create_reader(test_file, input_dim, output_dim,
      rnd_order=False, sweeps=1)
   banknote_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src }
   num_test = 20
   all_test = rdr.next_minibatch(num_test, input_map=banknote_input_map)
   acc = class_acc(all_test, X, Y, model)
   print("Classification accuracy = %0.2f%%" % acc)
   np.set_printoptions(precision = 1, suppress=True)
   unknown = np.array([[0.6, 1.9, -3.3, -0.3]], dtype=np.float32)
   print("\nPredicting banknote authenticity for input features: ")
   print(unknown[0])
   pred_prob = model.eval({X: unknown})
   print("Prediction probability: ")
   print("%0.4f" % pred_prob[0,0])
   if pred_prob[0,0] < 0.5:
      print("Prediction: authentic")
   else:
      print("Prediction: fake")

if __name__ == "__main__":
   main()
Output
Using CNTK version = 2.7
batch 0: mean loss = 0.6936, accuracy = 10.00%
batch 100: mean loss = 0.6882, accuracy = 70.00%
batch 200: mean loss = 0.6597, accuracy = 50.00%
batch 300: mean loss = 0.5298, accuracy = 70.00%
batch 400: mean loss = 0.4090, accuracy = 100.00%
batch 500: mean loss = 0.3790, accuracy = 90.00%
batch 600: mean loss = 0.1852, accuracy = 100.00%
batch 700: mean loss = 0.1135, accuracy = 100.00%
batch 800: mean loss = 0.1285, accuracy = 100.00%
batch 900: mean loss = 0.1054, accuracy = 100.00%
Evaluating test data
Classification accuracy = 84.00%
Predicting banknote authenticity for input features:
[0.6 1.9 -3.3 -0.3]
Prediction probability:
0.8846
Prediction: fake