
Generative adversarial networks (generative adversarial network, GAN): a network model proposed in 2014. Inspired by the zero-sum game of two-player game theory, GANs are currently among the hottest topics in unsupervised deep learning. The father of GANs, Ian J. Goodfellow, is a widely recognized leading expert in artificial intelligence.

Principle.
A generative adversarial network consists of a generative model (generative model, G) and a discriminative model (discriminative model, D). Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, paper "Generative Adversarial Networks", https://arxiv.org/abs/1406.2661
GAN architecture:
noise data -> generative model -> fake image ——|
                                               |-> discriminative model -> real/fake
shuffled training data -> training set -> real image —|
GANs mainly solve the problem of learning new samples from training samples. The generative model learns the distribution of the samples: if the training samples are images, it generates similar images; if the training samples are article titles, it generates similar titles. The discriminative model is a binary classifier that judges whether an input sample is real data or a generated sample.
GAN optimization is a minimax two-player game problem: train the generative model so that, when its output is fed to the discriminative model, the discriminator finds it hard to tell real data from fake data. A well-trained generative model can turn a noise vector into a sample resembling the training set. Augustus Odena, Christopher Olah, Jonathon Shlens, paper "Conditional Image Synthesis with Auxiliary Classifier GANs".
The implementation below uses an auxiliary classifier GAN (auxiliary classifier GAN, AC-GAN).
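The minimax two-player game described above is, in the notation of the Goodfellow et al. paper, the optimization of the value function:

```latex
\min_G \max_D V(D,G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the discriminator's probability that x is real data, and G(z) is the sample the generator produces from noise vector z.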

GAN applications: generating digits, generating face images.

GAN implementation: https://github.com/fchollet/keras/blob/master/examples/mnist_acgan.py

Augustus Odena, Christopher Olah and Jonathon Shlens, paper "Conditional Image Synthesis With Auxiliary Classifier GANs".
From noise, the generative model G produces fake data, which is sent to the discriminative model D together with real data; the discriminator outputs both a real/fake judgment of the data and an image-class prediction.
First define the generative model. Its goal is to map a pair (z, L), where z is a noise vector and L is a class label, into the (1, 28, 28) image space.
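The conditioning trick the generator uses below (merge mode 'mul': the noise vector multiplied elementwise by a class embedding) can be sketched in plain NumPy. The embedding matrix here is random, a stand-in for the weights Keras would learn:

```python
import numpy as np

latent_size = 100
rng = np.random.default_rng(0)

# Stand-in for the learned 10 x latent_size embedding table (one row per digit class).
embedding = rng.normal(size=(10, latent_size))

z = rng.uniform(-1, 1, size=latent_size)  # noise vector z
label = 3                                  # class label L

# h = merge([latent, cls], mode='mul'): elementwise product of noise and class embedding,
# so the same noise vector yields a different conditioned input per class
h = z * embedding[label]
print(h.shape)  # (100,)
```

The product h is what the convolutional stack of the generator then maps into the (1, 28, 28) image space.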

from keras.layers import Input, Dense, Reshape, Flatten, Dropout, Embedding, merge
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import Convolution2D, UpSampling2D
from keras.models import Sequential, Model

def build_generator(latent_size):
    cnn = Sequential()
    cnn.add(Dense(1024, input_dim=latent_size, activation='relu'))
    cnn.add(Dense(128 * 7 * 7, activation='relu'))
    cnn.add(Reshape((128, 7, 7)))
    # upsample: image size becomes 14x14
    cnn.add(UpSampling2D(size=(2, 2)))
    cnn.add(Convolution2D(256, 5, 5, border_mode='same', activation='relu',
                          init='glorot_normal'))
    # upsample: image size becomes 28x28
    cnn.add(UpSampling2D(size=(2, 2)))
    cnn.add(Convolution2D(128, 5, 5, border_mode='same', activation='relu',
                          init='glorot_normal'))
    # reduce to a single channel
    cnn.add(Convolution2D(1, 2, 2, border_mode='same', activation='tanh',
                          init='glorot_normal'))
    # generator input layer: latent feature vector
    latent = Input(shape=(latent_size, ))
    # generator input layer: class label
    image_class = Input(shape=(1, ), dtype='int32')
    cls = Flatten()(Embedding(10, latent_size,
                              init='glorot_normal')(image_class))
    h = merge([latent, cls], mode='mul')
    fake_image = cnn(h)  # output the fake image
    return Model(input=[latent, image_class], output=fake_image)
Define the discriminative model. It takes a (1, 28, 28) image as input and outputs two values: the discriminator's judgment of whether the image is fake, and the class the discriminator believes the image belongs to.

def build_discriminator():
    # use the Leaky ReLU activation in place of the standard CNN activation
    cnn = Sequential()
    cnn.add(Convolution2D(32, 3, 3, border_mode='same', subsample=(2, 2),
                          input_shape=(1, 28, 28)))
    cnn.add(LeakyReLU())
    cnn.add(Dropout(0.3))
    cnn.add(Convolution2D(64, 3, 3, border_mode='same', subsample=(1, 1)))
    cnn.add(LeakyReLU())
    cnn.add(Dropout(0.3))
    cnn.add(Convolution2D(128, 3, 3, border_mode='same', subsample=(1, 1)))
    cnn.add(LeakyReLU())
    cnn.add(Dropout(0.3))
    cnn.add(Convolution2D(256, 3, 3, border_mode='same', subsample=(1, 1)))
    cnn.add(LeakyReLU())
    cnn.add(Dropout(0.3))
    cnn.add(Flatten())
    image = Input(shape=(1, 28, 28))
    features = cnn(image)
    # two outputs
    # real/fake value in the range 0~1
    fake = Dense(1, activation='sigmoid', name='generation')(features)
    # auxiliary classifier: output the image class
    aux = Dense(10, activation='softmax', name='auxiliary')(features)
    return Model(input=image, output=[fake, aux])
The training runs for 50 epochs. The weights are saved each epoch, and the fake data generated in each epoch is saved as an image, so the evolution of the fake data can be observed.

from collections import defaultdict
import pickle
import numpy as np
from PIL import Image
from keras.datasets import mnist
from keras.optimizers import Adam
from keras.utils.generic_utils import Progbar

if __name__ == '__main__':
    # define hyperparameters
    epochs = 50
    batch_size = 100
    latent_size = 100
    # optimizer learning rate
    adam_lr = 0.0002
    adam_beta_1 = 0.5
    # build the discriminative network
    discriminator = build_discriminator()
    discriminator.compile(optimizer=Adam(lr=adam_lr, beta_1=adam_beta_1),
                          loss=['binary_crossentropy',
                                'sparse_categorical_crossentropy'])
    # build the generative network
    generator = build_generator(latent_size)
    generator.compile(optimizer=Adam(lr=adam_lr, beta_1=adam_beta_1),
                      loss='binary_crossentropy')
    latent = Input(shape=(latent_size, ))
    image_class = Input(shape=(1, ), dtype='int32')
    # build the combined model
    fake_image = generator([latent, image_class])
    discriminator.trainable = False
    fake, aux = discriminator(fake_image)
    combined = Model(input=[latent, image_class], output=[fake, aux])
    combined.compile(optimizer=Adam(lr=adam_lr, beta_1=adam_beta_1),
                     loss=['binary_crossentropy',
                           'sparse_categorical_crossentropy'])
    # reshape the MNIST data to (..., 1, 28, 28) and scale it to [-1, 1]
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=1)
    X_test = (X_test.astype(np.float32) - 127.5) / 127.5
    X_test = np.expand_dims(X_test, axis=1)
    num_train, num_test = X_train.shape[0], X_test.shape[0]
    train_history = defaultdict(list)
    test_history = defaultdict(list)
    for epoch in range(epochs):
        print('Epoch {} of {}'.format(epoch + 1, epochs))
        num_batches = int(X_train.shape[0] / batch_size)
        progress_bar = Progbar(target=num_batches)
        epoch_gen_loss = []
        epoch_disc_loss = []
        for index in range(num_batches):
            progress_bar.update(index)
            # generate one batch of noise data
            noise = np.random.uniform(-1, 1, (batch_size, latent_size))
            # get one batch of real data
            image_batch = X_train[index * batch_size:(index + 1) *
                                  batch_size]
            label_batch = y_train[index * batch_size:(index + 1) *
                                  batch_size]
            # sample some labels for the noise
            sampled_labels = np.random.randint(0, 10, batch_size)
            # generate one batch of fake images
            generated_images = generator.predict(
                [noise, sampled_labels.reshape((-1, 1))], verbose=0)
            X = np.concatenate((image_batch, generated_images))
            y = np.array([1] * batch_size + [0] * batch_size)
            aux_y = np.concatenate((label_batch, sampled_labels), axis=0)
            epoch_disc_loss.append(discriminator.train_on_batch(X, [y,
                                                                    aux_y]))
            # generate two batches of noise and labels
            noise = np.random.uniform(-1, 1, (2 * batch_size, latent_size))
            sampled_labels = np.random.randint(0, 10, 2 * batch_size)
            # train the generative model to fool the discriminator:
            # set all real/fake targets to real
            trick = np.ones(2 * batch_size)
            epoch_gen_loss.append(combined.train_on_batch(
                [noise, sampled_labels.reshape((-1, 1))],
                [trick, sampled_labels]))
        print('\nTesting for epoch {}:'.format(epoch + 1))
        # evaluate on the test set: generate one new batch of noise data
        noise = np.random.uniform(-1, 1, (num_test, latent_size))
        sampled_labels = np.random.randint(0, 10, num_test)
        generated_images = generator.predict(
            [noise, sampled_labels.reshape((-1, 1))], verbose=False)
        X = np.concatenate((X_test, generated_images))
        y = np.array([1] * num_test + [0] * num_test)
        aux_y = np.concatenate((y_test, sampled_labels), axis=0)
        # can the discriminative model tell real from fake?
        discriminator_test_loss = discriminator.evaluate(
            X, [y, aux_y], verbose=False)
        discriminator_train_loss = np.mean(np.array(epoch_disc_loss),
                                           axis=0)
        # create two batches of new noise data
        noise = np.random.uniform(-1, 1, (2 * num_test, latent_size))
        sampled_labels = np.random.randint(0, 10, 2 * num_test)
        trick = np.ones(2 * num_test)
        generator_test_loss = combined.evaluate(
            [noise, sampled_labels.reshape((-1, 1))],
            [trick, sampled_labels], verbose=False)
        generator_train_loss = np.mean(np.array(epoch_gen_loss), axis=0)
        # record the loss values and other metrics, then print them
        train_history['generator'].append(generator_train_loss)
        train_history['discriminator'].append(discriminator_train_loss)
        test_history['generator'].append(generator_test_loss)
        test_history['discriminator'].append(discriminator_test_loss)
        print('{0:<22s} | {1:4s} | {2:15s} | {3:5s}'.format(
            'component', *discriminator.metrics_names))
        print('-' * 65)
        ROW_FMT = '{0:<22s} | {1:<4.2f} | {2:<15.2f} | {3:<5.2f}'
        print(ROW_FMT.format('generator (train)',
                             *train_history['generator'][-1]))
        print(ROW_FMT.format('generator (test)',
                             *test_history['generator'][-1]))
        print(ROW_FMT.format('discriminator (train)',
                             *train_history['discriminator'][-1]))
        print(ROW_FMT.format('discriminator (test)',
                             *test_history['discriminator'][-1]))
        # save the weights once per epoch
        generator.save_weights(
            'params_generator_epoch_{0:03d}.hdf5'.format(epoch), True)
        discriminator.save_weights(
            'params_discriminator_epoch_{0:03d}.hdf5'.format(epoch), True)
        # generate some fake digits to visualize the evolution
        noise = np.random.uniform(-1, 1, (100, latent_size))
        sampled_labels = np.array([
            [i] * 10 for i in range(10)
        ]).reshape(-1, 1)
        generated_images = generator.predict(
            [noise, sampled_labels], verbose=0)
        # arrange them into a grid
        img = (np.concatenate([r.reshape(-1, 28)
                               for r in np.split(generated_images, 10)
                               ], axis=-1) * 127.5 + 127.5).astype(np.uint8)
        Image.fromarray(img).save(
            'plot_epoch_{0:03d}_generated.png'.format(epoch))
    pickle.dump({'train': train_history, 'test': test_history},
                open('acgan-history.pkl', 'wb'))
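The grid-assembly expression near the end of the loop is dense. Run in isolation, with random values standing in for the generator's output, it behaves like this:

```python
import numpy as np

# 100 fake images of shape (1, 28, 28) in [-1, 1] — stand-ins for generator.predict output
generated_images = np.random.uniform(-1, 1, (100, 1, 28, 28))

# np.split gives 10 groups of 10 images; each group flattens to a (280, 28) vertical strip,
# and concatenating the strips along the last axis yields a 280x280 grid of digits.
# Rescaling from [-1, 1] back to [0, 255] inverts the tanh output range.
img = (np.concatenate([r.reshape(-1, 28)
                       for r in np.split(generated_images, 10)
                       ], axis=-1) * 127.5 + 127.5).astype(np.uint8)
print(img.shape)  # (280, 280)
```

The resulting uint8 array is what Image.fromarray saves as the per-epoch PNG.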

When training ends, three kinds of files have been created. params_discriminator_epoch_{{epoch_number}}.hdf5: discriminative-model weight parameters. params_generator_epoch_{{epoch_number}}.hdf5: generative-model weight parameters. plot_epoch_{{epoch_number}}_generated.png: the fake digits generated at that epoch, for observing how the fake data evolves.

GAN improvements. Generative adversarial networks (generative adversarial network, GAN) are very effective for unsupervised learning. A conventional GAN discriminator uses a sigmoid cross-entropy loss function, and the learning process suffers from vanishing gradients. The Wasserstein GAN (Wasserstein generative adversarial network, WGAN) uses the Wasserstein distance as its measure instead of the Jensen-Shannon divergence (Jensen-Shannon divergence, JSD). The least squares GAN (least squares generative adversarial network, LSGAN) uses a least squares loss function for the discriminative model. Sebastian Nowozin, Botond Cseke, Ryota Tomioka, paper "f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization".
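As a rough NumPy illustration of the loss substitution (a sketch of the two discriminator losses only, not of the WGAN/LSGAN training procedures), compare the standard sigmoid cross-entropy discriminator loss with the LSGAN least-squares discriminator loss on raw discriminator scores:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_d_loss(logits_real, logits_fake):
    # standard GAN: sigmoid cross-entropy, targets 1 for real and 0 for fake
    return (-np.mean(np.log(sigmoid(logits_real)))
            - np.mean(np.log(1.0 - sigmoid(logits_fake))))

def lsgan_d_loss(scores_real, scores_fake):
    # LSGAN: least squares loss with targets 1 (real) and 0 (fake);
    # its gradient does not saturate the way the sigmoid loss does
    return (0.5 * np.mean((scores_real - 1.0) ** 2)
            + 0.5 * np.mean(scores_fake ** 2))

real = np.array([2.5, 3.0])    # discriminator scores on real samples
fake = np.array([-2.0, -3.5])  # discriminator scores on fake samples
print(gan_d_loss(real, fake))
print(lsgan_d_loss(real, fake))
```

When the discriminator is already confident (large-magnitude scores), the sigmoid cross-entropy loss flattens out, while the least-squares loss keeps penalizing scores far from the target values, which is the motivation for the swap.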

References:
《TensorFlow技术解析与实战》

Paid consulting is welcome (150 yuan per hour). My WeChat: qingxingfengzi
