
How to Use Autoencoders in Python

Introduction

An autoencoder is an artificial neural network used to compress and then decompress input data in an unsupervised way. Both operations are lossy and data-specific. Data-specific means an autoencoder can only meaningfully compress data similar to what it was trained on; an autoencoder trained on images of dogs, for example, will perform poorly on images of cats. The network learns an encoded representation of the whole dataset, which can be used to reduce its dimensionality, and the reconstruction (decoding) side is learned at the same time.

Lossy means that the reconstructed image is usually not as sharp or as high-resolution as the original, and the difference grows as the loss increases. The figure below shows how an image is encoded and decoded with a particular loss factor.

An autoencoder is a special type of feed-forward neural network in which the output should resemble the input. We therefore need an encoding method, a loss function, and a decoding method, with the end goal of reproducing the input as faithfully as possible with minimal loss. The input passes through an encoder, a fully connected neural network that produces the code; the decoder, with a similar structure, reconstructs the input from that code. The whole network is trained with backpropagation, just like an ordinary ANN. In this article we will cover three types of autoencoder:

1. Simple autoencoder
2. Deep CNN autoencoder
3. Denoising autoencoder

For the implementations we will use the popular MNIST digits dataset.

1. Simple autoencoder

We first import all the necessary dependencies:

# import all the dependencies
from keras.models import Sequential  # used by the CNN models in sections 2 and 3
from keras.layers import Dense,Conv2D,MaxPooling2D,UpSampling2D
        from keras import Input, Model
        from keras.datasets import mnist
        import numpy as np
        import matplotlib.pyplot as plt
Then we build the model. We specify the number of dimensions the input will be compressed down to; the smaller this number, the stronger the compression.

encoding_dim = 15
        input_img = Input(shape=(784,))
        # encoded representation of input
        encoded = Dense(encoding_dim, activation='relu')(input_img)
        # decoded representation of code
        decoded = Dense(784, activation='sigmoid')(encoded)
        # Model which take input image and shows decoded images
        autoencoder = Model(input_img, decoded)
Then we build separate encoder and decoder models, so that we can easily inspect the encoded and the decoded images.

# This model shows encoded images
        encoder = Model(input_img, encoded)
        # Creating a decoder model
        encoded_input = Input(shape=(encoding_dim,))
        # last layer of the autoencoder model
        decoder_layer = autoencoder.layers[-1]
        # decoder model
        decoder = Model(encoded_input, decoder_layer(encoded_input))
Then we compile the model with the Adam optimizer and binary cross-entropy loss, which suits pixel values normalized to the [0, 1] range.

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
Then load the data:

(x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train = x_train.astype('float32') / 255.
        x_test = x_test.astype('float32') / 255.
        x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
        x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
        print(x_train.shape)
        print(x_test.shape)
Output:

(60000, 784)
        (10000, 784)
If you want to see what the data actually looks like, you can use the following line:

plt.imshow(x_train[0].reshape(28,28))
Output: (plot of the first training digit)

Then train the model:

autoencoder.fit(x_train, x_train,
                       epochs=15,
                       batch_size=256,
                       validation_data=(x_test, x_test))
Output:

Epoch 1/15
        235/235 [==============================] - 14s 5ms/step - loss: 0.4200 - val_loss: 0.2263
        Epoch 2/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.2129 - val_loss: 0.1830
        Epoch 3/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1799 - val_loss: 0.1656
        Epoch 4/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1632 - val_loss: 0.1537
        Epoch 5/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1533 - val_loss: 0.1481
        Epoch 6/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1488 - val_loss: 0.1447
        Epoch 7/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1457 - val_loss: 0.1424
        Epoch 8/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1434 - val_loss: 0.1405
        Epoch 9/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1415 - val_loss: 0.1388
        Epoch 10/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1398 - val_loss: 0.1374
        Epoch 11/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1386 - val_loss: 0.1360
        Epoch 12/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1373 - val_loss: 0.1350
        Epoch 13/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1362 - val_loss: 0.1341
        Epoch 14/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1355 - val_loss: 0.1334
        Epoch 15/15
        235/235 [==============================] - 1s 3ms/step - loss: 0.1348 - val_loss: 0.1328
After training, feed in the test images and plot the results with the following code:

encoded_img = encoder.predict(x_test)
        decoded_img = decoder.predict(encoded_img)
        plt.figure(figsize=(20, 4))
        for i in range(5):
           # Display original
           ax = plt.subplot(2, 5, i + 1)
           plt.imshow(x_test[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
           # Display reconstruction
           ax = plt.subplot(2, 5, i + 1 + 5)
           plt.imshow(decoded_img[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
        plt.show()
The plot shows the original images on the top row and their reconstructions on the bottom row.
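To quantify the reconstruction loss discussed earlier, you can also compare the reconstructions to the originals numerically. A minimal sketch, reusing the decoded_img array from the plotting step above:

# Mean squared error per test image between originals and reconstructions
mse = np.mean((x_test - decoded_img) ** 2, axis=1)
print('Average reconstruction MSE:', mse.mean())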

2. Deep CNN autoencoder

Since the inputs here are images, it makes more sense to use a convolutional neural network (CNN). The encoder will consist of a stack of Conv2D and max-pooling layers, while the decoder will consist of a stack of Conv2D and upsampling layers.

Code:

model = Sequential()
        # encoder network
        model.add(Conv2D(30, 3, activation= 'relu', padding='same', input_shape = (28,28,1)))
        model.add(MaxPooling2D(2, padding= 'same'))
        model.add(Conv2D(15, 3, activation= 'relu', padding='same'))
        model.add(MaxPooling2D(2, padding= 'same'))
        #decoder network
        model.add(Conv2D(15, 3, activation= 'relu', padding='same'))
        model.add(UpSampling2D(2))
        model.add(Conv2D(30, 3, activation= 'relu', padding='same'))
        model.add(UpSampling2D(2))
        model.add(Conv2D(1,3,activation='sigmoid', padding= 'same')) # output layer
        model.compile(optimizer= 'adam', loss = 'binary_crossentropy')
        model.summary()
Output:

Model: "sequential"
        _________________________________________________________________
        Layer (type)                 Output Shape              Param #  
        =================================================================
        conv2d (Conv2D)              (None, 28, 28, 30)        300      
        _________________________________________________________________
        max_pooling2d (MaxPooling2D) (None, 14, 14, 30)        0        
        _________________________________________________________________
        conv2d_1 (Conv2D)            (None, 14, 14, 15)        4065      
        _________________________________________________________________
        max_pooling2d_1 (MaxPooling2 (None, 7, 7, 15)          0        
        _________________________________________________________________
        conv2d_2 (Conv2D)            (None, 7, 7, 15)          2040      
        _________________________________________________________________
        up_sampling2d (UpSampling2D) (None, 14, 14, 15)        0        
        _________________________________________________________________
        conv2d_3 (Conv2D)            (None, 14, 14, 30)        4080      
        _________________________________________________________________
        up_sampling2d_1 (UpSampling2 (None, 28, 28, 30)        0        
        _________________________________________________________________
        conv2d_4 (Conv2D)            (None, 28, 28, 1)         271      
        =================================================================
        Total params: 10,756
        Trainable params: 10,756
        Non-trainable params: 0
        _________________________________________________________________
Now load the data, this time keeping the 28×28×1 image shape, and train the model. Note that the bottleneck after the second pooling layer holds 7×7×15 = 735 values, slightly smaller than the 784-pixel input:

(x_train, _), (x_test, _) = mnist.load_data()
        x_train = x_train.astype('float32') / 255.
        x_test = x_test.astype('float32') / 255.
        x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
        x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))
        model.fit(x_train, x_train,
                       epochs=15,
                       batch_size=128,
                       validation_data=(x_test, x_test))
Output:

Epoch 1/15
        469/469 [==============================] - 34s 8ms/step - loss: 0.2310 - val_loss: 0.0818
        Epoch 2/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0811 - val_loss: 0.0764
        Epoch 3/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0764 - val_loss: 0.0739
        Epoch 4/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0743 - val_loss: 0.0725
        Epoch 5/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0729 - val_loss: 0.0718
        Epoch 6/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0722 - val_loss: 0.0709
        Epoch 7/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0715 - val_loss: 0.0703
        Epoch 8/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0709 - val_loss: 0.0698
        Epoch 9/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0700 - val_loss: 0.0693
        Epoch 10/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0698 - val_loss: 0.0689
        Epoch 11/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0694 - val_loss: 0.0687
        Epoch 12/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0691 - val_loss: 0.0684
        Epoch 13/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0688 - val_loss: 0.0680
        Epoch 14/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0685 - val_loss: 0.0680
        Epoch 15/15
        469/469 [==============================] - 3s 7ms/step - loss: 0.0683 - val_loss: 0.0676
Now feed in the test images and plot the results:

pred = model.predict(x_test)
        plt.figure(figsize=(20, 4))
        for i in range(5):
           # Display original
           ax = plt.subplot(2, 5, i + 1)
           plt.imshow(x_test[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
           # Display reconstruction
           ax = plt.subplot(2, 5, i + 1 + 5)
           plt.imshow(pred[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
        plt.show()
Output: (original digits on the top row, CNN reconstructions on the bottom row)

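If you also want to inspect the compressed representation this CNN learns, you can slice an encoder out of the trained Sequential model. A minimal sketch, assuming the layer order defined above, where the second MaxPooling2D layer is the bottleneck:

# Encoder submodel: maps an input image to its 7x7x15 bottleneck code
encoder_cnn = Model(inputs=model.input, outputs=model.layers[3].output)
codes = encoder_cnn.predict(x_test)
print(codes.shape)  # expected: (10000, 7, 7, 15)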
3. Denoising autoencoder

Now we will see how the model handles noise in the images. Noise can mean blurring, shifted colors, or even white marks on an image; here we corrupt the digits with additive Gaussian noise and clip the result back to the [0, 1] range:

noise_factor = 0.7
        x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
        x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
        x_train_noisy = np.clip(x_train_noisy, 0., 1.)
        x_test_noisy = np.clip(x_test_noisy, 0., 1.)
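The noise is random, so the corrupted sets will differ between runs; if you want them to be reproducible, seed NumPy's generator before the np.random.normal calls above. A minimal sketch (the seed value is arbitrary):

np.random.seed(42)  # any fixed integer gives reproducible noise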
        Here is how the noisy images look right now.
        plt.figure(figsize=(20, 2))
        for i in range(1, 5 + 1):
           ax = plt.subplot(1, 5, i)
           plt.imshow(x_test_noisy[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
        plt.show()
Output: (five noisy test digits)

The digits are now barely recognizable. To give the autoencoder more capacity to cope, we widen the layers with more filters, then fit the model on the noisy inputs with the clean images as targets:

model = Sequential()
        # encoder network
        model.add(Conv2D(35, 3, activation= 'relu', padding='same', input_shape = (28,28,1)))
        model.add(MaxPooling2D(2, padding= 'same'))
        model.add(Conv2D(25, 3, activation= 'relu', padding='same'))
        model.add(MaxPooling2D(2, padding= 'same'))
        #decoder network
        model.add(Conv2D(25, 3, activation= 'relu', padding='same'))
        model.add(UpSampling2D(2))
        model.add(Conv2D(35, 3, activation= 'relu', padding='same'))
        model.add(UpSampling2D(2))
        model.add(Conv2D(1,3,activation='sigmoid', padding= 'same')) # output layer
        model.compile(optimizer= 'adam', loss = 'binary_crossentropy')
        model.fit(x_train_noisy, x_train,
                       epochs=15,
                       batch_size=128,
                       validation_data=(x_test_noisy, x_test))
Output:

Epoch 1/15
        469/469 [==============================] - 5s 9ms/step - loss: 0.2643 - val_loss: 0.1456
        Epoch 2/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1440 - val_loss: 0.1378
        Epoch 3/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1373 - val_loss: 0.1329
        Epoch 4/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1336 - val_loss: 0.1305
        Epoch 5/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1313 - val_loss: 0.1283
        Epoch 6/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1294 - val_loss: 0.1268
        Epoch 7/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1278 - val_loss: 0.1257
        Epoch 8/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1267 - val_loss: 0.1251
        Epoch 9/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1259 - val_loss: 0.1244
        Epoch 10/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1251 - val_loss: 0.1234
        Epoch 11/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1241 - val_loss: 0.1234
        Epoch 12/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1239 - val_loss: 0.1222
        Epoch 13/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1232 - val_loss: 0.1223
        Epoch 14/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1226 - val_loss: 0.1215
        Epoch 15/15
        469/469 [==============================] - 4s 8ms/step - loss: 0.1221 - val_loss: 0.1211
After training, feed in the noisy test images and plot the final results:

pred = model.predict(x_test_noisy)
        plt.figure(figsize=(20, 4))
        for i in range(5):
           # Display original
           ax = plt.subplot(2, 5, i + 1)
           plt.imshow(x_test_noisy[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
           # Display reconstruction
           ax = plt.subplot(2, 5, i + 1 + 5)
           plt.imshow(pred[i].reshape(28, 28))
           plt.gray()
           ax.get_xaxis().set_visible(False)
           ax.get_yaxis().set_visible(False)
        plt.show()
Output: (noisy digits on the top row, denoised reconstructions on the bottom row)

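Beyond visual inspection, you can also score the denoiser numerically, for example by evaluating its loss on the noisy test inputs against the clean targets. A minimal sketch using the trained model:

# Binary cross-entropy between denoised outputs and clean targets
score = model.evaluate(x_test_noisy, x_test, verbose=0)
print('Test reconstruction loss:', score)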
Endnote

We have seen how autoencoders are structured and worked through three types of them. Autoencoders have many uses, such as dimensionality reduction, image compression, and movie or song recommendation systems. Performance can be improved further by training for more epochs or by increasing the capacity of the network.
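As a concrete instance of the dimensionality-reduction use mentioned above, the 15-dimensional codes from the simple autoencoder in section 1 can feed a downstream classifier. A hedged sketch, assuming scikit-learn is available and reusing encoder, y_train, and y_test from section 1 (the images are flattened back to 784 values, since section 2 reshaped them):

from sklearn.linear_model import LogisticRegression
# Encode the digits into 15-D vectors with the section 1 encoder
codes_train = encoder.predict(x_train.reshape(len(x_train), 784))
codes_test = encoder.predict(x_test.reshape(len(x_test), 784))
# Train a simple classifier on the learned codes
clf = LogisticRegression(max_iter=1000)
clf.fit(codes_train, y_train)
print('Classifier accuracy on 15-D codes:', clf.score(codes_test, y_test))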
