How to Use a Convolutional Neural Network to Detect Recaptured Images

OSC开源社区 · Source: OSCHINA community · 2023-04-04 10:00

Abstract: This project describes how to use a convolutional neural network to detect recaptured images, mainly images exhibiting moiré patterns. Its main novelty lies in the network structure: the high- and low-frequency information of the image is processed separately.

1. Introduction

When the spatial frequency of the sensor's pixels is close to the spatial frequency of stripes in the image, a new wavy interference pattern, the so-called moiré pattern, may appear. The grid-like texture of the sensor itself forms one such pattern. The effect also produces visible interference in the image when fine striped structures in the scene cross the sensor grid at a small angle. The phenomenon is very common with fine, dense textures, for example fabrics in fashion photography. Moiré patterns may show up in luminance or in color; here, however, we only deal with the moiré patterns produced in images during recapture.

Recapturing means grabbing a picture from a computer screen, or photographing a picture displayed on a screen; this process introduces moiré patterns into the image.

The paper's main processing pipeline:

Apply a Haar transform to the original image to obtain four downsampled feature maps (the 2x-downsampled approximation cA, the horizontal high-frequency band cH, the vertical high-frequency band cV, and the diagonal high-frequency band cD).

Feed each of the four downsampled feature maps through its own independent CNN (convolution and pooling) to extract feature information.

The paper then compares the convolved-and-pooled results of the three high-frequency bands channel by channel and pixel by pixel, taking the max.

Combine the result of the previous step with the convolved-and-pooled cA by a product (the paper calls this a Cartesian product, but the reference code implements an element-wise multiply). A minimal sketch of the whole pipeline follows.
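The sketch below illustrates these steps with PyWavelets and NumPy on a random single-channel array; the array, its size, and the omission of the per-band CNNs are simplifications for illustration only:

import numpy as np
import pywt

# Placeholder single-channel image; the real project feeds RGB photos
# through per-band CNNs before the fusion steps shown here.
img = np.random.rand(256, 256).astype(np.float32)

# One level of 2-D Haar DWT: cA is the low-frequency approximation,
# cH/cV/cD are the horizontal/vertical/diagonal high-frequency bands.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# Pixel-wise maximum over the three high-frequency bands ...
hf_max = np.maximum(np.maximum(cH, cV), cD)

# ... fused with the low-frequency band by an element-wise product.
fused = cA * hf_max
print(fused.shape)  # (128, 128)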

2. Reproducing the Network Structure

As shown in the figure below, this project reproduces the paper's moiré-detection method but modifies the data-processing part; the network structure also follows the reference source code, producing four downsampled feature maps from the image rather than the three described in the paper. The exact processing can be read off the network structure below.

(Figure: network structure of the four-branch CNN)

import math
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.nn import Linear, Dropout, ReLU
from paddle.nn import Conv2D, MaxPool2D

class mcnn(nn.Layer):
    def __init__(self, num_classes=1000):
        super(mcnn, self).__init__()
        self.num_classes = num_classes
        # One independent 7x7 conv per wavelet band (LL, LH, HL, HH).
        self._conv1_LL = Conv2D(3, 32, 7, stride=2, padding=1)
        self._conv1_LH = Conv2D(3, 32, 7, stride=2, padding=1)
        self._conv1_HL = Conv2D(3, 32, 7, stride=2, padding=1)
        self._conv1_HH = Conv2D(3, 32, 7, stride=2, padding=1)
        self.pool_1_LL = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        self.pool_1_LH = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        self.pool_1_HL = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        self.pool_1_HH = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        # Shared trunk applied after the four bands are fused.
        self._conv2 = Conv2D(32, 16, 3, stride=2, padding=1)
        self.pool_2 = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        self.dropout2 = Dropout(p=0.5)
        self._conv3 = Conv2D(16, 32, 3, stride=2, padding=1)
        self.pool_3 = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        self._conv4 = Conv2D(32, 32, 3, stride=2, padding=1)
        self.pool_4 = nn.MaxPool2D(kernel_size=2, stride=2, padding=0)
        self.dropout4 = Dropout(p=0.5)
        self._fc1 = Linear(in_features=64, out_features=num_classes)
        self.dropout5 = Dropout(p=0.5)
        # Note: in_features=2 only matches _fc1's output when
        # num_classes == 2, which is how the model is used below.
        self._fc2 = Linear(in_features=2, out_features=num_classes)

    def forward(self, inputs1, inputs2, inputs3, inputs4):
        # Per-band convolution, ReLU, and pooling.
        x1_LL = F.relu(self._conv1_LL(inputs1))
        x1_LH = F.relu(self._conv1_LH(inputs2))
        x1_HL = F.relu(self._conv1_HL(inputs3))
        x1_HH = F.relu(self._conv1_HH(inputs4))
        pool_x1_LL = self.pool_1_LL(x1_LL)
        pool_x1_LH = self.pool_1_LH(x1_LH)
        pool_x1_HL = self.pool_1_HL(x1_HL)
        pool_x1_HH = self.pool_1_HH(x1_HH)
        # Element-wise max over the three high-frequency bands ('avg' in
        # the variable name is a leftover; it is a max), then an
        # element-wise product with the low-frequency band.
        temp = paddle.maximum(pool_x1_LH, pool_x1_HL)
        avg_LH_HL_HH = paddle.maximum(temp, pool_x1_HH)
        inp_merged = paddle.multiply(pool_x1_LL, avg_LH_HL_HH)
        x2 = self.dropout2(self.pool_2(F.relu(self._conv2(inp_merged))))
        x3 = self.pool_3(F.relu(self._conv3(x2)))
        x4 = self.dropout4(self.pool_4(F.relu(self._conv4(x3))))
        x4 = paddle.flatten(x4, start_axis=1, stop_axis=-1)
        x5 = self.dropout5(self._fc1(x4))
        out = self._fc2(x5)
        return out

model_res = mcnn(num_classes=2)
paddle.summary(model_res, [(1, 3, 512, 384), (1, 3, 512, 384), (1, 3, 512, 384), (1, 3, 512, 384)])
---------------------------------------------------------------------------
 Layer (type)        Input Shape            Output Shape         Param #
===========================================================================
   Conv2D-1       [[1, 3, 512, 384]]     [1, 32, 254, 190]        4,736
   Conv2D-2       [[1, 3, 512, 384]]     [1, 32, 254, 190]        4,736
   Conv2D-3       [[1, 3, 512, 384]]     [1, 32, 254, 190]        4,736
   Conv2D-4       [[1, 3, 512, 384]]     [1, 32, 254, 190]        4,736
  MaxPool2D-1     [[1, 32, 254, 190]]    [1, 32, 127, 95]             0
  MaxPool2D-2     [[1, 32, 254, 190]]    [1, 32, 127, 95]             0
  MaxPool2D-3     [[1, 32, 254, 190]]    [1, 32, 127, 95]             0
  MaxPool2D-4     [[1, 32, 254, 190]]    [1, 32, 127, 95]             0
   Conv2D-5       [[1, 32, 127, 95]]     [1, 16, 64, 48]          4,624
  MaxPool2D-5     [[1, 16, 64, 48]]      [1, 16, 32, 24]              0
   Dropout-1      [[1, 16, 32, 24]]      [1, 16, 32, 24]              0
   Conv2D-6       [[1, 16, 32, 24]]      [1, 32, 16, 12]          4,640
  MaxPool2D-6     [[1, 32, 16, 12]]      [1, 32, 8, 6]                0
   Conv2D-7       [[1, 32, 8, 6]]        [1, 32, 4, 3]            9,248
  MaxPool2D-7     [[1, 32, 4, 3]]        [1, 32, 2, 1]                0
   Dropout-2      [[1, 32, 2, 1]]        [1, 32, 2, 1]                0
   Linear-1       [[1, 64]]              [1, 2]                     130
   Dropout-3      [[1, 2]]               [1, 2]                       0
   Linear-2       [[1, 2]]               [1, 2]                       6
===========================================================================
Total params: 37,592
Trainable params: 37,592
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 9.00
Forward/backward pass size (MB): 59.54
Params size (MB): 0.14
Estimated Total Size (MB): 68.68
---------------------------------------------------------------------------
{'total_params': 37592, 'trainable_params': 37592}
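As a sanity check on the summary: each of the four first-stage 7×7 convolutions has 3 × 32 × 7 × 7 + 32 = 4,736 parameters, and the two linear layers contribute 64 × 2 + 2 = 130 and 2 × 2 + 2 = 6; together with the three 3×3 convolutions (4,624 + 4,640 + 9,248) this accounts for the 37,592 total.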

3. Data Preprocessing

Unlike the source code, this project integrates the wavelet decomposition of the image into the data-loading step, i.e. the decomposition is performed online, rather than offline with the decomposed images saved to disk as in the source code. First, define the wavelet decomposition functions.

!pip install PyWavelets
import numpy as np
import pywt  # installed above; the Haar transform below is implemented by hand

def splitFreqBands(img, levRows, levCols):
    """Split a transformed image into its four frequency-band quadrants."""
    halfRow = int(levRows / 2)
    halfCol = int(levCols / 2)
    LL = img[0:halfRow, 0:halfCol]
    LH = img[0:halfRow, halfCol:levCols]
    HL = img[halfRow:levRows, 0:halfCol]
    HH = img[halfRow:levRows, halfCol:levCols]
    return LL, LH, HL, HH

def haarDWT1D(data, length):
    """In-place 1-D Haar transform: averages in the first half, differences in the second."""
    avg0, avg1 = 0.5, 0.5
    dif0, dif1 = 0.5, -0.5
    temp = np.empty_like(data)
    # Note: uint8 wraps around for the (possibly negative) difference
    # coefficients; the reference code accepts this rather than using float.
    temp = temp.astype(np.uint8)
    h = int(length / 2)
    for i in range(h):
        k = i * 2
        temp[i] = data[k] * avg0 + data[k + 1] * avg1
        temp[i + h] = data[k] * dif0 + data[k + 1] * dif1
    data[:] = temp

def fwdHaarDWT2D(img):
    """One level of 2-D Haar DWT: transform every row, then every column."""
    img = np.array(img)
    levRows = img.shape[0]
    levCols = img.shape[1]
    img = img.astype(np.uint8)
    for i in range(levRows):
        row = img[i, :]
        haarDWT1D(row, levCols)
        img[i, :] = row
    for j in range(levCols):
        col = img[:, j]
        haarDWT1D(col, levRows)
        img[:, j] = col
    return splitFreqBands(img, levRows, levCols)
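A quick check of the function on a synthetic grayscale array (the array is a placeholder; in the dataset below the same function is applied to RGB PIL images, where each row element is a 3-channel pixel and the arithmetic applies channel-wise):

demo = (np.random.rand(64, 64) * 255).astype(np.uint8)
LL, LH, HL, HH = fwdHaarDWT2D(demo)
print(LL.shape, LH.shape, HL.shape, HH.shape)  # (32, 32) for each band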
!cd "data/data188843/" && unzip -q 'total_images.zip'
import os

recapture_keys = ['ValidationMoire']
original_keys = ['ValidationClear']

def get_image_label_from_folder_name(folder_name):
    """Map a folder name to a label: 'original', 'recapture', or 'unclear'."""
    for key in original_keys:
        if key in folder_name:
            return 'original'
    for key in recapture_keys:
        if key in folder_name:
            return 'recapture'
    return 'unclear'

label_name2label_id = {
    'original': 0,
    'recapture': 1,
}

src_image_dir = "data/data188843/total_images"
dst_file = "data/data188843/total_images/train.txt"

image_folder = [file for file in os.listdir(src_image_dir)]
print(image_folder)
image_anno_list = []
for folder in image_folder:
    label_name = get_image_label_from_folder_name(folder)
    # A folder mapping to 'unclear' would raise a KeyError here; every
    # folder in this dataset matches one of the two keys above.
    label_id = label_name2label_id[label_name]
    folder_path = os.path.join(src_image_dir, folder)
    image_file_list = [file for file in os.listdir(folder_path) if
                       file.endswith('.jpg') or file.endswith('.jpeg') or
                       file.endswith('.JPG') or file.endswith('.JPEG') or
                       file.endswith('.png')]
    for image_file in image_file_list:
        image_anno_list.append(folder + "/" + image_file + "\t" + str(label_id) + "\n")

dst_path = os.path.dirname(src_image_dir)
if not os.path.exists(dst_path):
    os.makedirs(dst_path)
with open(dst_file, 'w') as fd:
    fd.writelines(image_anno_list)
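Each line of the resulting train.txt holds a folder-relative image path and a label id separated by a tab, for example (the file names here are made up):

ValidationMoire/IMG_0001.jpg	1
ValidationClear/IMG_0002.jpg	0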
import paddle
import numpy as np
import pandas as pd
import PIL.Image as Image
from paddle.vision import transforms
# from haar2D import fwdHaarDWT2D  # already defined above
paddle.disable_static()

# Define the data preprocessing pipeline.
data_transforms = transforms.Compose([
    transforms.Resize(size=(448, 448)),
    transforms.ToTensor(),  # HWC -> CHW transpose plus division by 255
    # transforms.Normalize(   # subtract mean, divide by std:
    #     mean=[0.31169346, 0.25506335, 0.12432463],
    #     std=[0.34042713, 0.29819837, 0.1375536])
    # computed as: output[channel] = (input[channel] - mean[channel]) / std[channel]
])
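A quick check of what the pipeline emits (the uint8 HWC array is a placeholder standing in for one wavelet band of an RGB image):

dummy = (np.random.rand(224, 224, 3) * 255).astype('uint8')
t = data_transforms(dummy)
print(t.shape, t.dtype)  # [3, 448, 448] paddle.float32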
# Build the Dataset
class MyDataset(paddle.io.Dataset):
    """
    Step 1: subclass paddle.io.Dataset.
    """
    def __init__(self, train_img_list, val_img_list, train_label_list, val_label_list, mode='train'):
        """
        Step 2: implement the constructor, define how data is read,
        and split into training and test sets.
        """
        super(MyDataset, self).__init__()
        self.img = []
        self.label = []
        self.train_images = train_img_list
        self.test_images = val_img_list
        self.train_label = train_label_list
        self.test_label = val_label_list
        if mode == 'train':
            # read the training images
            for img, la in zip(self.train_images, self.train_label):
                self.img.append('/home/aistudio/data/data188843/total_images/' + img)
                self.label.append(paddle.to_tensor(int(la), dtype='int64'))
        else:
            # read the test images
            for img, la in zip(self.test_images, self.test_label):
                self.img.append('/home/aistudio/data/data188843/total_images/' + img)
                self.label.append(paddle.to_tensor(int(la), dtype='int64'))

    def load_img(self, image_path):
        # Read the image with Pillow; the wavelet decomposition happens
        # later, online, in __getitem__.
        image = Image.open(image_path).convert('RGB')
        return image

    def __getitem__(self, index):
        """
        Step 3: implement __getitem__: given an index, return one sample
        (the training data and its label).
        """
        image = self.load_img(self.img[index])
        # Online 1-level Haar decomposition into the four bands.
        LL, LH, HL, HH = fwdHaarDWT2D(image)
        label = self.label[index]
        LL = data_transforms(LL)
        LH = data_transforms(LH)
        HL = data_transforms(HL)
        HH = data_transforms(HH)
        return LL, LH, HL, HH, np.array(label, dtype='int64')

    def __len__(self):
        """
        Step 4: implement __len__, returning the total size of the dataset.
        """
        return len(self.img)
image_file_txt = '/home/aistudio/data/data188843/total_images/train.txt'
with open(image_file_txt) as fd:
    lines = fd.readlines()
train_img_list = list()
train_label_list = list()
for line in lines:
    split_list = line.strip().split()
    image_name, label_id = split_list
    train_img_list.append(image_name)
    train_label_list.append(label_id)

# Try out the dataset we defined
train_dataset = MyDataset(mode='train', train_label_list=train_label_list, train_img_list=train_img_list,
                          val_img_list=train_img_list, val_label_list=train_label_list)
# Build the training data loader
train_loader = paddle.io.DataLoader(train_dataset, batch_size=2, shuffle=True)
# Build the validation data loader (note: it reuses the training set here)
valid_loader = paddle.io.DataLoader(train_dataset, batch_size=2, shuffle=True)
print('=============train dataset=============')
for LL, LH, HL, HH, label in train_dataset:
    print('label: {}'.format(label))
    break

4. Model Training

model2 = paddle.Model(model_res)
model2.prepare(optimizer=paddle.optimizer.Adam(parameters=model2.parameters()),
               loss=nn.CrossEntropyLoss(),
               metrics=paddle.metric.Accuracy())
model2.fit(train_loader,
           valid_loader,
           epochs=5,
           verbose=1)
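After fit returns, the same high-level API can score the model. A minimal follow-up, assuming training completed (and remembering that valid_loader reuses the training set here, so the accuracy is optimistic):

eval_result = model2.evaluate(valid_loader, verbose=1)
print(eval_result)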

Summary

This project showed how to use a convolutional neural network to detect recaptured images, mainly images exhibiting moiré patterns; its main novelty is a network structure that processes the high- and low-frequency information of the image separately. In this project the CNN was trained with only a 1-level wavelet decomposition; the effect of multi-level wavelet decomposition on network accuracy remains to be explored (a sketch follows). The CNN model could also be trained with more and harder examples and with deeper networks.
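For reference, a multi-level decomposition is a small change with PyWavelets; this sketch only shows the decomposition, and feeding the extra bands into the model would require widening its inputs:

import numpy as np
import pywt

# Placeholder image; the shapes below assume a 256x256 input.
img = np.random.rand(256, 256).astype(np.float32)

# Two-level Haar decomposition:
# coeffs = [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(img, 'haar', level=2)
print(cA2.shape, cH2.shape, cH1.shape)  # (64, 64) (64, 64) (128, 128)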


Original title: 使用卷积神经网络实现图片去摩尔纹 (Image demoiréing with a convolutional neural network)

Source: WeChat official account OSC开源社区 (OSC Open Source Community)
