
The World's Simplest Face Recognition Library: face_recognition

Source: CSDN | Date: 2023-02-17 09:16:49

Table of Contents



Preface
1. face_recognition
  1.1 Installation
  1.2 Detecting face locations
  1.3 Recognizing faces
2. PaddleDetection
  2.1 Installation
  2.2 Running
3. DeepFace
  3.1 Installation
  3.2 Detecting face locations
  3.3 Facial attribute analysis
4. insightface
  4.1 Installation
  4.2 Running
5. SeetaFaceEngine
  5.1 Building
  5.2 Face detection
  5.3 Face alignment
  5.4 Face similarity
6. OpenFace
  6.1 Installation
  6.2 Running
References

Preface

Face recognition is one of the hottest areas of machine learning, and many projects on GitHub implement various face recognition features. The following six are tested below:

https://github.com/ageitgey/face_recognition
https://github.com/PaddlePaddle/PaddleDetection
https://github.com/serengil/deepface
https://github.com/deepinsight/insightface
https://github.com/seetaface/SeetaFaceEngine
https://github.com/TadasBaltrusaitis/OpenFace

1. face_recognition

face_recognition bills itself as the world's simplest face recognition library; it lets you detect, recognize, and manipulate faces from Python or from the command line.

face_recognition's recognition is built on the deep learning models of dlib, a leading open-source C++ library. Evaluated on the Labeled Faces in the Wild benchmark, it reaches 99.38% accuracy, although accuracy on children and on Asian faces still leaves room for improvement.

1.1 Installation

              pip install face_recognition

1.2 Detecting face locations

Faces can be detected with the face_detection command-line tool. The example below uses a photo of 胡歌 (huge.jpg):

face_detection faces/huge.jpg
# Output: faces/huge.jpg,101,221,173,149
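In that output the four numbers are the top, right, bottom, and left edges of the detected face box, in that order (the same order face_recognition.face_locations uses). As a minimal sketch, the same coordinates can be used to crop the face out of the example photo above:

import face_recognition
from PIL import Image

# face_locations returns (top, right, bottom, left) tuples, matching the CLI output
image = face_recognition.load_image_file("faces/huge.jpg")
for (top, right, bottom, left) in face_recognition.face_locations(image):
    face = Image.fromarray(image[top:bottom, left:right])  # crop via numpy slicing
    face.save("huge_crop.jpg")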

The command line only prints the raw coordinates, which are hard to verify by eye, so Python can be used to draw the face boxes on the image instead:

import face_recognition
from PIL import Image, ImageDraw

image = face_recognition.load_image_file("huge.jpg")
face_locations = face_recognition.face_locations(image)

pil_image = Image.fromarray(image)
draw = ImageDraw.Draw(pil_image)
for (top, right, bottom, left) in face_locations:
    draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
del draw
pil_image.save("huge_face.jpg")

Multiple face detection

Original photo

Photo with the face locations marked

1.3 Recognizing faces

face_recognition can not only locate faces in a photo, it can also identify who each face belongs to.
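Under the hood each face is reduced to a 128-dimensional encoding and encodings are compared by Euclidean distance. A minimal sketch with two hypothetical single-face photos a.jpg and b.jpg, before the full multi-face example below:

import face_recognition

# Compute one 128-d encoding per photo (each photo is assumed to contain exactly one face)
enc_a = face_recognition.face_encodings(face_recognition.load_image_file("a.jpg"))[0]
enc_b = face_recognition.face_encodings(face_recognition.load_image_file("b.jpg"))[0]

# compare_faces returns [True/False]; face_distance returns the underlying distance
print(face_recognition.compare_faces([enc_a], enc_b, tolerance=0.5))
print(face_recognition.face_distance([enc_a], enc_b))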

Put the photos ['劉詩詩.jpg', '唐嫣.jpg', '楊冪.jpg', '胡歌.jpg', '霍建華.jpg', '黃志瑋.jpg'] into one folder (here, a folder named known), put the 仙劍三 poster all.jpg next to the script, and start recognizing faces.

The six test photos were all found online:

劉詩詩, 唐嫣, 楊冪, 胡歌, 霍建華, 黃志瑋

import face_recognition
import os
from PIL import Image, ImageDraw, ImageFont
import numpy as np

font = ImageFont.truetype("C:\\Windows\\Fonts\\simsun.ttc", 40, encoding="utf-8")

known_path = "../known"
known_face_names = []
known_face_encodings = []
images = os.listdir(known_path)
print(images)
for image in images:
    if image.endswith("jpg"):
        known_face_names.append(os.path.basename(image).split(".")[0])
        image_data = face_recognition.load_image_file(os.path.join(known_path, image))
        known_face_encodings.append(face_recognition.face_encodings(image_data)[0])

all_face_path = "all.jpg"
all_image = face_recognition.load_image_file(all_face_path)
all_face_locations = face_recognition.face_locations(all_image)
all_face_encodings = face_recognition.face_encodings(all_image, all_face_locations)

pil_image = Image.fromarray(all_image)
draw = ImageDraw.Draw(pil_image)
for (top, right, bottom, left), face_encoding in zip(all_face_locations, all_face_encodings):
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding, tolerance=0.5)
    name = "未知"
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    best_match_index = np.argmin(face_distances)
    if matches[best_match_index]:
        name = known_face_names[best_match_index]
    draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
    text_width, text_height = draw.textsize(name, font=font)
    draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255), font=font)
del draw
pil_image.save("all_faces.jpg")
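One caveat: ImageDraw.textsize was removed in Pillow 10, so the script above fails on recent Pillow releases. A commonly used drop-in replacement for that one line (an adaptation, not part of the original article) is textbbox:

# Replacement for: text_width, text_height = draw.textsize(name, font=font)
l, t, r, b = draw.textbbox((0, 0), name, font=font)
text_width, text_height = r - l, b - t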

2. PaddleDetection

PaddleDetection is an end-to-end object detection toolkit built on PaddlePaddle. It ships 30+ model algorithms and 250+ pre-trained models covering object detection, instance segmentation, tracking, and keypoint detection, including high-accuracy and lightweight industrial SOTA models for server and mobile, competition-winning solutions, and cutting-edge academic algorithms. It also provides configurable network modules, more than ten data augmentation strategies and loss functions, and multiple deployment options. By covering the whole pipeline from data processing through model development, training, compression, and deployment, together with rich examples and tutorials, it aims to speed up putting detection algorithms into production.

2.1 Installation

Download the source code and install it following the README; note that the source version you download must match your installed paddlepaddle version.

During installation, building cython-bbox failed. Workaround: see "Failed to install cython-bbox on Windows" (linked in the references); download: cython-bbox 0.1.3.

2.2 Running

PaddleDetection includes an efficient, fast face detection solution with both state-of-the-art and classic models.

              python tools/infer.py -c configs/face_detection/blazeface_1000e.yml -o weights=https://paddledet.bj.bcebos.com/models/blazeface_1000e.pdparams --infer_img=C:\Users\supre\Desktop\faces\all.jpg --output_dir=infer_output/ --draw_threshold=0.6

3. DeepFace

DeepFace is a lightweight framework for face recognition and facial attribute analysis (age, gender, emotion, and race). It is a hybrid face recognition framework that wraps state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib and SFace.

Experiments show that humans reach 97.53% accuracy on face recognition tasks, and these models have reached and surpassed that level.

3.1 Installation

              pip install deepface

3.2 Detecting face locations

from deepface import DeepFace
from deepface.detectors import FaceDetector
import cv2

img_path = "C:\\Users\\supre\\Desktop\\faces\\all.jpg"
detector_name = "opencv"

img = cv2.imread(img_path)
detector = FaceDetector.build_model(detector_name)  # set opencv, ssd, dlib, mtcnn or retinaface
obj = FaceDetector.detect_faces(detector, detector_name, img)

faces = []
regions = []
for o in obj:
    face, region = o
    faces.append(face)
    regions.append(region)

for (x, y, w, h) in regions:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("all_deep_face.jpg", img)
cv2.imshow("faces", img)
cv2.waitKey(0)
print("there are ", len(obj), " faces")
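Note that deepface.detectors.FaceDetector is an internal module whose interface has changed between releases; in newer deepface versions the documented entry point is DeepFace.extract_faces. A rough sketch (the exact return keys below are an assumption based on recent releases, and the image path is hypothetical):

from deepface import DeepFace

# Each returned item is a dict; "facial_area" holds x, y, w, h of the detected face
faces = DeepFace.extract_faces(img_path="all.jpg", detector_backend="opencv",
                               enforce_detection=False)
for f in faces:
    print(f["facial_area"])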

3.3 Facial attribute analysis

Running the code below downloads the pre-trained model files from GitHub; if the download is too slow, the weights can be fetched manually from https://github.com/serengil/deepface_models/releases/

from deepface import DeepFace

obj = DeepFace.analyze(img_path="faces/huge.jpg",
                       actions=["age", "gender", "race", "emotion"])
print(obj)

Output:

              {"age": 31, "region": {"x": 141, "y": 90, "w": 92, "h": 92}, "gender": "Man", "race": {"asian": 86.62416855240873, "indian": 0.2717677898641103, "black": 0.025535856615095234, "white": 11.001530200334203, "middle eastern": 0.36970814565319693, "latino hispanic": 1.707288910883004}, "dominant_race": "asian", "emotion": {"angry": 4.005255788877951, "disgust": 1.1836746688898558e-05, "fear": 91.75890038960578, "happy": 1.023393651002267, "sad": 0.9277909615809299, "surprise": 2.081933555420253, "neutral": 0.20271948350039026}, "dominant_emotion": "fear"}

4. insightface

insightface is an open-source 2D and 3D deep face analysis toolbox, mainly based on PyTorch and MXNet. It implements many face recognition, face detection, and face alignment algorithms, optimized for both training and deployment.

4.1 Installation

              pip install insightface

4.2 Running

Running it raised an error: TypeError: __init__() got an unexpected keyword argument "provider_options"

According to "Error 'got an unexpected keyword argument provider_options' when running quick example of insightface", this is caused by an outdated onnxruntime; update it:

              pip install onnxruntime==1.6.0

import cv2
import numpy as np
import insightface
from insightface.app import FaceAnalysis
from insightface.data import get_image as ins_get_image

app = FaceAnalysis(providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))
img = ins_get_image("C:\\Users\\supre\\Desktop\\faces\\all")
faces = app.get(img)
rimg = app.draw_on(img, faces)
cv2.imwrite("./all_output.jpg", rimg)
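Each element returned by app.get is a Face object carrying at least a bounding box and a detection score. A short sketch that prints them (the image path is hypothetical, and the default model pack is downloaded on first use):

import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("all.jpg")  # hypothetical local path
for face in app.get(img):
    # bbox is [x1, y1, x2, y2]; det_score is the detector's confidence
    print(face.bbox.astype(int), float(face.det_score))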

5. SeetaFaceEngine

SeetaFaceEngine is an open-source C++ face recognition engine developed by the face recognition research group led by researcher 山世光 at the Institute of Computing Technology, Chinese Academy of Sciences. It is implemented in C++, does not depend on any third-party libraries, is released under the BSD-2 license for free academic and commercial use, and runs on the CPU. It contains three key parts, face detection, face alignment, and face identification, which together are necessary and sufficient for building a real face recognition system.

5.1 Building

SeetaFaceEngine consists of three parts, so it must be built three times with cmake; see the README for the build steps.

5.2 Face detection

              #include#include#include#include#include "opencv2/highgui/highgui.hpp"#include "opencv2/imgproc/imgproc.hpp"#include "face_detection.h"using namespace std;int main(int argc, char** argv) {  const char* img_path = "C:\\Users\\supre\\Desktop\\faces\\all.jpg";  seeta::FaceDetection detector("E:\\tmp\\SeetaFaceEngine-master\\FaceDetection\\model\\seeta_fd_frontal_v1.0.bin");  detector.SetMinFaceSize(40);  detector.SetScoreThresh(2.f);  detector.SetImagePyramidScaleFactor(0.8f);  detector.SetWindowStep(4, 4);  cv::Mat img = cv::imread(img_path, cv::IMREAD_UNCHANGED);  cv::Mat img_gray;  if (img.channels() != 1)    cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);  else    img_gray = img;  seeta::ImageData img_data;  img_data.data = img_gray.data;  img_data.width = img_gray.cols;  img_data.height = img_gray.rows;  img_data.num_channels = 1;  long t0 = cv::getTickCount();  std::vectorfaces = detector.Detect(img_data);  long t1 = cv::getTickCount();  double secs = (t1 - t0)/cv::getTickFrequency();  cout << "Detections takes " << secs << " seconds " << endl;  cout << "Image size (wxh): " << img_data.width << "x"      << img_data.height << endl;  cv::Rect face_rect;  int32_t num_face = static_cast(faces.size());  for (int32_t i = 0; i < num_face; i++) {    face_rect.x = faces[i].bbox.x;    face_rect.y = faces[i].bbox.y;    face_rect.width = faces[i].bbox.width;    face_rect.height = faces[i].bbox.height;    cv::rectangle(img, face_rect, CV_RGB(0, 0, 255), 4, 8, 0);  }  cv::namedWindow("Test", cv::WINDOW_AUTOSIZE);  cv::imwrite("all_1.jpg", img);  cv::imshow("Test", img);  cv::waitKey(0);  cv::destroyAllWindows();}

5.3 Face alignment

Face alignment means training a model on a set of face images with their corresponding landmarks, so that given a face photo in an arbitrary pose, the model can mark the key facial points in it.

              #include#include#include#include#include "cv.h"#include "highgui.h"#include "opencv2/highgui/highgui.hpp"#include "opencv2/imgproc/imgproc.hpp"#include "face_detection.h"#include "face_alignment.h"int main(int argc, char** argv){// Initialize face detection model  std::string MODEL_DIR = "E:\\tmp\\SeetaFaceEngine-master\\FaceAlignment\\model\\";  std::string DATA_DIR = "E:\\tmp\\SeetaFaceEngine-master\\FaceAlignment\\data\\";  std::string IMG_PATH = DATA_DIR + "all.jpg";  int pts_num = 5;  seeta::FaceDetection detector("E:\\tmp\\SeetaFaceEngine-master\\FaceDetection\\model\\seeta_fd_frontal_v1.0.bin");  detector.SetMinFaceSize(40);  detector.SetScoreThresh(2.f);  detector.SetImagePyramidScaleFactor(0.8f);  detector.SetWindowStep(4, 4);  // Initialize face alignment model  seeta::FaceAlignment point_detector((MODEL_DIR + "seeta_fa_v1.1.bin").c_str());  //load image  cv::Mat img = cv::imread(IMG_PATH, cv::IMREAD_UNCHANGED);  cv::Mat img_gray;  if (img.channels() != 1)    cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);  else    img_gray = img;  seeta::ImageData img_data;  img_data.data = img_gray.data;  img_data.width = img_gray.cols;  img_data.height = img_gray.rows;  img_data.num_channels = 1;  std::vectorfaces = detector.Detect(img_data);  int32_t face_num = static_cast(faces.size());  std::cout<<"face_num:"<<FACE_NUM; if="" (face_num="=" 0)="" {return 0;  }  cv::Rect face_rect;  for (int32_t i = 0; i < face_num; i++) {face_rect.x = faces[i].bbox.x;    face_rect.y = faces[i].bbox.y;    face_rect.width = faces[i].bbox.width;    face_rect.height = faces[i].bbox.height;    cv::rectangle(img, face_rect, CV_RGB(0, 0, 255), 4, 8, 0);    // Detect 5 facial landmarks   seeta::FacialLandmark points[5];   point_detector.PointDetectLandmarks(img_data, faces[i], points);   for (int i = 0; i< pts_num; i++)   {cv::circle(img, cvPoint(points[i].x, points[i].y), 2, CV_RGB(0, 255, 0), CV_FILLED);   }  }  cv::namedWindow("Test", cv::WINDOW_AUTOSIZE);  cv::imwrite("test.jpg", img);  cv::imshow("Test", img);  cv::waitKey(0);  cv::destroyAllWindows();  return 0;}

5.4 Face similarity

              #include#include#include "opencv2/highgui/highgui.hpp"#include "opencv2/imgproc/imgproc.hpp"#include "face_identification.h"#include "recognizer.h"#include "face_detection.h"#include "face_alignment.h"#include "math_functions.h"#include#include#include#includeusing namespace seeta;using namespace std;std::string DATA_DIR = "E:\\tmp\\SeetaFaceEngine-master\\FaceIdentification\\data\\";std::string MODEL_DIR = "E:\\tmp\\SeetaFaceEngine-master\\FaceIdentification\\model\\";int main(int argc, char* argv[]) {// Initialize face detection model  seeta::FaceDetection detector("E:\\tmp\\SeetaFaceEngine-master\\FaceDetection\\model\\seeta_fd_frontal_v1.0.bin");  detector.SetMinFaceSize(40);  detector.SetScoreThresh(2.f);  detector.SetImagePyramidScaleFactor(0.8f);  detector.SetWindowStep(4, 4);  // Initialize face alignment model  seeta::FaceAlignment point_detector("E:\\tmp\\SeetaFaceEngine-master\\FaceAlignment\\model\\seeta_fa_v1.1.bin");  // Initialize face Identification model  FaceIdentification face_recognizer((MODEL_DIR + "seeta_fr_v1.0.bin").c_str());  std::string test_dir = DATA_DIR + "test_face_recognizer/";  //load image  cv::Mat gallery_img_color = cv::imread(test_dir + "images/liushishi_1.jpg", 1);  cv::Mat gallery_img_gray;  cv::cvtColor(gallery_img_color, gallery_img_gray, CV_BGR2GRAY);  cv::Mat probe_img_color = cv::imread(test_dir + "images/liushishi_2.jpg", 1);  cv::Mat probe_img_gray;  cv::cvtColor(probe_img_color, probe_img_gray, CV_BGR2GRAY);  ImageData gallery_img_data_color(gallery_img_color.cols, gallery_img_color.rows, gallery_img_color.channels());  gallery_img_data_color.data = gallery_img_color.data;  ImageData gallery_img_data_gray(gallery_img_gray.cols, gallery_img_gray.rows, gallery_img_gray.channels());  gallery_img_data_gray.data = gallery_img_gray.data;  ImageData probe_img_data_color(probe_img_color.cols, probe_img_color.rows, probe_img_color.channels());  probe_img_data_color.data = probe_img_color.data;  ImageData probe_img_data_gray(probe_img_gray.cols, probe_img_gray.rows, probe_img_gray.channels());  probe_img_data_gray.data = probe_img_gray.data;  // Detect faces  std::vectorgallery_faces = detector.Detect(gallery_img_data_gray);  int32_t gallery_face_num = static_cast(gallery_faces.size());  std::vectorprobe_faces = detector.Detect(probe_img_data_gray);  int32_t probe_face_num = static_cast(probe_faces.size());  if (gallery_face_num == 0 || probe_face_num==0)  {std::cout << "Faces are not detected.";    return 0;  }  // Detect 5 facial landmarks  seeta::FacialLandmark gallery_points[5];  point_detector.PointDetectLandmarks(gallery_img_data_gray, gallery_faces[0], gallery_points);  seeta::FacialLandmark probe_points[5];  point_detector.PointDetectLandmarks(probe_img_data_gray, probe_faces[0], probe_points);  for (int i = 0; i<5; i++)  {cv::circle(gallery_img_color, cv::Point(gallery_points[i].x, gallery_points[i].y), 2,      CV_RGB(0, 255, 0));    cv::circle(probe_img_color, cv::Point(probe_points[i].x, probe_points[i].y), 2,      CV_RGB(0, 255, 0));  }  cv::imwrite("gallery_point_result.jpg", gallery_img_color);  cv::imwrite("probe_point_result.jpg", probe_img_color);  // Extract face identity feature  float gallery_fea[2048];  float probe_fea[2048];  face_recognizer.ExtractFeatureWithCrop(gallery_img_data_color, gallery_points, gallery_fea);  face_recognizer.ExtractFeatureWithCrop(probe_img_data_color, probe_points, probe_fea);  // Caculate similarity of two faces  float sim = face_recognizer.CalcSimilarity(gallery_fea, probe_fea);  std::cout 
<< "相似率:"<<SIM <<endl;="" return="" 0;}

Comparing two photos of 劉詩詩 gives a similarity of 0.679915.

6. OpenFace

OpenFace is a tool intended for computer vision and machine learning researchers, the affective computing community, and anyone interested in building interactive applications based on facial behavior analysis. It is the first toolkit with available source code capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation, and it can be used both to run and to train models. The computer vision algorithms at the core of OpenFace show state-of-the-art results on all of these tasks. In addition, it runs in real time on non-specialized hardware such as a simple webcam.

6.1 Installation

Windows 32-bit: https://github.com/TadasBaltrusaitis/OpenFace/releases/download/OpenFace_2.2.0/OpenFace_2.2.0_win_x86.zip
Windows 64-bit: https://github.com/TadasBaltrusaitis/OpenFace/releases/download/OpenFace_2.2.0/OpenFace_2.2.0_win_x64.zip
Linux: https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation
Mac: https://github.com/TadasBaltrusaitis/OpenFace/wiki/Mac-Installation

6.2 Running

After installing the Windows build of OpenFace, you still need to download the model data from https://github.com/TadasBaltrusaitis/OpenFace/wiki/Model-download and place it under model\patch_experts in the installation directory.

OpenFace also provides several command-line tools for face analysis.

FaceLandmarkImg detects faces in images. Again using the 仙劍三 poster as an example: put it under samples, create an output folder out_dir, and run:

              FaceLandmarkImg.exe -f "samples/all.jpg" -out_dir "out_dir"

The output is:

FaceLandmarkVid: face detection and landmark tracking in a video

FaceLandmarkVidMulti: face detection across multiple videos

FeatureExtraction: analysis of a recording that contains a single face

References

Failed to install cython-bbox on Windows
