Encoding Facial Behavior in Videos for Identification Purposes - Physical Electronics Department Seminar

This seminar will count as an audit seminar for MSc students.

You are invited to attend a lecture on 3rd September 2023 at 10:00

Kitot Building, Room 011

Join Zoom Meeting

https://tau-ac-il.zoom.us/j/6690388255

Encoding Facial Behavior in Videos for Identification Purposes

By:

Mor-Avi Azulay

MSc student under the supervision of Prof. David Mendlovic and Dr. Dan Raviv

Abstract

Facial appearance-based methods have become the state-of-the-art approach for human re-identification. However, the captured appearance can change greatly under different environmental conditions or across cameras. Moreover, most methods base their identification on static data, i.e., one or more representations of single frames. Hence, dynamic data such as temporal changes, which convey meaningful identity-unique information in the form of facial motion patterns, are omitted.

The research conducted in this thesis focuses on a novel approach for extracting facial behavior characteristics from a video and encoding them into a lightweight representation we call a Motion-model. The Motion-model acts as a dictionary of the typical behavior for different expression-based states, or state transitions, within a high-dimensional embedding space, and can naturally be modeled as a graph. We further present MotionDNANet, a deep graph-based neural network with a dedicated architecture that learns a layer-specific graph adjacency matrix, used to process Motion-models for identification purposes.
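
As a rough illustration of the layer-specific learned adjacency idea, the sketch below implements a small graph network in PyTorch in which each layer owns its own trainable adjacency matrix over the Motion-model's state nodes. All names (GraphLayer, MotionModelNet), dimensions, and design choices here are hypothetical stand-ins; the abstract does not specify the actual MotionDNANet architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLayer(nn.Module):
    """One graph layer with its own learnable adjacency matrix (hypothetical)."""
    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        # Layer-specific adjacency, learned jointly with the layer weights.
        self.adj = nn.Parameter(torch.eye(num_nodes))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim). Row-normalize the adjacency so each
        # state node aggregates a weighted average of all nodes' features.
        a = F.softmax(self.adj, dim=-1)
        return F.relu(self.lin(a @ x))

class MotionModelNet(nn.Module):
    """Stack of graph layers pooled into a single identity embedding."""
    def __init__(self, num_nodes: int = 16, feat_dim: int = 128, emb_dim: int = 64):
        super().__init__()
        self.layers = nn.ModuleList([
            GraphLayer(num_nodes, feat_dim, feat_dim),
            GraphLayer(num_nodes, feat_dim, emb_dim),
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x.mean(dim=1)  # mean-pool state nodes into one embedding

# Usage: a batch of 4 Motion-models, each with 16 state nodes of 128-dim features.
net = MotionModelNet()
emb = net(torch.randn(4, 16, 128))  # -> (4, 64) identity embeddings

Identity verification would then compare such embeddings across videos, e.g., by cosine similarity.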

Experimental results on the large-scale VoxCeleb2 dataset show that our facial behavior-based method offers a better user experience in highly secured systems that require a low false acceptance rate (FAR), where the true acceptance rate (TAR) of the other tested methods drops dramatically: our TAR at FAR = 0.1% is 87.48%, while all others fall below 15%. Moreover, results on the VideoForensicsHQ dataset show that our Motion-model concept can help detect false identifications in the case of DeepFake videos.
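
For readers unfamiliar with the metric, the following is a minimal sketch of how TAR at a fixed FAR is typically computed from verification scores; the variable names and toy data are illustrative, not the thesis's evaluation code.

import numpy as np

def tar_at_far(genuine: np.ndarray, impostor: np.ndarray, far: float) -> float:
    # Threshold = the (1 - far) quantile of impostor scores, so that only a
    # `far` fraction of impostor pairs score above it (false acceptances).
    threshold = np.quantile(impostor, 1.0 - far)
    # TAR = fraction of genuine (same-identity) pairs accepted at that threshold.
    return float(np.mean(genuine > threshold))

# Usage with toy scores: genuine pairs should score higher than impostor pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)
impostor = rng.normal(0.2, 0.1, 10_000)
print(tar_at_far(genuine, impostor, far=0.001))  # TAR at FAR = 0.1%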
