%0 Conference Paper %A Hoang, Tieu Binh %A Ma, Thi Chau %A Akihiro, Sugimoto %A Bui, The Duy %B 2018 the 7th International Conference on Computer and Communication Engineering (ICCCE) %C Malaysia %D 2018 %F SisLab:3123 %T Selecting active frames for action recognition with 3D convolutional network %U https://eprints.uet.vnu.edu.vn/eprints/id/eprint/3123/ %X Convolutional Neural Networks, especially 3-Dimensional Convolutional Neural Networks (3DCNNs), have recently been widely used for human action recognition (HAR) in videos. In this paper, we use a multi-stream framework that combines separate networks, each taking a different kind of input generated from the same video dataset. To achieve high accuracy, we first propose a method to extract the active frames (called Selected Active Frames - SAF) from a video to build datasets for 3DCNNs in the video classification problem. Second, we introduce a new approach called Vote fusion, an effective method for ensembling multi-stream networks. From the datasets generated from videos, we extract frames with our method and feed them into 3DCNNs for feature extraction, then train the networks and fuse the outputs of the softmax layers of these streams. We evaluate the proposed methods on the action recognition problem using three well-known datasets (HMDB51, UCF101, and KTH). The results are also compared with the state of the art to illustrate the efficiency and effectiveness of our approach.