VNU-UET Repository: items ordered by date deposited (feed retrieved 2020-10-26).

Low-power High-performance 32-bit RISC-V Microcontroller on 65-nm Silicon-On-Thin-BOX (SOTB)
Deposited: 2020-10-13. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4081

In this paper, a 32-bit RISC-V microcontroller on a 65-nm Silicon-On-Thin-BOX (SOTB) chip is presented. The system is developed based on the VexRiscv Central Processing Unit (CPU) with the RV32IM Instruction Set Architecture (ISA) extensions. Besides the core processor, the System-on-Chip (SoC) contains 8 KB of boot ROM, 64 KB of on-chip memory, a UART controller, an SPI controller, a timer, and GPIOs for LEDs and switches. The 8-KB boot ROM comprises 7 KB hard-coded in combinational logic and 1 KB of stack in SRAM. The proposed SoC achieves 1.27 DMIPS/MHz on the Dhrystone benchmark and 2.4 CoreMark/MHz on the CoreMark benchmark. The layout occupies 1.32 mm2 of die area, equivalent to 349,061 NAND2 gates. The 65-nm SOTB process is chosen not only for its low-power feature but also for its back-gate biasing technique, which allows the microcontroller to be tuned toward low-power or high-performance operation. Measurement results show that the highest operating frequency of 156 MHz is achieved at a 1.2-V supply voltage (VDD) with a +1.6-V back-gate bias voltage (VBB). The best power density of 33.4 µW/MHz is reached at 0.5-V VDD with +0.8-V VBB. The lowest leakage current of 3 nA is obtained at 0.5-V VDD with -2.0-V VBB.

Authors: Trong Thuc Hoang, Ckristian Duran, Khai Duy Nguyen, Tuan Kiet Dang, Quang Nhu Quynh Nguyen, Phuc Hong Than, Xuan Tu Tran (tutx@vnu.edu.vn), Duc Hung Le, Akira Tsukamoto, Kuniyasu Suzaki, Cong Kha Pham (pham@ee.uec.ac.jp)

Motion-Encoded Particle Swarm Optimization for Moving Target Search Using UAVs
Deposited: 2020-10-13. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4075

This paper presents a novel algorithm, motion-encoded particle swarm optimization (MPSO), for finding a moving target with unmanned aerial vehicles (UAVs). Using Bayesian theory, the search problem can be converted into the optimization of a cost function that represents the probability of detecting the target. The proposed MPSO solves this problem by encoding the search trajectory as a series of UAV motion paths that evolve over the generations of particles in a PSO algorithm. This motion-encoded approach preserves important properties of the swarm, including cognitive and social coherence, and thus yields better solutions. Results from extensive simulations show that the proposed MPSO improves detection performance by 24% and time performance by 4.71 times compared with the original PSO, and also outperforms other state-of-the-art metaheuristic optimization algorithms, including the artificial bee colony (ABC), ant colony optimization (ACO), genetic algorithm (GA), differential evolution (DE), and tree-seed algorithm (TSA), in most search scenarios. Experiments have been conducted with real UAVs searching for a dynamic target in different scenarios to demonstrate the merits of MPSO in a practical application.

Authors: Manh Duong Phung (duongpm@vnu.edu.vn), Quang Ha (quang.ha@uts.edu.au)

Motion-Encoded Particle Swarm Optimization for Moving Target Search Using UAVs
Deposited: 2020-10-13. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4074
(Duplicate deposit of eprint 4075 above; identical title, abstract, and authors.)
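As an illustrative aside, the motion-encoded PSO idea summarized in the MPSO abstract above can be sketched in toy form: each particle is a real-valued vector decoded into a sequence of unit UAV motions, and fitness rewards paths that visit high-probability cells of a belief map. Everything here (the grid, the belief map, the decoding scheme, and all parameter values) is an assumption for illustration, not taken from the paper.

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # candidate unit UAV motions

def decode(position):
    """Map a real-valued particle position to a sequence of unit motions."""
    return [MOVES[int(abs(x)) % len(MOVES)] for x in position]

def path_cells(start, motions, size):
    """Cells visited when executing the motions on a size x size grid."""
    x, y = start
    cells = [(x, y)]
    for dx, dy in motions:
        x = min(max(x + dx, 0), size - 1)
        y = min(max(y + dy, 0), size - 1)
        cells.append((x, y))
    return cells

def fitness(position, belief, start, size):
    """Toy stand-in for the probability of detecting the target."""
    return sum(belief.get(c, 0.0)
               for c in set(path_cells(start, decode(position), size)))

def mpso(belief, start, size, horizon=6, n_particles=12, iters=40, seed=0):
    rng = random.Random(seed)
    swarm = [[rng.uniform(0, 4) for _ in range(horizon)] for _ in range(n_particles)]
    vel = [[0.0] * horizon for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = max(swarm, key=lambda p: fitness(p, belief, start, size))[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(horizon):
                r1, r2 = rng.random(), rng.random()
                # standard PSO velocity update on the motion encoding
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - p[d])
                             + 1.5 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p, belief, start, size) > fitness(pbest[i], belief, start, size):
                pbest[i] = p[:]
            if fitness(p, belief, start, size) > fitness(gbest, belief, start, size):
                gbest = p[:]
    return decode(gbest)
```

The decode step is what makes the sketch "motion-encoded": the swarm evolves in a continuous space while solutions are always interpreted as feasible motion sequences.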
Deposited: 2020-10-13. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4073
Influence Maximization with Priority in Online
Social Networks

The Influence Maximization (IM) problem, which finds a set of k nodes (called a seed set) in a social network to initiate the influence spread so that the number of influenced nodes after the propagation process is maximized, is an important problem in information propagation and social network analysis. However, previous studies ignored the priority constraint, which leads to inefficient seed collections. In real situations, companies or organizations often prioritize influencing potential users during their influence diffusion campaigns. Taking a new approach to these existing works, we propose a new problem called Influence Maximization with Priority (IMP), which finds a seed set of k nodes in a social network that influences the largest number of nodes, subject to the influence spread to a specific set of nodes U (called the priority set) being at least a given threshold T. We show that the problem is NP-hard under the well-known IC model. To find solutions, we propose two efficient algorithms, called Integrated Greedy (IG) and Integrated Greedy Sampling (IGS),
with provable theoretical guarantees. IG provides a (1 − (1 − 1/k)^t)-approximation solution, where t is an outcome of the algorithm and t ≥ 1. The worst-case approximation ratio, obtained when t = 1, is 1/k. In addition, IGS is an efficient randomized approximation algorithm based on a sampling method that provides a (1 − (1 − 1/k)^t − ε)-approximation solution with probability
at least 1 − δ, where ε > 0 and δ ∈ (0, 1) are input parameters of the problem. We conduct extensive experiments on various real networks to compare our IGS algorithm with state-of-the-art algorithms for the IM problem. The results indicate that our algorithm provides better solutions in terms of influence on the priority sets, achieving approximately two to ten times the threshold T, while its running time, memory usage, and influence spread are comparable to those of the other algorithms.

Authors: Xuan Huan Hoang (huanhx@vnu.edu.vn), Van Canh Pham, K.T. Dung Ha, C. Quang Vu, Su Anh Nguyen

Bisimulation and bisimilarity for fuzzy description logics under the Gödel semantics
Deposited: 2020-10-13. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4068

Description logics (DLs) are a suitable formalism for representing knowledge about domains in which objects are described not only by attributes but also by binary relations between objects. Fuzzy extensions of DLs can be used for such domains when data and knowledge about them are vague and imprecise. One possible way to specify classes of objects in such domains is to use concepts in fuzzy DLs. As DLs are variants of modal logics, indiscernibility in DLs is characterized by bisimilarity. The bisimilarity relation of an interpretation is the largest auto-bisimulation of that interpretation. In DLs and their fuzzy extensions, such equivalence relations can be used for concept learning. In this paper, we define and study fuzzy bisimulation and bisimilarity for fuzzy DLs under the Gödel semantics, as well as crisp bisimulation and strong bisimilarity for such logics extended with involutive negation. The considered logics are fuzzy extensions of the DL ALCreg (a variant of PDL) with additional features among inverse roles, nominals, (qualified or unqualified) number restrictions, the universal role, local reflexivity of a role, and involutive negation. We formulate and prove results on the invariance of concepts under fuzzy (resp. crisp) bisimulation, the conditional invariance of fuzzy TBoxes/ABoxes under bisimilarity (resp. strong bisimilarity), and the Hennessy-Milner property of fuzzy (resp. crisp) bisimulation for fuzzy DLs without (resp. with) involutive negation under the Gödel semantics.
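As a side note, the Gödel semantics referred to in this abstract fixes the interpretation of the fuzzy connectives; the standard definitions (conjunction as minimum, the residuated implication, and the involutive negation added in the extended logics) can be written down directly. The function names below are our own, chosen for illustration.

```python
def t_norm(a, b):
    """Gödel conjunction: the minimum t-norm."""
    return min(a, b)

def implication(a, b):
    """Gödel (residuated) implication: 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def goedel_negation(a):
    """Gödel negation, definable as a => 0: 1 at 0, else 0."""
    return implication(a, 0.0)

def involutive_negation(a):
    """The involutive negation added in the extended logics: 1 - a."""
    return 1.0 - a
```

The contrast between `goedel_negation` (not involutive) and `involutive_negation` is exactly why adding the latter changes which notion of bisimulation (crisp vs. fuzzy) characterizes indiscernibility.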
Apart from these fundamental results, we also provide results on using fuzzy bisimulation to separate the expressive powers of fuzzy DLs, as well as results on using strong bisimilarity to minimize fuzzy interpretations.

Authors: Linh Anh Nguyen, Quang Thuy Ha (thuyhq@vnu.edu.vn), Ngoc Thanh Nguyen (ngoc-thanh.nguyen@pwr.wroc.pl), Thi Hong Khanh Nguyen, Thanh Luong Tran

On Rectifying the Mapping between Articles and Institutions in Bibliometric Databases
Deposited: 2020-10-09. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4077

Today, bibliometric databases are indispensable sources for researchers and research institutions. The main role of these databases is to find research articles and estimate the performance of researchers and institutions. Regarding the evaluation of the research performance of an organization, the accuracy in determining the institutions of the articles' authors is decisive. However, current popular bibliometric databases such as Scopus and Web of Science have not addressed this point efficiently. To this end, we propose an approach to revise the authors' affiliation information of articles in bibliometric databases. We build a model that classifies articles to institutions with high accuracy by combining the bag-of-words and n-gram techniques to extract features from affiliation strings. These features are then weighted to determine their importance to each institution. Affiliation strings of articles are transformed into the new feature space by integrating the weights of features and the local characteristics of the words and phrases contributing to the sequences. Finally, on this feature space, the support vector classifier method is applied to learn a predictive model.
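As an illustrative aside, the bag-of-words-plus-n-grams feature extraction described in this abstract can be sketched without any learning machinery. The tokenizer, the n-gram range, and the example affiliation string are assumptions for illustration, not details from the paper.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous word n-grams of the token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def affiliation_features(affiliation, n_max=3):
    """Count word unigrams, bigrams, and trigrams of an affiliation string."""
    tokens = affiliation.lower().replace(",", " ").split()
    feats = Counter()
    for n in range(1, n_max + 1):
        feats.update(ngrams(tokens, n))
    return feats

feats = affiliation_features("VNU University of Engineering and Technology, Hanoi")
```

In a full pipeline, such counts would then be weighted per institution and fed to a support vector classifier, as the abstract describes.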
Our experimental results show that the proposed model's accuracy is about 99.1%.

Authors: Kien Tuan Ngo, Dinh Hieu Vo (hieuvd@vnu.edu.vn), Ngoc Thang Bui (thangbn@vnu.edu.vn), Le Viet Anh Pham, Khanh Ly Pham, Hai Phan

Formal Analysis of Database Trigger Systems Using Event-B
Deposited: 2020-10-09. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4076

Most modern relational database systems use triggers to implement automatic tasks in response to specific events happening inside or outside a system. A database trigger is a human-readable code block without formal semantics; typically, whether a trigger is designed correctly can only be checked manually or after it has been executed. In this article, the authors introduce a new method to model and verify database trigger systems using the Event-B formal method at the design phase. First, the authors exploit the similar mechanisms of triggers and Event-B events to propose a set of rules translating a database trigger system into Event-B constructs. Then, the authors show how to verify data constraint preservation properties and detect infinite loops of trigger execution with RODIN/Event-B. The authors also illustrate the proposed method with a case study.
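As an illustrative aside (not the paper's Event-B encoding), the infinite-loop property mentioned above can be mimicked in a few lines: model each trigger as a guard plus a state update, fire triggers to a fixed point, and flag a potential infinite cascade when the firing chain exceeds a bound. All names and the bound are assumptions for illustration.

```python
def fire_triggers(state, triggers, max_chain=100):
    """Fire (guard, action) triggers until quiescence.

    Returns (final_state, looped): looped is True when the chain of
    firings exceeds max_chain, signalling a potential infinite loop.
    """
    chain = 0
    fired = True
    while fired:
        fired = False
        for guard, action in triggers:
            if guard(state):
                state = action(state)
                chain += 1
                fired = True
                if chain > max_chain:
                    return state, True  # potential infinite trigger cascade
    return state, False
```

A formal method such as Event-B proves or refutes the property for all states, whereas this bounded simulation can only flag suspicious runs; that difference is the point of the paper's approach.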
Finally, a tool named Trigger2B, which partly supports the automatic modeling process, is presented.

Authors: Hong Anh Le (lehonganh@humg.edu.vn), Van Khanh To (khanhtv@vnu.edu.vn), Ninh Thuan Truong (thuantn@vnu.edu.vn)

Application of WRF-Chem to simulate air quality over Northern Vietnam
Deposited: 2020-10-09. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4071

The WRF-Chem (Weather Research and Forecasting with Chemistry) model is implemented and validated against ground-based observations of meteorological and atmospheric variables for the first time in Northern Vietnam. The model uses the HTAPv2 emission inventory with the MOZCART chemical-aerosol mechanism to simulate atmospheric variables for winter (January) and summer (July) of 2014. The model satisfactorily reproduces meteorological fields, such as temperature and relative humidity at 2 m above the ground, at 45 NCHMF meteorological stations in January, but lower agreement was found in the July simulations. PM10 and PM2.5 concentrations in January showed good temporal and spatial agreement with observations recorded at three CEM air monitoring stations in Phutho, Quangninh, and Hanoi, with correlation coefficients of 0.36 and 0.59, respectively. However, the model underestimated concentrations, with mean fractional biases (MFBs) from −27.9% to −118.7% for PM10 and from −34.2% to −115.1% for PM2.5. It has difficulty capturing the day-by-day variation of PM10 and PM2.5 concentrations at each station in July, but the July MFBs, ranging from −27.1% to −40.2%, are slightly smaller in magnitude than those in January. This suggests that further improvements of the model and of local emission data are needed to reduce uncertainties in modeling the distribution of atmospheric pollutants. An assessment of the effect of biomass burning emissions on air quality in summer was also carried out to highlight the application aspect of the WRF-Chem model.
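As a side note, the mean fractional bias (MFB) figures quoted in this abstract come from a standard air-quality model evaluation statistic, MFB = (2/N) · Σ (Mᵢ − Oᵢ)/(Mᵢ + Oᵢ) · 100%, which is easy to state explicitly (a minimal sketch; variable names are ours).

```python
def mean_fractional_bias(model, obs):
    """Mean fractional bias in percent between model values and observations.

    Negative values indicate model underestimation, as in the abstract.
    """
    pairs = list(zip(model, obs))
    return 100.0 * 2.0 / len(pairs) * sum((m - o) / (m + o) for m, o in pairs)
```

Because each term is normalized by the pair's mean, MFB is bounded to ±200%, which is why strongly underestimated PM10 can reach values like −118.7%.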
The study may serve as a reference for future air quality modeling using WRF-Chem in Vietnam.

Authors: Thi Nhu Ngoc Do (ngocdtn@fimo.edu.vn), Xuan Truong Ngo (truongnx@fimo.edu.vn), Van Ha Pham (hapv@fimo.edu.vn), Nhu Luan Vuong (luannv@cem.gov.vn), Hoang Anh Le, Chau Thuy Pham, Quang Hung Bui (hungbq@vnu.edu.vn), Thi Nhat Thanh Nguyen (thanhntn@vnu.edu.vn)

Generate Test Data from C/C++ Source Code using Weighted CFG and Boundary Values
Deposited: 2020-10-02. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4069

This paper presents two automatic test data generation methods, one based on a weighted control flow graph (named WCFT) and one based on the boundary values of input parameters (named BVTG). First, the WCFT method generates a CFG from a given unit function, updates its weights, and then generates test data from the highest-weight test paths. In addition, WCFT can find dead code, which can be used for automatic source code error fixing. Second, the BVTG method generates test data from the boundary values of the input parameters of the given unit function. Combining the two test data sets generated by these methods improves error detection ability while maintaining high code coverage.
An implemented tool (named WCFT4Cpp) and experimental results are also presented to show the effectiveness of the two proposed methods in terms of both the time required to generate test data and error detection ability.

Authors: Nguyen Huong Tran (17028005@vnu.edu.vn), Minh Kha Do (17020827@vnu.edu.vn), Hoang Viet Tran (vietth2004@gmail.com), Ngoc Hung Pham (hungpn@vnu.edu.vn)

Smartphone Indoor Positioning Based on Enhanced BLE Beacon Multi-lateration
Deposited: 2020-09-23. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4065

In this paper, we introduce a smartphone indoor positioning method using Bluetooth Low Energy (BLE) beacon multilateration. First, based on signal strength analysis, we construct a distance calculation model for BLE beacons. Then, with the aim of improving positioning accuracy, we propose an improved lateral (range-based) method applied to four nearby beacons. The method is intended for a real-time system serving applications such as emergency assistance, personal localization and tracking, and location-based advertising and marketing. Experimental results show that the proposed method achieves high accuracy compared with state-of-the-art lateral methods such as geometry-based (conventional trilateration), Least Square Estimation-based (LSE-based), and weighted LSE-based methods.

Authors: Ngoc Son Duong (duongson.vnu@gmail.com), Thi Thai Mai Dinh (dttmai@vnu.edu.vn)

A new constraint programming model and a linear programming-based adaptive large neighborhood search for the vehicle routing problem with synchronization constraints
Deposited: 2020-09-14. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4061

We consider a vehicle routing problem which seeks to minimize cost subject to time window and synchronization constraints.
In this problem, the fleet of vehicles is divided into regular and special vehicles. Some customers require the services of both vehicle types, with the two service start times at the customer synchronized. Despite its important real-world applications, this problem has rarely been studied in the literature. To solve it, we propose a Constraint Programming (CP) model and an Adaptive Large Neighborhood Search (ALNS) in which the insertion operators are designed around solving linear programming (LP) models to check insertion feasibility. A number of acceleration techniques are also proposed to significantly reduce the computational time. Computational experiments show that our new CP model finds better solutions than an existing CP-based ALNS on small instances with 25 customers, with a much shorter running time. Our LP-based ALNS dominates the CP-based ALNS in terms of solution quality, providing solutions with better objective values, on average, for all instance classes. This demonstrates the advantage of linear programming over constraint programming for a variant of the vehicle routing problem with relatively tight constraints, a setting often considered more favorable for CP-based methods. We also adapt our algorithm to a well-studied variant of the problem; the results show that it provides solutions as good as state-of-the-art approaches and improves four best-known solutions.

Authors: Minh Hoang Ha (minhhoang.ha@vnu.edu.vn), Tat Dat Nguyen, Duy Thinh Nguyen, Hoang Giang Pham, Thuy Do, Louis-Martin Rousseau

Improvements on the performance of SCM/WDM-based RoF system
Deposited: 2020-09-14. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4064

Radio over Fiber (RoF) techniques are good candidates to create the backbone of the next generation of wireless networks.
Many parameters affect RoF communications, such as amplified spontaneous emission (ASE) noise, four-wave mixing (FWM) nonlinearity, modulation, channel spacing, switching voltage, and phase shifting. In this paper, we propose an improved model of RoF communication systems using the subcarrier multiplexing/wavelength division multiplexing (SCM/WDM) technique with unequal channel spacing and an Erbium-doped fiber amplifier (EDFA) placed at 1 km. Simulation results confirm that the lowest bit error rate and noise are obtained when the EDFA is placed 1 km from the transmitter, using optical single-sideband (OSSB) modulation at frequencies of 193.1, 193.2, 193.35, and 193.6 THz.

Authors: Duc Tan Tran (tantd@vnu.edu.vn), Trung Ninh Bui (ninhbt@vnu.edu.vn)

Solving the k-dominating set problem on very large-scale networks
Deposited: 2020-09-14. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4060

The well-known minimum dominating set problem (MDSP) aims to construct the minimum-size subset of vertices in a graph such that every other vertex has at least one neighbor in the subset. In this article, we study a general version of the problem that extends the neighborhood relationship: two vertices are called neighbors of each other if there exists a path through no more than k edges between them. The problem, called the minimum k-dominating set problem (MkDSP), becomes the classical dominating set problem if k is 1 and has important applications in monitoring large-scale social networks. We propose an efficient heuristic algorithm that can handle real-world instances with up to 17 million vertices and 33 million edges.
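As an illustrative aside on the MkDSP definition above (not the paper's heuristic, which must scale to millions of vertices), a small greedy sketch makes the k-hop neighborhood idea concrete: a vertex k-dominates every vertex within k hops, and we repeatedly pick the vertex covering the most uncovered vertices.

```python
from collections import deque

def k_neighborhood(adj, src, k):
    """All vertices reachable from src within k hops (BFS to depth k)."""
    seen = {src}
    frontier = deque([(src, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == k:
            continue
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def greedy_k_dominating_set(adj, k):
    """Greedy cover: repeatedly take the vertex k-dominating the most
    still-uncovered vertices."""
    cover = {v: k_neighborhood(adj, v, k) for v in adj}
    uncovered, solution = set(adj), []
    while uncovered:
        best = max(adj, key=lambda v: len(cover[v] & uncovered))
        solution.append(best)
        uncovered -= cover[best]
    return solution
```

Precomputing every k-neighborhood, as done here, is exactly what becomes infeasible on 17-million-vertex graphs, which motivates a more careful heuristic.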
This is the first time such large graphs have been solved for the minimum k-dominating set problem.

Authors: Minh Hai Nguyen, Minh Hoang Ha (minhhoang.ha@vnu.edu.vn), Diep N. Nguyen (Diep.Nguyen@uts.edu.au), The Trung Tran (trung@fpt.edu.vn)

Opposite Partial Response Filter for Shingled Magnetic Recording Systems
Deposited: 2020-09-14. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4056

Shingled magnetic recording (SMR) is regarded as the most influential technology for next-generation magnetic recording systems. SMR tracks partly overlap each other, allowing SMR systems to obtain higher areal densities by increasing the tracks-per-inch density. As a result, this brings more interference from the adjacent side tracks, i.e., intertrack interference (ITI), while reading the home track. In this letter, we are interested in applying a filtering process to the squeezed tracks before recording to improve the quality of the retrieved data. The preprocessing reduces the effect of unwanted signals from the side tracks on the home track. The results show that the SMR system's performance is vastly improved even under extremely severe ITI effects.

Authors: Dinh Chi Nguyen, Thu Phuong Nguyen, Sinh Cong Lam (congls@vnu.edu.vn)

A General Model of Fractional Frequency Reuse: Modelling and Performance Analysis
Deposited: 2020-09-08. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4057

Fractional Frequency Reuse (FFR) is a promising technique to improve spectrum efficiency in the Long-Term Evolution (LTE) cellular network. In the literature, various research works have been conducted to evaluate the performance of FFR.
However, the existing analytical approaches only deal with the special case in which the users are divided into two groups and only two power levels are utilised. In this paper, we consider a general case of FFR in which the users are classified into groups and each group is assigned a serving power level. The mathematical model of the general FFR is presented and analysed through a stochastic geometry approach. The derived analytical results, in terms of average coverage probability, cover all the related well-known results in the literature.

Authors: Sinh Cong Lam (congls@vnu.edu.vn), Quoc Tuan Nguyen (tuannq@vnu.edu.vn), Kumbesan Sandrasegaran (kumbesan.sandrasegaran@uts.edu.au)

Energy Harvesting Technique Utilizing Resource Allocation Algorithm in 5G Wireless Channel
Deposited: 2020-09-08. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4058

Energy harvesting (EH) from the wireless channels of cellular communication is a critical aspect of future generations of cellular telecommunication, as it offers an alternative for conserving power by transmitting power to devices through the cellular networks. Numerous techniques have been proposed in 5G cellular technology for sustaining and extending the lifespan of centrally powered mobile and future IoT devices. This paper presents a literature review of the main current advancements and achievements of EH technology and SWIPT techniques. An implementation of one EH technique, SWIPT, in a cooperative relay scenario is then presented and evaluated with a suitable simulation tool. The simulation is implemented mainly in MATLAB and is divided into several functions computing the essential values for the proposed EH network.
First, the relay scenario comprises a base station and two receivers, with one receiver acting as a relay station and the other as the final receiver for both transmissions. Second, the SWIPT technique is implemented in the relay stations, where the AF protocol is utilized. Finally, the final receiver receives both the information-decoding and EH signals from the relay stations. The evaluation of the simulation results is discussed in the final stages of the research. The outcome is a set of simulation results in which EH efficiency is estimated based on two factors: the number of EH subcarriers in the final receiver and the average allocated power for EH transmission received by the final receiver. The idea behind this paper is to find an efficient method to harvest energy from OFDM, the nominated waveform of 5th-generation mobile communication. The anticipated outcome is beneficial in minimizing cellular operation costs, extending user equipment (UE) battery life, and achieving overall energy-conserving green communication.

Authors: H. Aldhanhani, Kumbesan Sandrasegaran (Kumbesan.Sandrasegaran@uts.edu.au), Sinh Cong Lam (congls@vnu.edu.vn)

How to Forecast the Students' Learning Outcomes Based on Factors of Interactive Activities in a Blended Learning Course
Deposited: 2020-09-08. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4059

This paper summarizes the research results of identifying the influencing factors in the online learning phase of a blended learning course. From these factors, we propose a model for predicting student outcomes. In our study, we built several models to predict students' learning outcomes, using a course with 231 participants.
Data obtained from the log files of an LMS are analyzed using learning analytics and machine learning techniques; the results suggest that four factors impact students' learning outcomes: the number of views, the number of posts, the number of forum views, and the number of on-time submitted assignments. For forecasting the final exam grade based on the results of the formative assessment tests, Bayesian Ridge is the most accurate of the four models considered (Linear Regression, KNR, SVM, and Bayesian Ridge). Our study can serve as useful material for lecturers and course designers in effectively organizing blended learning courses.

Authors: Minh Duc Le (duclm@vnu.edu.vn), Hoa Huy Nguyen, Duc Loc Nguyen, Viet Anh Nguyen (vietanh@vnu.edu.vn)

A framework for assume-guarantee regression verification of evolving software
Deposited: 2020-09-07. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4052

This paper presents a framework for verifying evolving component-based software using assume-guarantee logic. The goal is to improve the CDNF-based assumption generation method by computing local weakest assumptions that can be used more effectively when verifying component-based software in the context of software evolution. For this purpose, we improve the technique for responding to membership queries when generating candidate assumptions. This technique is then integrated into a proposed backtracking algorithm to generate local weakest assumptions. These assumptions are used effectively in rechecking the evolving software by reducing the time required for assumption regeneration within the proposed framework. The proposed framework can be applied to verify software that is continually evolving.
An implemented tool and experimental results are presented to demonstrate the effectiveness and usefulness of the framework.

Authors: Hoang-Viet Tran, Pham Ngoc Hung, Viet-Ha Nguyen, Toshiaki Aoki

A thermal distribution, lifetime reliability prediction and spare TSV insertion platform for stacking 3D NoCs
Deposited: 2020-09-07. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4048

Authors: Nam Khanh Dang (dnk0904@gmail.com), Akram Ben Ahmed, Fakhrul Zaman Rokhani, Abderazek Ben Abdallah (benab@u-aizu.ac.jp), Xuan Tu Tran (tutx@vnu.edu.vn)

A thermal-aware on-line fault tolerance method for TSV lifetime reliability in 3D-NoC systems
Deposited: 2020-09-07. URL: http://eprints.uet.vnu.edu.vn/eprints/id/eprint/4049

Through-Silicon-Via (TSV) based 3D Integrated Circuits (3D-ICs) are among the most advanced architectures, providing low power consumption, shorter wire lengths, and a smaller footprint. However, 3D-ICs face lifetime reliability issues due to high operating temperatures and interconnect wear-out, especially in the Through-Silicon Vias (TSVs), which can significantly affect application accuracy. In this paper, we present an online method, named IaSiG, that supports the detection and correction of lifetime TSV failures. By reusing the conventional recovery method and analyzing the output syndromes, IaSiG can localize and correct the defective TSVs. Results show that, within a group, R redundant TSVs can fully localize and correct R defects and support the detection of R+1 defects. Moreover, by using G groups, the method can localize up to G×R and detect up to G×(R+1) defects.
An implementation of IaSiG for 32-bit data in eight groups with two redundancies has a worst-case execution time (WCET) of 5,152 cycles while supporting at most 16 defective TSVs (50% localization).
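The group arithmetic claimed above is simple enough to state directly; the following trivial sketch captures only the stated capacity figures, not the IaSiG syndrome mechanism itself (function names are ours).

```python
def correctable(groups, redundancies):
    """Defective TSVs fully localizable and correctable: G * R."""
    return groups * redundancies

def detectable(groups, redundancies):
    """Defective TSVs detectable: G * (R + 1)."""
    return groups * (redundancies + 1)
```

For the reported 32-bit configuration (eight groups, two redundancies per group), this gives the 16 correctable defective TSVs quoted above.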
By integrating IaSiG into a 3D Network-on-Chip, we also apply a grid-search-based empirical method to insert suitable numbers of redundancies into the TSV groups. The empirical method takes the operating temperature as a fault-acceleration factor, since high temperature is one of the major issues of 3D-ICs. The results show that the proposed method reduces the number of redundancies compared with the uniform method while still maintaining the required Mean Time To Failure.

Authors: Nam Khanh Dang (dnk0904@gmail.com), Akram Ben Ahmed, Abderazek Ben Abdallah (benab@u-aizu.ac.jp), Xuan Tu Tran (tutx@vnu.edu.vn)