The barcode is one of the fastest and most accurate of the existing coding systems, and it is used extensively because scanning a barcode is far quicker than manual data entry. To triple the capacity of the 2D monochrome QR code, the 2D colour QR code was developed. The challenge in developing a colour barcode lies in its decoding, since the intensity and depth of colours vary during the printing and scanning process; the decoding process must therefore be made insensitive to such variations. Considerable work has already been done to deal with these variations, but acceptable results have not yet been achieved. In this paper we propose a novel approach that increases the capacity of the barcode beyond threefold and deals with the decoding problem of intensity variation. In the proposed technique, quantization of grey levels is specified to handle the problem of intensity variation.
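The grey-level quantization idea can be illustrated with a minimal sketch: each scanned module intensity is snapped to the nearest of a small set of reference levels, so moderate print/scan intensity drift does not change the decoded symbol. The level count and values below are illustrative assumptions, not the paper's actual design.

```python
# Sketch of grey-level quantization for intensity-robust decoding.
# Assumption: a module's measured intensity (0-255) maps to the nearest
# of K reference levels; K = 4 gives 2 bits per module here.

def quantize_level(intensity, levels):
    """Map a measured intensity to the index of the nearest reference level."""
    return min(range(len(levels)), key=lambda i: abs(levels[i] - intensity))

def decode_modules(intensities, levels):
    """Decode a list of measured module intensities into level indices."""
    return [quantize_level(v, levels) for v in intensities]

levels = [0, 85, 170, 255]            # 4 reference grey levels
noisy = [12, 78, 160, 250, 90]        # measured values with intensity drift
print(decode_modules(noisy, levels))  # -> [0, 1, 2, 3, 1]
```

Because each reading only needs to fall in the correct half-interval around its reference level, the scheme tolerates drift up to half the spacing between adjacent levels.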
In this paper, we investigate the problem of analysing low probability of interception (LPI) radar signals with intra-pulse frequency modulation (FM) under low signal-to-noise ratio conditions, from the perspective of an airborne electronic warfare (EW) digital receiver. EW receivers are designed to intercept and analyse threat radar signals of different classes, received over a large dynamic range and operating independently over a large geographical spread, to advise the host aircraft to undertake specified actions. For an EW receiver, the primary challenges in interception and analysis of LPI radar signals are low received power, intra-pulse modulations, multi-octave frequency range, wide signal bandwidth, long pulse width, and a vast, multi-parametric search space. In the present work, a method based on matched filterbank localization and Taylor series approximation for analysing the entire family of intra-pulse FM radar signals is proposed. The method involves progressive, joint time–frequency (TF) localization of the signal of interest (SOI), under piecewise-linearity and continuity assumptions on the instantaneous frequency, to effectively capture local TF signatures. Detection is by information-theoretic-criterion-based hypothesis testing, while estimation and classification are based on polynomial approximation. Fine signal analysis is followed by synthetic reconstruction of the received signal slope. Detection, estimation and classification performances for the prominent FM radar signal classes are quantified from simulation study statistics. Stagewise implementation of analysis and FM slope reconstruction, in realistic radar threat scenarios, is demonstrated for the potential SOIs. The discussion is organized from the perspective of practical EW system design and presented within the realm of the signal processing architecture of current EW digital receivers.
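The core matched-filterbank idea can be reduced to a toy example: correlate a noisy linear-FM pulse against reference chirps over a grid of candidate slopes and pick the best match. The pulse parameters, slope grid and noise level below are illustrative assumptions; the paper's joint TF localization and information-theoretic detection stages are omitted.

```python
import numpy as np

# Toy matched-filter-bank search over chirp slopes for a linear-FM pulse.
fs = 1e6                      # sample rate (Hz) -- assumed value
T = 1e-3                      # pulse width (s)  -- assumed value
t = np.arange(int(fs * T)) / fs
true_slope = 2e8              # chirp rate (Hz/s) of the received signal

rng = np.random.default_rng(0)
rx = np.exp(1j * np.pi * true_slope * t**2) + 0.5 * rng.standard_normal(t.size)

# Correlate against reference chirps of candidate slopes; argmax localizes
# the FM slope of the signal of interest.
candidate_slopes = np.linspace(0.5e8, 4e8, 36)
scores = [abs(np.vdot(np.exp(1j * np.pi * k * t**2), rx))
          for k in candidate_slopes]
est_slope = candidate_slopes[int(np.argmax(scores))]
print(est_slope)
```

In a receiver, the same correlation would be applied per TF segment, with the piecewise-linearity assumption letting nonlinear FM laws be tracked segment by segment.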
Scrum has become popular among the agile methodologies due to the substantial support it provides to project management teams. The scrum process delivers functionality quickly in the form of sprints. Most quality assurance and quality control activities are skipped in scrum because of its short life cycle and the absence of a dedicated quality assurance team. The development team pays more attention to delivering products according to customer satisfaction, and the parameters used for this type of assessment are the story success criteria and user acceptance testing. Acceptance testing and integration testing alone are not sufficient to achieve a quality product; there is still a pressing need to incorporate other quality control activities in scrum. In this work, we have attempted to implement quality control activities in the scrum philosophy by introducing the concept of a test backlog. The enhanced scrum model provides a quality assessment methodology based on the frequency of remaining functional bugs. The test backlog concept proposed in this study aims at implementing a state-of-the-art testing process in scrum. A case study was carried out to validate the model, which produced satisfactory results. In addition, we conducted a survey to assess the state of quality control in scrum. The proposed model and case study results are reported herein.
Image visibility is affected by the presence of haze, fog, smoke, aerosols, etc. Image dehazing using either a single visible image or a visible and near-infrared (NIR) image pair is often considered as a solution to improve the visual quality of such scenes. In this paper, we address this problem from a visible–NIR image fusion perspective, instead of the conventional haze imaging model. The proposed algorithm uses a Laplacian–Gaussian pyramid based multi-resolution fusion process, guided by weight maps generated using local entropy, local contrast and visibility as the metrics that control the fusion result. The proposed algorithm is free from any human intervention, and produces results that outperform existing image-dehazing algorithms both visually and quantitatively. The algorithm proves to be efficient not only for outdoor scenes with or without haze, but also for indoor scenes, in improving scene visibility.
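The weight-map-guided fusion principle can be sketched in a few lines: compute a per-pixel weight from local contrast (local standard deviation here) for each input and take the weighted average. This is a deliberate simplification, the paper additionally uses local entropy and visibility metrics and fuses within a Laplacian–Gaussian pyramid; the synthetic images are illustrative.

```python
import numpy as np

# Minimal sketch of weight-map-guided visible-NIR fusion using only local
# contrast as the weight metric (local entropy, visibility and the pyramid
# decomposition of the actual algorithm are omitted).

def local_std(img, r=2):
    """Local standard deviation over a (2r+1)x(2r+1) window, reflect-padded."""
    p = np.pad(img.astype(float), r, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (2*r + 1, 2*r + 1))
    return win.std(axis=(-1, -2))

def fuse(vis, nir, eps=1e-6):
    """Per-pixel weighted average; weights proportional to local contrast."""
    wv = local_std(vis) + eps
    wn = local_std(nir) + eps
    return (wv * vis + wn * nir) / (wv + wn)

vis = np.full((32, 32), 100.0); vis[8:16, 8:16] = 180.0  # visible: has an edge
nir = np.full((32, 32), 120.0)                           # NIR: featureless
out = fuse(vis, nir)
```

Near the edge in the visible image the fused result follows the visible input (high local contrast), while in flat regions the two inputs are averaged, which is the behaviour the weight maps are designed to produce.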
In this paper, a multi-sensor data fusion approach for wireless sensor networks based on Bayesian methods and ant colony optimization techniques is proposed. In this method, each node is equipped with multiple sensors (i.e., temperature and humidity); using more than one sensor provides additional information about the environmental conditions. A data fusion approach based on competitive-type hierarchical processing is considered for experimentation. Initially the data are collected by the sensors placed in the sensing fields, and then the data fusion probabilities are computed on the sensed data. In the proposed methodology, the collected temperature and humidity data are processed by multi-sensor data fusion techniques, which help in decreasing the energy consumption as well as the communication cost by fusing redundant data. The multiple data fusion process improves the reliability and accuracy of the sensed information and simultaneously saves energy, which was our primary objective. The proposed algorithms were simulated using MATLAB. The proposed and low-energy adaptive clustering hierarchy algorithms were executed, and the results show that the proposed algorithms efficiently reduce energy use, thus increasing the overall network lifetime.
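A minimal sketch of Bayesian fusion of redundant readings: assuming Gaussian sensor noise, the posterior mean is the inverse-variance weighted average, so one fused value (with reduced variance) can be transmitted instead of every raw reading, which is how fusion cuts communication cost. The readings and variances below are illustrative, not the paper's data.

```python
# Bayesian (Gaussian) fusion of redundant sensor readings: the fused
# estimate is the inverse-variance weighted mean, and its variance is
# lower than any individual sensor's variance.

def fuse_gaussian(readings):
    """readings: list of (mean, variance) pairs -> fused (mean, variance)."""
    precision = sum(1.0 / v for _, v in readings)
    mean = sum(m / v for m, v in readings) / precision
    return mean, 1.0 / precision

temps = [(24.8, 0.4), (25.1, 0.2), (25.6, 0.8)]  # three temperature sensors
m, v = fuse_gaussian(temps)
print(m, v)
```

The fused variance (1 / total precision) is always smaller than the best single sensor's variance, quantifying the accuracy gain from using multiple sensors per node.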
Recently, critical projects, such as medical and military applications, have been developed using wireless sensor networks (WSNs). Reliable communication in a WSN is extremely important for such projects, and to achieve it, packet loss must be minimal. Packet loss, energy and latency are the most serious problems for transport protocols. In this paper, a reliable, energy- and delay-sensitive transport layer protocol (ALORT) is developed for WSNs. The proposed protocol provides optimum reliability, optimum packet latency, minimum energy cost and a high packet delivery ratio by changing the loss recovery mechanism (LRM) according to channel error rates. The ALORT algorithm is compared with PSFQ and DTC in terms of packet delivery ratio, end-to-end latency and energy cost, using the MiXiM framework in OMNeT++. The ALORT algorithm reduces energy cost and end-to-end latency and increases the packet delivery ratio.
An adaptive gravitational search algorithm (GSA) that switches between synchronous and asynchronous update is presented in this work. The proposed adaptive switching synchronous–asynchronous GSA (ASw-GSA) improves GSA through manipulation of its iteration strategy, which is switched from synchronous to asynchronous update and vice versa so that the population adaptively alternates between convergence and divergence. Synchronous update allows convergence, while switching to asynchronous update disrupts the population's convergence. The ASw-GSA agents switch their iteration strategy when the best found solution has not improved for a period of time, determined by a switching threshold. The threshold determines how soon switching occurs, and also the frequency of switching in ASw-GSA. ASw-GSA has been comprehensively evaluated on the CEC2014 benchmark functions. The effect of the switching threshold has been studied, and it is found that, in comparison with multiple and early switches, a one-time switch towards the end of the search is better and substantially enhances the performance of ASw-GSA. The proposed ASw-GSA is also compared with the original GSA, particle swarm optimization (PSO), the genetic algorithm (GA), the bat-inspired algorithm (BA) and the grey wolf optimizer (GWO). The statistical analysis shows that ASw-GSA performs significantly better than GA and BA, and as well as PSO, the original GSA and GWO.
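The stagnation-triggered switching rule can be sketched independently of the GSA force and velocity updates, which are omitted here. The controller below toggles the iteration strategy whenever the best-so-far fitness has not improved for a threshold number of iterations; the threshold value and fitness trace are illustrative.

```python
# Sketch of the adaptive switching rule: toggle between synchronous
# (convergence-promoting) and asynchronous (divergence-promoting) update
# when the best found solution stagnates for `threshold` iterations.

class SwitchController:
    def __init__(self, threshold):
        self.threshold = threshold
        self.mode = "synchronous"
        self.best = float("inf")
        self.stall = 0

    def update(self, fitness):
        if fitness < self.best:
            self.best, self.stall = fitness, 0   # improvement: reset counter
        else:
            self.stall += 1
            if self.stall >= self.threshold:     # stagnation: toggle strategy
                self.mode = ("asynchronous" if self.mode == "synchronous"
                             else "synchronous")
                self.stall = 0
        return self.mode

ctrl = SwitchController(threshold=3)
modes = [ctrl.update(f) for f in [5.0, 4.0, 4.0, 4.0, 4.0, 3.0]]
```

A large threshold reproduces the paper's best-performing setting of a single late switch, while a small one yields the multiple early switches found to be inferior.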
Abundance estimation is of vital importance in the spectral unmixing of hyperspectral images. Recently, various methods using evolutionary approaches have been proposed for spectral unmixing to achieve higher performance. However, these methods are based on unconstrained optimisation problems, and their performance also depends on properly tuned parameters. We propose a new non-parametric algorithm using the teaching–learning-based optimisation technique, with an inbuilt constraint-maintenance mechanism, based on the linear mixing model. In this approach, the unmixing problem is transformed into a constrained optimisation problem by introducing the abundance sum-to-one constraint and the abundance non-negativity constraint. A comparative analysis of the proposed algorithm is conducted against two other state-of-the-art algorithms. Experimental results in known and unknown environments, with varying signal-to-noise ratios, on simulated and real hyperspectral data demonstrate that the proposed method outperforms the other methods.
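One standard way to maintain the two abundance constraints, shown here as an assumed illustration rather than the paper's exact mechanism, is to project each candidate abundance vector onto the probability simplex, which simultaneously enforces non-negativity (ANC) and sum-to-one (ASC). The TLBO teaching and learning phases that generate candidates are omitted.

```python
import numpy as np

# Euclidean projection onto the simplex {x : x >= 0, sum(x) = 1}, a common
# constraint-maintenance step for abundance vectors in linear unmixing.

def project_simplex(v):
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)     # uniform shift
    return np.maximum(v + theta, 0.0)          # shift and clip

a = project_simplex(np.array([0.9, 0.6, -0.2]))
print(a)  # non-negative and sums to 1
```

Applying this projection after every candidate update keeps the whole population feasible, so the evolutionary search never wastes evaluations on abundance vectors that violate the linear mixing model's constraints.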
The inability of the heart to contract effectively, or its failure to contract at all, prevents blood from circulating efficiently, causing circulatory, cardiac or cardiopulmonary arrest. Unexpected cardiac arrest is medically referred to as sudden cardiac arrest (SCA). The poor survival rate of patients with SCA is one of the most ubiquitous health care problems today. Recent studies show that heart-rate-derived features can act as early predictors of SCA, and adding angiographic and electrophysiological features can increase the robustness of the prediction system. Early warning has the capability of saving many lives. The risk of recurrent terminal cardiac arrest is high for out-of-hospital survivors. Previous studies indicate that recurrent cardiac events are time dependent and, during clinical follow-up, are highly probable, predominantly in the early phase. In this paper, we observe how the risk of subsequent cardiac arrest recurrence, and the influence of various clinical, angiographic and electrophysiological parameters on it, change with time. Various medical and synthetic datasets, such as the ECG dataset from PhysioNet, the Pima Indian Diabetes dataset from the UCI Machine Learning Repository and a gene expression dataset from GEO, are used, which is unique as compared with related works. Various classifiers, such as LogitBoost with a simple regression function, random forest and the multilayer perceptron, are used for recurrence risk prediction; together they form an ensemble classifier. The classifiers are compared on measures such as accuracy and precision. Based on the classification, risk scores are calculated using logistic regression with backward elimination. The proposed method is used for final risk estimation, and the same datasets are used to develop the risk score calculation model. Experimental results are found to be encouraging.
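The two-stage idea, ensemble classification followed by a logistic risk score, can be sketched in toy form. The coefficients, intercept and feature values below are illustrative assumptions, not fitted model parameters; LogitBoost, random forest and the MLP are stand-ins represented only by their 0/1 votes.

```python
import math

# Toy sketch: (1) majority vote over individual classifier predictions,
# (2) logistic-regression risk score from a weighted feature sum.

def majority_vote(predictions):
    """predictions: list of 0/1 labels from individual classifiers."""
    return int(sum(predictions) >= len(predictions) / 2.0)

def risk_score(features, coeffs, intercept):
    """Logistic risk: sigmoid of intercept + dot(coeffs, features)."""
    z = intercept + sum(c * f for c, f in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

vote = majority_vote([1, 0, 1])                  # e.g. three classifier outputs
risk = risk_score([1.0, 0.5], [0.8, -0.3], -0.2) # illustrative coefficients
print(vote, risk)
```

Backward elimination would iteratively drop the feature whose removal least degrades the fit, shrinking the coefficient list before the final risk estimation.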
This paper presents engine gearbox fault diagnosis based on empirical mode decomposition (EMD) and the Naïve Bayes algorithm. In this study, vibration signals from a gearbox are acquired under healthy and different simulated faulty conditions of the gear and bearing. The vibration signals are decomposed into a finite number of intrinsic mode functions using the EMD method. The decision tree technique (J48 algorithm) is used to select the important features from among the extracted features. The Naïve Bayes algorithm is applied as a fault classifier to determine the condition of the engine. The experimental result (classification accuracy of 98.88%) demonstrates that the proposed approach is an effective method for engine fault diagnosis.
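The classification stage can be illustrated with a minimal Gaussian Naïve Bayes implementation operating on feature vectors (e.g. statistics extracted from intrinsic mode functions). EMD and the J48 feature selection are omitted, and the tiny two-class dataset below is synthetic.

```python
import math

# Minimal Gaussian Naive Bayes for fault classification from vibration
# features. Each class is modelled by per-feature mean/variance plus a prior.

def fit(X, y):
    model = {}
    for c in set(y):
        rows = [x for x, t in zip(X, y) if t == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        var = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
               for col, m in zip(zip(*rows), means)]
        model[c] = (means, var, len(rows) / len(X))
    return model

def predict(model, x):
    def log_post(c):
        means, var, prior = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, var))
        return math.log(prior) + ll
    return max(model, key=log_post)

X = [[0.1, 1.0], [0.2, 1.1], [2.0, 5.0], [2.2, 5.2]]  # synthetic features
y = ["healthy", "healthy", "faulty", "faulty"]
m = fit(X, y)
print(predict(m, [0.15, 1.05]))
```

The conditional-independence assumption keeps the model cheap to train, which suits the small per-condition sample counts typical of seeded-fault experiments.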
The behaviour of fluid flowing over a square cylinder with rounded edges, subjected to a steady laminar upstream flow, was investigated numerically using the commercial CFD software Fluent. A two-dimensional steady laminar flow was studied at low Reynolds numbers (5 ≤ Re ≤ 45), different corner radii (r = 0.50, 0.51, 0.54, 0.59, 0.64 and 0.71) and a blockage ratio of 0.05. The effects of parameters such as the Reynolds number and corner radius on the drag and the laminar boundary layer have been studied for the first time. The results are presented in the form of the drag coefficient, boundary layer and pressure coefficient on the cylinder surface. It is found that the boundary layer thickness and the displacement thickness decrease with decreasing corner radius for a particular Re, and that the boundary layer profile shifts downwards with decreasing Re.
This paper models a humanitarian relief chain that includes a relief goods supply chain and an evacuation chain in case of a natural disaster. Optimum network flow is studied for both chains by considering three conflicting objectives, namely demand satisfaction in the relief chain, demand satisfaction in the evacuation chain and overall logistics cost. The relief goods supply chain consists of three echelons: suppliers, relief camps and affected areas. The evacuation chain consists of two echelons: evacuation camps and affected areas. The model has been made more resilient by considering multiple paths between any two locations and the disruption of camps and paths due to natural factors. The mixed integer programming problem has been solved using NSGA-III, and the results have been compared with those from benchmark algorithms. The model has been successfully tested on generated real-life-like data.
Quarter-car models are popular, simple, unidirectional in kinematics and enable quicker computation than full-car models. However, they account neither for the three other wheels and their suspensions, nor for the frame's flexibility, mass distribution and damping. Here we propose a generalized quarter-car modelling approach that incorporates both the frame and the other-wheel ground contacts. Our approach is linear, uses Laplace transforms, involves vertical motions of key points of interest, and has intermediate complexity with improved realism. The model uses baseline suspension parameters and responses to step force inputs at the suspension attachment locations on the frame. Subsequently, new suspension parameters and unsprung mass compliance parameters can be incorporated, for which the relevant formulas are given. The final expression for the transfer function, between ground displacement and body point response, is approximated using model order reduction. A simple MATLAB code is provided that enables quick parametric studies. Finally, a parametric study and a wheel hop analysis are performed for a realistic numerical example. The frequency and time domain responses obtained clearly show the effects of the other wheels, which are outside the scope of usual quarter-car models. The displacements obtained from our model are compared against those of the usual quarter-car model, showing how the quarter-car model's prediction errors can be reduced in our approach. In summary, our approach has complexity intermediate between that of a full-car model and a quarter-car model, and offers correspondingly intermediate detail and realism.
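The baseline against which the generalized model is compared, the classical two-DOF quarter-car, can be evaluated directly from its Laplace-domain equations with s = jω. The sketch below (in Python rather than the paper's MATLAB) computes the frequency response of the sprung-mass displacement to ground displacement; the parameter values are illustrative, not the paper's.

```python
import numpy as np

# Classical two-DOF quarter-car: sprung mass m_s on suspension (k_s, c_s)
# above unsprung mass m_u on tyre stiffness k_t. Equations in the Laplace
# domain for ground input x_g:
#   (m_s s^2 + c_s s + k_s) X_s - (c_s s + k_s) X_u            = 0
#  -(c_s s + k_s) X_s + (m_u s^2 + c_s s + k_s + k_t) X_u      = k_t X_g
m_s, m_u = 300.0, 40.0          # kg        (assumed values)
k_s, c_s = 20000.0, 1200.0      # N/m, N s/m
k_t = 180000.0                  # N/m

def H_sprung(w):
    """x_s / x_g transfer function evaluated at angular frequency w (rad/s)."""
    s = 1j * w
    A = np.array([[m_s * s**2 + c_s * s + k_s, -(c_s * s + k_s)],
                  [-(c_s * s + k_s), m_u * s**2 + c_s * s + k_s + k_t]])
    b = np.array([0.0, k_t])
    return np.linalg.solve(A, b)[0]   # sprung-mass response to unit input

w = np.linspace(1, 150, 600)
gain = np.abs([H_sprung(wi) for wi in w])
```

The response shows the familiar body resonance near √(k_s/m_s) and wheel-hop resonance near √((k_s+k_t)/m_u); the generalized model adds frame and other-wheel dynamics on top of exactly this kind of transfer function.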
The storage capacity of hydropower reservoirs is lost to sediment deposition. The problem is severe in projects located on rivers with high sediment concentrations during the flood season. Removing the sediment deposits hydraulically by drawdown flushing is one of the most effective methods of restoring the storage capacity. The effectiveness of flushing depends on various factors, most of which are site specific. Physical and mathematical models can be used to simulate the flushing operation, and based on the simulation results, the layout design and operation schedule of such projects can be modified for better sediment management. This paper presents drawdown flushing studies of the reservoir of the Kotlibhel hydroelectric project on a Himalayan river in Uttarakhand, India. For the hydraulic model studies, a 1:100 scale geometrically similar model was constructed. Simulation studies on the model indicated that drawdown flushing for a duration of 12 h with a discharge of 500 m³/s or more is effective in removing the annual sediment deposition in the reservoir. The model studies show that the sedimentation problem of the reservoir can be effectively managed through hydraulic flushing.
To reduce the embodied carbon dioxide of structural concrete, Portland cement (PC) in concrete can be partially replaced with ground granulated blast furnace slag (GGBS). In this research, the effect of partial replacement of cement with GGBS on the strength development of concrete cured under summer and winter curing environments is established. Three levels of cement substitution, i.e., 30%, 40% and 50%, were selected. The early-age strength of GGBS concrete is lower than that of normal PC concrete, which limits its use in fast-track construction and in post-tensioned beams subjected to high early loads. Strength gain under the winter curing condition was observed to be slower. By keeping the water–cement ratio as low as 0.35, concrete containing up to 50% GGBS can achieve high early-age strength. GGBS concrete gains more strength than PC concrete from the age of 28 days up to 56 days. The mechanical properties of the blended concrete at the various levels of cement replacement were observed to be higher than those of the control concrete mix containing no GGBS.
A novel intra-ply woven fabric polyester composite, with glass fibre yarns in one direction and natural fibre yarns in the other direction of a basket-type woven fabric, has been investigated for its mechanical and dynamic mechanical characteristics. Individual glass fibre woven fabric, natural fibre woven fabric and intra-ply natural fibre woven fabric composites are also investigated for comparison. The results reveal that intra-ply woven fabric hybridization enhances the impact and damping properties of the composite more significantly than the tensile and flexural properties. Intra-ply woven fabrics with glass fibre yarns in the warp direction and jute fibre yarns in the weft direction (WGWJ) exhibit better impact properties than woven fabrics with the other combinations. Dynamic mechanical analysis reveals that the intra-ply woven fabric composite with glass fibre yarns in the warp direction and jute and banana fibre yarns in the weft direction (WGWJAB) gives higher damping characteristics due to multi-level fibre–fibre and fibre–matrix interactions.