One of the main challenges for a Traffic Anomaly Detection (TAD) system is dealing with unknown target scenes, in which the system's ability to detect anomalies degrades. This paper introduces Adaptive Neuro-Fuzzy Inference System-Lossy-Count-based Topic Extraction (ANFIS-LCTE) for the classification of anomalies in source and target traffic scenes. The transfer learning property is achieved by transforming the input variables, learning the semantic rules in the source scene and transferring the model to the target scene. The proposed ANFIS-LCTE transfer learning model consists of four steps. (1) Low-level visual items are extracted only for motion regions using an optical flow technique. (2) Temporal transactions are created by aggregating visual items for each set of frames. (3) LCTE is applied to each set of temporal transactions to extract latent sequential topics. (4) ANFIS training is performed with the back-propagation gradient descent method. The proposed ANFIS model framework is tested on a standard dataset, and performance is evaluated in terms of training performance and classification accuracy. Experimental results confirm that the proposed ANFIS-LCTE approach performs well on both source and target datasets.
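The lossy-count step (3) can be sketched under the assumption that it follows the standard lossy counting streaming algorithm applied to a stream of visual-item transactions; the stream contents and error bound below are purely illustrative, not the paper's actual data:

```python
def lossy_count(stream, epsilon=0.1):
    """Illustrative lossy counting sketch (assumed interpretation of the
    LCTE step): keep approximate frequencies of items seen in a stream,
    pruning rare items at the end of each bucket of width 1/epsilon."""
    width = max(1, round(1 / epsilon))   # bucket width
    counts, deltas = {}, {}              # observed count, max undercount
    bucket = 1
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1
        if n % width == 0:               # end of bucket: prune rare items
            for key in [k for k in counts
                        if counts[k] + deltas[k] <= bucket]:
                del counts[key], deltas[key]
            bucket += 1
    return counts
```

Items whose true frequency exceeds `epsilon * n` are guaranteed to survive pruning, which is why frequent transaction patterns (candidate topics) are retained.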
This paper deals with the design and simulation of a three-phase shunt hybrid power filter consisting of a pair of 5th- and 7th-harmonic selective elimination passive power filters connected in series with a conventional active power filter of reduced kVA rating. The objective is to enhance power quality in a distribution network feeding a variety of non-linear, time-varying and unbalanced loads. The theory and modelling of the entire power circuit in terms of a synchronously rotating reference frame, leading to a non-linear control scheme, are presented. This work introduces individual fuzzy logic controllers for d- and q-axis current control and for voltage regulation of the DC link capacitor. The simulation schematic covering the power and control circuits has been developed taking into account severe harmonic distortion caused by non-linear and unbalanced loads. The effectiveness of the fuzzy logic controller for the compensation of harmonics and reactive power has been verified by successive simulation runs and analysis of the results. The proposed controller is also able to compensate the distortion generated by voltage- and current-fed non-linear loads as well as unbalanced and dynamically varying loads. Further, excellent regulation of the DC link voltage is accomplished, which contributes significantly to the improvement of power quality.
In this study, a new technique is suggested for the simplification of linear time-invariant systems. Motivated by optimization and the various system simplification techniques available in the literature, the proposed technique is formulated using Cuckoo search in combination with Lévy flight and eigenspectrum analysis. The efficacy and power of the new technique are illustrated on three benchmark systems taken from previously published work, and the results are compared in terms of performance indices.
The present study investigates the propagation of Rayleigh-type waves in a layer composed of an isotropic viscoelastic material of Voigt type, with the effect of a yielding base and a rigid base treated as two distinct cases. With the aid of an analytical treatment, closed-form expressions for the phase velocity and damped velocity in both cases are deduced. As a special case of the problem, the obtained results are found to be in good agreement with established standard results in the literature. The study establishes that the volume-viscoelastic and shear-viscoelastic material parameters and the yielding parameter have a significant effect on the phase and damped velocities of Rayleigh-type waves in both cases. Numerical calculations and graphical illustrations have been carried out for both cases in the presence and absence of viscoelasticity. A comparative study has been performed to analyse the effect of a layer with a yielding base, a traction-free base and a rigid base on the phase and damped velocities of Rayleigh-type waves.
In a nuclear power plant, periodic sensor calibration is necessary to ensure the correctness of measurements. Sensors that have drifted out of calibration can lead to malfunction of the plant, possibly causing a loss in revenue or damage to equipment. Continuous sensor status monitoring is desirable to assure smooth running of the plant and to reduce the maintenance costs associated with unnecessary manual sensor calibrations. In this paper, a method is proposed to detect and identify any degradation of sensor performance. The validation process consists of two steps: (i) residual generation and (ii) fault detection by residual evaluation. Singular value decomposition (SVD) and Euclidean distance (ED) methods are used to generate the residual and to evaluate the fault on the residual space, respectively. This paper shows that the SVD-based fault detection method outperforms the well-known principal component analysis-based method. The method is validated using data from a fast breeder test reactor.
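The two-step validation above can be sketched as follows; this is a minimal illustration assuming the usual subspace formulation (residual = part of a measurement not explained by the SVD-learned signal subspace), with synthetic data, rank and threshold that are placeholders rather than plant values:

```python
import numpy as np

# Synthetic "healthy" training data: 200 samples of 6 correlated sensors
# (illustrative only, not reactor data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)   # correlate two channels

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 3                                  # retained singular directions (assumed)
P = Vt[:k].T @ Vt[:k]                  # projector onto the signal subspace

def residual(x):
    """Step (i): part of a measurement not explained by the subspace."""
    xc = x - mean
    return xc - P @ xc

def is_faulty(x, threshold=1.0):
    """Step (ii): evaluate the residual by its Euclidean distance."""
    return np.linalg.norm(residual(x)) > threshold
```

A healthy measurement lying in the learned subspace yields a near-zero residual, so its Euclidean distance stays below the threshold; a drifted sensor pushes the measurement out of the subspace and inflates the distance.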
Nowadays, the number of software vulnerability incidents and the losses due to software vulnerabilities are growing exponentially. Existing security strategies and vulnerability detection and remediation approaches are not intelligent, automated or self-managed, and are not competent to combat vulnerabilities and security threats or to provide a secure, self-managed software environment to organizations. Hence, there is a strong need for an intelligent and automated approach to optimize security and to prevent or mitigate the occurrence of vulnerabilities. Autonomic computing is a nature-inspired, self-management-based computational model. In this paper, an autonomic-computing-based integrated framework is proposed to detect, trigger an alarm for, assess, classify, prioritize, mitigate and manage software vulnerabilities automatically. The proposed framework uses a knowledge base and an inference engine, which automatically take remediating actions on future occurrences of software security vulnerabilities through self-configuration, self-healing, self-prevention and self-optimization, as needed. The proposed framework benefits industry and society in various respects because it is an integrated, cross-concern and intelligent framework and provides a more secure, self-managed environment to organizations. The proposed framework reduces security risks and threats, as well as monetary and reputational losses. It can be embedded easily in existing software and incorporated or implemented as an inbuilt integral component of new software during software development.
The implementation of extended Kalman filter-based simultaneous localization and mapping is challenging because the associated system state and covariance matrices, along with the memory requirements, become significantly large as the information space increases. Unique and consistent point features representing a segment of the map are an optimal choice to control the size of the covariance matrix and to maximize the operating speed in a real-time scenario. A two-wheel differential drive mobile robot equipped with a laser range finder with 0.02 m resolution was used for the implementation. Unique point features were extracted from the environment through an elegant line-fitting algorithm, namely the split-only technique. The implementation showed remarkably good results, with a success rate of 98% in feature identification and ±0.08 to ±0.11 m deviation in the generated map.
Synthetic aperture radar (SAR) images are mainly affected by speckle noise. Speckle degrades the features in the image and reduces the ability of a human observer to resolve fine detail; hence, despeckling is essential for SAR images. This paper presents speckle noise reduction in SAR images using a combination of the curvelet transform and a fuzzy logic technique to restore speckle-affected images. This method overcomes the discontinuity of hard thresholding and the permanent deviation of soft thresholding. First, it decomposes the noisy image into different frequency scales using the curvelet transform, and then applies a fuzzy shrinking technique to the high-frequency coefficients to restore the noise-contaminated coefficients. The proposed method does not use a thresholding approach; speckle in the SAR image is suppressed solely by proper selection of the shrinking parameter. The experiment is carried out on RISAT-1 SAR images of different resolutions, and the results are compared with existing filtering algorithms in terms of noise mean variance (NMV), mean square difference (MSD), equivalent number of looks (ENL), noise standard deviation (NSD) and speckle suppression index (SSI). A comparison of the results shows that the proposed technique suppresses noise significantly, preserves the details of the image and improves its visual quality.
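The threshold-free shrinkage idea can be illustrated with a generic smooth shrinkage rule; the membership form, the parameter `gamma` and the omission of the curvelet decomposition itself are all assumptions of this sketch, not the paper's exact formulation:

```python
import numpy as np

def fuzzy_shrink(coeffs, sigma, gamma=2.0):
    """Sketch of smooth, threshold-free coefficient shrinkage: each
    high-frequency coefficient is scaled by a fuzzy membership of being
    signal rather than noise.  Unlike hard thresholding there is no
    discontinuity, and unlike soft thresholding large coefficients are
    passed through almost unchanged (no constant bias).
    sigma: estimated noise level; gamma: shrinking parameter (assumed)."""
    c = np.asarray(coeffs, dtype=float)
    membership = c**2 / (c**2 + gamma * sigma**2)   # smooth value in [0, 1)
    return membership * c
```

Small coefficients (likely speckle) are shrunk almost to zero, while coefficients much larger than the noise level are nearly preserved, so detail survives the despeckling.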
Owing to the constant advancement of computer tools, automated conversion of images of typed, handwritten and printed text is important for various applications, which has driven intense research for several years in the field of offline handwritten character recognition. Handwritten character recognition is complex because characters differ in writing style, shape and writing device. To address this problem, we propose a fuzzy-based multi-kernel spherical support vector machine. Initially, the input image is fed into a pre-processing step to acquire suitable images. Then, the histogram of oriented gradients (HOG) descriptor is utilised for feature extraction. The HOG descriptor comprises histogram estimation and a normalisation computation. The features are then classified using the proposed classifier for character recognition. In the proposed classifier, we design a new multi-kernel function based on the fuzzy triangular membership function. Finally, the newly developed multi-kernel function is incorporated into the spherical support vector machine to enhance performance significantly. The experimental results are evaluated in MATLAB, and performance is analysed with metrics such as false acceptance rate, false rejection rate and accuracy. The performance is then compared with that of existing systems based on the percentage of training data samples. The proposed system attains an accuracy of 99%, higher than that of the existing systems, which ensures efficient recognition performance.
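The two ingredients named above, a HOG-style gradient histogram and the fuzzy triangular membership function, can be sketched as follows; this is a minimal single-cell illustration (no cell/block tiling), and the bin count and membership parameters are assumptions of the sketch:

```python
import numpy as np

def hog_like_descriptor(img, bins=9):
    """Minimal HOG-style sketch: histogram of unsigned gradient
    orientations weighted by gradient magnitude, L2-normalised.
    (Single cell only; the full HOG tiles the image into cells/blocks.)"""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientations
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def triangular(x, a, b, c):
    """Fuzzy triangular membership: 0 at a and c, peaking at 1 for x = b.
    A multi-kernel could weight base kernels with such memberships."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))
```

For a horizontal intensity ramp, all gradient energy falls into the zero-orientation bin and the descriptor has unit L2 norm after normalisation.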
This paper proposes the structural design and multi-objective optimization of a two-degree-of-freedom (DOF) monolithic mechanism. The mechanism is designed as a compliant mechanism with flexure hinges and is compact in size (126 mm by 107 mm). Unlike traditional one-lever mechanisms, a new double-lever mechanism is developed to increase the working-travel amplification ratio of the monolithic mechanism. The ideal amplification ratio, the working travel, the statics and the dynamics of the mechanism are taken into consideration. The effects of the design variables on the output responses, such as the displacement and the first natural frequency, are investigated via finite-element analysis based on response surface methodology. The fuzzy-logic-based Taguchi method is then used to optimize the displacement and the first natural frequency simultaneously. Experimental validations are conducted to verify the optimal results, which are compared with those of the original design. Using a finite-element method, the validation results indicate that the displacement and frequency are enhanced by up to 12.47% and 33.27%, respectively, over those of the original design. The experimental results are in good agreement with the simulations. The study also reveals that the developed fuzzy-logic-based Taguchi method is an effective, systematic reasoning approach for optimizing the multiple quality characteristics of compliant mechanisms. Notably, the working travel/displacement of the double-lever mechanism is much larger than that of the traditional one-lever mechanism, leading to the conclusion that the proposed mechanism performs well for manipulation and positioning systems.
A methodology to identify partial blockages in a simple pipeline using genetic algorithms for non-harmonic flows is presented in this paper. A sinusoidal flow generated by the periodic on-and-off operation of a valve at the outlet is investigated in the time domain, and it is observed that the pressure variation at the valve is influenced by the opening size of a blockage and its location. In this technique, the unsteady (steady oscillatory) pressure time series at only one location is required to identify two blockages. In the proposed methodology, the solution of the governing hyperbolic PDEs of pipe flow is obtained using the method of characteristics. For any piping system similar to the hypothetical pipe system used in the simulations, the generalized best amplitude and best frequency of the valve operation are determined, which give the maximum deviation in pressure responses for a specific blockage at different locations for a given constant-head reservoir. The generalized best amplitude and best frequency of the valve operation are also obtained for two blockages. The accuracy of the proposed methodology in identifying blockages in a hypothetical simple pipe system with increased noise in the simulated measurements is studied. A non-dimensional variable is proposed to determine whether the proposed methodology is applicable for isolating partial blockages in a piping system. Finally, the proposed methodology is experimentally validated on a laboratory piping system for a single blockage and for two blockages.
Dam failure has been the subject of many hydraulic engineering studies owing to its complicated physics, the many uncertainties involved and its potential to cause loss of life and economic damage. A primary source of uncertainty in many dam failure analyses is the prediction of the reservoir's outflow hydrograph, which is studied in the present investigation. This paper presents an experimental study of instantaneous dam failure floods under different reservoir capacities and lengths, in which the side slopes vary within a range of 30°–90°. Several outflow hydrographs are thus calculated and compared. The results reveal the role of the side slopes in the dam-break flood wave, such that a lower side slope creates a more catastrophic outflow. The reservoir capacity and length are also recognized as important factors, in that they affect the peak discharge and the time to peak of the outflow hydrograph. Finally, the paper presents two simple relations for estimating the peak discharge and the maximum water level at any downstream location.
A new analytical solution is derived for tide-driven groundwater waves in coastal aquifers using a higher-order Boussinesq equation. The homotopy perturbation solution is derived using a virtual perturbation approach without any pre-defined physical parameters. The secular term is removed using a combination of parameter expansion and an auxiliary term. This approach is unique compared with existing perturbation solutions. The present first-order solution compares well with previous analytical solutions and a 2D FEFLOW solution for a steep beach slope. This is because the higher-order Boussinesq equation captures the streamlines better than the ordinary Boussinesq equation based on Dupuit's assumption. The slope of the beach emerges as an implicit physical parameter from the solution process.
The barrel finishing (BF) process is widely used to improve the surface finish and dimensional features of metallic and non-metallic parts using different types of media. The change in the Shore hardness (SH) of a fused deposition modelling (FDM)-based master pattern is one of the important considerations from its service point of view. The main objective of the present research work is to investigate the effect of the BF process on the SH of acrylonitrile–butadiene–styrene (ABS)-based master patterns prepared by FDM. Six controllable parameters of FDM and BF, namely geometry of the prototype, layer density, part orientation, type of BF media, weight of media and finish cycle time, were studied using Taguchi's L18 orthogonal array in order to determine their effect on the SH of the master pattern. The results indicate that the process parameters significantly affect the SH of the master patterns. It has been found that the FDM part layer density contributed the most (about 67.52%) to the SH of the master patterns.
In this study, the thermal performances of single- and counter-flow solar air heaters with a normal cover and with quarter- and half-perforated covers were investigated experimentally. On two of the perforated covers, the holes were made in the first quarter of the top side of the cover; on the other two covers, half of the cover area on the top side was perforated. The hole diameter, D, was 0.3 cm, and the holes had a centre-to-centre distance of 20D (6 cm) or 10D (3 cm). It was found that the efficiency of the air heater with the quarter-perforated cover was slightly higher than that of the one with the half-perforated cover for both single- and counter-flow collectors. The average efficiencies of the double-pass solar collector with 20D and 10D quarter-perforated covers were 51.38% and 54.76%, respectively, and those for the collector with 20D and 10D half-perforated covers were 48.21% and 51.17%, respectively, at a mass flow rate of 0.032 kg/s. At the same mass flow rate, the average efficiency of the double-pass air heater with the normal cover was 50.92%.
Numerical modelling is broadly used for assessing complex scenarios in underground mines, including the mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used by Canadian hard-rock metal mines to exploit steeply dipping ore bodies. Mine backfill is an important constituent of the mining process, and its numerical modelling needs special attention because the numerical model must behave realistically and in accordance with site conditions. This paper discusses a numerical modelling strategy for mine backfill material. The modelling strategy is examined using a case study mine from the Canadian mining industry. Finally, the results of a numerical model parametric study are presented and discussed.
Various initiatives, strategies and programmes have been undertaken by the Government of Malaysia to resolve issues pertaining to road traffic deaths. Nevertheless, the implementation of the programmes outlined in the Malaysian Road Safety Plan 2006 needs to be enhanced in order to achieve the set targets. In this regard, it is imperative for all parties concerned with road safety to determine the factors that contribute significantly to road traffic deaths. According to the Ministry of Works, Malaysia, the blackspot treatment programme (which is centred on the elimination of road hazards through engineering approaches) has been successful in reducing the number of injuries due to road traffic accidents up to a certain extent. This study focuses on analysing road traffic deaths caused by various road environment elements recorded by the police from 2000 to 2011 in order to determine their distribution, proportion and relationship with fatal accidents. The Chi-square test and the Marascuilo procedure at the 5% level of significance are used in this study. By locality, the number of road traffic deaths in rural areas (66%) is significantly higher than that in urban areas (34%). By road category, the number of road traffic deaths is highest for federal roads, whereas the highest rate of fatalities per kilometre is recorded for expressways. By road segment, the number of road traffic deaths is highest for straight road segments, followed by bends. In addition, the number of road traffic deaths is highest for Y/T junctions, followed by cross junctions; the lowest numbers are recorded for interchanges and roundabouts. The results show that only 11.25% of total road traffic deaths are related to road defects. The highest proportion of deaths due to road defects (48.6%) is associated with a lack of street lighting, whereas road shoulder edge drop-offs and potholes contribute 15.4% and 11.2% of road traffic deaths, respectively.
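The Chi-square comparison used above reduces to the Pearson statistic on category counts; the counts in the example below are illustrative only (the rural/urban percentages from the abstract, treated as counts against an equal-split expectation), not the study's raw data:

```python
import numpy as np

def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over categories.
    Compare against the critical value for the chosen significance level
    and degrees of freedom (3.841 for df = 1 at the 5% level)."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    return float(np.sum((o - e) ** 2 / e))

# Illustrative check: 66 rural vs 34 urban deaths against a 50/50 split.
stat = chi_square_stat([66, 34], [50, 50])   # exceeds 3.841, so significant
```

Since the computed statistic exceeds the 5% critical value for one degree of freedom, the rural/urban difference would be declared significant, consistent with the abstract's finding.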
Bearing-only passive target tracking is a well-known underwater defence problem, dealt with in the recent past using conventional nonlinear estimators such as the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). It is nowadays treated with derivatives of the EKF and UKF and with the highly sophisticated particle filter (PF). In this paper, two novel methods based on the Estimate Merge Technique are proposed. The Estimate Merge Technique obtains a final estimate by fusing the a posteriori estimates given by different nonlinear estimators, which are in turn driven by the towed-array bearing-only measurements. The fusion of the estimates is done with a weighted least squares estimator (WLSE). The two novel methods, one named the Pre-Merge UKF and the other the Post-Merge UKF, differ in the way the feedback to the individual UKFs is applied. These novel methods have the advantage of lower root mean square estimation error in position and velocity compared with the EKF and UKF, while at the same time requiring far fewer computations than the PF, showing that these filters can serve as an optimal estimator. The aforementioned advantages of the proposed novel methods are demonstrated by Monte Carlo simulation in MATLAB R2009a for a typical wartime scenario.
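The WLSE fusion step can be sketched as the standard inverse-covariance-weighted combination of filter estimates; this assumes cross-correlations between the filters are neglected, which is a simplification of this sketch rather than a claim about the paper's exact formulation:

```python
import numpy as np

def wlse_fuse(estimates, covariances):
    """Sketch of the Estimate Merge step: fuse a posteriori estimates from
    several filters with a weighted least squares estimator, using inverse
    covariances as weights (filter cross-correlations neglected here).
    Returns the fused estimate and its covariance."""
    infos = [np.linalg.inv(P) for P in covariances]       # information matrices
    P_fused = np.linalg.inv(sum(infos))                   # fused covariance
    x_fused = P_fused @ sum(W @ x for W, x in zip(infos, estimates))
    return x_fused, P_fused
```

For two equally confident scalar estimates the fusion reduces to their average, with the fused variance halved, which is the intuition behind merging several UKF outputs into a single, lower-error estimate.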