
      Volume 44, Issue 7

      July 2019

    • An improved design for cellular manufacturing system associating scheduling decisions

      R SUBHAA N JAWAHAR S G PONNAMBALAM


      This paper presents a model for the design of a Cellular Manufacturing System (CMS) that simultaneously evolves the structural design decisions of Cell Formation (CF) and the operational decisions of an optimal schedule. This integrated decision approach is important for designing a better-performing cell. The model allows machine duplication and incorporates cross-flow for scheduling flexibility. Cross-flow is the term introduced here to denote the inter-cell movement of parts from the parent cell to identical machines in other cells even though machines are available in the parent cell. This cross-flow provides routing flexibility and paves the way for a reduced schedule length, thereby optimizing resources and minimizing operational cost. A non-linear integer mathematical programming model is formulated with the objective of minimizing operating cost, which is the sum of Machine Utility Cost (MUC) and inter-cell costs. The MUC is a new cost parameter based on machine utility that integrates CF, scheduling, and machine duplication decisions. The proposed model belongs to the class of NP-hard problems. A hybrid heuristic (HH) in which a Simulated Annealing Algorithm (SAA) is embedded within a Genetic Algorithm (GA) is proposed. A comparison with the mathematical solution reveals that the proposed HH provides near-optimal solutions in a computationally efficient manner. The model is validated by studying the effect of integrated decisions, machine duplication, and the association of scheduling and cross-flow. The validation reveals that the proposed CMS model evolves CF, scheduling, and machine duplication decisions with minimum operating cost. It can thus be inferred that the proposed model gives optimal integrated decisions for designing effectively and efficiently performing cells and thereby evolves improved CMS design decisions.
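
      The hybrid heuristic pairs a population-based GA with an SA refinement step applied to offspring. The Python sketch below shows only that GA-with-embedded-SA structure; the encoding, operators and cost() function are illustrative placeholders, not the authors' cell-formation/scheduling formulation.

```python
import math
import random

# Minimal sketch of a hybrid heuristic in which Simulated Annealing (SA) is
# embedded inside a Genetic Algorithm (GA). The cost() function, encoding and
# operators below are illustrative placeholders, not the paper's formulation.

def cost(solution):
    # Placeholder for the operating cost (MUC + inter-cell costs) of a candidate.
    return sum(solution)

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(solution):
    s = solution[:]
    s[random.randrange(len(s))] = random.randint(0, 9)
    return s

def sa_refine(solution, temp=1.0, cooling=0.9, steps=20):
    """SA local refinement applied to each GA offspring."""
    best = solution
    for _ in range(steps):
        cand = mutate(best)
        delta = cost(cand) - cost(best)
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            best = cand
        temp *= cooling
    return best

def hybrid_heuristic(pop_size=20, genes=8, generations=50):
    population = [[random.randint(0, 9) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[:pop_size // 2]
        children = [sa_refine(crossover(random.choice(parents),
                                        random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=cost)

print(hybrid_heuristic())
```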

    • Information extraction framework for Kurunthogai

      C N SUBALALITHA


      Kurunthogai is a classical Tamil poetic masterpiece; it is the second book of Ettuthokai, one of the Sangam literary works. The poems of Kurunthogai express the love life of men and women who lived during the Sangam age. Kurunthogai is a massive work written by many authors. The poems are written around five different landscapes, namely Kurinchi, Mullai, Marutham, Neythal, and Paalai, and therefore contain much valuable historical information related to these landscapes. This paper proposes a template-based Information Extraction (IE) framework for Kurunthogai which automatically extracts the names of flora, fauna, foods, vessels, and water bodies described in it. Furthermore, it extracts Noun Unigrams, Verb Unigrams, Adjective-Noun Bigrams, and Adverb-Verb Bigrams. A Tamil Morphological Analyzer tool has been used to extract the N-grams. State-of-the-art IE techniques have attempted to extract information from expository texts, whereas the proposed IE framework extracts information from literary text. The existing techniques extract information from monolingual texts, whereas the proposed IE framework extracts information from bilingual texts. The proposed IE framework has achieved a precision of 88.8%. The framework can be applied to any type of literary text and can be used in various Natural Language Processing applications.
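
      The N-gram extraction step can be illustrated as follows, assuming the morphological analyser returns (word, POS) pairs; the tag names and the sample tokens are hypothetical, not the tool's actual output format.

```python
# Sketch of the N-gram extraction step, assuming the morphological analyser
# returns (word, POS) pairs; tag names and sample tokens are hypothetical.

def extract_ngrams(tagged_tokens):
    noun_unigrams, verb_unigrams = [], []
    adj_noun_bigrams, adv_verb_bigrams = [], []
    for i, (word, pos) in enumerate(tagged_tokens):
        if pos == "NOUN":
            noun_unigrams.append(word)
        elif pos == "VERB":
            verb_unigrams.append(word)
        if i + 1 < len(tagged_tokens):
            nxt_word, nxt_pos = tagged_tokens[i + 1]
            if pos == "ADJ" and nxt_pos == "NOUN":
                adj_noun_bigrams.append((word, nxt_word))
            elif pos == "ADV" and nxt_pos == "VERB":
                adv_verb_bigrams.append((word, nxt_word))
    return noun_unigrams, verb_unigrams, adj_noun_bigrams, adv_verb_bigrams

sample = [("small", "ADJ"), ("bird", "NOUN"), ("swiftly", "ADV"), ("flies", "VERB")]
print(extract_ngrams(sample))
```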

    • A new batch verification scheme for ECDSA* signatures

      APURVA S KITTUR ALWYN R PAIS


      In this paper, we propose an efficient batch verification algorithm for ECDSA* (Elliptic Curve Digital Signature Algorithm) signatures. Our scheme is efficient for both single and multiple signers. ECDSA* is a modified version of ECDSA that accelerates verification of an ECDSA signature by more than 40%. The highlighting feature of our proposed scheme is its efficiency across varied batch sizes. The scheme is resistant to forgery attacks by either the signer or an intruder, and its performance remains consistent for larger batch sizes (≥ 8) as well. The paper also discusses possible attacks on ECDSA signatures and how our scheme resists such attacks.
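
      Because an ECDSA* signature carries the full point R rather than only its x-coordinate, a batch of t signatures can in principle be checked with one combined equation. A standard naive batch-verification identity, shown for illustration only (the paper's exact test and its protections are not reproduced here), is

\[
\sum_{i=1}^{t} R_i \;=\; \Bigl(\sum_{i=1}^{t} e_i s_i^{-1} \bmod n\Bigr) G \;+\; \sum_{i=1}^{t} \bigl(r_i s_i^{-1} \bmod n\bigr) Q_i ,
\]

      where $G$ is the base point of order $n$, $Q_i$ the signer's public key, $e_i$ the message hash, $(R_i, s_i)$ the ECDSA* signature and $r_i = x(R_i) \bmod n$. Practical batch verifiers typically weight each term with small random multipliers so that an adversary cannot cancel errors across signatures.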

    • A four-layered model for flow of non-Newtonian fluid in an artery with mild stenosis

      R PONALAGUSAMY RAMAKRISHNA MANCHI


      The present article deals with a four-layered mathematical model for blood flow through an artery with mild stenosis. The four-layered model comprises a cell-rich core (a suspension of all the erythrocytes) described as a non-Newtonian (Jeffrey) fluid, a peripheral zone of cell-free plasma (Newtonian fluid), and the stenosed artery with a porous wall consisting of a thin transition (Brinkman) layer followed by a Darcy region. Analytical expressions have been obtained for the velocity profiles in all four regions, the total volumetric flow rate, the wall shear stress and the flow impedance. MATLAB software is employed to compute numerical values of the pressure gradient. The influences of different parameters, such as variable core fluid viscosity, hematocrit, thickness of the plasma layer, Brinkman and Darcy layer thickness, Darcy number, Jeffrey fluid parameter, and size and shape parameters of the stenosis, on the physiologically vital flow characteristics, specifically the velocity profile, volume flow rate, wall shear stress and flow impedance, have been examined. It is observed that the wall shear stress and resistive impedance decrease with increasing plasma layer thickness, Jeffrey fluid parameter, Darcy number and Darcy slip parameter, and increase with rising hematocrit. The results for variable core viscosity and constant core viscosity are compared to investigate the impact of variable core viscosity in governing the flow of blood.

    • Numerical investigation of the effect of second order slip flow conditions on interfacial heat transfer in micro pipes

      SONER ŞEN


      Heat transfer in micro-scale devices plays an important role in engineering applications involving cooling or heating. The heat transfer mechanism in devices with micron-level dimensions is a fundamentally different problem from macro-level analysis. Therefore, flow and heat transfer in micron-scale pipes must be described using more realistic expressions. In this study, conjugate (wall–fluid) heat transfer in a circular micro pipe is investigated for laminar rarefied gas flow in the transient regime under second-order slip boundary conditions at the interface. Patankar's control volume method is used to solve the problem numerically. The analysis includes axial conduction, viscous dissipation and rarefaction effects, which are indispensable in micro-flow analysis. The results show that the quantities indicating heat transfer are strongly affected by wall thickness, viscous heating and gas rarefaction, especially in the transient regime.
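
      A commonly used general form of the second-order velocity-slip condition at a gas–wall interface (the exact coefficients adopted in the paper are not given in the abstract) is

\[
u_s - u_w = C_1 \lambda \left.\frac{\partial u}{\partial n}\right|_{w} + C_2 \lambda^{2} \left.\frac{\partial^{2} u}{\partial n^{2}}\right|_{w},
\]

      where $\lambda$ is the molecular mean free path, $n$ the wall-normal coordinate, and $C_1$, $C_2$ the first- and second-order slip coefficients (sign conventions for the second-order term vary in the literature); an analogous second-order temperature-jump condition is applied at the interface.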

    • Optimisation of density of infra-red decoy flare pellets by Taguchi method

      SUKAMAL ADHIKARY HIMANSHU SEKHAR DINESH G THAKUR


      Magnesium/Teflon/Viton (MTV) pyrotechnic composition has been widely preferred for preparing decoy flares as countermeasures against heat-seeking infra-red (IR) missiles. Though MTV flares for military applications are available in the global market, the manufacturing process and performance characteristics of these flares have not been explicitly defined. The pellets, which are an essential sub-assembly of the flares, need to be studied extensively to develop these flares for military applications. This paper attempts to optimise the density of compacted 50 mm diameter cylindrical pellets. The pyrotechnic composition is initially subjected to various sensitivity tests, namely impact, friction and spark, to assess the threshold values for initiation of the composition. Three levels of process parameters for pelleting have been considered, and an L27 array has been selected to represent the process parameters, namely charge mass (A), applied load (B), dwell time (C), and their interactions. The Taguchi robust experiment method arrived at the optimal setting A1B3C3 (100 g of charge mass, 8 tons of applied load and 20 s of dwell time). Analysis of Variance (ANOVA) highlighted that parameters A and B significantly influence the density of the pellets. Finally, a general regression equation was derived with an R² value of 0.94.
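
      For a larger-the-better characteristic such as pellet density, the Taguchi S/N ratio of a trial is computed from its replicate measurements; a minimal sketch (with made-up density values) is shown below.

```python
import math

# Taguchi signal-to-noise ratio, larger-the-better form, as applied to the
# replicate density measurements of one L27 trial (values are made up).
def sn_larger_the_better(values):
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / len(values))

trial_densities = [1.82, 1.85, 1.80]   # g/cm^3, hypothetical replicates
print(round(sn_larger_the_better(trial_densities), 3))
```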

    • Towards smartphone-based touchless fingerprint recognition

      PARMESHWAR BIRAJADAR MEET HARIA PRANAV KULKARNI SHUBHAM GUPTA PRASAD JOSHI BRIJESH SINGH VIKRAM GADRE


      The widely used conventional touch-based fingerprint identification system has drawbacks such as elastic deformation due to non-uniform pressure, fingerprint collection time and hygiene concerns. To overcome these drawbacks, touchless fingerprint technology has recently been gaining popularity and various touchless fingerprint acquisition solutions have been proposed. Owing to the wide use of smartphones in biometric applications, smartphone-based touchless fingerprint systems using an embedded camera have been proposed in the literature. These touchless fingerprint images are very different from conventional ink-based and live-scan fingerprints. Due to varying contrast, illumination and magnification, existing touch-based fingerprint matchers do not perform well in extracting reliable minutiae features. A touchless fingerprint recognition system using a smartphone is proposed in this paper, which incorporates a novel monogenic-wavelet-based algorithm for enhancement of touchless fingerprints using phase congruency features. For the comparative performance analysis of our system, we created a new touchless fingerprint database using the developed Android app, and it is made publicly available, along with its corresponding live-scan images, for further research. Experimental results on this database in both verification and identification modes are obtained using three widely used touch-based fingerprint matchers. The results show a significant improvement in Rank-1 accuracy and equal error rate (EER) achieved using the proposed system, and the results are comparable to those of the touch-based system.

    • Histogram-Equalized Hypercube Adaptive Linear Regression for Image Quality Assessment

      N BALAKRISHNAN S P SHANTHARAJAH


      Image Quality Assessment (IQA) has become highly important in several applications, namely image acquisition, watermarking, image compression, image transmission and image enhancement, due to the extensive use of digital images. In the past decades, considerable advances in IQA have been made using the Region of Interest (ROI). However, ROI localization is a labour-intensive process that requires multiple sliding-window passes in search of the proper ROI. The efficiency of examination, the time taken for ROI localization, and the assessed image quality can all be improved by the proposed Histogram-Equalized Hypercube Adaptive Linear Regression (HE-HALR) scheme. The HE-HALR scheme first performs a pre-processing step on the input images, in which the features describing image quality are analysed using the Histogram-Equalization-based Contrast Masking (HE-CM) model. The HE-CM model performs ROI localization with parallel programming, identifying the contrast masking and luminance values in parallel. With the resulting feature vectors, dimensionality reduction is performed using a machine learning technique, namely the hypercubical neighbourhood. Finally, IQA is performed on the dimensionality-reduced features using Adaptive Linear Regression.
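
      The HE-CM step builds on ordinary grey-level histogram equalization; a minimal NumPy sketch of that underlying operation is given below (the contrast-masking and parallelization details of HE-CM are not reproduced).

```python
import numpy as np

# Minimal grey-level histogram equalization, the operation underlying the
# HE-CM pre-processing step; contrast masking itself is not shown here.
def histogram_equalize(image):
    hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[image]

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(histogram_equalize(img))
```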

    • Performance monitoring of wind turbines using advanced statistical methods

      ANIL KUMAR KUSHWAH RAJESH WADHVANI


      Estimation of wind power generation for grid interfacing helps in calculating the annual energy production, which maintains the balance between electricity production and consumption. For this purpose, accurate wind speed forecasting plays an important role. In this paper, linear statistical predictive models such as the autoregressive integrated moving average (ARIMA), the generalized autoregressive score (GAS) model and the GAS model with an exogenous variable x (GASX) have been applied for accurate wind speed forecasting. Along with these, a non-linear statistical predictive modelling technique called non-linear GASX (NLGASX) has been proposed and applied to model non-linear time-series data. Furthermore, the proposed NLGASX model is optimized using neural-network activation functions, namely Sigmoid, TANH, Softmax and ReLU. The proposed optimized NLGASX model performs far better than the other models. The forecast wind speed is also used as an input to a wind power curve model for predicting the wind power, from which the annual energy is calculated.
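
      As an illustration of the linear baseline, an ARIMA model can be fitted to a wind-speed series with statsmodels; the synthetic series and the (p, d, q) order below are arbitrary, and the GAS/GASX and proposed NLGASX models are not part of this sketch.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative ARIMA baseline for wind-speed forecasting; the synthetic
# series and the (p, d, q) order are arbitrary, not taken from the paper.
rng = np.random.default_rng(0)
wind_speed = 8.0 + np.cumsum(rng.normal(0.0, 0.3, size=200))

fitted = ARIMA(wind_speed, order=(2, 1, 1)).fit()
print(fitted.forecast(steps=24))   # next 24 wind-speed predictions
```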

    • Estimation of azimuth of a macro cell through user data for LTE access network

      BRIJESH SHAH GAURAV DALWADI RAHUL BHASKER HARDIP SHAH NIKHIL KOTHARI


      The number of antennas on a site increases due to the simultaneous deployment of multi-band and multi-mode radios to meet rapidly growing data demand in the network. Correct values of the physical parameters of antennas, including azimuth, height and tilt, are essential to optimize the radio frequency (RF) network automatically. Poor results in RF network optimization are seen to be mainly due to incorrect azimuth. The proposed algorithm can estimate the azimuth of an antenna in the field using passive monitoring data from user equipment. It has been developed to identify the correct value of azimuth without a field audit, which can significantly reduce optimization time and operational expenditure (OPEX). The field trial reveals that the estimated azimuth matches the actual value in the field within ±12°. Moreover, field results show that the same algorithm is equally applicable to urban and rural morphologies. It can also be automated to sanitize the physical site database with correct azimuth values at scale without introducing human error.
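
      The abstract does not detail the estimation algorithm; one simple way to illustrate the idea of inferring a sector azimuth from passively collected user reports is a signal-weighted circular mean of user bearings from the site. This is purely a hypothetical illustration, not the authors' method.

```python
import math

# Hypothetical illustration only: estimate a sector azimuth as the
# signal-strength-weighted circular mean of user bearings from the site.
def estimate_azimuth(bearings_deg, weights):
    s = sum(w * math.sin(math.radians(b)) for b, w in zip(bearings_deg, weights))
    c = sum(w * math.cos(math.radians(b)) for b, w in zip(bearings_deg, weights))
    return math.degrees(math.atan2(s, c)) % 360.0

bearings = [110, 118, 125, 131, 140]        # degrees from true north (made up)
rsrp_weights = [1.0, 1.4, 1.8, 1.3, 0.9]    # stronger reports weighted higher
print(round(estimate_azimuth(bearings, rsrp_weights), 1))
```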

    • Design of two-dimensional PR quincunx filter banks with Euler–Frobenius polynomial and lifting scheme

      MUKUND B NAGARE BHUSHAN D PATIL RAGHUNATH S HOLAMBE


      Two-dimensional (2-D) filter banks (FBs) play a significant role in retrieving the directional information of images. In this paper, we propose a technique to design 2-D two-channel perfect reconstruction (PR) FBs with quincunx sampling. The proposed design method comprises two stages. In the first stage, we propose the design of a new halfband polynomial using the Euler–Frobenius polynomial (EFP). This is constructed by imposing vanishing-moment and PR constraints on the EFP. The resulting polynomial is a maximally flat Euler–Frobenius halfband polynomial (EFHBP). In the second stage, the EFHBP is used in a modified 2-D lifting scheme to design the 2-D filters. Design examples for 2-D filters are presented and compared with existing filters. The comparison shows that the proposed filters have better regularity and symmetry and lower error energy than existing FBs. Finally, the performance of the designed filters is evaluated in an image denoising application.
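
      The predict/update structure of the lifting scheme is easiest to see in one dimension; the sketch below uses the classic linear predict and update pair, not the paper's 2-D quincunx EFHBP-based filters.

```python
# Minimal 1-D lifting step (split / predict / update) with the classic
# linear predict and update pair; the 2-D quincunx EFHBP filters of the
# paper are not reproduced here.
def lifting_forward(signal):
    even, odd = signal[0::2], signal[1::2]
    # Predict: detail = odd sample minus the average of its even neighbours.
    detail = [odd[i] - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
              for i in range(len(odd))]
    # Update: smooth the even samples with the detail coefficients.
    approx = [even[i] + 0.25 * (detail[max(i - 1, 0)] + detail[min(i, len(detail) - 1)])
              for i in range(len(even))]
    return approx, detail

print(lifting_forward([4, 6, 5, 7, 6, 8, 7, 9]))
```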

    • Distributed cat modeling based agile framework for software development

      B PRAKASH V VISWANATHAN


      Software development is a challenging process that requires in-depth understanding and an effective model so that the developed software inherits good quality and reliability and attains customer satisfaction, thereby achieving its goals successfully. The effectiveness of the software is enabled by modifying the operating modules of the software through a model, such as agility. In this paper, catastrophic and distributed computing models are integrated into the software development process. The proposed model, termed the distributed cat model, is developed to handle the risk factors involved in the various development stages of the agile model. The risk factors affecting the communication, planning, release, design, coding and testing modules of the agile model are studied in depth and handled by the corresponding modules of the proposed distributed cat model. The effectiveness of the proposed model is analysed based on performance metrics such as the Index of Integration (IoI) and the Usability Goals Achievement Metric (UGAM), for which five products, including a hotel management system, a Customer Relationship Management (CRM) system, a rainfall prediction system, a temperature monitoring system and a meta-search system, are employed. The analysis uses parameters such as mean difference, variance, standard deviation and correlation coefficient. The results show that the proposed model offers a significant positive deviation, contributing to a high degree of performance in software development.

    • Experimentation modelling and optimization of electrohydrodynamic inkjet microfabrication approach: a Taguchi regression analysis

      AMIT KUMAR BALL RAJU DAS SHIBENDU SHEKHAR ROY DAKSHINA RANJAN KISKU NARESH CHANDRA MURMU


      Electrohydrodynamic (EHD) inkjet is a modern non-contact printing approach that uses direct writing of functional materials to achieve micro/nanoscale printing resolution. As an alternative to conventional inkjet technology, the goal of EHD inkjet printing is to generate uniformly minimized droplets on a substrate. In this study, the effects of applied voltage, standoff height and ink flow rate on droplet diameter formation in the EHD inkjet printing process were analysed using the Taguchi methodology and regression analysis. Several experiments were carried out using an L27 (3^13) orthogonal array. Based on the signal-to-noise (S/N) ratio and the mean response, the optimal droplet diameter was achieved. Analysis of variance (ANOVA) was used to find the significance and percentage contribution of each input parameter, along with their interactions, on the output droplet diameter. Analysis of the results revealed that the ink flow rate was the dominant factor affecting the droplet diameter. The effect of the applied voltage is significant until regular ejection starts; it helps reduce the droplet diameter by more than a factor of five compared with the initial droplet diameter in the absence of the electric field. A confirmation test was carried out at a 90% confidence level to illustrate the effectiveness of the Taguchi optimization method. Both linear and quadratic regression analyses were applied to predict the output droplet diameter. The predicted results from the models and the actual test results are very close to each other, justifying the significance of the models.
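
      The regression step relates droplet diameter to the process parameters; restricted to a single predictor it reduces to a polynomial fit, sketched below with made-up data points.

```python
import numpy as np

# Quadratic regression of droplet diameter on one process parameter
# (ink flow rate); the data points are made up for illustration only.
flow_rate = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # uL/min (hypothetical)
diameter = np.array([18.0, 14.5, 12.8, 12.1, 12.5])  # um     (hypothetical)

coeffs = np.polyfit(flow_rate, diameter, deg=2)       # quadratic model
predicted = np.polyval(coeffs, flow_rate)
ss_res = np.sum((diameter - predicted) ** 2)
ss_tot = np.sum((diameter - diameter.mean()) ** 2)
print("coefficients:", coeffs, "R^2:", 1 - ss_res / ss_tot)
```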

    • Word Sense Disambiguation in Bengali language using unsupervised methodology with modifications

      ALOK RANJAN PAL DIGANTA SAHA


      In this work, Word Sense Disambiguation (WSD) in the Bengali language is implemented using an unsupervised methodology. In the first phase of the experiment, sentence clustering is performed using the Maximum Entropy method and the clusters are labelled with their innate senses by manual intervention, so that these sense-tagged clusters can be used as sense inventories in subsequent experiments. In the next phase, when a test instance is to be disambiguated, the Cosine Similarity Measure is used to find its closeness to the initially sense-tagged clusters, and the test instance is assigned the sense of the cluster to which it is closest. This strategy is considered the baseline, and it produces 35% accuracy in the WSD task. Next, two extensions are adopted over this baseline: (a) Principal Component Analysis (PCA) over the feature vector, which produces 52% accuracy, and (b) context expansion of the sentences using the Bengali WordNet coupled with PCA, which produces 61% accuracy. The data sets used in this work are obtained from the Bengali corpus developed under the Technology Development for Indian Languages (TDIL) project of the Government of India, and the lexical knowledge base (the Bengali WordNet) used in the work was developed at the Indian Statistical Institute, Kolkata, under the Indradhanush Project of DeitY, Government of India. The challenges and pitfalls of this work are also described in detail in the pre-conclusion section.
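
      The second-phase assignment step reduces to picking the sense-tagged cluster whose centroid is most cosine-similar to the test vector; a minimal sketch over toy feature vectors (the actual Bengali features are not reproduced) follows.

```python
import math

# Assign a test vector the sense of the most cosine-similar cluster centroid.
# The toy vectors below are illustrative, not actual Bengali features.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def assign_sense(test_vec, sense_centroids):
    return max(sense_centroids, key=lambda s: cosine(test_vec, sense_centroids[s]))

centroids = {"sense_1": [3, 0, 1, 0], "sense_2": [0, 2, 0, 4]}
print(assign_sense([1, 0, 2, 0], centroids))   # -> sense_1
```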

    • A Pareto design of evolutionary hybrid optimization of ANFIS model in prediction of abutment scour depth

      HAMED AZIMI HOSSEIN BONAKDARI ISA EBTEHAJ SAEID SHABANLOU SEYED HAMED ASHRAF TALESH ALI JAMALI


      In this paper, a novel Pareto evolutionary structure of an adaptive neuro-fuzzy inference system (ANFIS) network is presented for predicting abutment scour depth. The genetic algorithm (GA) and singular value decomposition (SVD) are utilized, for the first time, to simultaneously optimize the design of the nonlinear antecedent parts and the linear consequent parts of the TSK-type fuzzy rules in the ANFIS design. To this end, the parameters affecting scour in the vicinity of abutments are first identified. Then, 11 ANFIS-GA/SVD models are introduced through combinations of these parameters. Based on the modelling results, the ANFIS-GA/SVD models predict the scour around abutments with reasonable accuracy. The superior model forecasts more than 63% of scours with an error of less than 8%. The correlation coefficient (R) for the model is roughly 0.978, and its average discrepancy ratio is 0.981. In addition, the results of the sensitivity analysis demonstrate that the Froude number (Fr) and the ratio of the flow depth to the radius of the scour hole (h/L) are the most influential parameters affecting the scour depth in the vicinity of the abutments. Ultimately, a comparison between the superior model and previous studies is presented, which reveals that the present model performs better in predicting scour depth around abutments.
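
      The reported statistics can be computed directly from predicted and observed scour depths; the sketch below uses made-up values and takes the discrepancy ratio simply as predicted/observed, one common definition (the paper's exact definition is not given in the abstract).

```python
import numpy as np

# Evaluation statistics for predicted vs. observed scour depths (made-up data).
observed = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
predicted = np.array([0.13, 0.17, 0.26, 0.30, 0.42])

r = np.corrcoef(observed, predicted)[0, 1]    # correlation coefficient R
mean_dr = np.mean(predicted / observed)       # average discrepancy ratio
print(f"R = {r:.3f}, mean discrepancy ratio = {mean_dr:.3f}")
```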

    • Parametric analysis of axial wall conduction in a microtube subjected to two classical thermal boundary conditions

      NISHANT TIWARI MANOJ KUMAR MOHARANA


      Heat transfer in laminar flow through a microtube is numerically explored with the objective of discriminating the conjugate heat transfer process experienced in a microtube under two different thermal conditions. Two classical thermal conditions, constant heat flux and constant wall temperature, are imposed separately on the outer surface of the microtube. Wide parametric variations are considered for the two thermal conditions, although the problem under consideration is classical from both the geometry and thermal-condition points of view. The parametric variations considered in this work include wall thickness, wall conductivity and coolant flow rate. An expression for the Nusselt number in terms of the radial (or transverse) and axial conduction numbers is presented and validated against an existing theoretical correlation as well as reported experimental data for both circular and non-circular channels. The dominance of axial conduction over radial (or transverse) conduction is explored, and it is found that the wall material plays an important role in conjugate heat transfer. Additionally, it is observed that with an increase in coolant flow rate, the ratio of radial to axial conduction number increases for both thermal boundary conditions.

    • MPSAGA: a matrix-based pair-wise sequence alignment algorithm for global alignment with position based sequence representation

      JYOTI LAKHANI AJAY KHUNTETA ANUPAMA CHOUDHARY DHARMESH HARWANI


      The proposed algorithm, MPSAGA, is a novel matrix-based global pair-wise sequence alignment algorithm with a de novo sequence representation. Needleman–Wunsch, noblest, Emboss-Needle, ALIGN, LALIGN, FOGSAA, DIALIGN, ACANA, MUMmer, etc. are among the algorithms most commonly used for global pair-wise sequence alignment. The Needleman–Wunsch algorithm is one of the most popular algorithms and provides the best possible pair-wise sequence alignment, but it has high time and space complexity. To address this, researchers have proposed several algorithms that reduce the time and space complexity of pair-wise sequence alignment; most of them, however, compromise optimality in favour of reduced time and space complexity. An attempt has been made in the present work to develop MPSAGA together with a completely new positional matrix (PM) based sequence representation that deals with the time and space complexities without compromising alignment results (MPSAGA is in the public domain at https://github.com/JyotiLakhani1/MPSAGA). A benchmarking of the proposed algorithm has also been performed against other popular pair-wise sequence alignment algorithms, with and without the positional matrix-based sequence representation. The use of an integer instead of a string data type and an exclusive clustering method in MPSAGA with the positional matrix based sequence representation resulted in a noteworthy reduction in memory usage (space) and execution time in the pair-wise alignment of biological sequences.
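
      As a reference point for the benchmark, the classic Needleman–Wunsch dynamic programming recurrence (the O(mn) baseline MPSAGA is compared against) can be sketched as follows; the match, mismatch and gap scores are arbitrary.

```python
# Classic Needleman-Wunsch global alignment score, the O(mn) baseline that
# MPSAGA is benchmarked against; scoring values are arbitrary.
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[m][n]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```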

    • Aiding and opposing mixed convection of water with density inversion about a wall of varying temperature

      RAJENDRA PRASAD SONI MADHUSUDHANA R GAVARA


      Mixed convection of water over a vertical surface of varying temperature, with density inversion, is studied for aiding and opposing convection configurations. The temperature of the surface is assumed to be an arbitrary function of the vertical distance. The governing equations are transformed using a dimensionless stream function and temperature. The temperature differentials of the varying wall temperature are used as perturbation functions. The dimensionless stream function and temperature are expanded in power series of perturbation variables and coefficient functions. The obtained coefficient functions are valid for any arbitrary wall temperature function, and hence they are 'universal functions'. A power-law wall temperature variation is chosen to demonstrate the usefulness of the universal functions. Results are presented for velocity and temperature distributions in the boundary layer, velocity and thermal boundary layer thicknesses, skin friction coefficient and heat transfer rates for various values of the governing parameter and the wall temperature power-law index, for both aiding and opposing flows. It is found that, for the range of Gr_y/Re_y^2 values considered in the study, the skin friction coefficient and heat transfer rates vary almost linearly with the wall temperature power-law index for a given Gr_y/Re_y^2 value in both aiding and opposing flows. For special wall temperature cases, the present results are compared with benchmark solutions available in the literature and good agreement is found.
