Venue: Coral Deira - Dubai, Deira, Dubai, UAE & Date: April 24-25, 2015

Accepted papers

  • Data Mining Approach to Filter Click-spam in Mobile Ad networks
    D. Vasumati1, R. Bhramaramba2, M. Sree Vani3, O. Yaswanth Babu4; 1Jawaharlal Nehru Technological University, India; 2Gandhi Institute of Technology and Management, India; 3Mahatma Gandhi Institute of Technology, India; 4Computer Maintenance Corporation, India.
    The mobile revolution created by smartphones and mobile devices has enriched the mobile ad-network ecosystem by leaps and bounds. It has also opened the door to fraud by ad publishers, whether deliberate or inadvertent. As a result, advertisers and ad networks suffer steep revenue losses, and users waste time and money on spam. It is also hard for an ad network to assess the time a user spends on the advertiser's site. Apps play a vital role in attracting mobile advertising: popular apps can generate millions of dollars in profit while collecting valuable personal user information. Solving the spam issue, without modifying the browser and without hurting the user experience, is of great significance for the smooth development of the mobile ecosystem. This calls for characterising click-spam and detecting it with appropriate data mining techniques. Although ad networks take measures to block click/touch spam on the web today, the extent of spam in mobile advertising is largely unknown. In this work, we discuss how click-spam and web spam appear in mobile advertising. We apply a range of effective data mining classification algorithms to a novel set of features extracted from publishers/apps, app developers, ad-control location, user interest ratings, and so on, to identify spam web pages. We then develop a SpamRank algorithm that reduces advertisers' revenue losses by grouping publishers according to a computed degree of spam rank. Finally, we evolve data mining techniques for instantaneous detection of user clicks so that fraudulent ones can be filtered out. We validate our methodology on the FDMA 2012 data sets, for which BuzzCity provided a snapshot of its click and publisher database. Our approach flags a publisher as FRAUD so that its clicks can be discounted, performs well on ROC and precision-recall curves, and offers a practical mitigation for click-spam.
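    The abstract describes scoring publishers from click-log features and grouping them by degree of spam rank. The sketch below is only an illustration of that idea, not the paper's method: the two features (duplicate-IP ratio, conversion rate), their weights, and the FRAUD threshold are all invented for the example.

```python
# Hypothetical sketch of publisher spam scoring: rank publishers by a
# weighted combination of click-log features, flagging the worst as FRAUD.
# Feature choice, weights, and threshold are illustrative assumptions.

from collections import defaultdict

def spam_score(clicks):
    """clicks: list of (ip, converted) tuples for one publisher."""
    total = len(clicks)
    ips = defaultdict(int)
    conversions = 0
    for ip, converted in clicks:
        ips[ip] += 1
        conversions += converted
    dup_ip_ratio = 1 - len(ips) / total   # many clicks from few IPs is suspicious
    conv_rate = conversions / total       # spam clicks rarely convert
    return 0.7 * dup_ip_ratio + 0.3 * (1 - conv_rate)

def rank_publishers(logs, threshold=0.6):
    """logs: {publisher_id: [(ip, converted), ...]} -> (id, score, label), worst first."""
    ranked = sorted(((pid, spam_score(cl)) for pid, cl in logs.items()),
                    key=lambda t: -t[1])
    return [(pid, s, "FRAUD" if s >= threshold else "OK") for pid, s in ranked]

logs = {
    "pub_a": [("1.2.3.4", 0)] * 50 + [("1.2.3.5", 0)] * 40,    # 2 IPs, no conversions
    "pub_b": [(f"10.0.0.{i}", i % 3 == 0) for i in range(90)], # diverse IPs
}
print(rank_publishers(logs))
```

    A real system would replace the hand-set weights with a trained classifier over the paper's full feature set.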
  • A Feature Covariance Deviation Method for Feature Reduction in Intrusion Detection
    B. V. RamNaresh Yadav1, B. SatyaNarayana2, D. Vasumati1; 1Jawaharlal Nehru Technological University, India; 2Sri Krishnadevaraya University, India.
    Data security is a primary concern in all service-providing systems, and intrusion detection systems are widely used to safeguard data. Traditional intrusion detection systems, however, rely on signatures of known attacks, which limits their scope. The wide use of the internet and its services in daily life creates a heavy dependence on computer networks and web services, and this dependence demands strong network security for the exchange of confidential information over communication channels. Secure information exchange can be supported by deploying efficient intrusion detection to protect against various network attacks. This paper proposes a feature reduction approach based on a feature covariance deviation method (FCDM) together with a modified naive Bayesian algorithm for efficient classification in intrusion detection. Evaluated on the NSL-KDD data set, the proposed reduction method shows better performance than other feature reduction methods and classification approaches.
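    For the classification stage, a plain Gaussian naive Bayes over an already-reduced feature set can be sketched as below. This does not reproduce the paper's FCDM reduction or its naive Bayes modification; the two "reduced" features and the toy data are assumptions for illustration.

```python
# Minimal Gaussian naive Bayes sketch for intrusion classification on a
# reduced feature set. The features and data are invented for the example.

import math
from collections import defaultdict

def fit(rows, labels):
    """Estimate per-class mean/variance for each feature column."""
    stats, groups = {}, defaultdict(list)
    for x, y in zip(rows, labels):
        groups[y].append(x)
    for y, xs in groups.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*xs), means)]
        stats[y] = (means, vars_, n / len(rows))
    return stats

def predict(stats, x):
    """Pick the class with the highest log posterior under the Gaussian model."""
    def log_lik(y):
        means, vars_, prior = stats[y]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(stats, key=log_lik)

# Toy reduced features: [connection duration, bytes per packet]
train = [[0.1, 50], [0.2, 60], [9.0, 900], [8.5, 880]]
y = ["normal", "normal", "attack", "attack"]
model = fit(train, y)
print(predict(model, [0.15, 55]))
```

    In practice the class-conditional independence assumption is what FCDM-style covariance analysis probes when deciding which features to keep.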
  • Determining Electrical Load Profiles by Using the Extended Hyperbolic Smoothing Clustering Method
    Adilson E. Xavier1, Luiz A. A. Oliveira2, Jose F. M. Pessanha2, and Vinicius L. Xavier1; 1Federal University of Rio de Janeiro, Brazil; 2Electric Energy Research Center, Brazil.
    The paper investigates the use of the Extended Hyperbolic Smoothing Clustering Method (AHSCM) for identifying typical daily load profiles, key information in various stages of the planning and operation of electric power systems. These profiles are used for many applications, such as calculation of tariffs, evaluation of energy losses, dispatch of generating units, demand forecasting, and dimensioning of electrical installations. The AHSCM solves the minimum sum-of-squares clustering (MSSC) problem. The mathematical modelling of this problem leads to a min-sum-min formulation, which has the significant characteristic of being strongly non-differentiable. The proposed resolution method adopts the Hyperbolic Smoothing (HS) strategy, using a special C∞ differentiable class function. The final solution is obtained by solving a sequence of low-dimensional differentiable unconstrained optimization sub-problems that gradually approach the original problem. The algorithm also partitions the set of observations into two non-overlapping groups: "data in frontier" and "data in gravitational regions". Combining the HS methodology with this partition scheme for the MSSC problem has interesting properties that drastically simplify the computational tasks. Computational experiments were performed on very large synthetic instances with 5,000,000 observations in spaces with up to 10 dimensions. The results show a high level of performance according to different criteria of consistency, robustness, and efficiency. The robustness and consistency can be attributed to the complete differentiability of the approach; the high speed can be attributed to the partition of the observations into two non-overlapping parts, which drastically simplifies the computational tasks.
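    The core smoothing idea can be illustrated in a few lines. Hyperbolic smoothing replaces the non-differentiable function max(y, 0) with the C∞ function φ(y, τ) = (y + √(y² + τ²)) / 2, which converges to max(y, 0) as τ → 0; a min of two terms, as in the min-sum-min clustering objective, can then be smoothed via min(a, b) = a − max(a − b, 0). This is a toy illustration of the smoothing principle only, not the paper's full AHSCM algorithm.

```python
# Toy illustration of hyperbolic smoothing: phi smooths max(y, 0), and
# smooth_min uses it to approximate min(a, b) differentiably.

import math

def phi(y, tau):
    """Hyperbolic smoothing of max(y, 0); C-infinity for tau > 0."""
    return (y + math.sqrt(y * y + tau * tau)) / 2

def smooth_min(a, b, tau):
    """Smooth approximation of min(a, b) = a - max(a - b, 0)."""
    return a - phi(a - b, tau)

# As tau shrinks, the smoothed minimum approaches the true minimum.
for tau in (1.0, 0.1, 0.001):
    print(tau, smooth_min(3.0, 5.0, tau), smooth_min(5.0, 3.0, tau))
```

    In the full method, this smoothing turns the strongly non-differentiable MSSC objective into a sequence of smooth unconstrained sub-problems solvable with gradient-based optimizers.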
  • Research On Integrated Framework And Realization Model Of EIS
    Fangxiao Wang, Nanjing Telecommunication Technology Institute, China
    The paper analyses the construction model of enterprise information systems (EIS) and puts forward a layered, service-oriented integrated framework. It expounds the major functions and the realization model from four aspects: data, component, service, and portal integration. This provides an effective theoretical direction for establishing an integrated environment of development, integration, and operation.
  • Extended fast search clustering algorithm: widely density clusters, no density peaks
    WenKai Zhang and Jing Li, University of Science and Technology of China
    CFSFDP (clustering by fast search and find of density peaks) is a recently developed density-based clustering algorithm. Compared to DBSCAN, it needs fewer parameters and is computationally cheap because it does not iterate. Rodriguez and Laio demonstrated its power in many applications. However, CFSFDP performs poorly when a single cluster contains more than one density peak, a situation we name "no density peaks". In this paper, inspired by the hierarchical clustering algorithm CHAMELEON, we propose an extension of CFSFDP, E-CFSFDP, to suit more applications. In particular, we first use the original CFSFDP to generate initial clusters, and then merge the sub-clusters in a second phase. We applied the algorithm to several data sets that exhibit "no density peaks". Experimental results show that our approach outperforms the original one because it relaxes the strict requirements CFSFDP places on data sets.
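    For readers unfamiliar with CFSFDP, the first phase computes two quantities for every point: a local density ρᵢ and the distance δᵢ to the nearest point of higher density; cluster centers are the points where both are large. The sketch below shows those two quantities on toy 1-D data with a hand-picked cutoff d_c; it is the standard CFSFDP step, not the paper's E-CFSFDP merge phase.

```python
# Minimal CFSFDP first phase: Gaussian-kernel local density rho and
# distance delta to the nearest higher-density point. Toy 1-D data.

import math

def density_peaks(points, d_c):
    n = len(points)
    d = [[abs(points[i] - points[j]) for j in range(n)] for i in range(n)]
    # local density: soft count of neighbours within the cutoff d_c
    rho = [sum(math.exp(-(d[i][j] / d_c) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    # delta_i: distance to the nearest point of higher density
    # (for the global density maximum, the largest distance overall)
    delta = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(d[i]))
    return rho, delta

pts = [0.0, 0.1, 0.18, 0.31, 5.0, 5.12, 5.2, 9.0]  # two clusters + outlier
rho, delta = density_peaks(pts, d_c=0.5)
# centers = points where both rho and delta are large
centers = sorted(range(len(pts)), key=lambda i: rho[i] * delta[i])[-2:]
print(sorted(pts[i] for i in centers))
```

    The "no density peaks" failure mode arises when one true cluster produces several such high-ρ, high-δ points, which is what the proposed second merging phase addresses.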
  • Al-Haram Expansion Movie Based On Virtual Reality
    Alanoud Salem, Sara Musallam and El-Shaimaa Nada, Taibah University, Kingdom of Saudi Arabia.
    Animated movies are excellent virtual environments for creating models in high quality. They can include 3D models, sound and lighting effects, and detailed maps. In this paper, a virtual reality movie is applied to the Al-Haram expansion stages, including the future stage of expansion. The 3DMAX program is used to reap the maximum benefits of 3D modeling. Detailed maps are built with the ARCGIS program so that the real differences between the three expansion stages can be understood clearly and effectively. A novel technique is presented for inserting 2D maps and other details into the 3D model built with 3DMAX.
    The animated movie includes an introduction, the three basic expansion stages, maps, and a conclusion. It is in Arabic, with an English translation, and sign language is also included. Accurate, documented information is used in the Al-Haram expansion movie, helping people learn about one of the most radical expansions that has occurred in the world.
  • Improvising Technique of Privacy Preserving in Outsourced Transaction Database
    Jinal Parmar1, Vinit Gupta2; 1Hasmukh Goswami College of Engineering, India; 2Gujarat Technological University, India.
    Database outsourcing is a promising data storage and management model in which the data owner stores important information at a third-party service provider's site. The service provider collects, manages, and administers the database and offers ready-made services that allow the data owner and its clients to create, update, delete, and access the database. Database security is required, however, because many service providers are not trustworthy.
    In this paper we propose a novel approach to preserving privacy in an outsourced transaction database, based on adding fake tuples to the original database using different cryptographic techniques. Additionally, we apply the RSA algorithm at the server side, which encrypts the data and protects against forging of the communication between the data owner and the server. Our scheme enhances security services such as privacy and integrity, and it increases the structural complexity of the original TDB to confuse an attacker, thereby strengthening "corporate privacy".
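    To make the RSA role concrete, here is a textbook RSA round-trip in pure Python. This is only a toy: the tiny primes and lack of padding make it insecure, and it stands in for a vetted cryptographic library in any real deployment; the "tuple value" being encrypted is an invented placeholder.

```python
# Toy textbook RSA, only to illustrate the encrypt/decrypt role RSA plays
# between data owner and server. Insecure by construction: small primes,
# no padding. The tuple value is a stand-in for real encoded data.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    _, d, _ = egcd(e, phi)          # d is the modular inverse of e mod phi
    return (e, n), (d % phi, n)     # (public key, private key)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)             # modular exponentiation

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = make_keys(61, 53)       # n = 3233, the classic textbook example
tuple_value = 42                    # placeholder for an encoded tuple
c = encrypt(tuple_value, pub)
print(c, decrypt(c, priv))
```

    A production scheme would use a library such as `cryptography` with OAEP padding rather than raw modular exponentiation.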
  • Exponential Moving Maximum Filter for Predictive Analytics in Network Reporting
    Bin Yu, Les Smith, Mark Threefoot, Infoblox Inc., California
    In the networking industry, various services are mission critical. For example, DNS and DHCP are essential, common network services for a variety of organizations. An appliance that provides these services comes with a reporting system offering visual information about system status, resource usage, performance metrics, and their trends. Furthermore, it is desirable and important to provide predictions for these measures so that users can be well prepared for what is going to happen and prevent downtime. Among the predictive measures, many reflect peak or maximum values, such as peak volume or resource usage. Peak-value prediction is critical for IT managers to ensure their organization stays ahead of the cycles in network capacity. Many algorithms and methods exist for predicting trended time series data. However, peak values by nature often do not follow a trend, and traditional trend-prediction methods do not perform well on this type of data. In this paper, we present a novel filtering algorithm named exponential moving maximum (EMM), applied before a prediction algorithm on peak time series data. We also provide experimental results on real data comparing prediction with and without the EMM filter.
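    The paper's exact EMM definition is not reproduced in the abstract; the sketch below is one plausible formulation of an exponential moving maximum, assumed here for illustration: the running maximum jumps to any new peak and otherwise decays exponentially toward the current sample, so old peaks fade instead of dominating forever.

```python
# One plausible exponential-moving-maximum filter (an assumption, not the
# paper's definition): track the maximum, letting it decay toward the
# current sample at rate alpha between peaks.

def emm(series, alpha=0.1):
    """Exponential moving maximum of a numeric time series."""
    out, m = [], None
    for x in series:
        if m is None:
            m = x                                  # initialise on first sample
        else:
            m = max(x, m - alpha * (m - x))        # jump on new peaks, else decay
        out.append(m)
    return out

spiky = [1, 1, 10, 1, 1, 1, 1, 1]                  # one traffic spike
print([round(v, 2) for v in emm(spiky, alpha=0.25)])
```

    The smoothed peak series produced this way is far friendlier to a trend predictor than the raw spiky signal.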
  • Network Fault Diagnosis Using Data Mining Classifiers
    Eleni Rozaki, Cardiff University, United Kingdom.
    Mobile networks are under more pressure than ever because of the increasing number of smartphone users and the number of people relying on mobile data networks. With larger numbers of users, service quality has become more important for network operators. Faults that reduce the quality of service must be found within minutes so that problems can be addressed and networks returned to optimised performance. In this paper, a method of automated fault diagnosis is presented that uses decision trees, rules, and Bayesian classifiers for the visualization of network faults. Using data mining techniques, the model classifies optimisation criteria based on key performance indicator (KPI) metrics to identify network faults, supporting the most efficient optimisation decisions. The goal is to help wireless providers localize KPI alarms and determine which quality-of-service factors should be addressed first, and at which locations.
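    As a flavour of what such a classifier produces, here is a hypothetical rule set of the kind a learned decision tree might yield. The KPI names, thresholds, and fault labels are invented for this illustration and are not taken from the paper's trained models.

```python
# Hypothetical KPI-threshold rules mapping a cell's readings to a coarse
# fault label, in the spirit of rules extracted from a decision tree.
# All names, thresholds, and labels are illustrative assumptions.

def classify_cell(kpi):
    """Map one cell's KPI readings to a coarse fault label."""
    if kpi["drop_call_rate"] > 0.02:
        # high drops + high load suggests congestion; otherwise coverage
        return "congestion" if kpi["utilisation"] > 0.9 else "coverage_fault"
    if kpi["handover_failure_rate"] > 0.05:
        return "neighbour_config_fault"
    return "normal"

cells = {
    "cell_17": {"drop_call_rate": 0.04, "utilisation": 0.95,
                "handover_failure_rate": 0.01},
    "cell_23": {"drop_call_rate": 0.01, "utilisation": 0.40,
                "handover_failure_rate": 0.08},
}
for name, kpi in cells.items():
    print(name, classify_cell(kpi))
```

    In the paper, such rules are learned from data rather than hand-written, and the labels feed the visualization that localizes KPI alarms.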