ACIT'2010 Proceedings
Effectiveness of E-learning Process and Design of E-learning Environments
Gamal S. Ahmed Allam
Abstract: The theoretical background of learning research is important for illustrating the theories and practices that are effective in achieving educational objectives. E-learning still needs studies that respond to the inquiries facing the global expansion and development of this system, for example: (a) the disagreement among research findings on the effectiveness of e-learning in achieving the learning domains (Ruth Clark, 2007; Thomas Brush, John Saye and others, 2009); (b) the search for a definition of the e-learning process that attains actual e-learning practices and objectives; (c) the need to define the formal and constructional characteristics of the e-learning environment (Thomas DeVaney, Nan Adams and Cynthia Elliott, 2008, 165-174).
The present research investigates two main concepts. First, the e-learning process that enables learners to practice and achieve high levels of e-learning objectives. Second, the formal and constructional characteristics of the e-learning environment, which direct educators and designers in developing the effectiveness of that environment. Finally, addressing these issues can meet the challenges that face the growth of e-learning, and clarify the differences and debates that surround its outcomes, certificates and accreditation internationally.
Keywords: e-learning process (LP) - the theoretic, practical and organic views of (LP) - e-learning environment (LE) - the formal and constructional characteristics of (LE).
Criteria for Introducing New Specializations in Computing Colleges
Firas Mohammad Al-Azzah and Abdulfattah Aref Al-Tamimi
Abstract: This paper reviews the criteria and mechanisms for introducing sub-specializations in computing colleges, with the aim of graduating specialists at the bachelor's level. Using mathematical analysis methods, it determines how well these specializations suit the actual needs of society, their degree of overlap and differentiation, and the extent to which they relate to the university's mission of deriving knowledge (scientific research), transferring knowledge (the educational process) and disseminating knowledge (supplying society with specialized cadres).
Keywords: Bachelor's degree, computing, computer science, information technology, information systems, software engineering, study plans, criteria.
NK-Sorting Algorithm
Nidhal K. El Abbadi and Zayed Yahya A. Karem
Abstract: Sorting has been a profound area for algorithmic researchers, and many resources have been invested in proposing new sorting algorithms. For this purpose, many existing sorting algorithms were examined in terms of their algorithmic complexity.
Many algorithms are well known for sorting unordered lists.
In this paper, we propose a new algorithm for sorting integers that divides the input array into several sub-arrays (each a one-dimensional vector) according to the number of digits in each integer. The relation between the elements of a sub-array is determined, and this relation is used to determine the right location of each element in its sub-array.
Collisions may occur; they are resolved by moving elements in the sub-array to the next location. Finally, all ordered sub-arrays are merged to rebuild the original array. Compared with several well-known algorithms, the proposed algorithm gives promising results.
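The digit-count bucketing at the heart of this abstract can be sketched as follows. This is a minimal illustration: each sub-array is ordered here with a plain sort, not the authors' relation-based placement and collision handling, and it assumes non-negative integers.

```python
def nk_sort_sketch(nums):
    # Group non-negative integers into sub-arrays by digit count;
    # fewer digits always means a smaller value, so the sub-arrays
    # can simply be concatenated in digit-count order at the end.
    buckets = {}
    for n in nums:
        buckets.setdefault(len(str(n)), []).append(n)
    # Order each sub-array (the paper instead derives each element's
    # slot from a relation between sub-array elements, with collision
    # resolution by shifting to the next free location).
    result = []
    for digits in sorted(buckets):
        result.extend(sorted(buckets[digits]))
    return result
```

The merge step is trivial here precisely because the buckets partition the value range; that property is what the digit-count criterion buys.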
Keywords: Sorting, Time complexity, Integers, Comparison, Time analysis, Space analysis.
Hybrid Model for Software Development Processes
Nabil Mohammed Ali Munassar and A. Govardhan
Abstract: This research deals with a vital issue in the computing world: the software management processes that examine the area of software development through development models, known collectively as the software development life cycle. The main objective of this research is to design a development model that meets the needs of different systems and eliminates the defects present in previous development models. The research proposes a "hybrid model" that combines the features of five common development models: waterfall, iterative, spiral, V-shaped and extreme programming. The proposed model has the advantages and some features of the previous models, with modifications; because of this, it avoids and overcomes many software problems that exist in those models. Thus, the new model is an integrated model relevant to most software programs and systems.
Keywords: Software Management Processes, Software Development Processes, Development Models, Software Development Life Cycle, Hybrid Model.
Comparative Analysis on Data Discretization Technique for Naïve Bayes Classifier
Almahdi Mohammed Ahmed, Azuraliza Abu Bakar, and Abdul Razak Hamdan
Abstract: Data mining applications often involve quantitative data. Nevertheless, learning from quantitative data is often less effective and less efficient than learning from qualitative data. Discretization addresses this issue by turning quantitative data into qualitative data. In this paper we evaluate the performance of various discretization methods on several domain areas. The main characteristic of the work is its ability to maintain a sufficient number of intervals with high classification accuracy compared to six other benchmark techniques. An experiment is carried out on ten benchmark datasets from the UCI repository. The experimental results show that ID3 generates a larger number of intervals per attribute and fewer rules, with comparable accuracy across the ten tested datasets. It also performs well with respect to information loss compared to the other methods.
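As a concrete instance of discretization, here is a minimal equal-width binning sketch, one of the simplest unsupervised methods; it is illustrative only and is not necessarily among the six benchmark techniques compared in the paper.

```python
def equal_width_bins(values, k):
    # Partition a numeric attribute into k equal-width intervals,
    # returning the bin index (0..k-1) for each value: this is how
    # quantitative data is turned into qualitative (ordinal) data.
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1  # guard against a constant attribute
    return [min(int((v - lo) / width), k - 1) for v in values]
```

Supervised methods (entropy-based, ID3-style) instead choose cut points that keep class labels as pure as possible within each interval, which is why they can achieve higher accuracy with fewer intervals.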
Keywords: Data mining, machine learning, Discretization, and Dynamic Intervals
Preferscapes: Maps of Residents’ Predicted Preferences
Ray Wyatt
Abstract: This paper describes a computer program that will enable real-world policy planners to generate location maps of probable community support for each plan they are considering. We first describe the program itself. It deduces, from the inputs of its past users, how to predict different groups' plan preferences, and because such predictions are possible no matter what planning problem is being addressed, our software constitutes a truly generic decision support system. We then describe how to take this system's predicted group preferences for plans, combine them with Census information about where such groups live, and so generate Google Earth images showing where strong and weak community support for each plan is likely to be located. Such images are here dubbed preferscapes, and we argue that they constitute a step towards planning that is more aware of people's attitudes and feelings.
Keywords: Planning, preferences, mapping, decision aiding, multi-criterion decision making
Facial Features Segmentation for Extraction of Facial Expression
Amin BADRAOUI and Mahmoud NEJI
Abstract: The aim of our paper is to propose an emotional agent architecture to automatically detect the learner's emotion during an e-learning session. To this end, we present a classification system for the learner's facial expressions based on the six universal emotions. The proposed classification algorithm is based on the analysis of characteristic distances computed on skeletons of expressions derived from a contour segmentation of the permanent facial features (lips, eyes, eyebrows). Based on these distances, we develop an expert system for recognizing the learner's facial expressions, compatible with the MPEG-4 standard.
Keywords: emotional agent, facial expressions, emotions, MPEG-4.
Robustness Technique for Digital Image Watermarking Using Frequency Domain Transformations
Raoof Smko Asif, and Omed Saleem Khalind
Abstract: This paper introduces a new technique for embedding a watermark into digital images. The technique is based on using indexes into a dictionary representing the characters of the embedded message, instead of the characters themselves. The technique uses multiple frequency domains for embedding these indexes into the chosen bitmap image. Using the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and a combination of both, we tested the technique in a software package designed specially for this purpose, and obtained very good results in terms of robustness and imperceptibility, the two most important properties of digital watermarking.
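The dictionary-index payload encoding can be illustrated independently of the DCT/DWT embedding step. The dictionary below is an assumed example, not the one used by the authors; the point is only that the embedded payload is a sequence of positions in a shared dictionary rather than raw character codes.

```python
def to_indexes(message, dictionary):
    # Encode the watermark payload as dictionary indexes: each character
    # is replaced by its position in a dictionary shared by embedder
    # and extractor. These indexes, not the characters, get embedded
    # into the frequency-domain coefficients of the cover image.
    return [dictionary.index(ch) for ch in message]

def from_indexes(indexes, dictionary):
    # Inverse mapping, used at extraction time.
    return ''.join(dictionary[i] for i in indexes)
```

Because the indexes are meaningless without the dictionary, this encoding also adds a light layer of obscurity on top of the frequency-domain hiding.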
Keywords: Digital Watermarking, DCT, DWT, Security, Steganography.
Supporting Macro Antivirus Programs by Designing Undetected Virus
Wesam S. Bhaya
Abstract: As virus writers developed new viruses, virus scanners became stronger in their defense against them.
The aim of this paper is to build a reliable, compatible, and undetected computer virus that infects data files (MS-Word documents) with macro capabilities (a macro virus), as support for developing antivirus programs, our defenses.
This paper explains the construction of a macro virus that works under all versions of Microsoft Word (a compatible virus) and infects MS-Word documents (the Microsoft Office programs are among the best-known and most widely used programs in the world). Moreover, the proposed virus is undetected by most current commercial antivirus programs, especially those that use heuristics and other techniques to detect unknown viruses; it can thus reveal some related antivirus vulnerabilities.
The virus is implemented in the Visual Basic for Applications language on Pentium processors under Win32 operating systems.
Keywords: Computer Security, Computer Virus, Antivirus, Macro Virus, MS-Word Data Document, Visual Basic for Application.
Person identification technique using human iris recognition
S.M. Ali
Abstract: A new iris recognition and person identification technique is introduced. The method is based on tracing the eye image boundary using the Marr-Hildreth edge technique. The eye's pupil boundary is identified using a contour follower based on the chain coding method. The coordinates of the pupil's center are then taken as the centroid of the chained points on the pupil boundary. The pupil radius (i.e. the inner iris radius) is determined as the average distance between the pupil center and the chained points on the pupil boundary. An image of 128×128 pixels surrounding the pupil center is then extracted, which contains all the information necessary for the identification process. The extracted raster images are then transformed from Cartesian into polar representation (i.e. r-θ coordinates), all having the same size of 360×60 pixels (rows for inner iris radius ≤ r ≤ outer iris radius, and columns for 0° ≤ θ ≤ 360°). The polar images are then inserted (each as a column vector) into an array, their average is determined and subtracted from each. The covariance of the average-subtracted array is computed, and the eigenvectors of the covariance matrix are calculated and used to identify the tested eye by comparing its eigenvector with those already stored as a database, using the minimum mean-square criterion.
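The Cartesian-to-polar resampling step can be sketched as follows. This is a minimal nearest-neighbour version under assumed inputs (a grayscale image as a list of rows, a known pupil centre and radii); the function name and sampling grid are illustrative, not the authors' implementation.

```python
import math

def to_polar(image, cx, cy, r_in, r_out, n_theta=360, n_r=60):
    # Resample a grayscale image (list of rows) from Cartesian to
    # r-theta coordinates around the pupil centre (cx, cy), covering
    # the iris annulus r_in <= r < r_out. Nearest-neighbour sampling.
    polar = []
    for ti in range(n_theta):
        theta = 2 * math.pi * ti / n_theta
        ray = []
        for ri in range(n_r):
            r = r_in + (r_out - r_in) * ri / n_r
            x = int(cx + r * math.cos(theta))
            y = int(cy + r * math.sin(theta))
            ray.append(image[y][x])
        polar.append(ray)
    return polar
```

Normalizing every iris to the same r-θ grid is what makes the subsequent eigenvector comparison meaningful: each unrolled image becomes a vector of identical length regardless of pupil dilation.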
Keywords: iris recognition, iris identification, iris verification, edge detection, contour following.
How Higher Engineering Researchers in Libya Perceive The Use of Internet Technology
Abdussalam E. Elzawi, and J. Underwood
Abstract: This study investigated the perceptions of engineering faculty members at Alfateh University of the educational advantages offered by use of the internet. This population is of interest because of the conflict between a strong desire to maintain traditional Libyan culture, which restricts travel and the interaction of its members, and the recognition that the staff of Libyan engineering faculties are valuable resources who can contribute significantly to the development of society. This study investigated one part of this problem: how members perceive the potential of the internet as a means of changing the way they work and contribute to society. Specifically, a grounded-theory method was used to investigate the perceived advantages and disadvantages of internet use by members of a Libyan engineering faculty. An extensive questionnaire survey was carried out over a year to obtain the views of thirty-two engineers in three academic disciplines (civil/construction, and built and environmental management), in an attempt to discover whether faculty members use the internet effectively, particularly their perceptions of internet use for research. The study lays the groundwork for an exploration of the relationships of the nodes and their levels or values with regard to a rich body of knowledge specific to internet access. It also suggests factors that would assist in increasing the use of the internet by Libyan academics.
Keywords: Libya, faculty, researchers, internet, perception, user acceptance, technology transfer, information technology
Thermographic Imagery for Quantifying Near Surface of the Concrete Crack Damage
Atif Massuod and Shahid Kabir
Abstract: This study assessed the performance efficiency of various NDT imaging methods, thermographic and grayscale imagery, in detecting different types of concrete cracks, in order to quantify near-surface structural damage using image-processing techniques. Results for damaged structures obtained through image processing were also validated against data from visual inspection. With respect to the imagery techniques, the thermographic classifications produced higher accuracies than the greyscale classifications, because thermographic images contain less variability within the concrete imagery and increase the visibility of cracks that may otherwise be imperceptible. The results indicate little or no difference between the image-processing techniques (IR thermography and optical imaging) and the results obtained by visual inspection (feeler gauge and crack-detection microscope). Infrared (IR) thermography has become a common technique for non-destructive inspection in various engineering fields; it identifies and measures near-surface defects by detecting the temperature gradient on the surface of a target object (e.g. a concrete structure).
Keywords: IR thermography, visual inspection, concrete crack damage
An Intelligent Approach for Network Management Based on Hidden Markov Model
Nabil M. Hewahi
Abstract: Most of the techniques used for network management are traditional methods with little intelligence. In this paper, we propose an intelligent network management system based on induced Modified Censored Production Rules (MCPRs) extracted from a networking structure based on a Hidden Markov Model (HMM). The advantage of this technique is that MCPRs are very useful in real-time applications and can be adapted over time based on the experience obtained from the network's operation.
Keywords: Hidden Markov model, Networking Management, Real Time Systems
Analysis of Delay Latency of the Active Reliable Multicast Protocols
Lakhdar Derdouri, Congduc Pham, and Mohamed Benmohammed
Abstract: This paper quantifies the reliability gain of combining classes for reliable multicasting in lossy networks in which the active network approach is the most promising. We define the delay latency of recovery as performance metric for reliability. We then study the impact of multicast group size and loss probability on the performance of compared approaches. Our simulation results show that combining classes significantly reduces the delay latency in lossy networks compared to the receiver-initiated class. Interestingly, combining classes can outperform receiver-initiated class depending on the network size and loss probability.
Keywords: Active Networks, Reliable Multicast, Sender-Initiated, Receiver-Initiated, Delay Latency
Face detection based on a hybrid method
Omri Majid and Neji Mahmoud
Abstract: Automatic face detection is an important phase in any face recognition system, since the system's performance depends largely on the quality of face detection.
Given its potential utility, face detection has attracted the attention of researchers from various fields, which explains the diversity of detection methods. In this paper, we present a face detection system based on a hybrid method that combines the skin-color detection approach with the generalized Hough transform. The system uses color information to classify pixels as skin or non-skin; the generalized Hough transform is then applied to the skin regions to quickly locate the face.
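A pixel-level skin/non-skin classification of the kind described can be illustrated with a widely cited explicit RGB rule. The thresholds below are a common textbook rule for uniform daylight illumination and may well differ from the colour model actually used in this paper.

```python
def is_skin_rgb(r, g, b):
    # Explicit RGB skin rule (illustrative): a pixel is skin-like if it
    # is bright enough, sufficiently saturated, and red-dominant.
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)
```

Running such a rule over every pixel yields a binary skin map; connected skin regions are then the candidate areas to which the generalized Hough transform can be restricted, which is the source of the hybrid method's speed-up.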
Keywords: Detection, face, location, skin color, Hough transform.
An Application of the Balanced Scorecard Tool to the Activities of the General Electricity Company of Libya
Faraj Omeish and Habib Lejmi
Abstract: In order to survive in today's economic environment, organizations must be flexible, service-oriented, and cost-effective.
An organization cannot manage what it does not measure; therefore, in order to improve economic performance, it must formulate and measure objectives.
Balanced Scorecard (BSC) can help an organization review and improve its financial performance, customer service, internal processes, and human capital aspects in order to optimize end results. BSC is a measurement, management and communication instrument that an organization can use to increase its capacity for change and become a learning organization. This Paper addresses the following aspects:
- Examination of the BSC and how it can be used in non-profit and public organizations.
- Application of the BSC at the General Electricity Company of Libya (GECOL) in the form of a pilot project in the Customers Affairs business unit.
- Lessons learned and recommendations for the systematic introduction of a BSC based Performance Management at Public Institutions.
Keywords: Balanced Scorecard, Performance management, Public Sector, Case Study.
Comparison of Crisp and Fuzzy KNN Classification Algorithms
Faraj A. El-Mouadib and Amal F. Abdalsaalam
Abstract: Recent advances in information technology have enabled the collection and generation of massive amounts of data that overwhelm human ability and traditional data-analysis capabilities. The cognitive science field, learning theory and human learning processes are essential areas for creating new intelligent computer systems that seek to execute tasks similar to natural human tasks, such as classification. These needs have called for new fields such as Machine Learning and Knowledge Discovery in Databases to derive use and benefit from such data in the form of knowledge. Classification is one of the most important and well-known tasks in the field of Data Mining.
In this paper, we focus on the Instance-Based Learning (IBL) classification method, in particular the K-nearest neighbor classification algorithm from both the crisp and the fuzzy points of view. Software was developed for the crisp and fuzzy K-nearest neighbor classification algorithms, with the introduction of ID3's concept of windowing. Our system is implemented in Visual Basic .NET. Experiments were conducted on well-known datasets to compare the results.
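A minimal sketch of the fuzzy K-nearest neighbor decision rule may clarify the crisp/fuzzy contrast: instead of a majority vote, each neighbor contributes an inverse-distance class membership (in the style of Keller et al.). The parameter names and the fuzzifier m are illustrative, not the paper's notation.

```python
def fuzzy_knn(train, query, k=3, m=2):
    # train: list of (feature_vector, label) pairs.
    # Each of the k nearest neighbours contributes a class membership
    # weighted by 1/d^(2/(m-1)); the class with the largest total
    # membership wins. With equal weights this reduces to crisp k-NN.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(train, key=lambda t: dist(t[0], query))[:k]
    weights = {}
    for vec, label in neighbours:
        d = dist(vec, query) or 1e-9  # avoid division by zero
        weights[label] = weights.get(label, 0.0) + 1.0 / d ** (2 / (m - 1))
    return max(weights, key=weights.get)
```

The fuzzy memberships also give a natural confidence measure: a query deep inside one class yields a lopsided weight distribution, while a boundary query yields nearly equal weights.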
Keywords: Classification, Data mining, K-Nearest Neighbor classification algorithms, Cognitive learning, Supervised learning, Unsupervised learning.
Modern Information Systems (ERP Systems) and Their Impact on Organizational Performance: A Case Study of the General Electricity Company of Libya
Mohammed Masoud Ahmed
Abstract: With modern progress, the great development in information and communication technology, and the emergence of new theories and concepts, keeping pace with this progress has become essential. The world witnessed unprecedented technological progress during the last quarter of the twentieth century, and all fields of work took a general direction towards achieving faster, stronger and more accurate results, striving to reach the concept of total quality.
From here, new concepts emerged, such as Enterprise Resource Planning (ERP). This concept is simply a means of integrating an organization's data and processes into a single system, which is what this paper will explain. Every user of information systems knows that all the operations an organization performs, whether revenue, purchasing, manufacturing or customer-service operations, produce financial effects; the ERP concept therefore integrates the data of those operations into one system for the organization. This concept is strongly associated with software and automated information systems.
CST: A New Slicing Technique to Improve Classification Accuracy
Omar A. A. Shiba
Abstract: The classification problem is one of the core topics in data mining. Within the artificial intelligence community, classification has been demonstrated to be an important modality in diverse problem-solving situations. The goal of this paper is to improve classification accuracy in data mining. The paper achieves this goal by introducing a new algorithm based on slicing techniques, called the Case Slicing Technique (CST). The idea is based on slicing cases with respect to a slicing criterion. The paper also compares the proposed approach against other approaches to the classification problem; ID3, K-NN and Naive Bayes are used in this comparison. The paper presents experimental results of CST on three real-world datasets. The main results show that the proposed approach is a promising classification method for decision-making in classification problems.
Keywords: Data Mining, Case-Slicing Technique, Case Retrieval, Classification Accuracy.
Application of Artificial Neural Networks Technique in Modeling Water Networks
Abdelwahab M. Bubtiena and Ahmed H. Elshafie
Abstract: Artificial neural networks (ANNs) are dynamic systems that can capture the relationship between the input and output parameters of complex systems; they are also highly effective when no mathematical formula or model of the system exists. They are therefore very suitable for designing systems whose functions cannot be expressed explicitly in the form of a mathematical model. If the significant variables are known, even without knowing the exact relationships, an ANN can perform a kind of function fitting using multiple parameters on the existing information and predict the possible relationships in the near future. This is the case in water distribution network design and operation problems, where the input (pipe diameters, lengths, age, soil, etc.)-output (reliability of the network) relationship is given by a set of nonlinear continuity equations, path head-loss equations and the head-discharge relationship. This paper introduces a methodology for establishing an ANN model of pipe breaks, from which rehabilitation strategies (proactive maintenance), prioritization of rehabilitation work, the optimum time for rehabilitating a pipe, and the parameters that most affect the likelihood of pipe breaks can be determined, in order to predict the number of breaks for each individual pipe in the water distribution system of Benghazi city (WDSB). Because this work is part of ongoing research, the paper presents only the ANN modeling technique used to achieve the main objective: the expected number of pipe breaks.
Keywords: water distribution system, Artificial Neural Network, pipe break, prediction, rehabilitation strategy.
A Tool for Annotation and Semantic Search of Videoconferences Founded on Conceptual Graphs
Ameni Yengui, Mahmoud Neji
Abstract: OSVIRA (Ontology-based System for Semantic Visio-conference Information Retrieval and Annotation) is a system devoted to supporting the annotation and semantic search of multimedia videoconferencing resources. It is founded on the use of a dense ontology associated with a thesaurus. OSVIRA makes it possible to describe semantically the content of pedagogic multimedia resources on the basis of an intuitive annotation model built on the triplet {Object, Relation, Object}, and it formally represents this content with the aid of conceptual graphs.
Keywords: Ontology, thesaurus, annotation, conceptual graphs, Video-conferencing documents
Arabic and English Words Contour Compression Using Trapezoid Method
Ali A. Ukasha, Rasem A. Ali, and Adel Hamidat
Abstract: Two different algorithms of the Trapezoid method for Arabic and English letter contour compression are presented and compared in this paper. The compression ability, level of introduced distortion and complexity (expressed by computational time) of these algorithms are measured and compared with other well-regarded methods such as the Ramer and Triangle methods. The Cartesian coordinates of an input contour are processed so that the contour is finally represented by a set of selected vertices of its edge. The main ideas and the effectiveness of the Trapezoid method for both algorithms are presented. The mean square error, signal-to-noise ratio and number of required operations are used as the main criteria of this comparison. Experimental results are evaluated in terms of image quality, compression ratio and speed. Finally, the results of the experiments and the advantages and drawbacks of the Trapezoid method are discussed. The main advantages of the analyzed algorithms are their simplicity and the small number of arithmetic operations compared to existing algorithms.
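The Ramer method used as a baseline here is the classic recursive polygonal approximation: keep the contour point farthest from the chord between the endpoints if it exceeds a tolerance, and recurse on the two halves. A minimal sketch follows; the tolerance `eps` is an assumed parameter, and the Trapezoid method itself is not reproduced.

```python
def ramer(points, eps):
    # Polygonal approximation of a contour given as (x, y) tuples.
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord (x1,y1)-(x2,y2).
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5 or 1e-12
        return num / den

    i = max(range(1, len(points) - 1), key=lambda j: dist(points[j]))
    if dist(points[i]) > eps:
        # Split at the farthest point and recurse on both halves.
        return ramer(points[:i + 1], eps)[:-1] + ramer(points[i:], eps)
    return [points[0], points[-1]]
```

Its recursive chord-distance searches are the main cost the paper's operation counts compare against; simpler vertex-selection rules trade some distortion for far fewer arithmetic operations.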
Keywords: contour compression, polygonal approximation, Ramer and Tangent methods
Human Skin Color Modeling, Segmentation and Correction for Face Detection
Sinan Naji, Roziati Zainuddin ,and Jubair Al-Jaafar
Abstract: Human skin segmentation in color images is closely related to face detection and face recognition systems as a critical preliminary step, where it is necessary to search for the precise face location. An important challenge for any image segmentation approach is to accommodate varying illumination conditions and different ethnic groups. We present a general approach for modeling, segmentation, and color correction of human skin that suppresses false-negative errors in the face detection problem. The proposed approach shifts from a mono-skin model to multi-skin models using the HSV color space. We use the color information explicitly to segment skin-like regions by transforming the 3-D color space to 2-D without loss of color information. Four skin color clustering models were used, namely: a standard-skin model, a shadow-skin model, a light-skin model, and a redness-skin model. Our approach combines pixel-based and region-based segmentation using the four skin-map layers and an iterative merge process. To improve the detection rate, adequate attention was paid to automatic skin color correction. We present experimental results of our implementation and demonstrate the feasibility of our approach as a general framework that does not depend on additional preconditions and assumptions.
Keywords: Illumination correction; Skin segmentation; Skin color modelling; Face detection; HSV
E-learning in Iraq: Reality and Ambition
Nidhal Khdhair El Abbadi
Abstract: Over the past decade there has been a huge revolution in educational computer applications. The use of computers in education is still in its beginnings, growing day by day and taking several forms: from computers in education, to the use of the Internet in education, and finally the concept of e-learning, which relies on technology to deliver educational content to the learner in a good and effective way.
The aim of this research is to highlight the concept of e-learning and the reasons that drive us to pay attention to this field, in addition to identifying the obstacles that prevent progress in, or application of, this branch of education in Iraq, and the proposed solutions for developing work in e-learning. The study was based on examining the reality of e-learning in Iraq through field visits and contact with officials and members of Iraqi universities, in addition to a number of questionnaires. The research concluded that there is a significant deficiency in this field, as well as a lack of the infrastructure needed to advance e-learning. The study ends with a number of proposals and recommendations focused on raising awareness among society, teachers and learners; providing and developing infrastructure; providing full support to institutions and individuals; and focusing on creating educational content according to international quality standards.
Keywords: e-learning, Iraqi universities, curricula, education
A Study of Multilevel Association Rule Mining
Faraj A. El-Mouadib and Amina O. El-Majressi
Abstract: Recently, the discovery of association rules has been a focus topic in the research area of data mining. For many applications, it is difficult to find strong associations among data items at low or primitive levels of abstraction due to the sparsity of data in multidimensional space. Mining association rules at multiple levels may lead to more informative and refined knowledge from data. Therefore, data mining systems should provide capabilities to mine association rules (refined knowledge) at multiple levels of abstraction.
The objective of this paper is to explore the concept of multilevel association rule mining and to study some of the available algorithms for it. The work is carried out by implementing a system for two multilevel association rule mining algorithms, namely ML-T2 and ML-T2+, that were proposed in [7]. UML is used for the analysis and design of our system, and the VB6 programming language for the implementation. Our system is tested via 66 experiments, using mainly synthetic data with sizes ranging from 1.19 MB to 81.8 MB.
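The level-wise support counting that underlies ML-T2-style mining can be sketched for a single hierarchy level; multilevel algorithms repeat this step as items are generalised upward through the concept hierarchy, usually with a per-level minimum support. This is a simplified illustration, not the published algorithm.

```python
def frequent_items(transactions, min_support):
    # Count the support (fraction of transactions) of each item at one
    # level of the concept hierarchy and keep those meeting min_support.
    counts = {}
    for t in transactions:
        for item in set(t):  # count each item once per transaction
            counts[item] = counts.get(item, 0) + 1
    n = len(transactions)
    return {i: c / n for i, c in counts.items() if c / n >= min_support}
```

At a higher level, each item would first be replaced by its ancestor (e.g. "skim milk" by "milk"), which is why associations too sparse at the primitive level can become strong after generalisation.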
Keywords: Association rule mining, Concept hierarchies, Data Mining, Knowledge Discovery in Databases, Multi level association rules.
Refactoring of Test Case to Improve Software Quality
Divya Prakash Shrivastava and Rached Omer Agwil
Abstract: Refactoring is the process of changing a software system to improve the organization and design of its source code, making the system easier to change and less error-prone, while preserving its observable behavior. The concept has become popular in agile software methodologies such as eXtreme Programming (XP), which maintains source code as the only relevant software artifact, although refactoring was originally conceived to deal with source code changes.
Two key aspects of eXtreme Programming (XP) are unit testing and merciless refactoring. We found that refactoring test code differs from refactoring production code in two ways: (1) there is a distinct set of bad smells involved, and (2) improving test code involves additional test code refactorings. We describe a set of code smells indicating trouble in test code and a collection of test code refactorings explaining how to overcome some of these problems through simple program modifications.
The goal of our present investigation is to share our experience in improving test code with other XP practitioners.
Keywords: Test Smell, Test Case, Refactoring, Unit Testing, Object Oriented, TDD.
Modeling the Financial Performance of Construction Companies, Using Artificial Neural Network
El-Kassas E.M, Mohamad H.H, Bassouiny H.M and Farghaly K.M
Abstract: The financial performance of construction companies is seriously affected by many different factors, which can generally be classified into macro-economic factors, industry-related factors and company-related factors. To arrive at a reliable prediction of the expected performance of any company, such factors should be seriously considered. The objective of this research paper is the development of an ANN model of construction company performance in Egypt. The process of data collection is discussed in depth, and the processes of model development and validation are also explained.
A Multi-Agent System for the Modeling of the HIV Infection
Laroum Toufik, Bornia Tighiouart, and Mohamed Redjimi
Abstract: Mathematical tools have long been used to model population dynamics; at present, the multi-agent modelling approach seems promising in view of its capacity to master the complexity of the studied systems. In this work we model the population of cells involved in infection by the human immunodeficiency virus (HIV) to show the efficiency of the multi-agent approach by comparison with the mathematical approach. The results obtained bring to light the behaviour of, and the interactions between, the various cells studied, in agreement with biological observations.
Keywords: Multi-Agent Simulation, population dynamics, HIV infection, virtual community.
An Arabic-English Indexing System Using Inverted Index Algorithm
Ahmad T. Al-Taani, Ahmed S. Ghorab, and Hazem M. Al-Najjar
Abstract: In this study we implement the inverted index scheme on a collection of Arabic documents. Many researchers have tried to implement an information retrieval system that supports the Arabic language, but most of them concentrate on stemming techniques, and some of them evaluate existing systems. To enable the system to deal with both Arabic and English, we use an existing package, AraMorph, which applies a technique called transliteration: writing the words of one language with the letters of another. We evaluate the proposed system by calculating precision and recall for the most frequently used Arabic words in our collected corpus. Experiments showed the effectiveness of the proposed system in indexing the collected documents in the corpus.
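The core inverted index structure can be sketched in a few lines: a map from each term to the sorted list of documents containing it (a language-agnostic illustration; the paper's actual Arabic handling via AraMorph transliteration is not reproduced here, and the toy documents are assumptions):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():   # naive tokenization for the sketch
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def search(index, term):
    """Return the posting list (document ids) for a single term."""
    return index.get(term.lower(), [])

docs = {1: "information retrieval systems",
        2: "Arabic information indexing",
        3: "inverted index for retrieval"}
index = build_inverted_index(docs)
print(search(index, "retrieval"))    # [1, 3]
print(search(index, "information"))  # [1, 2]
```

Precision and recall for a query term can then be computed by comparing the returned posting list against a manually judged set of relevant documents.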
Keywords: Indexing, Inverted Index, Arabic Information Retrieval, Document Indexing, Arabic Language Processing, Vector Space Model.
Studying Relations between Websites Structural and Popularity Metrics
Izzat Alsmadi, Ahmad T. Al-Taani, and Nahed Abu Zaid
Abstract: There are several metrics by which we can evaluate websites. The relations between those metrics can vary depending on the type and nature of the metric, and also on the specific website or its domain. There are many software tools and related websites that measure website attributes such as vulnerability, performance, navigability, structure, etc. This paper focuses on studying website structural and related metrics that can be used as indicators of the complexity of websites. Website structural metrics can also be used to predict maintainability requirements. Examples of structural metrics evaluated in this study include: size, complexity, page loading speed, inlinks, and outlinks. While the results showed that structural metrics are not good indicators of the popularity of a website, they may affect popularity indirectly through their effect on the performance or usability of those websites. A tool was developed to collect navigability metrics such as inlinks and outlinks. Results showed inconsistent relations between inlinks and outlinks, and between structural and popularity metrics in general. Many websites that are small in nature can be very popular relative to large websites with many pages and links.
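The link-counting part of such a metrics tool can be sketched with Python's standard HTML parser (an illustrative sketch under assumed names, not the authors' tool; the sample page and domain are made up):

```python
from html.parser import HTMLParser

class OutlinkCounter(HTMLParser):
    """Count <a href=...> links in a page, split into internal links
    (within the same site) and outlinks (to other sites)."""
    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.internal = 0
        self.external = 0
    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        if href.startswith("http") and self.own_domain not in href:
            self.external += 1   # outlink to another site
        else:
            self.internal += 1   # relative or same-domain link

page = '<a href="/about">About</a> <a href="http://other.example/x">X</a>'
counter = OutlinkCounter("mysite.example")
counter.feed(page)
print(counter.internal, counter.external)  # 1 1
```

A crawler would feed each fetched page through such a counter and aggregate the totals per site to obtain the inlink/outlink metrics discussed above.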
KeyWords: Web Metrics, Testing, Websites Complexity, Navigability
Implementation of ICT and E-Learning in Libyan Education: A Study
Nasreddin Bashir El Zoghbi, P.G.V. Suresh Kumar, and V. Sudhakar Naidu
Abstract: The rapid growth of science and technology has brought radical change to all fields of life. By adopting modern technology, education plays a vital role in learning and teaching. The Libyan government has eventually responded to this challenge and started investing heavily in the reconstruction of its educational system, initiating national programs to introduce Information and Communication Technology (ICT) in education. Besides, there is a plan to establish virtual campuses in many universities and colleges to provide an advanced platform for learners and instructors. This paper presents the application of ICT and e-learning in Libyan education. It focuses on the issues that need to be considered in adopting ICT in the learning and teaching process, including technological infrastructure, curriculum development, cultural and language aspects, and management support. The paper outlines the prospects for the integration of e-learning in Libyan education and concludes with the proposal of an integrated approach to introducing e-learning in Libya.
Keywords: developing country, e-learning, ICT teacher training, information and communication technology, Libyan education, technology transfer, technological infrastructure, internationalization.
Applying Attribute Level Locking to Decrease the Deadlock on Distributed Database
Khaled S. Maabreh and Alaa Al-Hamami
Abstract: In a distributed database, a transaction consists of several participants, or agents, executing over all sites; all participants must guarantee that any change to data will be permanent in order to commit the transaction. If any participant fails to make this guarantee, the entire transaction fails and aborts. There are many approaches according to where the lock management is performed. One of them is centralized locking, where a single site is responsible for granting locks, because it is the only site that has a lock manager.
This research applies a new method for reducing the size of lockable entities by extending the granularity hierarchy tree one more level down, to the attributes, allowing several transactions to access the same row simultaneously.
The experimental results show that attribute-level locking decreases the competition for acquiring data and increases concurrency in the distributed database.
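The idea can be sketched as a lock table keyed by (table, row, attribute) instead of by row, so that two transactions conflict only when they touch the same attribute (a toy sketch with assumed names and API, not the authors' implementation):

```python
import threading

class AttributeLockManager:
    """Grant locks at attribute granularity: two transactions may update
    different attributes of the same row concurrently."""
    def __init__(self):
        self._locks = {}              # (table, row, attribute) -> owner txn
        self._mutex = threading.Lock()

    def acquire(self, txn, table, row, attribute):
        key = (table, row, attribute)
        with self._mutex:
            owner = self._locks.get(key)
            if owner is None or owner == txn:
                self._locks[key] = txn
                return True           # lock granted
            return False              # conflict only on the same attribute

    def release(self, txn, table, row, attribute):
        with self._mutex:
            if self._locks.get((table, row, attribute)) == txn:
                del self._locks[(table, row, attribute)]

mgr = AttributeLockManager()
print(mgr.acquire("T1", "emp", 7, "salary"))  # True
print(mgr.acquire("T2", "emp", 7, "dept"))    # True: same row, other attribute
print(mgr.acquire("T2", "emp", 7, "salary"))  # False: attribute-level conflict
```

Under row-level locking the second request would already block, which is exactly the extra concurrency the finer granularity buys at the cost of a larger lock table.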
Keywords: Distributed Database, Locking, Deadlocks, Concurrency Control, Performance
Genetic Algorithms and Fuzzy Logic with Voice Recognition in Neural Networks
Nune Sreenivas, P.G.V.Suresh Kumar, and M.V.Lakshmaiah
Abstract: In this paper we describe the use of neural networks, genetic algorithms and fuzzy logic for voice recognition. In particular, we consider the case of speaker recognition by analyzing the sound signals with the help of intellectual techniques, such as the neural networks and fuzzy systems. We use the neural networks for analyzing the sound signal of an unknown speaker, and after this first step, a set of type-2 fuzzy rules is used for decision making. We need to use fuzzy logic due to the uncertainty of the decision process. We also use genetic algorithms to optimize the architecture of the neural networks. We illustrate our approach with a sample of sound signals from real speakers in our institution.
Keywords: Genetic Algorithms, Voice Recognition, Fuzzy Logic, Neural Networks
Computer Simulation of 2D-Electric Voltage Distribution in the Anode of the Aluminum Smelter
Ibrahim Hammamu, Khalaf Jehad Abed, and Rabeea Badr
Abstract: The steady-state electric voltage distribution in the pre-baked anode of an aluminum smelter has been studied using a computer simulation method in two dimensions. A local computer simulation code based on Laplace's equation has been developed to find the electric potential at each grid point inside the anode, since computer simulation is the only available means of predicting the voltage. The simulation shows that contact resistance plays a major role in electric power consumption in the aluminum production process.
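The numerical core of such a simulation can be sketched as a finite-difference relaxation of Laplace's equation on a toy grid (an illustrative sketch, not the authors' code; the grid size and boundary voltages below are arbitrary assumptions):

```python
def solve_laplace(grid, iters=500):
    """Jacobi relaxation: each interior potential becomes the average of its
    four neighbours, the discrete form of Laplace's equation del^2 V = 0."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]           # boundary values stay fixed
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                    grid[i][j-1] + grid[i][j+1])
        grid = new
    return grid

# Toy cross-section: top edge held at 4.0 V, remaining edges at 0 V.
n = 5
grid = [[4.0] * n] + [[0.0] * n for _ in range(n - 1)]
solution = solve_laplace(grid)
print(round(solution[2][2], 3))  # 1.0: V/4 at the centre, by symmetry
```

Contact resistance would enter a real model through the boundary conditions; the relaxation loop itself stays the same.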
Keywords: Smelter Anode, Laplace equation, Contact Resistance.
Implementing the Libyan Standard 27001 for the Security and Confidentiality of Digital Data in GIS Centers in Libya
عثمان ابوبكر القاجيجي
Abstract:
Geographic information systems use computers and their peripherals to process geographic and spatial data. These systems consist of an information infrastructure that depends on the exchange of digital data among different components: server systems with their peripherals and workstations, software systems for processing maps and data, control and processing systems, data transmission and exchange systems, communication systems, support systems, and other digital systems, all working in an interconnected manner to deliver the best services by the most efficient and economical means.
Hence the role of digital systems: computers, networks, communications, storage devices, digital peripheral equipment, central and subsidiary control devices, and other systems. All of these operate digitally and depend in their work on exchanging digital data among themselves, using wired communications (copper cables and optical fibres) and wireless communications (electromagnetic waves) for interconnection, according to dedicated technical protocols. To guarantee the proper performance of these interdependent systems, data transmission among the various components requires preserving the confidentiality and security of the digital data and ensuring its intact delivery. Following standards in the field of information and communication technology ensures the proper operation of these systems and provides a measure of protection for the digital data.
This paper aims to introduce the international and Libyan standard 27001, its contents, its areas of application, and its 11 control domains. We address the procedures that would put this standard into practice to protect digital data and the ways of handling it. Control domain (A.15) of the standard, concerning compliance with the legislation in force, is discussed in detail. The risk of data being lost, corrupted, or reaching unintended parties, among other security breaches, may cause financial loss or damage the various components of the system; following the procedures specified by the international and Libyan standard 27001 therefore becomes a requirement for handling digital data.
At the end of this paper we propose a set of practical plans and guidelines that will help the GIS data centers (GIS-DC) in Libya apply this standard, protect digital data, and preserve its confidentiality.
Keywords: ISO 27001, data confidentiality, digital protection systems, geographic information systems, information and communication technologies, protection procedures.
Multi Agent in Medical Protocol Monitoring for Secure Communication in Hospital Environment
Abubaker Salem Mohamed Shibani
Abstract: This paper describes the architecture of a multi-agent system (MAS) to assist in and supervise the application of medical protocols in the hospital environment. We model health care in the hospital environment with domain agents, treating the interpretation of medical protocols as a negotiation process between agents. Medical service agreements may involve several specialized medical domains, whose agents are independent in the negotiation process and autonomous in their monitoring mandate. As the Internet is not a secure communication channel, we define a strong interface providing privacy, integrity and identity during the exchange of information among agents.
Keywords: Multi agent, secure communication, medical protocol.
Non-Destructive Technique for PCCP Monitoring Using Equivalent Circuit and Artificial Neural Networks (A Simulation Study in GMRA)
Y. Shakmak, and S. Awami
Abstract: Non-destructive testing (NDT) is a successful approach used in many fields to monitor systems under consideration. In this work, the technique is tested for monitoring the status of Pre-stressed Concrete Cylinder Pipe (PCCP) using the equivalent circuit of the Embedded Cylinder Pipe (ECP) type. The equivalent circuit model is designed and developed based on Dave's model for PCCP. This study uses a simulation approach to show the feasibility of the technique for monitoring PCCP in the Great Man-Made River Authority (GMRA). MATLAB, the scientific technical computing language, was used to simulate the designed model of PCCP; the simulation provides a great deal of information when the model is run under different pipe conditions. The simulated results obtained improve the understanding of condition monitoring of PCCP. The designed model, based on the equivalent circuit of the PCCP, provides data that reflect the condition of the simulated pipe; these data can be measured in the real world without any destructive action and can give the condition status of the monitored pipe. The monitoring method introduced in this work is based on an equivalent-circuit measurement parameter, namely the change in exciter current compared with the exciter current in the no-defect case. An Artificial Neural Network (ANN) was designed and trained using the simulated data obtained. It is found that it is possible to use the exciter current for PCCP condition monitoring, and that defect location and severity can be determined using the designed neural network. In this work, a large-diameter pipe was simulated using its physical parameters as designed by GMRA.
Keywords: PCCP Inspection; PCCP Modeling; NDT; ANN; PCCP Condition Monitoring, Eddy Current.
A New Technique for Detecting and Locating Inconsistencies in Software Requirements
Randa Ali Numan Khaldi
Abstract: In this paper we present a new technique for diagnosing requirements and determining whether they contain inconsistencies. We start by detecting and locating inconsistencies, then identify the cause of each inconsistency, the type of action that caused it, and the owner of that action, using an explicit set of consistency rules that capture the inconsistencies. These rules are refined as the detecting and locating process proceeds. Using consistency rules provides great accuracy, reduces the time needed, and makes the technique more intelligent by exploiting previous experience. This technique helps us detect, locate and identify inconsistencies not only in the early stages but throughout the development process.
Keywords: Matrix of Requirements and rules, Process for detecting and Locating Inconsistency, consistency rules, Diagnosing inconsistencies.
Multi-Robots Cooperation Strategies in an Exploration Task
Ashraf S. Huwedi
Abstract: It has been recognized that several tasks can be performed more efficiently and robustly using multiple robots. Further spatial applications are envisaged in the future in which fleets of autonomous robots achieve tasks that are too risky for humans. In this paper, we present a master-slave based framework for the deployment of multiple autonomous robots in an unknown indoor environment in order to explore their surroundings within a good exploration time. The main contributions of the paper are divided into two steps: the first is to describe the master-slave approach and the software tools used, which allow the development of the framework for multi-robot coordination; the second is to ease the development of cooperative tasks among the robots in order to achieve the work very efficiently. A study of the efficiency of the multi-robot system is presented. Different cooperation strategies are used in order to improve the performance of the system, based on factors such as the area to be explored, the number of robots in the team, the time to extract a new feature during exploration, and the degree of cooperation. These strategies are implemented using a simple communication mechanism. The algorithm and the framework are demonstrated on an experimental testbed that involves a team of two mobile robots: one works as a master equipped with a stereo camera, while the second is involved as a slave equipped with a rotating laser scanner sensor.
Keywords: Multi-Mobile Robots, Cooperation Strategy, Robot Simulation, Exploration, Coordination.
Performance Evaluation of Mobile Voice Service Using Different Audio Codecs Over a Wi-Fi Network
Raniem B. Almongoush, and Adel Aneiba
Abstract: Various new mobile multimedia streaming systems are being deployed on the market. Audio content is, and will remain, the main content type in streaming, with or without other content types. A good audio codec should provide consistently high quality for different audio types, such as speech, music, and speech mixed with music, and maintain this consistency even at low bit rates. That is essential when audio is delivered alongside other content or data consuming most of the channel capacity. In this paper, different audio codecs have been evaluated and validated experimentally. The research found that different codecs provide very different results for the same content and wireless channel. The evaluation was carried out based on a developed model called the mobile voice message (MVM) service. The results show that the AMR codec outperforms the G.711 u-law and PCM codecs in terms of bandwidth utilisation and file transfer time.
Keywords: Audio codecs, Wi-Fi Networks, Streaming, JAVA ME, Tomcat, Nokia N96.
B2C E-Commerce Banking In Developing Countries: An Empirical Study in Egypt
Tarek Taha Ahmed
Abstract: The accelerated growth of e-commerce and advanced Internet technology have changed the way banking services are designed and delivered. Electronic commerce has meant that currently most banking products and services are conducted through the Internet. However, most previous work on e-commerce banking has focused mainly on developed countries, and less so on developing countries. The underlying assumption is that most e-commerce users are from Western countries and have a more favorable Internet environment. Thus, this paper aims to help address some of the gaps in the current body of literature, specifically in our local context, by proposing an empirical model that can predict and identify the factors with the most influence on the customer's intention to use e-commerce banking, and simultaneously assess the extent to which this technology is actually applied in Egypt as an example of a developing country. This paper attempts to integrate and encompass the most frequently cited factors in the e-commerce literature, applying them in the local context in order to best examine the phenomenon under investigation. Thus, the proposed model contains variables that have not been tested simultaneously in previous works.
Keywords: B2C e-commerce, e-commerce application, e-commerce banking, e-commerce transactions, Internet banking services, Internet technology.
Arrhythmia Classification from ECG signals using Data Mining Approaches
Ali Kraiem, and Faiza Charfi
Abstract: The objective of this paper is to develop a model for ECG (electrocardiogram) classification based on Data Mining techniques. The MIT-BIH Arrhythmia database was used for the analysis of classical ECG features. This work is divided into two parts. The first part deals with the extraction and automatic analysis of the different waves of the electrocardiogram by time-domain analysis, and the second concerns decision-making support using Data Mining techniques for the detection of ECG pathologies. Two pathologies are considered: atrial fibrillation and right bundle branch block. Several decision tree classification algorithms currently in use, including C4.5, Improved C4.5, CHAID and Improved CHAID, are compared for performance analysis. The bootstrapping and cross-validation methods are used to estimate the accuracy of these classifiers designed for discrimination. The bootstrap with pruning by 5 attributes achieves the best performance, managing to classify correctly.
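The splitting criterion on which decision tree learners such as C4.5 are built can be sketched in a few lines of Python (the toy "beats" below are illustrative assumptions, not MIT-BIH data; C4.5 additionally normalizes this gain by the split information):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in entropy obtained by splitting on attribute index attr."""
    base = entropy(labels)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(sub) / len(labels) * entropy(sub)
                    for sub in by_value.values())
    return base - remainder

# Toy beats: (rhythm, QRS width) -> diagnosis (hypothetical values).
rows = [("irregular", "narrow"), ("irregular", "wide"),
        ("regular", "narrow"), ("regular", "wide")]
labels = ["AF", "AF", "normal", "RBBB"]
print(information_gain(rows, labels, 0))  # 1.0: rhythm separates AF perfectly
print(information_gain(rows, labels, 1))  # 0.5: QRS width is less informative
```

The tree builder greedily chooses the attribute with the highest gain at each node, which is why rhythm would be tested first in this toy example.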
Keywords: ECG, MIT-BIH, Data Mining, Decision Tree, classification rules.
Communication Media in Business Industries: Perspectives on Effectiveness
Reynaldo Gacho Segumpan, Joanna Soraya Abu Zahari, and Saif Said Ali Al Ghafri
Abstract: Arguably, communication is the backbone of organizational dynamics. This research examined employees' perceptions of the effectiveness of using certain media to communicate at work, such as electronic mail (e-mail), intranet, newsletters, memos, faxes, managers, voicemail, phone hotlines, and co-workers. The study also looked into significant differences in employee opinions of media effectiveness when grouped by age, type of industry, and job function. A total of 55 respondents working in certain industries (logistics, manufacturing, services, and trading) in an Industrial Zone completed the questionnaire. The findings revealed that a mixture of media was perceived to be effective in different industries, and that the respondents' perceptions were not consistent when grouped by age, type of industry, and job function.
Keywords: Communication Media, Business, Effectiveness
Fault Diagnosis of Rotating Machinery Using Wavelet Transform and Principal Component Analysis
Hocine Bendjama, Mohamad S. Boucherit, and Saleh Bouhouche
Abstract: Fault diagnosis plays a crucial role in industrial systems today. To improve reliability, safety and efficiency, advanced fault diagnosis methods have become increasingly important for many systems. In this paper, fault diagnosis of rotating machinery is performed using a combination of the Wavelet Transform (WT) and Principal Component Analysis (PCA). The WT is employed to decompose the measured vibration signal into different frequency bands. The obtained decomposition levels are used as input to the PCA method for fault detection and diagnosis. The objective of this method is to exploit the information contained in the frequency bands of the measured data. The proposed method is evaluated using experimental measurement data with mass unbalance and gear faults.
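One level of the decomposition step can be illustrated with the simple Haar wavelet (an assumption for the sketch; the paper does not state which wavelet family it uses), which splits a signal into a low-frequency approximation band and a high-frequency detail band:

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar wavelet transform: pairwise
    averages give the approximation band, pairwise half-differences the
    detail band. Assumes an even-length signal."""
    approx = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

sig = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 2.0]   # toy vibration samples
approx, detail = haar_step(sig)
print(approx)  # [3.0, 5.0, 2.0, 2.0]  -- low-frequency content
print(detail)  # [1.0, 0.0, -1.0, 0.0] -- high-frequency content
```

Applying the step recursively to the approximation band yields the multi-level frequency bands that would then feed the PCA stage.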
Keywords: Vibration measurement, Fault Diagnosis, Wavelet Analysis, Principal Component Analysis, Mass Unbalance, Gear Fault
Qualitative Learning through Knowledge Management in a web environment
Abdullah Alani, O.K. Harsh, and Sohail Iqbal
Abstract: The role of knowledge and reusable knowledge in web information systems is discussed in order to understand web-based learning. We propose that a qualitative learning environment is possible, one that can be viewed through knowledge- and quality-based models such as the revised Nonaka model and ADRI. It has been found that knowledge (including reusable knowledge) in the web environment not only broadens learning strategies but also increases the quality of knowledge by effectively utilizing existing resources. Thus knowledge management and quality learning, in the context of the revised Nonaka and ADRI models, can be employed to create better web-based learning.
In this way it is possible to create a better strategy for accomplishing learning using web information systems in a three-dimensional environment, where the concepts of time, space, place, technology, and interaction can play a vital role in understanding the more useful involvement of knowledge.
Keywords: ADRI Model, Knowledge Reuse, Knowledge Management, Learning and Quality.
Post-Implementation Evaluation of Enterprise Resource Planning (ERP) Implementation – in Public Utilities
Murad Musa Sakkah, and Habib Lejmi
Abstract: Although ERP system implementations can lead to major performance changes, there is already evidence of high failure risks related to ERP system implementation projects. In the past few years, one of the major research issues in the ERP area has been the study of ERP system implementation success, in order to help better plan and execute ERP implementations. A typical approach used to define and measure ERP implementation success has been the identification and validation of critical success factors (CSFs) for ERP system implementations. In this research, CSFs for ERP system implementations identified in the literature are consolidated, and the use of these factors in practice is analyzed. A case study is conducted in order to evaluate their impact on an ERP system implementation at a public utilities institution, a subsidiary of the General Electricity Company of Libya (GECOL). The research leads to the recommendation that the requirements analysis phase in particular plays a decisive role in ERP implementation success. It is therefore essential to give this phase the necessary attention and not to start the implementation until functional and technical specifications have been elaborated.
Keywords: ERP, post-implementation evaluation, public sector, case study
A Neuro-Fuzzy Recognition of Premature Ventricular Contraction
M. A. Chikh, M. Ammar, and R. Marouf
Abstract: This paper presents a fuzzy rule-based classifier and its application to discriminate premature ventricular contraction (PVC) beats from normal beats. An Adaptive Neuro-Fuzzy Inference System (ANFIS) is applied to discover the fuzzy rules in order to determine the correct class of a given input beat. The main goal of our approach is to create an interpretable classifier that also provides acceptable accuracy. The performance of the classifier is tested on the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) arrhythmia database. On the test set, we achieved an overall sensitivity and specificity of 97.92% and 94.52% respectively. Experimental results show that the proposed approach is simple and effective in improving the interpretability of the fuzzy classifier while preserving the model's performance at a satisfactory level.
Keywords: Adaptive Neuro-Fuzzy Inference System, interpretable classification, MIT-BIH arrhythmia database.
A Proposed Security Evaluator for Knapsack Public Key Cryptosystems Based on ANN
Sattar B. Sadkhan, Nidaa A. Abbas, and Muhammad K Ibrahim
Abstract: The users of a security system sometimes need to evaluate the security (complexity) of the system under consideration. For that reason, research on the foundations of security evaluation methods is considered an important field in cryptology. This paper presents (for the first time, to our knowledge) the use of an Artificial Neural Network (ANN) as a security evaluator for knapsack-type PKC. The proposed evaluator considers the following knapsack cryptosystems: the Merkle-Hellman cryptosystem (based on a Super Increasing Sequence (SIS)), the Lu-Lee cryptosystem (based on building a vector that depends on factorization), the Goodman-McAuley cryptosystem (based on Standardized Multiplication (SM)), the Adina di Parto cryptosystem (based on factorization with more than two prime numbers), etc.
The proposed evaluation method is based mainly on the attacking methods applied to the cryptosystems mentioned above, and on the density of the knapsack vector used in each cryptosystem. The main contribution is the adaptation of an ANN as a security evaluator and the search for a suitable network for this task. The paper considers three ANN types: the Perceptron network, the Linear network, and the Back-Propagation network. For every knapsack cryptosystem two parameters are calculated: the method of hiding the knapsack vector and the density of the knapsack vector.
Keywords: Security evaluation, knapsack cryptosystems, artificial Neural Network, knapsack Vector Density
Blind Receiver of OFDM System Based on ICA for Single- Input Single- Output Systems
Sattar B. Sadkhan, Hanan A. Akkar, and Wafaa M. Shakir
Abstract: We consider a blind receiver for an OFDM system based on Blind Source Separation (BSS) adopting the ICA algorithm, which effectively increases spectral efficiency compared to training-based systems and provides considerable performance enhancement over conventional training-based detection methods. Several experiments were performed to verify the validity of the proposed ICA receiver. The C-FastICA algorithm was adopted to separate all subcarriers of the OFDM signal for a Single-Input Single-Output (SISO) system.
Keywords: Blind Source Separation (BSS), Independent Component Analysis (ICA), FastICA algorithm, OFDM systems.
Classification Based on Association-Rule Mining Techniques:
A General Survey and Empirical Comparative Evaluation
Alaa Al Deen, Mustafa Nofal, and Sulieman Bani-Ahmad
Abstract: In this paper, classification and association rule mining algorithms are discussed and demonstrated. In particular, we address the problem of association rule mining and investigate and compare popular association rule algorithms. The classic problem of classification in data mining is also discussed. The paper then considers the use of association rule mining in classification, demonstrating a recently proposed algorithm for this purpose. Finally, a comprehensive experimental study against 13 UCI data sets is presented to evaluate and compare traditional and association-rule-based classification techniques with regard to classification accuracy, number of derived rules, rule features and processing time.
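The level-wise (Apriori-style) search at the heart of association rule mining can be sketched as follows (a minimal illustration; the transactions and support threshold are made up, and real miners add pruning and rule generation on top of this):

```python
def frequent_itemsets(transactions, min_support):
    """Apriori-style level-wise search: grow frequent k-itemsets into
    candidate (k+1)-itemsets, keeping only those meeting min_support."""
    items = {frozenset([i]) for t in transactions for i in t}
    freq = {}
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    while level:
        for s in level:
            freq[s] = sum(s <= t for t in transactions)  # support count
        # candidate generation: join same-size itemsets differing in one item
        level = {a | b for a in level for b in level
                 if len(a | b) == len(a) + 1
                 and sum((a | b) <= t for t in transactions) >= min_support}
    return freq

transactions = [frozenset(t) for t in
                [{"bread", "milk"}, {"bread", "butter"},
                 {"bread", "milk", "butter"}, {"milk"}]]
freq = frequent_itemsets(transactions, min_support=2)
print(freq[frozenset({"bread", "milk"})])  # 2: appears in two transactions
```

Association-rule-based classifiers such as those surveyed above mine itemsets of this kind in which the consequent is constrained to be a class label.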
Keywords: Data mining, Classification, Association, Associative Classification, MMAC, CBA, C4.5, PART, RIPPER
Toward an Automatic Segmentation Method of Endocardial Border in Cardiac Magnetic Resonance Images
Mohammed Ammar, Mohammed Amine Chikh, and Saïd Mahmoudi
Abstract: Currently, the evaluation of cardiac function involves the global measurement of volumes and ejection fraction (EF). This evaluation requires the segmentation of the left ventricle (LV) contour. This paper describes a new method for the automatic detection of the endocardial border in cardiac magnetic resonance images. The segmentation process starts with the application of a Hough transform to detect the circular shape and ends with an active contour algorithm. Validation was performed by comparing the resulting segmentation to the manual contours traced by two experts, using a database containing one automated and two manual segmentations for each sequence of images.
This comparison showed good results, with an overall average area similarity of 93.5%.
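The circle-detection step can be sketched with a Hough accumulator over candidate centres for a known radius (a deliberately simplified illustration on synthetic edge points, not the paper's implementation; real use sweeps a range of radii over edge maps extracted from the images):

```python
from collections import Counter
from math import cos, sin, pi

def hough_circle_centre(edge_points, radius, width, height):
    """Each edge point votes for every centre lying at `radius` from it;
    the accumulator peak is the detected circle centre."""
    votes = Counter()
    for (x, y) in edge_points:
        for step in range(360):                      # 1-degree angular sweep
            t = step * pi / 180
            a = round(x - radius * cos(t))
            b = round(y - radius * sin(t))
            if 0 <= a < width and 0 <= b < height:
                votes[(a, b)] += 1
    return votes.most_common(1)[0][0]

# Synthetic "endocardial border": points on a circle of radius 10 about (20, 20).
edge = [(round(20 + 10 * cos(k * pi / 18)), round(20 + 10 * sin(k * pi / 18)))
        for k in range(36)]
print(hough_circle_centre(edge, radius=10, width=40, height=40))  # near (20, 20)
```

The detected centre then seeds the active contour, which refines the rough circle into the actual endocardial border.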
Keywords: automatic detection, the endocardial border, Hough transform, active contour
A Model for Capturing the Tacit Knowledge Generated During Software Requirements Engineering
عبدالمجيد حسين محمد و نسرين على الحربى
Abstract: Maintenance work poses the greatest challenge facing software engineers; it is a highly complex and difficult process, especially since it is often carried out long after the software was built, when the engineers who originally developed the system may no longer be available. This places a heavy burden on maintenance engineers in understanding the components of the system under maintenance and why it was designed and implemented the way it is. Systems under maintenance usually suffer from the absence of information about the rationale behind the decisions taken on system requirements options and the ways they were realized. The knowledge that produced the design and construction decisions mostly remains locked in the minds of the software engineers who did the work; this tacit knowledge consists of the different viewpoints and assumptions taken into consideration during the project. Such knowledge is usually not recorded in the system documents, which include descriptive diagrams, source code, test cases, and so on. Because the tacit knowledge of past projects is lost, maintenance engineers usually resort to guesswork in understanding system elements, and their guesses may sometimes be wrong. Among the phases of software construction, requirements engineering is the most fertile phase for discussion and exchange of opinion among the stakeholders, customer representatives, and development team members such as systems analysts and designers. Meetings and deliberations are fertile ground for generating and exploiting tacit knowledge, since meeting minutes record only the decisions reached about system requirements, while all the opinions and viewpoints raised remain locked in people's minds. This paper presents a method for capturing and recording the tacit knowledge of software engineers, specifically the tacit knowledge related to the requirements engineering phase, generated during the deliberations of gathering and validating the requirements of systems under construction.
Keywords: requirements engineering, tacit knowledge, explicit knowledge, knowledge management, knowledge reuse.
Issues in Network Operations and Performance for the NOC (Libya) Network Infrastructure
Otman A. El-Ghajiji, and Amira Mohamed
Abstract: With the fast development of the information technology field in Libya, more and more documents and applications are being hosted on electronic systems, as in the case of the National Oil Corporation (NOC), where a vast amount of sensitive data is stored on servers and hundreds of business-related e-mails are exchanged every day. The NOC computer network represents an ideal example of an active and very critical system, spanning 4 geographical locations with more than 12 servers, 600 client computers, and a huge amount of sensitive information to protect from attackers, intruders, and hackers, who target the NOC as one of the foundations of the country.
Based on the above, this research project was adopted by the IT department of the NOC to study and inspect the network architecture and business processes, define problems and points of weakness in terms of security, performance and overall efficiency, and then develop solutions to those problems. The project delivered its objectives by defining the problems and addressing them with appropriate designs of valid solutions, which were then physically implemented and deployed to the extent possible. The main solutions provided to the NOC network are: a monitoring mechanism, a security policy, an audit policy, a De-Militarized Zone (DMZ) with internet usage policies, and Virtual Private Networking (VPN).
Ontology of Information Science Based On OWL for the Semantic Web
Ahlam Sawsaa, and Joan Lu
Abstract: With the shift from the information age to the era of knowledge, and alongside rapid developments in technology such as the internet, data and information have become distributed across the internet in databases and repositories, producing a massive and diversified flow that is difficult to control in all areas. Managing that flow requires organizing knowledge effectively. Ontology is an approach used to represent domain knowledge on the World Wide Web (WWW). This paper presents the creation of an ontology of Information Science as a conceptual model representing the terms of the domain, with the further aim of improving information retrieval on the internet.
Keywords: Information Science ontology - knowledge representation - OWL web ontology language - semantic web
The Intranet and Its Impact in the Hold of Decisions at the Level of Enterprises
Mohamed-Khireddine Kholladi, Chahinez Kholladi, and Sirine Kholladi
Abstract: The principal objectives of introducing a computerized information system using the new technologies of information and communication (NTIC), through the Internet and the intranet, are: improving the quality of problem handling, assessing service activities, controlling administrative management, containing the continuous increase in management costs, and facilitating decision making. Managing an enormous mass of information with conventional means (better organization, good administration, and qualified and sufficient staff) leads to difficult and complex management: no serious control of operations and personnel, no transparency in activities, and poor administrative management. The proposed solutions aim to optimize the rational use of the enterprise's material, human, and financial resources in order to meet these objectives, and to introduce the most modern information-processing techniques through the establishment of a master plan for the enterprise. The contributions of computing tools through the new technologies of information and communication (NTIC) in the field of management are: better organization of activities, considerable time savings, minimal expense, better real-time exploitation of information, and improved methods and means for data storage, data processing, and efficient decision making. In this paper, we address aspects of enterprise intranets in the context of decision making.
Keywords: Intranet, Internet, NTIC, Information system, Enterprise, Decision making.
Exact Method for Resolving the Q3AP Problem on Calculation Grid
Melle Sabrina Sahraoui, and Mohamed-Khireddine Kholladi
Abstract: The three-dimensional quadratic assignment problem (Q3AP) is one of the most difficult problems in combinatorial optimization; it is NP-complete and has several applications in data transmission. Its exact resolution requires the enumeration of a very large search tree containing billions of nodes even for medium-size instances. Solving large instances to optimality requires the implementation of complex methods demanding considerable computing power. Currently, with the growth of calculation grids, many parallel models for exact methods have been proposed. For a good exploitation of the grid environment, the initial problem is divided into many work units, which are then distributed over thousands of processors on the grid. The objective is to push the exact resolution of the Q3AP as far as possible. Large-scale tests on the Q3AP remain as future work.
Keywords: Combinatorial Optimization, Complex Systems, Calculation Grid, Parallel Exact Methods, Quadratic Assignment.
Applying the Australian Model in the SVU
Ayub Al-Badowi
Abstract: This paper reviews the feedback and expected outcomes for surveyed students of the Syrian Virtual University (SVU) regarding the design and implementation of e-learning based on the Australian Flexible Learning Framework (AFLF). The research used Soft Systems Methodology (SSM) to analyze and understand the complex and 'messy' situation of the current case study. The study emphasizes the effective use of the e-business services available in e-learning, and an outline of the steps for designing and implementing e-learning is included. Goals are defined as the broad objectives of a survey, measured after students respond, whereas outcomes are defined as recommendations.
How far is the Syrian Virtual University from the Australian Flexible Learning Framework? How can the Syrian Virtual University design e-learning content based on the framework? How can it assess its performance against the framework? These important questions, among others, form the main themes of this research, as an attempt to formulate the e-learning concept and e-learning web services.
The study concluded that e-learning constantly creates new market needs that must be met by qualified e-business services in order to progress towards a new concept of knowledge that helps both students and the SVU achieve their goals. The application of SSM allowed more introspection than was expected; the basic approach was not "problem solving" but finding a way to improve the situation.
Keywords: E-learning, Framework, E-learning Design, Service Oriented Architecture (SOA),Web service, Soft Systems Methodology (SSM).
Bandwidth Allocation for Handover calls in Mobile Wireless Cellular Networks – Genetic Algorithm Approach
Khaja Kamaluddin, and Abdalla Radwan
Abstract: This paper proposes channel allocation in mobile wireless cellular networks using a genetic algorithm. Time slots are allocated to handover calls based on their fitness scores. Our simulations show that calls with higher fitness scores are given preference over calls with lower scores; in the worst case, lower-fitness calls receive minimum bandwidth. This solution enables maximum utilization of cell bandwidth, avoiding bandwidth wastage.
Keywords: Channel allocation, Dropping Probability, Genetic algorithm
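The abstract above describes serving handover calls in order of fitness, with a guaranteed minimum for low-fitness calls. The following is a minimal sketch of that allocation step only; the fitness values, the proportional-share rule, and the minimum are illustrative assumptions, and the paper's actual genetic-algorithm machinery (selection, crossover, mutation) is not reproduced.

```python
# Toy sketch of fitness-ranked time-slot allocation for handover calls.
# Fitness function and parameters are illustrative, not the paper's.

def allocate_slots(calls, total_slots, min_slots=1):
    """Allocate time slots to handover calls in order of fitness score.

    Each call is a (call_id, fitness) pair. Higher-fitness calls are
    served first; every admitted call gets at least `min_slots`.
    """
    allocation = {}
    remaining = total_slots
    # Serve calls in decreasing order of fitness score.
    for call_id, fitness in sorted(calls, key=lambda c: -c[1]):
        if remaining <= 0:
            break
        # Higher fitness -> larger share, but never below the minimum.
        share = max(min_slots, int(remaining * fitness))
        share = min(share, remaining)
        allocation[call_id] = share
        remaining -= share
    return allocation

calls = [("A", 0.6), ("B", 0.3), ("C", 0.1)]
alloc = allocate_slots(calls, total_slots=10)
```

With these sample values, the highest-fitness call "A" takes the largest share while "B" and "C" still receive the guaranteed minimum slot each.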
Analysis of Primary School Arabic Language Textbooks
B. Belkhouche, H. Harmain, H. Al Taha, L. Al Najjar, S. Tibi
Abstract: This paper reports on a preliminary analysis of a corpus consisting of Arabic language textbooks used in primary schools. The input to our process is raw text extracted from Arabic textbooks of the Emirates curriculum, grades 1 through 6. Various aspects, including word and root frequencies, part-of-speech distribution, phonology, and themes, are investigated. A comparison of parts of speech between UAE grade 1 and Libya grade 1 textbooks is also performed. Our analysis raises several issues concerning the criteria for selecting a word list appropriate for enriching the vocabulary of school children.
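One ingredient of such a corpus analysis, the word-frequency count, can be sketched as below; the sample sentence is invented, and the root extraction, POS tagging, and phonology steps of the paper are far beyond this illustration.

```python
# Minimal word-frequency count over Arabic text (Unicode-aware).
import re
from collections import Counter

def word_frequencies(text):
    """Tokenize on word characters and count word occurrences."""
    tokens = re.findall(r"\w+", text)   # \w matches Arabic letters too
    return Counter(tokens)

sample = "كتب الولد الدرس ثم قرأ الولد الدرس"
freqs = word_frequencies(sample)
```

A real textbook analysis would run this over the full grade-by-grade corpus and then rank tokens (or their extracted roots) by frequency.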
Came-BDO: A Tool for Designing Data Marts from Object Databases
Salma Ben Mefteh, Jamel Feki, and Yasser Hachaichi
Abstract: This work presents a software tool called CAME-BDO that helps DSS (Decision Support System) designers construct data mart schemas from an object database. The tool relies on an approach that first obtains the object database schema from the object DBMS repository; it then applies a set of heuristics that extract multidimensional concepts (i.e., facts, measures, dimensions and their attributes), classifies them by relevance level, and finally generates and returns a set of star schemas. To assist decision makers, CAME-BDO allows them to adjust the generated star schemas according to their analytical requirements. In addition, being identified from objects of the enterprise transactional system, these schemas carry information useful for the implementation of the data mart (e.g., data type, length…) and for the generation of the ETL (Extract, Transform and Load) procedures that feed the data mart with data directly from the enterprise transactional system.
Keywords: Object database, Matisse Object DBMS, multidimensional modeling, star schema, data mart.
A Modified K-Means Clustering Algorithm for Gray Image Segmentation
Ibrahim A. Almerhag, Idris S.El-Feghi, and Ali A. Dulla
Abstract: The performance of an iterative image clustering algorithm depends highly on the choice of cluster centers in each step. In this paper we propose an effective algorithm to compute new cluster centers for each iterative step of K-means clustering.
The proposed algorithm is based on an optimization formulation of the problem and a novel iterative method, a modified version of the generic k-means algorithm, which chooses initial center values close to the final centers so as to minimize the number of repeated cycles. The cluster centers computed by this algorithm are found to be very close to the desired cluster centers for iterative clustering algorithms.
Keywords: image clustering, image segmentation, gray image, standard k-means algorithm, modified k-means
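The iterative structure the abstract refers to can be sketched on 1-D gray values as follows. Spreading the initial centers evenly over the observed intensity range is one plausible reading of choosing "nearest initial values of centers"; the paper's exact initialization is not reproduced here.

```python
# k-means on gray-level values with range-spread initial centers
# (illustrative initialization, not the paper's exact scheme).

def kmeans_gray(pixels, k, max_iter=100):
    """Cluster 1-D gray values; return the final cluster centers."""
    lo, hi = min(pixels), max(pixels)
    # Initialize centers evenly across the observed intensity range.
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(max_iter):
        # Assignment step: each pixel joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in pixels:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: recompute each center as its cluster mean.
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:      # converged
            break
        centers = new_centers
    return sorted(centers)

pixels = [10, 12, 11, 200, 198, 202, 100, 101]
centers = kmeans_gray(pixels, k=3)
```

On this toy data the centers converge after a single update, illustrating how a good initialization shortens the iteration.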
W-CDMA Via High Altitude Platform Station System
Mohammad A. M. Al-Rawas
Abstract: Alongside terrestrial and satellite wireless networks, a new alternative based on platforms located in the stratosphere has recently been introduced, known as High Altitude Platforms (HAPS). HAPS are either airships or aircraft positioned between 17 and 22.5 km above the earth's surface, with the capability to deliver a wide spectrum of applications to both mobile and fixed users over a broad coverage area. Wideband code division multiple access (WCDMA) has emerged as the mainstream air interface solution for 3G networks, and the ITU has specifically authorized the use of some IMT-2000 (3G) frequency bands from HAPS. This paper addresses forward link power control for a high altitude platform station WCDMA system under the assumption of power control imperfections. Power control improves both uplink and downlink performance by equalizing the powers of all users in a cell and by compensating for channel fading; in real systems, however, power control imperfections degrade the system capacity. The performance of two distance-based forward link power control schemes (nth-power-of-distance control schemes) is evaluated for HAPS W-CDMA systems. For a HAPS system with 19 beams, the total capacity of the system would be on the order of 1206 voice users or 144 data users. The coverage of the platform with 19 beams, each with a radius of 1.2 km, can be approximated by a circle with a radius of 6 km. It is shown that HAPS UMTS yields capacity and resource management improvements.
Keywords: HAPS, WCDMA, power control, interference.
Arabic Text Classification Based on Features Reduction Using Artificial Neural Networks
Fawaz Al-Zaghoul, and Sami Al-Dhaheri
Abstract: Despite the huge amount of textual information available online, which increases every day, effective retrieval is becoming more difficult; text categorization is one solution to this problem. In this paper, we present and analyze the results of applying an Artificial Neural Network (ANN) to the classification of Arabic language documents. Work on automatic categorization of Arabic documents using Artificial Neural Networks is limited. The system's primary source of knowledge is an Arabic text categorization (TC) corpus built locally at the University of Jordan and available at http://nlp.ju.edu.jo; this corpus is used to construct and test the ANN model. Methods of assigning weights that reflect the importance of each term, and of reducing features, are discussed; each Arabic document is represented by the term-weighting scheme. Since the number of unique words in the collection is large, feature reduction methods have been used to select the most relevant features for classification. The experimental results show that the ANN model with feature reduction achieves better results than the basic ANN in classifying Arabic documents.
Keywords: Text Classification, Neural Network, PCA, Vector Space Model.
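The term-weighting step mentioned in the abstract can be sketched as below. TF-IDF is shown as a representative scheme; the paper's exact weighting and reduction methods (e.g. PCA) are not reproduced, and the sample documents are invented.

```python
# TF-IDF term weighting over a toy collection of tokenized documents.
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per document (raw tf * idf)."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)                # raw term frequency
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["اقتصاد", "سوق", "سوق"], ["رياضة", "سوق"], ["رياضة", "كرة"]]
w = tfidf(docs)
```

Terms that occur in every document get weight zero, while rarer terms are boosted; a feature-reduction step would then keep only the highest-scoring dimensions before feeding the vectors to the ANN.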
An Optimized Edge Detector Based on Fuzzy Logic
Talai Zoubir, and Talai Abdelouaheb
Abstract: Edge detection is an important topic in computer vision and image processing. In this paper, an optimized edge detector based on fuzzy logic is proposed. The fuzzy If-Then rule system is designed to model edge-continuity criteria, and the maximum entropy principle is used in the parameter-adjustment process. We also discuss related issues in designing fuzzy edge detectors, and compare our detector with the popular Canny edge detector. The proposed fuzzy edge detector does not need the parameter setting that the Canny detector requires, and it preserves detail appropriately. It is very robust to noise and works well under high noise levels where other edge detectors fail; it efficiently extracts edges in images corrupted by noise without requiring a filtering step. The experimental results demonstrate the superiority of the proposed method over existing ones.
Keywords: Segmentation, Image Processing, Fuzzy Logic, Edge Detection, Maximum Entropy Principle, Fuzzy Inference Rules.
Challenges of Security, Protection and Trust on E-Commerce:
A Case of Online Purchasing in Libya
Abdulghader.A.Ahmed.Moftah, Hadya.S.Hawedi, Siti Norul Huda Sheikh Abdullah, and U.C. Ahamefula
Abstract: E-commerce is a successful internet-based business innovation. This form of business transaction strategy offers many opportunities for growth in business and marketing services in various aspects. Online shopping is an intermediary mode between marketers or sellers and the end user or consumer. The nature of online transactions in Libya is constrained by instability resulting from insecurity, unprotected transactions, and lack of trust. Online shopping could become a predominant shopping method if the barriers associated with insecurity, trust and customer protection are tackled. Owing to the significance of e-commerce for Libyan economic growth, this paper highlights the limitations associated with e-commerce transactions in Libya and proposes relevant steps towards overcoming these constraints. The relevance of integrating e-commerce into the Libyan economic system is also discussed.
Keywords: E-commerce, online shopping, security, protection, trust
Face Region Detection Using Skin Region Properties
Kenz A. Bozed, Ali Mansour, and Osei Adjei
Abstract: Face detection is an important step in face image processing. This paper presents an efficient method to detect the face region based on skin region properties. Skin detection and morphological operations are used to roughly detect skin regions, and the properties of these regions are used to detect the face region and discard other regions automatically. Further refinement is applied to isolate the face region by removing the neck and forehead. Experiments with images of different subjects show that the proposed technique discovers the face region faster and with a high detection rate. Different races, and subjects with occlusions such as glasses or beards/moustaches, were tested with good results.
Keywords: Face detection, Skin detection, Normalized space colour, Thresholding, Morphological operation and Properties of regions.
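The first stage described above, classifying pixels as skin in a normalized colour space, can be sketched as follows. The thresholds here are illustrative assumptions; the paper's actual rule, and the morphological refinement that follows it, are not shown.

```python
# Toy skin-pixel classifier in normalized rgb (illustrative thresholds).

def is_skin(r, g, b):
    """Crude skin test in normalized colour space (illustrative only)."""
    total = r + g + b
    if total == 0:
        return False
    rn, gn = r / total, g / total
    # Typical skin pixels are red-dominant with moderate green.
    return rn > 0.36 and 0.28 < gn < 0.36 and r > g > b

def skin_mask(image):
    """image: 2-D list of (r, g, b) tuples -> binary mask."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

image = [[(220, 170, 140), (30, 90, 200)],
         [(210, 160, 130), (255, 255, 255)]]
mask = skin_mask(image)
```

A real pipeline would then apply morphological opening/closing to the mask and keep the region whose shape properties best match a face.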
A Neuro-Fuzzy Recognition of Premature Ventricular Contraction
M. A. Chikh, M. Ammar, and R. Marouf
Abstract: This paper presents a fuzzy rule-based classifier and its application to discriminating premature ventricular contraction (PVC) beats from normal ones. An Adaptive Neuro-Fuzzy Inference System (ANFIS) is applied to discover the fuzzy rules that determine the correct class of a given input beat. The main goal of our approach is to create an interpretable classifier that also provides acceptable accuracy. The performance of the classifier is tested on the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) arrhythmia database. On the test set, we achieved an overall sensitivity of 97.92% and specificity of 94.52%. Experimental results show that the proposed approach is simple and effective in improving the interpretability of the fuzzy classifier while keeping the model performance at a satisfactory level.
Keywords: Adaptive Neuro-Fuzzy Inference System, interpretable classification, MIT-BIH arrhythmia database
Extracting Arabic Collocations Based on Jape Rules
Soraya zaidi, Mohamed-Tayeb Laskri, and Ahmed Abdelali
Abstract: The massive amount of digital information available in all disciplines has generated a critical need to organize and structure its content. Some of the existing tools for languages such as English or French can easily be adapted to Arabic; in some cases a simple configuration is sufficient, while in other cases significant modifications must be made to obtain acceptable results. We present in this paper a rule-based method for extracting collocations in Arabic using GATE (General Architecture for Text Engineering). We use the extracted collocations as domain terms to build Arabic text-based ontologies, and we validated the approach on The Crescent Quranic Corpus in order to build the Quran ontology automatically.
Keywords: Collocations extraction, Arabic language, Jape, NLP, GATE.
A Strategy That Improves Quality of Software Engineering Projects in Classroom
Khalid Slhoub
Abstract: The purpose of this paper is to propose a light-weight, learner-centric, small-scale strategy for managing and improving software quality in classroom projects. The strategy is made up of a development process, a set of quality metrics, and a set of associated standards. The development process is based on Agile practices. The quality metrics are generated via a goal-question-metric (GQM) process and include metrics for tracking product and process quality, specifically risk, product satisfaction, prototype suitability, prototype development duration, work week, productivity, efficiency, defect density, and maintainability (with respect to coding and documentation standards). The associated standards include benchmarks for each metric, and a set of documentation, coding, and logging procedures.
Keywords: Software Quality, Process Model, Software Testing
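Two of the tracked metrics lend themselves to a one-line formula each; the definitions below are the common textbook ones, assumed rather than taken from the paper.

```python
# Common formulas for two classroom-project quality metrics.

def defect_density(defects, kloc):
    """Defects per thousand lines of code (defects/KLOC)."""
    return defects / kloc

def productivity(loc, person_hours):
    """Lines of code produced per person-hour."""
    return loc / person_hours

dd = defect_density(defects=12, kloc=4.0)
prod = productivity(loc=4000, person_hours=160)
```

In a GQM setup, each such metric would be paired with a benchmark (e.g. a maximum acceptable defect density) against which student teams are tracked.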
Development of an Intelligent Monitoring Greenhouse System Based On Wireless Sensors Network Nodes with Fuzzy Logic Controller
Mohamed Fezari, Hamza Atoui and M. S. Boumaza
Abstract: In this paper a hardware and software design is presented to control and monitor greenhouse parameters such as air temperature, humidity, and irrigation by means of simultaneous ventilation and enrichment. A fuzzy logic controller paradigm is simulated for future implementation on wireless sensor network nodes. A set of smart sensor modules for the control and monitoring system was designed and tested. The heart of the smart sensor is a microcontroller that receives data on greenhouse environment conditions from many sensors installed inside and outside. The smart sensor transfers data to and from a PC via a wireless transmission system, and a simulation based on a fuzzy logic controller is provided for decision making. Accordingly, the system changes the state of greenhouse command devices (heaters, fans and vapor injectors) to reach the desired condition. A friendly GUI developed in a high-level language carries out the monitoring tasks; the program implements the control algorithms, comparing the received data with set points and sending control signals to the smart sensors in order to reach the desired conditions. The performance of the designed system was tested by installing it in a model greenhouse with a set of two smart sensors.
Keywords: Greenhouse monitoring, node sensors, wireless sensors network, Fuzzy logic controller, microcontroller.
Alkhalil Morpho Sys1: A Morphosyntactic analysis System for Arabic texts
A. Boudlal, A. Lakhouaja, A. Mazroui, A. Meziane,
M. Ould Abdallahi Ould Bebah, And M. Shoul
Abstract: Alkhalil Morpho Sys [1] is a morphosyntactic parser of Standard Arabic words. The system can process non-vocalized texts as well as partially or totally vocalized ones. Our approach is based on modelling a very large set of Arabic morphological rules and on integrating linguistic resources useful to the analysis, such as the root database, the vocalized patterns associated with roots, and the proclitic and enclitic tables. The output of the analysis is a highly informative table mainly containing the vocalization of the stem, its grammatical category, its possible roots with the corresponding patterns, and its proclitics and enclitics.
Keywords: Arabic language processing, analyser, morphosyntactic parser, Standard Arabic.
Forecasting Model Based on Fuzzy Time Series Approach
Samira M. BOAISHA and Saleh M. AMAITIK
Abstract: Forecasting plays a major role in virtually every area of human life, especially in making future decisions, such as weather forecasting, university enrollment, production, sales, and finance. Based on forecasting results, we can prevent damage or draw benefits from the forecasting activity. Many qualitative and quantitative forecasting models have been proposed; however, these models are unable to deal with problems in which historical data take the form of linguistic constructs instead of numerical values. In recent years, many methods have been proposed to deal with forecasting problems using fuzzy time series. In this paper, we present a new method to predict calendar-day averages for the Arabian Gulf Oil Company using a fuzzy time series approach based on average-based lengths of intervals. Visual-based programming is used in the implementation of the proposed model. The results obtained demonstrate that the proposed model can forecast the data effectively and efficiently.
Keywords: Fuzzy time series, Forecasting, Fuzzy sets, Average-based length
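The average-based interval-length step that such models rely on can be sketched as below, following the common heuristic (half the mean absolute first difference, rounded to its order of magnitude); the series values are invented, and the rest of the fuzzy-time-series model (fuzzification, rule extraction, defuzzification) is not reproduced.

```python
# Average-based interval length for fuzzy time series partitioning.

def average_based_length(series):
    """Half of the mean absolute first difference, rounded to its base."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    half_avg = 0.5 * (sum(diffs) / len(diffs))
    # Choose the base (1, 10, 100, ...) matching half_avg's magnitude.
    base = 10 ** (len(str(int(half_avg))) - 1)
    return round(half_avg / base) * base

series = [13055, 13563, 13867, 14696, 15460, 15311]
length = average_based_length(series)
```

The universe of discourse is then split into intervals of this length, and each historical value is fuzzified to the interval it falls in.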
Surface-Damage Classification of Concrete Structure Using Image Processing Method
Ekrem Emhmed and Shahid Kabir
Abstract: Damage in concrete structures can be assessed by analyzing the texture of surface deterioration using optical concrete imagery. This research proposes the application of an enhanced method of texture analysis, based on the non-destructive technique of optical imaging, to characterize and quantify structural damage. Different types of imagery (colour, greyscale, and binary) are evaluated for their effectiveness in representing surface deterioration. Images were taken of different concrete parts exhibiting different deterioration problems such as erosion, spalling, pop-outs, corrosion, and cracks. The images were classified and analyzed using supervised minimum-distance classification in the image classification and analysis software ENVI v4.0. Classifications based on the combined datasets were used to determine the different levels of damage in the concrete. The damage analysis conducted on the optical imagery provided quantitative information concerning total damage and crack-width opening, which was calculated using the maximum and mean opening along single cracks. Results show that optical imaging is a good method of structural damage analysis, and is cost effective and efficient.
Method for Distributed Database Design
Taher Ali Al-Rashahy
Abstract: In this work we present a method to design a distributed database. The method includes a number of procedures, namely fragmentation, a mechanism to divide the database files vertically, horizontally, or in a hybrid way. We also provide a way to locate the positions of the files in the distributed system with minimum cost (communication, processing, and access cost) during read and write operations. Finally, we present a mechanism to determine the number of copies of each file and their locations so as to minimize the cost of accessing these files within the system.
Keywords: Distributed database design, fragmentation, data allocation, Replication, communication cost.
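The allocation step described above can be sketched as a simple cost minimization: place each fragment at the site that minimizes the total access cost from all sites. The cost model (access frequency times unit transfer cost) and the data below are illustrative assumptions, not the paper's exact model, and replication is ignored.

```python
# Toy fragment-allocation step: host each fragment at the cheapest site.

def allocate(fragments, sites, access_freq, transfer_cost):
    """For each fragment, pick the site with minimal total cost.

    access_freq[site][frag]: how often `site` accesses `frag`.
    transfer_cost[a][b]: unit cost of moving data between a and b.
    """
    placement = {}
    for frag in fragments:
        def cost(host):
            # Every site's accesses to `frag` travel between host and it.
            return sum(access_freq[s][frag] * transfer_cost[host][s]
                       for s in sites)
        placement[frag] = min(sites, key=cost)
    return placement

sites = ["S1", "S2"]
fragments = ["F1", "F2"]
access_freq = {"S1": {"F1": 10, "F2": 1},
               "S2": {"F1": 2,  "F2": 8}}
transfer_cost = {"S1": {"S1": 0, "S2": 5},
                 "S2": {"S1": 5, "S2": 0}}
placement = allocate(fragments, sites, access_freq, transfer_cost)
```

Each fragment ends up at the site that accesses it most, which is the intuitive outcome of minimizing communication cost.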
Implementing Tram Using Amine to Enhance Cg Tool Interoperability
Antesar M. Shabut
Abstract: This work covers issues around conceptual graphs (CGs) tool interoperability in an attempt to implement the Transaction Agent Model (TrAM) using Amine 5 as a tool to build ontologies and make use of conceptual structure components, taking the SAP ECC (SAP ERP Central Component) exemplar as a research-based case study. The ensuing discussion assesses the potential transformation between the TrAM model and the Amine platform. It shows that it is possible to transform TrAM's graphical CG form into linear form using Amine conceptual structures, including definitions, canons, situations, and finally rules. The interpretation of Peirce logic into the Amine csrule, and the conduct of the inference process, can then be performed.
Keywords: Interoperability, TrAM, Amine 5, Conceptual Structure, CGs
Weighting of Conceptual Neighborhood Graph for Multimedia Documents Temporal Adaptation
Azze-Eddine MAREDJ and Nourredine TONKIN
Abstract: In approaches to the semantic adaptation of a multimedia document, the document is represented by an abstract structure, a relations graph expressing all the relations that exist between its media objects; adaptation consists in modifying this abstract structure in a minimal way so that it satisfies the target profile. The profile defines constraints that must be satisfied by the document to be played.
At this level, transgressive adaptation becomes necessary when no specification model satisfying the profile exists.
In this paper, we propose an approach based on a conceptual neighborhood graph, for which a weighting is also proposed in order to, on one side, substantially reduce the adaptation time and, on the other, find the substitution relations closest to those being replaced.
Keywords: transgressive semantic adaptation, multimedia document, conceptual neighborhood graph.
Evaluation of the Jordanian E-Government Websites Evolution
Qasem A. Al-Radaideh and Doha A. Al-Smadi
Abstract: Since the first adoption of e-government, several ministries and agencies in Jordan and the Arab region have started providing services to their citizens through e-government websites. The main aim of this paper is to study the evolution of some e-government websites in Jordan. The evolution of the websites is assessed according to usability and transparency metrics between the years 2005 and 2008. The study was accomplished using the WayBack Machine web archive and several website evaluation tools, and assessed the usability of the websites according to navigability, content quality, accessibility, browser compatibility, and privacy policy issues. The most evolving measures were navigability and content quality; the website of the Jordanian Ministry of Information and Communication Technology (MOICT) showed the best improvements in terms of navigability. The results reveal several weaknesses in the development of Jordanian e-government websites over time. On the other hand, the selected websites from the Arab region evolved more noticeably than the Jordanian e-government official website (JEG) in terms of usability metrics; the official e-government website of the UAE had the best results compared to JEG and the Libyan e-government website.
Keywords: Jordan E-Government, Website evaluation metrics, Website usability, Website evolution.
Performance Evaluation of Ad Hoc Routing Protocols (Dsdv and Olsr) In High Mobility Scenario for Manet
Sofian Ali Ben Mussa and Mazani Manaf
Abstract: Efficient routing is important in mobile ad hoc networks (MANETs) because of the unpredictable and rapid changes of network topology. Unexpected node mobility increases the chance of link breakage, and link breakage detection is a key factor in determining whether routing protocols for mobile ad hoc networks are efficient. Two routing protocols designed for mobile ad hoc networks are studied, namely Destination-Sequenced Distance Vector (DSDV) and Optimized Link State Routing (OLSR). The performance of these two proactive routing protocols in handling link breakage in a high-mobility scenario is observed, evaluated, and compared based on packet delivery ratio, throughput, and packet loss ratio. The simulations are performed using the NS2 simulator. The studies show that OLSR outperforms DSDV in all cases. The simulation experiments and a detailed analysis of the results are contained in this paper.
Keywords: performance study, network simulation comparison, link breakage, MANET
Linear Multiuser MMSE Detector Performance in Synchronous Cdma System
Tarik Suleiman and Abdelgader M. Legnian
Abstract: The bit error rate performance of the conventional and linear MMSE detectors is analyzed and simulated in a synchronous CDMA channel under Multiple Access Interference (MAI) and near-far problem conditions. A brief overview of multiuser detection (MUD) is given. Simulation results show the performance of the linear MMSE multiuser detector compared with the conventional detector, and conclusions are drawn from the simulation results.
Keywords: Multiuser detection, MMSE detector, CDMA, Conventional detector.
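The linear MMSE detector for synchronous CDMA has a standard closed form: apply (R + σ²A⁻²)⁻¹ to the matched-filter outputs, where R is the signature correlation matrix and A the diagonal amplitude matrix. The sketch below uses that standard form with illustrative signatures, amplitudes, and noise level, not the paper's simulation setup.

```python
# Linear MMSE multiuser detection for synchronous CDMA (standard form).
import numpy as np

def mmse_detect(y, S, amplitudes, sigma2):
    """Detect the bits of K users from one received chip vector y.

    S: (N_chips, K) matrix of unit-energy signature sequences.
    """
    R = S.T @ S                              # signature correlations
    A_inv2 = np.diag(1.0 / np.asarray(amplitudes) ** 2)
    matched = S.T @ y                        # matched-filter bank
    return np.sign(np.linalg.solve(R + sigma2 * A_inv2, matched))

rng = np.random.default_rng(0)
N, K = 8, 2
# Orthogonal (Walsh-like) signatures, normalized to unit energy.
S = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
              [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float).T / np.sqrt(N)
bits = np.array([1.0, -1.0])
amps = np.array([1.0, 2.0])                  # near-far: unequal powers
y = S @ (amps * bits) + 0.1 * rng.standard_normal(N)
detected = mmse_detect(y, S, amps, sigma2=0.01)
```

With unequal amplitudes, the A⁻² term is what distinguishes the MMSE filter from the simple decorrelator, trading off interference suppression against noise enhancement.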
Performance Comparison for Mobile Ad Hoc Routing Protocols In Large Networks
Muattaz Elaneizi and Ibrahim Buamud
Abstract: A Mobile Ad hoc Network (MANET) is a wireless, self-configuring network of mobile routers (and associated hosts) connected by wireless links, the union of which forms an arbitrary topology because of the random mobility of the nodes. In this paper, the three protocols DSDV, AODV and DSR are compared on a performance basis under different traffic protocols, namely CBR and TCP, in a large network. The simulation tool is NS2, and the scenarios are designed to show the effect of pause times. The results presented in this paper clearly indicate that the protocols behave differently under different pause times. The results also show the main characteristics of different traffic protocols operating on MANETs and thus suggest some improvements to the above-mentioned protocols.
Keywords: Proactive protocols, Reactive protocols, Random waypoint model, CBR, TCP, Awk
Digital Correlation Method Based On Microgeometrical Texture Patterns for Displacement Field Measurements
Halima Bubaker-Isheil1, Jerome Serri , and Jean Francois Fontaine
Abstract: The digital correlation method is widely used in experimental engineering to obtain the deformation field. Currently this method is applied to digital images of the initial and deformed surface sprayed with black or white paint; speckle patterns are then captured, and the correlation can be performed with a high accuracy of about 0.01 pixels in two-dimensional (2-D) cases. In three dimensions (3-D), stereo-correlation can be used, with lower accuracy.
The work presented in this paper is a first approach based on the use of a 3-D laser scanner with the objective of 3-D displacement field measurement. The digital speckle patterns are given not by gray levels but by the micro-geometrical surface texture. It is assumed that the waviness and the roughness remain sufficiently small during the deformation motion to give a good estimation of the particle displacement. The principle of the measurement and the methodology to obtain digital images from measured point clouds are described. Some experimental results are presented: for two-dimensional displacement fields, a solid translation of 5 mm and an in-plane rotation of 4°; for a three-dimensional object, an applied rotation of 6°.
Keywords: Digital Laser Scanner, Roughness Pattern and Displacement Field Measurement.
Performance of Blind Adaptive Multiuser Detection in Synchronous CDMA Channel
Khalifa M. Ali and Abdelgader M. Legnain
Abstract: In this paper, we provide an overview of recent methods in linear multiuser detection (MUD) and linear adaptive multiuser detection in a synchronous CDMA channel. The idea of linear adaptive multiuser detection is to send a training signal, which can be used to update the equalizer coefficients using a chosen algorithm. An error term is calculated from the difference between the training signal and the soft decision output; an algorithm implementing the chosen update function then adjusts the filter coefficients. A blind adaptive receiver does not require training sequences: it requires knowledge of only the signature waveform and the timing of the desired user, the same knowledge that the conventional receiver requires.
Keywords: Multiuser detection, CDMA, MMSE, linear adaptive filter, blind equalization
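The training-based update described above can be illustrated with a minimal LMS-style sketch (the step size, signature chips, and iteration count below are hypothetical; this is not the paper's implementation):

```python
# One LMS training step: the soft decision output is the inner product of
# the filter taps and the received chips; the error against the known
# training symbol drives the coefficient update.
def lms_update(w, r, d, mu=0.05):
    y = sum(wi * ri for wi, ri in zip(w, r))   # soft decision output
    e = d - y                                  # error term
    return [wi + mu * e * ri for wi, ri in zip(w, r)], e

# Toy run in a noiseless channel with one user's signature chips.
sig = [1, -1, 1, -1]
w = [0.0] * len(sig)
for _ in range(200):
    w, e = lms_update(w, sig, d=1.0)           # training symbol d = +1
```

After enough training iterations, the filter output for the desired user's signature converges to the training symbol.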
A Pattern Language of Conceptual Data Model Patterns Based On Fully Communication Oriented Information Modeling (FCO-IM)
Fazat Nur Azizah, and Guido P. Bakema
Abstract: A problem in data modeling is that creating a high-quality data model is not an easy task, and data modelers often deliver low-quality results due to unfamiliarity with data modeling principles and standards. This problem can be reduced if they are acquainted with standards which ensure that data models both meet business needs and are consistent. In this research, we propose the use of data model patterns arranged in a pattern language, with Fully Communication Oriented Information Modeling (FCO-IM) as the modeling approach. In this paper, we present the concepts of a pattern language of conceptual data model patterns (including the concept of Information Grammar for Pattern/IGP), as well as a list of conceptual data model patterns based on FCO-IM. We have carried out tests to determine whether the resulting conceptual data models (FCO-IM information grammars) produced using the pattern language are of high quality. The tests cover four measurements: syntax correctness, feasible validity, feasible completeness, and feasible comprehension. Based on the test results, we conclude that the conceptual data models are of high quality, particularly in the syntax correctness and validity aspects.
Keywords: data modeling, conceptual data model pattern, pattern language, FCO-IM, information grammar for pattern
Decision Tree Classifier for Supertagging Arabic Texts
Chiraz Ben Othmane Zribi, Fériel Ben Fraj, and Mohamed Ben Ahmed
Abstract: In this paper, we address supertagging Arabic texts with the ArabTAG formalism, a semi-lexicalised grammar based on Tree Adjoining Grammar (TAG) and adapted for Arabic. Supertagging is a very practical task because it reduces and speeds up parsing by assigning to each word in a sentence its elementary syntactic tree. We view this problem as a classification task: supertags (elementary structures) are classes to be assigned to a sequence of words based on their discriminant attributes (morphosyntactic and contextual information). The classifier we used is a decision tree classifier. To train and evaluate this classifier, we used a corpus of 5,000 words. The obtained results gave a fairly satisfactory accuracy rate of 70% in spite of the small size of the training corpus.
Keywords: Supertagging, ArabTAG, Tree Adjoining Grammar (TAG), machine learning, classification, Decision tree, Arabic language
The Concept of "Background Component" in the IASA Software Model
Abdelfetah Saadi, Abderrazek Henni, and Djamel Bennouar
Abstract: This paper proposes a new concept for software component models, aimed at the dynamic adaptation of software systems: the "Background Component", introduced in the IASA component model of our software model. The Background Component is composed of a set of components that run in the background when the instance starts. Our proposed concept is thus close to the phenomenon of intelligent agents that manage all types of events during the deployment or execution of the architecture. Moreover, the Background Component allows software models to accommodate all types of systems, making the software model more generic, flexible and adaptive.
Keywords: Software architecture, Background Component, component model, dynamic adaptation.
Deploying Holistic approach for Strategic Alignment of Information System
Azedine Boulmakoul, Noureddine Falih, and Rabia Marghoubi
Abstract: Strategic information system alignment is considered one of the main pillars of information systems governance in the company. In this paper, we propose the deployment of a holistic approach centred on the extended enterprise meta-model (ISO/DIS 19440 2007). This meta-modeling incorporates specific structures borrowed from the COBIT best practices for driving IT processes. Such a structure can bring systemic tools from the structural paradigm for a better assessment of the strategic alignment of the information system. In this work we use, in particular, Galois lattices and Guttman scales for process reengineering.
Keywords: Strategic alignment, Meta-modelling, Structural paradigm, Systemic, Galois Lattice, Guttman scales.
NS2 Implementation for Reactive Channel-Hopping and Error-Correcting Codes
Ali Mohammed Al-Sharafi, M. Hayel Al-Hakimi, and Talal Al-Sharabee
Abstract: To cope with the jamming problem in 802.11-based multi-radio wireless networks, an approach combining channel-hopping and error-correcting codes has been proposed, and the problem of maximizing goodput has been introduced and modeled. This model considers software-based channel-hopping in a multi-radio setting rather than a single-radio one.
In this paper, we present an architecture for designing and implementing infrastructure-based 802.11 multi-radio wireless networks, and we design and implement a simulation model used to verify the reactive defense model of channel-hopping and error-correcting codes against scanning attacks. We integrate this architecture into the popular ns-2 network simulator (which has limited support for simulating infrastructure-based 802.11 multi-radio wireless networks). This module will benefit academic researchers and industrial developers in developing multi-radio systems. Besides, the architecture of the module is also presented in this paper.
Keywords: Channel-Hopping, Error-Correction Code
Segmentation of Images by the Mean Shift Method:
Application to Satellite Images
Imine R., Ait Amrane M., Ammour A., Yahia Berrouiguet S., and Benyettou A.
Abstract: This article describes a color image segmentation method based on classification in a joint space of positions and colors. The mean shift method belongs to the family of non-parametric models: instead of modeling the distribution of positions and colors parametrically, it uses an empirical estimate of the distribution's modes (the local maxima). The fundamental idea of the mean shift method is to estimate these modes without requiring explicit calculation of the distribution. Our methodological choices are tested on satellite images.
Keywords: Segmentation, feature spaces, Mean shift, filtering, and evaluation.
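As a rough illustration of the mode-seeking idea (real segmentation works on joint position-color features; the 1-D data, flat kernel, and bandwidth below are hypothetical):

```python
# Each point is shifted to the mean of its neighbours within a bandwidth h
# until it stops moving; the fixed points are the modes (local maxima).
def mean_shift(x, data, h=1.5, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        neigh = [p for p in data if abs(p - x) <= h]
        m = sum(neigh) / len(neigh)            # mean of the window
        if abs(m - x) < tol:
            break
        x = m
    return x

data = [1.0, 1.2, 0.8, 5.0, 5.1, 4.9]          # two clusters
modes = sorted({round(mean_shift(p, data), 2) for p in data})
```

Points converging to the same mode are then grouped into one segment.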
Simulation of WiMAX Emergency Backup Communication Network Using NS2
Ali Mohammed Al-Sharafi, Adham M. Jaffar, Abdullah Ali Salem, and Abdussalam Mahyob Ghaleb
Abstract: Wireless communication continues to evolve quickly as the relatively new IEEE 802.16 standard is introduced as the big brother of Wi-Fi (IEEE 802.11), with much bigger coverage and a much wider range of frequencies [1], [2]. WiMAX is even allowed to work at 700 MHz [3], making it capable of penetrating walls and overcoming many physical obstacles; these properties have made WiMAX very suitable for civilian, government, and crisis management applications. The idea came up after the September 11th attacks on the World Trade Center in New York: initial reports indicated that communication assets near the affected area were either congested or incapacitated, raising real concerns that very important federal agencies in Washington, DC were at risk of losing critical wireline telecommunication services. For this reason, an alternative route is essential for crisis management and would contribute to saving lives. Whether partial or complete, the failure of telecommunication assets can lead to further loss of life and damage to property, due to errors and delays in emergency response and relief efforts after a crisis hits. The main idea of this paper is to use WiMAX technology as a backup communication network, replacing the damaged wired infrastructure in a city; the WiMAX network would help rescue teams locate victims and manage relief efforts more efficiently, and support coordination between officials and rescue teams in the affected area.
The results of this study show that WiMAX technology is very efficient in providing telecommunication services: a single base station can serve 100 victims located near the base station or as far as four kilometers away. The technology also proved capable of delivering signals when using modulation techniques for extremely bad weather, at data rates close to that of a dial-up connection.
Keywords: WiMAX, Disaster management, NS2.
Prediction of Pipe Failures in Water Mains Using
Artificial Neural Network Models
Nasser M. Amaitik and Saleh M. Amaitik
Abstract: Many owners of Pre-stressed Concrete Cylinder Pipe (PCCP) water mains around the world experience regular failures in their pipelines. The condition and performance of any water pipeline can be assessed by direct inspection using techniques such as electromagnetic resonance, acoustic monitoring, or ground-penetrating radar (GPR). It is common practice to inspect only a few sections of a pipeline at any point in time. This is largely due to the very high costs associated with direct inspection and the inability to apply direct inspection techniques under the operating conditions that prevail inside the pipeline. Thus, direct inspection activities can only provide a very incomplete picture of the state of the water mains. The situation can be improved with intelligent models capable of predicting the current condition and performance of the pipeline system based on observations of historical conditions and inspection results. We have developed such models for PCCP wire break prediction using Artificial Neural Network (ANN) techniques. The models are applied to real-world acoustic monitoring data collected from the Great Man-Made River Project (GMRP) in Libya. The ANN models are in good agreement with the training patterns and show good prediction performance (R² ≈ 0.99 for special pipes and R² ≈ 0.60 for standard pipes). The results of the ANN models are compared with those of a Multiple Linear Regression (MLR) model (R² < 0.28 for both special and standard pipes).
Keywords: Water mains, Concrete pipes, Wire Breaks, Neural Network, Acoustic Monitoring
EEG Recognition By Using Backpropagation Neural Network Based On Linear Prediction Filter Coefficients
AbdulSattar Khidhir, and Najla Safar
Abstract: In this research, linear prediction filter coefficients are used as the basis for quantifying changes over time in electroencephalogram (EEG) series obtained in four states: two abnormal and two healthy, each with eyes open and eyes closed. These linear prediction filter coefficients are then submitted to a backpropagation neural network for the purpose of signal discrimination by intelligent methods. The network gives good results when tested on feature values it was not trained on.
The results of classifying EEG using the backpropagation neural network show that Alzheimer's disease can be detected with 80-100% power in many channels when the EEG is taken from abnormal subjects with eyes closed. The transformed inputs (from the original signal data to the features considered in this research) are ideally suited for effective classification of EEG data. Recognition rates vary across EEG channels for correct recognition in the four cases (ho, hc, ao, ac). The proposed method can be useful in several applications, including time-series analysis, signal processing and speech recognition.
Keywords: Electroencephalograms (EEG), EEG recognition, backpropagation neural network, linear prediction filter coefficients (LPC).
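The feature extraction step can be sketched as follows: LPC coefficients are computed from a signal window via the autocorrelation method and Levinson-Durbin recursion (the test signal, window length, and model order are hypothetical; the paper's exact feature pipeline may differ):

```python
import math

def autocorr(x, lag):
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

def lpc(x, order):
    """Levinson-Durbin recursion on the autocorrelation sequence."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a, err = [0.0] * order, r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                          # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a, err = new_a, err * (1 - k * k)
    return a                                   # predictor coefficients

signal = [math.sin(0.3 * n) for n in range(256)]
coeffs = lpc(signal, order=2)                  # compact feature vector
```

A short coefficient vector like this, computed per channel and window, would then be fed to the backpropagation network.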
Integrating Architectural Description Into OMG Infrastructure
Adel Smeda
Abstract: Software architecture and object-oriented modeling are both concerned with describing the architecture of a software system. Software architecture uses formal languages (Architecture Description Languages, or ADLs) to define systems. Object-oriented modeling is defined by the OMG infrastructure, which is based on four semantic levels: meta-meta model, meta model, model, and application. Each approach has its advantages and weaknesses, and to profit from the advantages of both, a way to integrate ADL notations into the OMG infrastructure is needed. Early works tried to establish a mapping at the meta level; however, a meta model must be defined by a meta-meta model, and since software architecture does not support such a level, this way of mapping does not always work. In this paper we define a meta-meta model for software architecture and then use it to integrate the two approaches.
Keywords: Software architecture, Architecture Description Languages, Object -Oriented Modeling, UML, MOF, OMG, Mapping.
A Retroactive Quantum-inspired Evolutionary Algorithm
Zakaria Laboudi and Salim Chikhi
Abstract: This study outlines some weaknesses of existing Quantum-inspired Evolutionary Algorithms (QEA) by explaining how a bad choice of the rotation angle of qubit quantum gates can slow down the discovery of optimal solutions. A new algorithm, called the Retroactive Quantum-inspired Evolutionary Algorithm (rQEA), is proposed. In rQEA, the rotation of individuals' amplitudes is performed by quantum gates according to a retroactive strategy, making the algorithm more adaptive and thus leading to a good balance between intensification and diversification. Our algorithm was tested and compared to a Classical Genetic Algorithm (CGA) and to QEA on two benchmark problems. Experiments have shown that rQEA performs better than both CGA and QEA in terms of genericity, adaptability and accuracy.
Keywords: Quantum-inspired Evolutionary Algorithm, Quantum Computing, Knapsack Problem, Onemax Problem, Adaptive Memory Programming, Retroaction
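To make the rotation-angle issue concrete, here is a toy quantum-inspired EA on the OneMax problem (the problem size, fixed rotation angle, and generation count are hypothetical; this sketches a plain QEA, not the proposed rQEA):

```python
import math, random

random.seed(0)
N, delta = 16, 0.01 * math.pi        # bits and a fixed rotation angle

# Each bit i is a qubit angle theta[i], with P(bit = 1) = sin(theta)^2.
theta = [math.pi / 4] * N            # start in equal superposition

def observe(theta):
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in theta]

best = observe(theta)
best_fit = sum(best)
for _ in range(200):
    x = observe(theta)
    if sum(x) > best_fit:
        best, best_fit = x, sum(x)
    # Rotate each qubit toward the corresponding bit of the best solution.
    # With a fixed delta the amplitudes can lock in early (premature
    # convergence), which is the weakness a bad rotation-angle choice causes.
    for i in range(N):
        theta[i] += delta if best[i] == 1 else -delta
        theta[i] = min(max(theta[i], 0.0), math.pi / 2)
```

Once the angles clamp at 0 or π/2, observations become deterministic copies of `best` and the search stalls; the retroactive strategy aims at avoiding exactly this lock-in.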
A New Technique for Arabic Handwritten Recognition Based on UHS Database
Omar M. El-Sallabi, and Amany Aljamal
Abstract: Many studies have been conducted in the field of Arabic character recognition, but accurate results are still lacking, because the complexity of the shapes of Arabic letters makes recognition more difficult than for other languages such as Latin or Chinese.
In this paper we present a new technique for use in the field of Arabic handwriting recognition, whether offline or online. This technique is based on a very simple database called UHS-DB (User Handwritten Sample database).
Our database specifically depends on the user's handwriting, much as speaker recognition systems depend on the user's voice. Therefore, the proposed technique cannot be used in the multi-user case, but this characteristic makes the system more accurate in recognizing characters and preserves the user's privacy.
Keywords: Arabic character features, OCR, AH-Recognizer.
An Algorithm for Qualitative Causal Reasoning
In Cognitive Maps
Tahar Guerram, Ramdane Maamri, Zaidi Sahnoun, and Salim Merazga
Abstract: A cognitive map, also called a mental map, is a representation and reasoning tool for causal knowledge. It is a directed, labeled and cyclic graph whose nodes represent causes or effects and whose arcs or edges represent causal relations between these nodes, such as increases, decreases, supports, and disadvantages. A cognitive map represents the beliefs (knowledge) that we lay out about a given field of discourse and is useful as a decision-making support when studying complex systems. There are several types of cognitive maps, but the most used are fuzzy cognitive maps. We present in this paper an algorithm allowing abductive, or backward, causal reasoning in cognitive maps. Contrary to the existing reasoning method in cognitive maps, which is based on answering the question "what will happen if...?", our algorithm links an effect concept to cause concepts by answering the question "if I want to have..., what do I have to do?". In order to validate our idea, we applied it to the biological process of viral infection.
Keywords: qualitative reasoning, cognitive maps, causal reasoning.
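A toy illustration of the backward direction over a signed causal graph (the graph and the sign-propagation rule are hypothetical simplifications; the paper's algorithm on real cognitive maps must also combine the signs of multiple paths):

```python
# cause -> list of (effect, sign) edges, loosely inspired by a viral
# infection map; +1 means "increases", -1 means "decreases".
edges = {
    "virus_load": [("fever", +1)],
    "immune_response": [("virus_load", -1)],
    "antiviral_drug": [("virus_load", -1)],
}

def causes_of(effect, edges):
    """Answer 'if I want to act on <effect>, what can I act on?' by
    walking edges backward and multiplying signs along the path."""
    found = {}
    def walk(node, sign):
        for cause, outs in edges.items():
            for target, s in outs:
                if target == node and cause not in found:
                    found[cause] = sign * s
                    walk(cause, sign * s)
    walk(effect, +1)
    return found

result = causes_of("fever", edges)
# increasing "antiviral_drug" lowers "virus_load", hence "fever": sign -1
```

Each returned entry pairs a reachable cause concept with the net sign of its influence along the discovered path.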
Qualitative Modeling of Complex Systems by Neutrosophic Cognitive Maps: Application to the Viral Infection
Tahar Guerram, Ramdane Maamri, Zaidi Sahnoun, and Salim Merazga
Abstract: A cognitive map, also called a mental map, is a representation and reasoning model for causal knowledge. It is a directed, labeled and cyclic graph whose nodes represent causes or effects and whose edges represent causal relations between these nodes, such as "increases", "decreases", "supports", and "disadvantages". A cognitive map represents the beliefs (knowledge) that we lay out about a given domain of discourse and is useful as a means of explanation and support in decision-making processes. There are several types of cognitive maps, but the most used are fuzzy cognitive maps. The latter treat the cases of existence and nonexistence of relations between nodes but do not deal with the case where these relations are indeterminate. Neutrosophic cognitive maps, proposed by F. Smarandache [1], make it possible to take this indetermination into account and thus constitute an extension of fuzzy cognitive maps. This article proposes a modeling and reasoning tool for complex dynamic systems based on neutrosophic cognitive maps. In order to evaluate our work, we applied our tool to a medical case, viral infection.
Keywords: qualitative reasoning, fuzzy cognitive maps, neutrosophic cognitive maps, causal reasoning.
A Component Model For The Synthesis Of Communication Interfaces
Yasmine Mancer, Djamal Bennouar and Nadjia Benblidia
Abstract: The increasing complexity of electronic systems has led to longer design times while, paradoxically, economic competition imposes a shorter time-to-market. In an attempt to reduce this gap, new design methodologies are required; hardware/software co-design can cope with this problem. In co-design, systems are composed of a software part and a hardware part, whose joint execution is ensured through the use of hardware/software interfaces. In this paper, we present the synthesis of communication interfaces in the general context of co-design, along with the different problems related to this field, and we give a proposal to implement a component model of an integrated software architecture approach for the synthesis of communication interfaces.
Keywords: co-design, communication, component, software architecture.
IT Demand Analysis of the Leadership Management
Program at the Libyan National Economic
Development Board
Mahmud Sherif, Fouzi Elmozogi, Habib Lejmi, and Tarek Tantoush
Abstract: In order to meet the challenge of competition, organizations need highly prepared leaders, particularly at the executive and managerial levels. These leaders are the ones who should head the administrative processes of public institutions and private companies.
The National Economic Development Board (NEDB) has taken on the mission of developing well-qualified leaders for both the public and private sectors in Libya. This target will be reached by means of a leadership management program.
Appropriate Information Technology (IT) support is an important success factor for every change management effort. Therefore, it is important for the success of the NEDB's leadership management program to consider and exploit current developments in the IT area. In this paper, the initial situation of the NEDB leadership management program is analyzed and an IT demand analysis is conducted. Based on a mapping of the revealed requirements against state-of-the-art information systems, recommendations are formulated for process improvements and for the selection of future IT applications.
This case study revealed new potentials for the NEDB's leadership management program to ensure the realization of its goals and to maximize the efficiency of the processes and procedures in place.
Keywords: IT Strategy, IT Demand Management, Leadership management, Human Resources, Human Capital Management (HCM)
Transforming Business Patterns to Labelled Petri Nets using Graph Grammars
Karima Mahdi, Allaoua Chaoui, and Raida Elmansouri
Abstract: In this paper we propose an approach and a tool for transforming business patterns into labelled Petri nets, for which efficient analysis techniques exist. We first specify business pattern and labelled Petri net meta-models in the UML class diagram formalism with the meta-modelling tool AToM3, and then generate visual modelling tools according to the proposed meta-models. Finally, we define a graph grammar which transforms business pattern models into labelled Petri net models for analysis purposes. The approach is illustrated with examples.
Keywords: Business Patterns, Labelled Petri Nets, Meta-Models, Graph Transformation.
The Effect of Datasets Characteristic in the Induction of Fuzzy Regression Trees Using Elgasir
Fathi Gasir, Zuhair Bandar, and Keeley Crockett
Abstract: The Elgasir algorithm is a novel fuzzy regression tree algorithm which overcomes the weaknesses often associated with crisp regression trees. Whilst Elgasir has been shown to improve the performance of crisp decision trees, it has not been applied to a range of diverse datasets exhibiting different characteristics, such as the number of attributes, attribute types, and dataset sizes. This paper investigates the effects of such dataset characteristics on the performance of fuzzy regression trees created through the application of Elgasir. The Elgasir algorithm is applied to four diverse real-world datasets, using trapezoidal membership functions for fuzzification and Takagi-Sugeno fuzzy inference for aggregating the final predicted values. A modified version of the Artificial Immune Network model is used for optimization of the membership functions. The experimental results confirm the capability of Elgasir to produce robust fuzzy regression trees using real-world datasets with very different characteristics.
Keywords: Fuzzy inference system, Fuzzy Regression tree, Data mining, Machine learning, Evolutionary algorithms, Artificial Immune system.
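The fuzzification and aggregation stages named above can be sketched as follows (the membership parameters and leaf values are hypothetical; Elgasir's tree induction itself is not shown):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership degree of x (support [a, d], core [b, c])."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def takagi_sugeno(strengths, leaf_values):
    """Zero-order Takagi-Sugeno: firing-strength-weighted average of the
    leaves' predicted values."""
    return sum(w * v for w, v in zip(strengths, leaf_values)) / sum(strengths)

# A tiny two-branch fuzzy split on one attribute ("low" vs "high").
x = 5.0
w_low = trapezoid(x, 0, 2, 4, 6)
w_high = trapezoid(x, 4, 6, 8, 10)
prediction = takagi_sugeno([w_low, w_high], [10.0, 20.0])
```

A point partially covered by both branches receives a prediction blended between the two leaves, which smooths the hard jumps of a crisp regression tree.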
Motion Planning and Control of Non-Holonomic Mobile Manipulator Using Fuzzy Logic
M. Hamani and A. Hassam
Abstract: In this paper, a combined control of a mobile manipulator is studied, in which the platform is velocity-controlled and the manipulator is torque-controlled. Our objective is to develop a control method which generates the adequate articular torques and kinematic control to move the end-effector and the platform so as to execute the task efficiently without any collisions between the mobile manipulator (manipulator and platform) and static or dynamic obstacles. Since the platform has a slow and imprecise dynamic response, we propose to use fuzzy kinematic control; conversely, as the manipulator has a rapid and precise dynamic response, a fuzzy controller optimised by a genetic algorithm is proposed for it.
Simulation results for a 2-link planar non-holonomic mobile manipulator are given to show the effectiveness of the proposed method.
Keywords: Combined Control, Genetic Algorithms, Optimization, Mobile Manipulator
A Tool for Design Pattern Detection
Imen Issaoui, Nadia Bouassida, and Hanene Ben-Abdallah
Abstract: The advantages of design patterns, as good-quality generic solutions to recurring problems, are irrefutable. These advantages can be gained either in the design or in the maintenance phase of a system, and they have motivated the proposition of several methods and tools for design pattern detection. The presented tool is distinguished by its detection method, which identifies design patterns through an XML document retrieval approach. This identification method tolerates structural differences between the examined design and the identified pattern. In addition, it accounts for the important concepts representing the essence of a design pattern. Furthermore, to ensure the applicability of the method to large-scale designs, the presented tool augments it with a decomposition method that applies a set of heuristics to divide the design into manageable segments.
Keywords: Automatic design pattern detection, XML document retrieval
Lexical Study of a Spoken Dialogue Corpus in Tunisian Dialect
Marwa Graja, Maher Jaoua and Lamia Hadrich Belguith
Abstract: The aim of this paper is to present a lexical study of a spoken dialogue corpus in the Tunisian dialect, since such resources do not currently exist. The lexical analysis takes into account the specificity of the Tunisian dialect by identifying the lexical varieties and significant elements used in spoken dialogue. This leads us to a useful characterization for dialogue systems and helps us develop models and methods specifically designed for the Tunisian dialect.
Keywords: lexical study, dialogue system, dialect analysis.
Energy-Efficient Traffic-Aware Multihop Routing
For Wireless Sensor Networks
Khaled Daabaj and Tarek Sheltami
Abstract: Since link failures and packet losses are inevitable, ad hoc wireless sensor networks may tolerate a certain loss of reliability, without significantly affecting packet delivery performance and data aggregation accuracy, in favor of efficient energy consumption. An efficient hybrid approach is to make trade-offs between energy, reliability, cost, and agility while improving packet delivery, maintaining a low packet error ratio, minimizing unnecessary control packet transmissions, and adaptively reducing control traffic in favor of high success reception ratios for representative data packets. Based on this approach, the proposed routing scheme achieves moderate energy consumption and a high packet delivery ratio even when the link failure rate is high. The performance of our proposed routing scheme is experimentally investigated using a testbed of TelosB motes in addition to ns2 simulations, and it is shown to be more robust and energy-efficient than the current TinyOS 2.x collection layer. Our results show that our scheme maintains higher than 95% connectivity in an interference-prone medium while achieving an average of over 35% energy savings. Our routing scheme has been rigorously tested in an outdoor testbed and is currently being investigated using large-scale simulations.
Keywords: wireless sensor networks; energy balancing; reliable routing; data aggregation
Association Rules Mining From Data Warehouses:
A Comparative Study
Eya Ben Ahmed and Faiez Gargouri
Abstract: On-line analytical processing (OLAP) allows the extraction of information to better explore and analyze stored data. However, it does not by itself discover the hidden information in these large data volumes. Many approaches have been developed to extract knowledge from OLAP, particularly using the association rules technique. This paper presents a comparative study of the most important algorithms used in mining inter-dimensional association rules from OLAP.
Keywords: Data warehouse, data cube, OLAP, association rules, Multi-dimensional association rules, inter-dimensional association rules.
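As a minimal illustration of the inter-dimensional setting, each mined item below is a (dimension, member) pair taken from toy cube facts, and support is counted Apriori-style for itemsets of size one and two (the data and threshold are hypothetical, and full candidate generation for longer itemsets is omitted):

```python
from itertools import combinations

def frequent_itemsets(rows, min_support):
    """Support counting for 1- and 2-itemsets over (dimension, member) pairs."""
    counts = {}
    for row in rows:
        for k in (1, 2):
            for itemset in combinations(sorted(row), k):
                counts[itemset] = counts.get(itemset, 0) + 1
    n = len(rows)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

rows = [  # one set of (dimension, member) pairs per fact
    {("city", "Tunis"), ("product", "PC")},
    {("city", "Tunis"), ("product", "PC")},
    {("city", "Sfax"), ("product", "TV")},
    {("city", "Tunis"), ("product", "TV")},
]
frequent = frequent_itemsets(rows, min_support=0.5)
```

An inter-dimensional rule such as city=Tunis → product=PC would then be scored by confidence = support(pair) / support(antecedent).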
The Semantic Web Framework (SWF)
Ahmed Nada and Badie Sartawi
Abstract: It is important to find a mechanism that organizes and restructures web resources in the most effective way to obtain the best benefit from them; this importance stems from the dependability and popularity of the digital world in which humans and intelligent agents carry out their activities. Many solutions have been put forward; one of the most valuable is the Semantic Web. The purpose of this research was to define a new, standard Semantic Web Framework (SWF), a system that manages web contents and helps organize web resources to achieve their maximum reusability. Resource Description Framework (RDF) generation, the main service in the SWF, aims to automate the traditional RDF creation process by accessing website pages and using a domain-oriented approach to find relevant contents and eliminate unwanted data. The Web Query Language and the RDF Ranking System are further significant services of the SWF, which facilitate web searching using a modified SQL and a ranking algorithm. The SWF can cover the entire web, but in this research we took the educational domain as the implementation scope, using Microsoft C#.NET 2008 and Microsoft SQL Server 2008 to develop a web-based application.
Keywords: Semantic Web, RDF, Web SQL, RDF Ranking.
Adaptive ACK: A Novel Intrusion Detection System to Mitigate Intended Packet Dropping in MANETs
A. Al-Roubaiey, T. Sheltami, A. Mahmoud, E. Shakshuki, and Khaled Daabaj
Keywords: MANET, IDS, Watchdog, TWOACK, Packet dropping, DSR.
A Graph Grammar Approach for Durational Action Timed Automata Determinization
Djamel Eddine Saidouni, Ilhem Kitouni, and Hiba Hachichi
Abstract: Durational Action Timed Automata (DATA) is a semantic model for expressing the behavior of real-time systems in which actions have durations. In this paper, we propose an approach for translating a DATA structure into a corresponding deterministic one. For this purpose, a meta-model of the DATA model and a transformation grammar are defined. The programs are written in the Python language and implemented under the AToM3 environment.
Keywords: Concurrent system, Formal Verification, Graph transformation, DATA, Determinization, ATOM3, Meta-models, Formal testing approach.
A Multimodal Approach for Soccer Video Summarization
Maher Jaoua, Lamia Hadrich Belguith, Fatma Kallel Jaoua, and Adelmajid Ben Hamadou
Abstract: We present in this paper the AVIDAS system, an automatic system for soccer video analysis and summarization based on a multimodal approach. The proposed method implemented in this system includes the analysis of online commentary and cinematographic strategy to localize the boundaries of important events in the video (goals, attempts, cards). Low-level image processing is used to extract the most important sequences from the identified boundaries. The system organizes the detected events in a video summary augmented by extracts of the online commentary as subtitles. The proposed system is evaluated over a large data set consisting of more than 67 hours of soccer video, captured from the 2010 World Cup and the 2009-2010 European Champions League.
Keywords: Soccer video summarization, Cinematographic strategy, Replay detection, Online commentary analysis
Evaluation of Adaptive Statistical Sampling Versus Random Sampling For Video Traffic
Aboagela Dogman, Reza Saatchi, and Samir Al-Khayatt
Abstract: The growth in real-time applications transmitted over computer networks means that the quality of service (QoS) parameters of these applications need to be assessed and quantified in order for critical real-time applications such as videoconferencing to be delivered with an appropriate level of quality. However, most real-time applications generate a large amount of traffic data. The process of measuring QoS parameters for these data is not practically feasible in real-time. Therefore, in order to reduce the data processing and storage, sampling is an essential operation.
In fixed rate sampling the number of data packets processed remains unchanged even when the traffic characteristics change. In adaptive sampling the number of packets sampled varies in accordance with traffic rate. This makes the processing more efficient.
In this paper, an adaptive statistical sampling approach is compared with random sampling. The adaptive statistical sampling method adjusts the sampling rate by determining the statistical variation of the packet arrival rate. A suitable network was simulated using the ns-2 package to carry out this investigation. The study demonstrated that adaptive statistical sampling performed better than random sampling.
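The adaptive idea described above can be sketched as follows. This is an illustrative toy, not the paper's exact rule: the window size, scaling factor, and the `adaptive_sample` helper are assumptions; the sampling rate simply grows with the recent statistical variation of the packet arrival rate.

```python
import statistics

def adaptive_sample(arrival_rates, base_rate=0.1, max_rate=1.0, window=5):
    """Return a per-interval sampling rate that grows with the recent
    variation of the packet arrival rate (illustrative sketch only)."""
    rates = []
    for i in range(len(arrival_rates)):
        recent = arrival_rates[max(0, i - window + 1):i + 1]
        # Coefficient of variation over the recent window
        if len(recent) > 1 and statistics.mean(recent) > 0:
            cv = statistics.stdev(recent) / statistics.mean(recent)
        else:
            cv = 0.0
        rates.append(min(max_rate, base_rate * (1 + cv * 10)))
    return rates
```

With steady traffic the rate stays at the base rate; with bursty traffic the coefficient of variation rises, and with it the fraction of packets sampled.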
Keywords: adaptive sampling, random sampling, computer network performance
Use of Multi-Agent Systems for Detecting Features Interaction in Telecommunication Systems
Mohamed Elammari and Saria Eltalhi
Abstract: A telecommunications system comprises multiple features and services, which are expanded in range and complexity by the addition of new, advanced technology features. In order to avoid, detect and resolve potential feature interactions, many researchers have tried to develop sophisticated methods, but these frequently fail to anticipate the conflicts that occur when modern features are added. Users of traditional methods in particular encounter many difficulties, especially regarding code management and the addition of new features to the system.
In this paper, a design-stage approach for detecting feature interactions using Multi-Agent Systems (MAS) is presented. The approach is implemented in two stages. The first stage presents a model for visually describing telephone features (using the Use Case Maps, UCM, notation), helping designers obtain a global picture of the system. The second stage is agent specification, where each agent (derived from the UCMs) represents a feature.
Keywords: Feature conflicts, Blackboard, Use Case Maps, Communications services, Detection
Base-Area Detection and Slant Correction Techniques Applied For Arabic Handwritten Characters Recognition Systems
Mohamed A. Ali, and Kasmiran Bin Jumari
Abstract: Proper scanning of a page containing Arabic handwritten text lines may result in a zero-skew output image; nevertheless, traditional baseline detection using horizontal projection may not reliably detect all the characters constituting the words of those text lines. The algorithm implemented in this research locates appropriate upper and lower reference lines of what we call the base-area, using the text-line height as a reference. In addition, this paper introduces a reliable slant correction algorithm based on a combination of the projection profile technique and the Wigner-Ville distribution (WVD). The WVD is used to estimate the slant angle, which can range between +45 and -45 degrees with respect to the original position.
Keywords: Slant and Base-Area Detection, Character recognition, Wigner-Ville distribution, Arabic OCR
Software Architecture Approaches for the Specification of Human-Computer Interfaces: An overview
CHERFA Imane, BENNOUAR Djamal
Abstract: Software architectures have recently played a key role in the design and specification of complex software systems. However, they have so far not been widely adopted for the specification of Human-Computer Interfaces (HCIs) in interactive systems. In this paper, we overview major software architecture approaches proposed for HCI specification.
Keywords: Software Architecture, Component Models, Human-Computer Interactions.
Towards a General Architecture for Building Intelligent, Flexible, and Adaptable Recommender System Based on MAS Technology
Mohamed Elammari and Rabeia N. Abdleati
Abstract: Nowadays, recommender systems are used to reduce information overload and to find the items that are of interest to the user. Many techniques have been proposed for providing recommendations to consumers or users, and all currently available recommender techniques have strengths and weaknesses. Thus, numerous research studies have attempted to overcome the various limitations of current recommender systems by combining existing techniques in different ways. On the other hand, we have found that many currently available recommender systems are still designed for restricted domains.
This paper presents our attempt to use agent technology to enhance recommender systems by exploiting the advantages of agent properties, with the goal of analyzing and designing a general architecture that is easily adaptable to several domains.
Keywords: Recommender Systems (RS), general architecture, Multi-Agent System (MAS), User modeling
Neural and Fuzzy Methods for Handwritten Character Recognition
Adnan Shaout, and Jeff Sterniak
Abstract: Correctly recognizing and classifying objects is inherently a knowledge-based process. As the variability of the objects to be identified increases, the process becomes increasingly difficult, even if the objects come from a small set. In this paper, two methods of encoding knowledge in a system are covered—neural networks and fuzzy logic—as they are currently applied to offline handwritten numeral recognition, which is by nature subject to high degrees of variability. In the former method, the system takes on knowledge primarily through its training process, while in the latter the system designer must hard code linguistic rules describing the characters. The resulting differences in system structure and development process are highlighted, along with currently representative error rates on common test sets.
This paper also proposes a recognition system based on fuzzy logic and presents results from testing on the MNIST character database.
Keywords: Artificial Neural Network, Fuzzy Logic, Offline Character Recognition, Energy-Based Model, Convolutional Network, Membership Function, TSK Model, MNIST Database
Modeling the Data of an Arabic Treebank as Patterns of Syntactic Trees
Feriel Ben Fraj, Chiraz Ben Othmane Zribi, and Mohamed Ben Ahmed
Abstract: Corpora are very important sources of information, used for various linguistic applications. Treebanks in general are rich, voluminous and heterogeneous. We have therefore defined a new modeling approach that organizes the knowledge of a Treebank into models that we call "patterns of syntactic trees". In these patterns we encapsulate, in the same structure, different information in a stratified manner, thereby reducing the size of the Treebank and eliminating redundancies. This modeling strategy has been tested using a small Arabic Treebank. We emphasize the syntactic component, so the obtained patterns are syntactically very rich. They have been used to train a parser for Arabic texts. The parsing results are very satisfactory, and we observed gains in both time and search space during training.
Keywords: Treebank, Patterns of syntactic trees, Patterns extraction, Arabic language.
Enhancing Human-Computer Interfaces Based on the Properties of the Object-Oriented Paradigm in Software Engineering
Fatma Abdullah Al-Ghali and Abdulmajid Hussein Mohamed
Abstract: Since the 1980s, the object-oriented paradigm has gained great popularity in the analysis, design, and implementation of software systems. When it first appeared, object-oriented methods were not merely another face of the software industry; rather, they constituted a radical change in how software design and construction were conceived. Object-oriented methods emerged as a new philosophy for modeling and designing software systems. The question posed here is: to what extent do object-oriented software applications actually differ from their counterparts designed and implemented with traditional (structured) methods? This paper presents a critical view of the shortcomings of most object-oriented software applications and their violation of some of the principles that formed the foundations of the object-oriented methodology. These shortcomings are concentrated specifically in the design of user-interface screens: to what extent can object-oriented applications be distinguished from others by their external appearance and the way operators interact with them? The paper presents a model for designing human-computer dialogue screens that upholds one of the fundamental principles of object-oriented theory, namely naturalizing the styles of human-computer dialogue.
Keywords: human-computer interfaces, usability, object-oriented methods, user perception models.
Fuzzy Logic for Regen Enhanced Eco-Routing of Automobiles
Dean T. Wisniewski, and Adnan K. Shaout
Abstract: This paper investigates the appropriateness of applying fuzzy logic to the Eco-Routing of automobiles. In addition, this paper proposes an extension to the Eco-Routing process, Regen-Routing. Regen-Routing is not an industry term, but its use in this paper will facilitate discussion.
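As a hedged illustration of how fuzzy logic might score road segments for Regen-Routing (the membership shapes and the `regen_benefit` rule below are assumptions made for discussion, not the paper's actual design), a segment can be rated by combining fuzzy memberships for downhill grade and moderate speed:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def regen_benefit(grade_pct, speed_kmh):
    """Fuzzy estimate (0..1) of regenerative-braking benefit for a road
    segment: a steep downhill at moderate speed scores highest.
    Membership shapes are illustrative assumptions."""
    downhill = tri(-grade_pct, 0, 6, 12)     # negative grade = downhill
    moderate = tri(speed_kmh, 20, 50, 80)
    return min(downhill, moderate)           # Mamdani-style AND via min
```

A router could then add this score, suitably weighted, to a conventional eco-routing cost for each candidate edge.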
Keywords: Fuzzy logic, eco routing, vehicle navigation, regenerative braking, electric vehicle
Offline Handwritten Arabic Recognition Using KOHONEN Map and CPANN
Lamia Abusedra, and Amal El Gehani
Abstract: The aim of this study is to evaluate the performance of directional features in describing the handwritten isolated Arabic characters, using the Kohonen map and CPANN classifier.
The implemented feature extraction technique assigns one of twelve directional feature inputs depending upon each pixel's gradient with respect to its neighboring pixels. These extracted features are passed to Kohonen map and CPANN classifiers. A modular classifier scheme has been developed.
This scheme involves dividing the isolated character shapes into different groups. Each modular classifier is trained to recognize the characters of only two of these groups. The final classification result is defined by the combination of the individual simpler classifiers.
The experimental results show that the twelve directional features provide better recognition results when compared to other techniques. This study also covers the performance of the modular classifiers across the different character groups. The results show that the classification scheme described in this work provides an average recognition accuracy of 91%.
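The twelve-direction feature idea can be sketched as follows. This is a minimal illustration assuming the gradient angle is quantized into twelve 30-degree bins (the paper's exact assignment rule may differ, and the helper names are assumptions):

```python
import math

def directional_bin(gx, gy, nbins=12):
    """Assign a gradient vector to one of `nbins` direction bins
    (30-degree sectors when nbins == 12); None for flat pixels."""
    if gx == 0 and gy == 0:
        return None                          # no gradient, no feature
    angle = math.atan2(gy, gx) % (2 * math.pi)
    return int(angle / (2 * math.pi / nbins)) % nbins

def directional_histogram(gradients, nbins=12):
    """Histogram of directional features over a character image's
    pixel gradients, given as (gx, gy) pairs."""
    hist = [0] * nbins
    for gx, gy in gradients:
        b = directional_bin(gx, gy, nbins)
        if b is not None:
            hist[b] += 1
    return hist
```

The resulting twelve-element histogram (often computed per sub-region of the character image) would then serve as the input vector to the Kohonen map or CPANN classifier.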
Keywords: Character recognition, gradient direction, feature extraction, Kohonen maps, CPANN.
Numerical Simulations for 1+2 Dimensional Nonlinear Schrodinger Type Equations
Thiab R. Taha, Wei Yu, and M. S. Ismail
Abstract: The nonlinear Schrödinger equation is of tremendous interest in both theory and applications. Various regimes of pulse propagation in optical fibers are modeled by some form of the nonlinear Schrödinger equation. In this paper we introduce sequential and parallel numerical methods for simulations of the 1+2 dimensional nonlinear Schrödinger type equations. The parallel methods are implemented on the recluster multiprocessor system at the University of Georgia (UGA). Our preliminary numerical results show that these methods give good results and considerable speedup.
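The split-step scheme named in the keywords can be sketched in one dimension. This is an illustrative toy, not the paper's implementation: a naive O(N²) DFT stands in for FFTW, and the restriction to 1D (the paper treats 1+2 dimensions) is an assumption for brevity. The scheme alternates an exact nonlinear phase rotation with an exact linear step in Fourier space, so the discrete mass is conserved up to roundoff.

```python
import cmath
import math

def dft(u, inverse=False):
    """Naive O(N^2) discrete Fourier transform (stand-in for FFTW)."""
    n = len(u)
    sign = 1 if inverse else -1
    out = [sum(u[k] * cmath.exp(sign * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def split_step_nls(u, dt, steps, L=2 * math.pi):
    """Strang split-step method for i u_t = -u_xx - |u|^2 u on a
    periodic domain of length L (1D sketch of the scheme)."""
    n = len(u)
    k = [2 * math.pi / L * (j if j < n // 2 else j - n) for j in range(n)]
    for _ in range(steps):
        # Half nonlinear step: a pure phase rotation, preserves |u|
        u = [v * cmath.exp(1j * abs(v) ** 2 * dt / 2) for v in u]
        # Full linear step applied in Fourier space
        uh = dft(u)
        uh = [uh[j] * cmath.exp(-1j * k[j] ** 2 * dt) for j in range(n)]
        u = dft(uh, inverse=True)
        # Half nonlinear step
        u = [v * cmath.exp(1j * abs(v) ** 2 * dt / 2) for v in u]
    return u
```

Because each sub-step is unitary, the discrete mass sum |u_j|^2 stays constant, which makes a convenient sanity check for any split-step implementation.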
Keywords: Split-step method, NLS, Parallel algorithms, FFTW, pseudo-spectral method
التعليم المدمج .. مستقبل التعليم التقليدي
أحمد عبد القادر فضل عثمان
Abstract: This paper attempts to study the possibility of integrating e-learning into traditional education through the use of e-learning tools, such as e-mail, educational websites, virtual classrooms, and various types of educational electronic media, among other modern technologies, within a traditional educational environment.
However sublime technological advances may be, they are not a substitute for traditional methods of teaching and learning: just as electronic commerce has not replaced traditional commerce, e-mail has not replaced postal mail, and information technology has not replaced paper, e-learning will not be a substitute for traditional learning or for the human teacher in the school classroom and the university.
This type of learning, blended learning, combines e-learning and traditional learning; it eliminates neither, but merges both, so that technological development does not cancel traditional practice but is used functionally in our classes, laboratories, and classrooms.
The paper then presents our conception of blended learning and its strategies as one of the proposed solutions to some of the problems of e-learning.
The paper concludes with the findings and recommendations reached by the researcher.
Keywords: Blended Learning, e-learning, Learning Management System.
Integrated Semi-Handwritten Hindu Numerals Recognition Using Support Vector Machine
Ashwag M. Gadeed, Mohamed E. M. Musa
Abstract: Automatic reading systems are the technology that will free us from keyboards and allow the computer to read our writing. This paper introduces a new large data set for Arabic numeral recognition. The paper also describes the performance of a One-Against-All (OAA) Support Vector Machine (SVM) trained on this new data set. While the classification result is not very good, the experiments expose the main problems of Hindu digit classification that such a classifier may face. The analysis of the results suggests that the accuracy of this classifier could easily be enhanced to obtain much better results. As the data set is very new, future work could show much better results using it.
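The One-Against-All decision rule can be sketched as follows. As an assumption for brevity, simple perceptrons stand in here for the SVMs actually used in the paper; the OAA structure is the same: one binary scorer per class, trained to separate that class from all others, with prediction by the most confident scorer.

```python
def train_oaa(samples, labels, classes, epochs=100, lr=0.1):
    """One-Against-All training: one binary linear scorer per class
    (perceptrons stand in for the SVMs used in the paper)."""
    dim = len(samples[0])
    models = {c: [0.0] * (dim + 1) for c in classes}   # weights + bias
    for c in classes:
        w = models[c]
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                target = 1 if y == c else -1
                score = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
                if target * score <= 0:                # misclassified
                    for i in range(dim):
                        w[i] += lr * target * x[i]
                    w[-1] += lr * target
    return models

def predict_oaa(models, x):
    """Predict the class whose binary scorer is most confident."""
    def score(c):
        w = models[c]
        return w[-1] + sum(wi * xi for wi, xi in zip(w, x))
    return max(models, key=score)
```

In the paper's setting, each sample would be a feature vector extracted from a digit image and there would be ten classes, one per Hindu numeral.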
Keywords: pattern recognition, Machine Learning, Optical Character Recognition (OCR), Support Vector Machine (SVM).
Centralized Dynamic Protection against SQL Injection Attacks in Web Applications
Ramzy Esmail Salah and Ammar Zahary
Abstract: Structured Query Language (SQL) injection is an attack method used by hackers to retrieve, manipulate, fabricate or delete information in organizations' relational databases through Web applications. Constructing secure software is not an easy task, given the complexities that may be faced, and SQL injection increasingly exploits software weaknesses year after year around the world. Security-relevant issues in this area have not been properly addressed in the literature on the software development cycle. This paper presents an approach called Centralized Dynamic Protection against SQL Injection Attacks in Web Applications (CDPIA), which creates a data-type checking system to prevent data-type mismatches in dynamically generated SQL queries. To strengthen the approach, CDPIA utilizes an encryption technique based on the Rivest, Shamir and Adleman (RSA) algorithm. The paper also discusses and presents the most common Web application vulnerabilities with possible attack scenarios. An implementation of the system is described, using MS SQL and written in Microsoft Visual Studio with C#. The presented approach has been tested and verified using both manual and automated methods. Results show that the implemented approach can handle the most common SQL injections and data-type mismatches.
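The data-type checking idea can be sketched as follows. This is a minimal illustration, not CDPIA's actual implementation: the `SCHEMA_TYPES` table, the `checked_query` helper, and the use of sqlite3 in place of MS SQL and C# are all assumptions. Each value is validated against the declared column type before being bound into a parameterized query, so a string payload aimed at an integer column is rejected before any SQL is built.

```python
import sqlite3

# Declared column types for the target table (illustrative schema)
SCHEMA_TYPES = {"id": int, "name": str, "age": int}

def checked_query(conn, column_values):
    """Reject any value whose type does not match the declared column
    type, then run a parameterized query (sketch of data-type checking)."""
    for col, val in column_values.items():
        expected = SCHEMA_TYPES.get(col)
        if expected is None or not isinstance(val, expected):
            raise ValueError(f"type mismatch for column {col!r}")
    # Column names come only from the validated schema; values are bound
    # as parameters, never concatenated into the SQL text.
    where = " AND ".join(f"{col} = ?" for col in column_values)
    sql = f"SELECT * FROM users WHERE {where}"
    return conn.execute(sql, list(column_values.values())).fetchall()
```

A classic payload such as the string "1 OR 1=1" supplied for the integer `id` column fails the type check and never reaches the database.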
Keywords: SQL Injection Attack (SQLIA), Web application vulnerabilities, Centralized Dynamic Protection, RSA Encryption Algorithm