Software Engineering

ISSN Online: 2376-8037 ISSN Print: 2376-8029


Volume 6, Issue 4, December 2018

  • Author: Ezekiel Uzor Okike

    Abstract: Many software products and services deployed in user environments fail to meet user needs satisfactorily. This may be because the product or service failed to meet user requirements from the outset (inception) of the Information Systems (IS) project. This study proposes a Flexible Qualifier Weighted Customer Opinion with Safeguard Estimates (FQWCOS) model for measuring the satisfaction of users of software products and services. The FQWCOS model is a variant of the Qualifications Weighted Customer Opinion with Safeguard questions (QWCOS) model. The FQWCOS model was verified with empirical data using samples from 40 users of the ASAS software product. Descriptive statistics were used to obtain the frequencies, mean values, relative frequencies, standard error, and standard deviation. From these values, the normalized score of customer opinion Oi, the external measure E for QWCOS, and the external measures Ei (i = 1-4) for FQWCOS were computed. Results from the study reveal no difference between the external measures for QWCOS and FQWCOS. However, the results suggest that external measures were higher when the standard error (SE) was used to obtain the measures at the different levels (31.58, 19.79, 21.76, 35.69 and 31.06) than when the external measure was computed using the standard deviation (STD), which yielded the values 4.99, 3.13, 3.44, 5.64 and 4.07. We conclude that FQWCOS and QWCOS yield the same values, probably due to the small sample used. However, FQWCOS provides a flexible and simple approach, and the results reveal the need to use the standard error instead of the standard deviation, since this yields higher-magnitude values appropriate for expressing external measures as percentages.

    Received: Nov. 10, 2018 Accepted: Dec. 11, 2018 Published: Dec. 28, 2018

    DOI: 10.11648/
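The abstract above observes that external measures computed with the standard error come out larger in magnitude than those computed with the standard deviation. Since SE = STD / sqrt(n), dividing a quantity by SE instead of STD scales the result up by exactly sqrt(n) (about 6.32 for the study's sample of 40 users). A minimal sketch with hypothetical scores (the ASAS survey data itself is not reproduced in the abstract):

```python
import math
import statistics

# Hypothetical normalized opinion scores (illustrative only; the
# actual ASAS user survey data is not reproduced in the abstract).
scores = [3.2, 4.1, 3.8, 2.9, 4.5, 3.6, 4.0, 3.3]
n = len(scores)

std = statistics.stdev(scores)   # sample standard deviation
se = std / math.sqrt(n)          # standard error of the mean

# A measure divided by SE is larger than the same measure divided
# by STD by exactly sqrt(n), so SE-based external measures have a
# higher magnitude, as the study observes.
ratio = std / se                 # equals sqrt(n)
```

This makes the reported pattern unsurprising: any external measure expressed as a quantity over a dispersion term will grow by the factor sqrt(n) when SE replaces STD in the denominator.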

  • Authors: Ikerionwu Charles, Isonkobong Christopher Udousoro

    Abstract: The limited storage and bandwidth available for successful transmission of large images make image compression a key component of digital image transmission. Digital image applications in various industries, such as entertainment and advertising, have brought image processing to the fore of these industries. However, image processing as a whole faces the problem of data redundancy, which is mitigated through image compression: the art and science of reducing the number of bits in an image so that it can be transmitted and stored easily while image quality is maintained. Thus, through an exploratory study, this paper examines image compression as discussed in the extant literature and emphasises the different methods used in image compression. The paper reviewed relevant literature from the Elsevier, Emerald, IEEE, ProQuest and Google Scholar databases. Specific methods are lossy and lossless techniques, which are further divided into run-length encoding and entropy encoding. In conclusion, the paper recommends which compression techniques to adopt depending on the industry's goals. Preferably, lossy compression is used to compress multimedia data, which includes audio, video and images, while lossless compression is used to compress text and data files.

    Received: Dec. 9, 2018 Accepted: Dec. 22, 2018 Published: Jan. 16, 2019

    DOI: 10.11648/
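The abstract above names run-length encoding as a lossless technique. A minimal sketch (not the authors' implementation) illustrating the lossless property: runs of repeated symbols are replaced by (symbol, count) pairs, and decoding restores the input exactly:

```python
def rle_encode(data: str) -> list:
    """Run-length encode: collapse runs of repeated symbols into
    (symbol, count) pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list) -> str:
    """Expand (symbol, count) pairs back into the original string."""
    return "".join(ch * count for ch, count in runs)

encoded = rle_encode("AAAABBBCCD")   # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```

Because the round trip is exact, RLE suits text and data files where no loss is tolerable; lossy methods instead discard perceptually less important detail to reach higher ratios on multimedia data, as the paper recommends.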

  • Authors: Sisay Wuyu, Patrick Cerna

    Abstract: Risk management has long been a topic worth pursuing, and indeed several industries are based on its successful application, insurance companies and banks being the most notable. Data Mining (DM) is one of the most effective means of extracting knowledge from large volumes of data, discovering hidden relationships and patterns and generating rules to predict and correlate data, which can help institutions make decisions faster or with a greater degree of confidence. This research was conducted as a case study at the Ethiopian Insurance Corporation (EIC) at its main branch located at Legehar, Addis Ababa. The general objective of the study is to examine the potential of data mining tools and techniques in developing models that could support risk level pattern analysis and insurance risk assessment activities at EIC. Two data mining techniques were applied in this research: decision tree and neural network. The best decision tree model, selected as the working model among the numerous models generated during the training phase, was able to correctly classify 75% of the 3100 policies in the validation data set, and 96% of low-risk policies were correctly classified. However, a significant number of misclassifications was observed at the high risk level. The output of these experiments indicated that, in classifying records by risk level, both decision tree and neural network performed with significant error. The decision tree showed an accuracy rate of 75 percent, while the neural network classified 58% of records correctly. The overall performance of the decision tree in classifying values was better than that of the neural network.

    Received: Jun. 25, 2018 Accepted: Dec. 5, 2018 Published: Jan. 17, 2019

    DOI: 10.11648/
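The evaluation reported above (overall accuracy plus per-class accuracy on low- and high-risk policies) can be sketched with a small example; the labels below are hypothetical, not the EIC validation data:

```python
# Hypothetical actual vs. predicted risk labels (illustrative only;
# the EIC policy data is not reproduced in the abstract).
actual    = ["low", "low", "low", "high", "high", "low", "high", "low"]
predicted = ["low", "low", "low", "low",  "high", "low", "low",  "low"]

def accuracy(actual, predicted, label=None):
    """Overall accuracy, or per-class accuracy when a label is given."""
    pairs = list(zip(actual, predicted))
    if label is not None:
        pairs = [p for p in pairs if p[0] == label]
    return sum(a == p for a, p in pairs) / len(pairs)

overall  = accuracy(actual, predicted)           # 6 of 8 correct
low_acc  = accuracy(actual, predicted, "low")    # all low-risk correct
high_acc = accuracy(actual, predicted, "high")   # 1 of 3 high-risk correct
```

This illustrates the pattern the study reports: a classifier can score well overall and on the majority (low-risk) class while still misclassifying much of the minority high-risk class, which is why per-class accuracy matters in risk assessment.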