SIWN Abstracts Index

 

 

sai: itssa.2010.05.022

An Investigation into the Effect of Gesture Interaction in Relation to Human Performance for Identifying Possible Design Failures

Robert Chen, William Cheng-Chung Chu, Tin-Kai Chen and Hongji Yang

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 1-12

Abstract: Trends in gesture interaction technology are likely to merge with multimodal interaction, so the effect of device differences on gesture-based human performance cannot be ignored in terms of quality of use. A comprehensive literature review of human-computer interaction and gesture interface development is given. The study aims to reveal the effect of gesture interaction on human performance and to identify possible design failures. The results reveal that a sensor system has a great impact on human performance, producing longer cursor movement distances, longer movement times and greater arm and shoulder fatigue in comparison with a mouse. The study also investigates whether malfunctions in the hardware and software of gesture interfaces can produce discrete cursor movement; in such situations, the actual working area and joint ranges extend well beyond those that had been planned. This research also contributes a new accuracy measure and a new graphical measurement platform to establish normative data and techniques.

Keywords: gesture interfaces, Fitts’ law, human performance.
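
For context, the Fitts' law referenced above is usually stated in its Shannon formulation, given here as general background rather than as the paper's new accuracy measure:

    MT = a + b * log2(D/W + 1)

Here MT is movement time, D the distance to the target, W the target width, and a and b device-specific constants fitted from pointing trials. The logarithmic term is the index of difficulty in bits, and dividing it by MT gives the throughput (bits/s) commonly used to compare devices such as a mouse and a gesture sensor.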

 

sai: itssa.2010.05.023

Framework and Applications of an Interactive Scenario-Based Agent System

Kai-Yi Chin, Guo-Ming Fang, Zeng-Wei Hong, Jim-Min Lin and Arthur J. Lin

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 13-25

Abstract: Agent technology has been widely used to develop software systems, such as e-business, personal assistants and others. Scenarios are one approach to controlling the behaviors of agents. In our previous work, we proposed an interactive scenario mechanism to handle the interactions between agents and humans; it was also successfully adopted to develop a marketing system. However, some components are required to facilitate this approach. An interactive scenario-based agent framework is thus proposed in this paper to identify the required framework components and the interoperations among them. An Agent Diagram in AUML (Agent Unified Modeling Language) is used to model the agent roles in the proposed multi-agent framework. The processes of scenario generation and agent execution are also described. Finally, a scenario-based eldercare agent system is built to exemplify the proposed framework.

Keywords: interactive scenario, agent, framework, AUML, eldercare.

 

sai: itssa.2010.05.024

Interoperability in Autonomic Communications: An Approach for Context Integration in Management Systems Using Ontologies

Martín Serrano, Mícheál Ó Foghlú, Joan Serrat and John Strassner

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 26-42

Abstract: The convergence of communications and computing solutions supporting information and data systems means that the management of communications can be assisted by specialized software applications capable of facing some of the complex aspects of current autonomic systems. Managing the operation of complex networks and services involves semantic extensions to data and information management capabilities that improve the efficacy of the communication systems supporting enterprise applications and pervasive services. Knowledge engineering has been proposed as a formal mechanism both for reducing the complexity of managing the information needed in network management and enterprise systems and for increasing the portability of services across homogeneous and heterogeneous networks. This paper describes a formal mechanism to integrate context information into management operations for network management services, including enterprise concepts. Research challenges in the self-management of network and enterprise services are addressed, as are the modelling and integration of context information to support management operations in next generation networks. The paper focuses on service management operations and the interoperability of context information. Information and data models that use knowledge engineering techniques, based on ontologies, to represent information are introduced. We describe the use of ontology-based management and modelling techniques within a distributed and scalable framework, and outline representative ontology solutions for information management to support network and enterprise management services. This provides a flexible approach for end-user communication services in heterogeneous technology systems.

Keywords: knowledge engineering, ontologies, ontology-based integration, context integration, context-awareness, interoperability, autonomic communications, self-management, pervasive services, network data systems, enterprise systems, next generation networks and services.

 

sai: itssa.2010.05.025

Policy-Based Self-Management in Embedded Systems

Mariusz Pelc and Richard Anthony

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 43-59

Abstract: This paper describes work towards the deployment of flexible self-management into real-time embedded systems. The challenging DySCAS project, which focused specifically on the development of a dynamic, adaptive automotive middleware, is described. The self-management and context-awareness requirements of the middleware have been identified through the refinement of a wide-ranging set of use cases, a sample of which is presented. The embedded and real-time nature of the target system brings the constraints that dynamic adaptation capabilities must not require changes to the deployed executable code, that adaptation decisions must have low latency, and, because the target platforms are resource-constrained, that the self-management mechanism must have low resource requirements (especially in terms of processing and memory). The incorporation of policy-based self-management in this situation satisfies all of these requirements and in addition yields a highly flexible system that can be tailored to specific deployment systems. The policy logic is independent of the deployed code; it can be loaded at run-time and easily replaced or changed subsequently to cater for user customisation or changes in application requirements. The architecture of a purpose-designed, powerful yet lightweight policy library, AGILE_Lite, is described. Additionally, a suitable evaluation platform, supporting the whole life-cycle of feasibility analysis, concept evaluation, development, rigorous testing and behavioural validation, has been devised and is described.

Keywords: policy-based computing, self-management, middleware, embedded systems, automotive control systems.
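
The run-time-replaceable policy logic described above can be pictured with a minimal sketch (generic Python, not the AGILE_Lite API, which the abstract does not detail): policies are plain data, so adaptation decisions change without touching the deployed executable.

    # Generic policy evaluation: return the action of the first rule whose
    # conditions all hold in the current context. Policies are plain data,
    # so they can be loaded or replaced at run-time without redeployment.
    def evaluate(policy, context):
        for rule in policy:
            if all(context.get(k) == v for k, v in rule["when"].items()):
                return rule["then"]
        return "default"

    policy_v1 = [
        {"when": {"cpu_load": "high", "battery": "low"}, "then": "shed_tasks"},
        {"when": {"cpu_load": "high"},                   "then": "lower_quality"},
    ]
    print(evaluate(policy_v1, {"cpu_load": "high", "battery": "ok"}))  # lower_quality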

 

sai: itssa.2010.05.026

A Virtual Queue Based Scheme to Support Real-Time Renegotiated VBR Video Streaming

Mei Han and Yao Liang

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 60-72

Abstract: Variable bit rate (VBR) video traffic poses a unique challenge for network resource allocation and management in future packet networks. RED-VBR, a renegotiated deterministic VBR scheme, is a well-known approach proposed to support delay-sensitive VBR video traffic. However, the original RED-VBR suffers from some limitations, such as the difficulty of dimensioning D-BIND traffic descriptors for real-time videos and relatively high computational complexity. In this paper, we present a novel approach, referred to as virtual-queue-based RED-VBR, to overcome those limitations. In addition, we propose a simple and effective heuristic method to predict VBR video streaming performance in packet networks. Our proposed schemes are demonstrated through extensive simulations with real-world MPEG-4 VBR video traces.

Keywords: multimedia networks, dynamic bandwidth allocation, quality-of-service, VBR video streaming, MPEG-4.
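
The abstract does not detail the virtual-queue mechanism, so the following is only a generic sketch of how a virtual queue can drive rate renegotiation; the trace, rates and thresholds are illustrative.

    # Feed the video's frame sizes into a virtual queue drained at the
    # currently reserved rate; request a higher rate whenever the virtual
    # backlog threatens the delay budget. A generic sketch, not the paper's
    # virtual-queue-based RED-VBR algorithm.
    def renegotiate(frame_bits, fps, rate, delay_budget_s, headroom=1.25):
        backlog, events = 0.0, []
        for i, bits in enumerate(frame_bits):
            backlog = max(0.0, backlog + bits - rate / fps)  # virtual queue
            if backlog > rate * delay_budget_s:              # delay bound at risk
                rate *= headroom                             # renegotiate upward
                events.append((i, rate))
        return events

    trace = [3000, 2500, 12000, 11000, 2600, 2400, 15000, 14000]  # bits/frame
    print(renegotiate(trace, fps=25, rate=80_000, delay_budget_s=0.04))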

 

sai: itssa.2010.05.027

Sharing Data Access with Update Propagation on Mobile Ad Hoc Networks

S. Moussaoui, M. Guerroumi and N. Badache

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 73-81

Abstract: A data replication method for Mobile Ad hoc NETworks (MANETs) must consider dynamic topology changes. Because of energy consumption and radio range limitations, the network can become partitioned and reconnected several times. Partitioning means that some nodes may not be able to access data on a distant server node. A two-phase replication approach is proposed, based on k-hop allocation of primary replicas. The replicas are dynamically relocated to take user needs into account. The solution is extended with an optimistic protocol for data updates. The simulation results show that it is a promising approach.

Keywords: data access, data replication, data update, MANETs.
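
As a generic illustration of k-hop replica allocation (the paper's two-phase protocol and relocation policy are not reproduced here), one can greedily pick replica holders until every node is within k hops of some replica:

    from collections import deque

    def within_k(graph, src, k):
        # nodes reachable from src in at most k hops (BFS)
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            node, d = frontier.popleft()
            if d == k:
                continue
            for nb in graph[node]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, d + 1))
        return seen

    def place_replicas(graph, k):
        uncovered, replicas = set(graph), []
        while uncovered:
            # pick the node whose k-hop neighborhood covers most uncovered nodes
            best = max(graph, key=lambda v: len(within_k(graph, v, k) & uncovered))
            replicas.append(best)
            uncovered -= within_k(graph, best, k)
        return replicas

    chain = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    print(place_replicas(chain, k=1))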

 

sai: itssa.2010.05.028

Iterative (Turbo Processing) Receiver Design of OFDM Systems in the Presence of Carrier Frequency Offset

Huan X. Nguyen, Jinho Choi and Huaglory Tianfield

International Transactions on Systems Science and Applications, Vol. 6, No. 1, May 2010, pp. 82-93

Abstract: In this paper, based on the principle of turbo processing, we propose two iterative receiver schemes for carrier frequency offset (CFO) compensation in orthogonal frequency division multiplexing (OFDM) systems. Our CFO compensation designs, one in the time domain and the other in the frequency domain, are based on joint estimation of the time-varying channel and the CFO. In our schemes, the random CFO problem, a challenge for conventional pilot-aided methods, can be effectively solved using iterative (turbo processing) schemes. Furthermore, our comparative study shows that time domain compensation (TDC) is simpler to implement, but frequency domain cancellation with an iterative equalizer (FDC-IE) has better bit error rate (BER) performance.

Keywords: orthogonal frequency division multiplexing (OFDM), turbo processing, carrier frequency offset (CFO), iterative equalizer (IE).
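
As general background (not the paper's derivation), the impairment both schemes must undo can be written, for a CFO ε normalized to the subcarrier spacing and an FFT size N, as

    r(m) = exp(j*2*pi*ε*m/N) * s(m) + w(m)

i.e., the received time-domain samples s(m) acquire a progressive phase rotation on top of the noise w(m), which destroys subcarrier orthogonality and causes inter-carrier interference (ICI). Time domain compensation de-rotates the samples by the CFO estimate before the FFT; frequency domain cancellation instead confronts the residual ICI after the FFT, which is where the iterative equalizer operates.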

 

sai: itssa.2010.08.030

Editorial: Special Issue on “Information Reuse in Databases and Data Mining”

Reda Alhajj and Kang Zhang

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 95-96

 

sai: itssa.2010.08.031

An Extensible Framework for Generating Ontology Models from Data Models

Khalid M. Albarrak and Edgar H. Sibley

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 97-112

Abstract: We describe an extensible framework for translating data models into Ontology models. Initially, the framework addresses two types of source data models: the Relational Database (RDB) and Object-Relational Database (ORDB) models. The derived Ontology model is based on the Web Ontology Language (OWL). The framework extracts information about the source data models from the metadata maintained by the Database Management System (DBMS) and from the data instances. The extracted metadata includes most of the integrity constraints that are typically maintained by a DBMS. To add more semantics about the data model, the framework extracts data instances to fill some of the semantic gaps found in the metadata. The extracted metadata and data instances are then analyzed to identify Ontology concepts, properties, and explicit relationships, discover redundant Ontology concepts and implicit relationships, and identify restrictions on properties and relationships. The analysis is based on heuristic database modeling techniques. The analyzed data model is automatically translated into a rudimentary OWL Ontology model that can be enhanced by an Ontology modeler. The paper provides examples to demonstrate how the translation is conducted.

Keywords: data model, object-relational database, ontology, OWL web ontology language, relational database, reverse-engineering.
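
A minimal sketch of the conventional mapping rules such a framework automates (table → class, column → datatype property, foreign key → object property); the schema below is a hypothetical stand-in for metadata read from a DBMS catalog, and the paper's heuristics go considerably further (instance analysis, redundancy detection, property restrictions):

    # Conventional RDB-to-OWL mapping rules, for illustration only.
    def rdb_to_owl(tables):
        # tables: {name: {"columns": [...], "fks": {column: target_table}}}
        triples = []
        for name, meta in tables.items():
            triples.append((name, "rdf:type", "owl:Class"))
            for col in meta["columns"]:
                if col in meta["fks"]:
                    prop = "has" + meta["fks"][col]
                    triples.append((prop, "rdf:type", "owl:ObjectProperty"))
                    triples.append((prop, "rdfs:domain", name))
                    triples.append((prop, "rdfs:range", meta["fks"][col]))
                else:
                    triples.append((col, "rdf:type", "owl:DatatypeProperty"))
                    triples.append((col, "rdfs:domain", name))
        return triples

    schema = {
        "Employee":   {"columns": ["name", "dept_id"], "fks": {"dept_id": "Department"}},
        "Department": {"columns": ["title"], "fks": {}},
    }
    for triple in rdb_to_owl(schema):
        print(triple)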

 

sai: itssa.2010.08.032

A Two-Stage Approach for Contiguous Sequential Pattern Mining

Jinlin Chen, Subash Shankar, Angela Kelly, Serigne Gningue, Rathika Rajaravivarma and Didier J. Charles

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 113-130

Abstract: Contiguous Sequential Pattern (CSP) mining is an important problem with many applications. Using general sequential pattern mining algorithms for CSP mining may lead to poor performance because they do not exploit the contiguous property of CSPs. In this paper we present a two-stage approach for CSP mining. We first detect frequent itemsets in a database, based on which we partition the CSPs into subsets, and then apply a special data structure, the General UpDown Tree, to detect all the patterns in each subset. The General UpDown Tree exploits the contiguous property of CSPs to achieve a compact representation of all the sequences that contain an item. This compact representation enables a top-down approach to CSP mining and eliminates unnecessary candidate evaluation. Experimental results show that our approach is more efficient than previous approaches in terms of both time and space.

Keywords: contiguous sequential pattern, data mining algorithm, sequence database, sequential pattern.
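
To make the problem statement concrete (this is the naive enumeration the paper's two-stage UpDown Tree approach is designed to avoid, not the authors' algorithm): a CSP is frequent if it occurs as an unbroken run in at least min_sup sequences.

    from collections import Counter

    def frequent_csps(db, min_sup):
        counts = Counter()
        for seq in db:
            seen = set()  # count each pattern at most once per sequence
            for i in range(len(seq)):
                for j in range(i + 1, len(seq) + 1):
                    seen.add(tuple(seq[i:j]))  # contiguous run seq[i..j-1]
            counts.update(seen)
        return {p: c for p, c in counts.items() if c >= min_sup}

    db = [list("abcab"), list("bcab"), list("abd")]
    print(frequent_csps(db, min_sup=2))  # e.g. ('a','b') appears in all three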

 

sai: itssa.2010.08.033

Inconsistency: The Good, the Bad, and the Ugly

Du Zhang

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 131-145

Abstract: Inconsistency is commonplace in the real world and is an accepted part of life. Inconsistency is a multi-dimensional phenomenon that includes causes, types, interpretations, circumstances, desirability, detection approaches, handling strategies, and significance measures. In this paper, we focus our attention on the desirability dimension of inconsistency. It turns out that not all inconsistencies are bad; some are even desirable. We summarize three lists of inconsistency cases in terms of their desirability, using the metaphor of “the good, the bad, and the ugly.” We then define a locality-of-inconsistency measure that can be used to separate relevant and contributing factors from irrelevant ones with regard to a particular case of inconsistency. The results of the paper will help pave the way for developing practical desirability measures for inconsistency.

Keywords: inconsistency, desirability of inconsistency, locality of inconsistency.

 

sai: itssa.2010.08.034

An Analysis of Research on Information Reuse and Integration (2003-2008)

Min-Yuh Day, Chorng-Shyong Ong and Wen-Lian Hsu

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 146-157

Abstract: Information Reuse and Integration (IRI) plays a pivotal role in the capture, representation, maintenance, integration, validation, and extrapolation of information. Both information and knowledge are applied to enhance decision-making in various application domains. The objective of this paper is to provide a summary and analysis of research devoted to advancing the field of information reuse and integration. To this end, we identify the most popular research topics, together with the most productive researchers and institutions associated with the majority of research publications of the International Conference on Information Reuse and Integration during the past six years (2003-2008).

Keywords: content analysis; information reuse and integration; IRI topics; meta analysis.

 

sai: itssa.2010.08.035

Effective Knowledge Discovery in Financial Forecasting

Shang Gao, Reda Alhajj and Jon Rokne

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 158-178

Abstract: Knowledge discovery in financial data sets has important implications for financial decision making. Discovering this knowledge is known to be difficult due to the complexity of the domain knowledge and the specific statistical characteristics of the data. In this paper, we investigate the decision-making problem for financial time series data sets derived from stock market fluctuations by means of statistical modeling, while maintaining interpretable results based on association rules discovered with rough set computations and fuzzy discretization. In an alternative approach, the data mining process is accomplished by integrating different categories of financial ratios as inputs to the rough set model. Two stepwise forecasting procedures are proposed, followed by experimental results for both real-world and simulated data sets. The two main contributions of the paper are the successful application of efficient and effective data mining techniques to the financial domain and the development of a user-friendly model that benefits and guides individual investors when they make investment decisions.

Keywords: financial data mining, Business Intelligence (BI), fuzzy set, rough set, forecasting.

 

sai: itssa.2010.08.036

Brechó-VCM: A Value-Based Approach for Component Markets

Rodrigo Pereira dos Santos, Cláudia Maria Lima Werner and Marlon Alves da Silva

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 179-199

Abstract: The treatment of economic and social aspects in Software Engineering has been pointed out as a challenge for the coming years. Specifically in Software Reuse, Component-Based Software Engineering needs to be evaluated in terms of its real applicability and feasibility against its promised benefits. However, this has not yet happened effectively, due to the lack of a mature and established market. One strong inhibitor is the complexity of defining value for components in the software context. Moreover, to create and maintain these markets, historical data and value considerations are strategies to be investigated. This paper proposes a value-based approach to address these strategies, focusing on the stakeholders’ value realization and on building a value chain, called Brechó-VCM, which aims at incorporating nontechnical aspects into a component library, generating a marketplace where sociotechnical networks help calibrate the market’s growth.

Keywords: Brechó-VCM, component-based software engineering, component market, component repository, reuse management process, software reuse, value-based software engineering.

 

sai: itssa.2010.08.037

Ontology-Based Information Model Development for Science Information Reuse and Integration

J. Steven Hughes, Daniel J. Crichton and Chris A. Mattmann

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 200-211

Abstract: Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, a set of principles, and a real world case study where shared ontologies are used to develop and manage science information models and subsequently guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, has been used to formalize legacy information models as well as design new models. Within this framework, the information model remains relevant within changing domains and thereby promotes the interoperability, interconnectedness, and correlation desired by scientists.

Keywords: digital library, ontology, information model, interoperability, science data, science metadata.

 

sai: itssa.2010.08.038

An Automated Approach for Generating Project Execution Modes with Multi-skilled Workforce Coalition Formation

Nora Houari and Behrouz H. Far

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 212-222

Abstract: Project execution modes describe the set of possible alternatives in which a project can be executed by team members with different skills and performance levels. In this paper, we present a model for automatically generating project execution modes using multi-agent systems. The model is composed of agents representing the team leader as well as bookkeeping agents for the team members. The technical focus is on methods for an intelligent agent assistant that generates execution modes, where each task can have several execution alternatives and each alternative is in turn defined by its time, cost and quality, depending on the individuals performing the task and their skill and confidence levels. Using this model, an algorithm is devised to find a set of possible agent coalitions suitable for the project. Experiments show the potential and applicability of this approach.

Keywords: project execution modes, coalition formation, multi-skilled workforce, multi-agent systems, algorithm.

 

sai: itssa.2010.08.039

Aggregating Performance Metrics for Classifier Evaluation

Naeem Seliya, Taghi M. Khoshgoftaar and Jason Van Hulse

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 223-241

Abstract: Classification models are often evaluated with one or more commonly used performance metrics such as accuracy, F-measure, etc. It is known that comparing classifiers based on multiple performance metrics is better than evaluation based on only a single metric. However, multiple performance metrics pose a challenge when there is no clear winner across all metrics. There is a general lack of large empirical studies that provide a unified framework for combining a large number of performance metrics into a singular measure. This study addresses this problem directly by presenting a novel strategy of aggregating several commonly used performance metrics into one metric, the Relative Performance Metric (RPM). The case study data comes from 35 real-world classification problems and involves 12 binary classifiers and 10 commonly used performance metrics. A second case study further analyzes the top three classifiers ranked according to RPM, and investigates the various correlations among the underlying performance metrics. The practical benefit of using RPM as a unified performance measure is clearly demonstrated. Moreover, an insightful discussion on relationships among commonly used performance metrics will appeal to the practitioner.

Keywords: performance metrics; classifier evaluation; factor analysis; relative performance metric.
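
The abstract does not reproduce the RPM definition, so the following is only one plausible way to fold several metrics into a single relative score (averaging each classifier's normalized rank across metrics), shown to make the idea concrete:

    def relative_scores(results):
        # results: {classifier: {metric: value}}; higher values are better.
        classifiers = list(results)
        metrics = list(next(iter(results.values())))
        score = {c: 0.0 for c in classifiers}
        for m in metrics:
            ranked = sorted(classifiers, key=lambda c: results[c][m], reverse=True)
            for rank, c in enumerate(ranked):  # best rank -> score 1, worst -> 0
                score[c] += (len(classifiers) - 1 - rank) / (len(classifiers) - 1)
        return {c: s / len(metrics) for c, s in score.items()}

    results = {
        "NB":   {"accuracy": 0.81, "f_measure": 0.78, "auc": 0.85},
        "C4.5": {"accuracy": 0.84, "f_measure": 0.80, "auc": 0.83},
        "kNN":  {"accuracy": 0.79, "f_measure": 0.75, "auc": 0.80},
    }
    print(relative_scores(results))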

 

sai: itssa.2010.08.040

Management of Composite Events for Active Database Rule Scheduling

Ying Jin

International Transactions on Systems Science and Applications, Vol. 6, No. 2/3, August 2010, pp. 242-253

Abstract: Active database rules provide event management capability to database systems by signaling and handling events automatically. Active rules play important roles in data management, such as database integrity checking and database integration. Our previous research reported an active rule scheduling algorithm, named IRS, that schedules the execution of concurrently triggered rules to achieve the confluence property, which allows rule execution to produce the same final result regardless of the execution order of simultaneously triggered rules. The IRS algorithm schedules, at static time, rules triggered by primitive events. This paper describes our research on extending the IRS algorithm into the CIRS algorithm, which incorporates composite events. We define a new triggering graph to represent composite events, and convert the new graph so that the data access sub-algorithm and the priority graph generation sub-algorithm can be applied. In addition to the formulae for triggering graph conversion, the paper describes the algorithms for inserting, deleting, and updating rules. Using the CIRS algorithm, rules triggered by composite events can be scheduled at static time, which guarantees the confluent execution of simultaneously triggered rules.

Keywords: active rules, confluence, composite events.

 

sai: itssa.2010.11.202

Software Application Design Subject to Cost Constraints: An Evolutionary Computation Approach

Swapna S. Gokhale and Lance N. Fiondella

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 255-270

Abstract: The growing reliance of our society on software systems mandates their reliable operation. The primary factor that governs the reliability of most commercial systems, however, is development and maintenance cost. A challenging objective then is to design a software system so that maximum reliability is achieved within a pre-specified cost constraint. A software system may be designed by selecting components from different sources or by allocating resources to the development and testing of components. This design process must consider the influence of each component on the application reliability, which is determined by the application architecture. This paper presents an evolutionary computation approach to software system design that maximizes reliability for a given cost, based on the architecture. The choice of evolutionary computation is motivated by three facts: a potentially large and discontinuous search space; a usually nonlinear and discrete, but monotonic, relation between the cost and reliability of individual modules; and complex software architectures giving rise to nonlinear dependencies between individual module reliabilities and overall system reliability.

Keywords: genetic algorithms, software architecture, software reliability.
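
A minimal genetic-algorithm sketch of the problem shape described above: choose one candidate version per component so as to maximize reliability under a cost cap. A simple series architecture (reliability as a product) and all numbers are assumed for illustration; the paper treats general architectures.

    import random

    # (reliability, cost) options per component -- illustrative figures
    OPTIONS = [[(0.90, 10), (0.95, 25), (0.99, 60)],
               [(0.85, 5),  (0.93, 20), (0.98, 45)],
               [(0.92, 15), (0.97, 40)]]
    BUDGET = 90

    def fitness(genome):
        cost = sum(OPTIONS[i][g][1] for i, g in enumerate(genome))
        if cost > BUDGET:
            return 0.0  # infeasible designs score zero
        rel = 1.0
        for i, g in enumerate(genome):
            rel *= OPTIONS[i][g][0]  # series system: product of reliabilities
        return rel

    def evolve(pop_size=30, generations=50):
        pop = [[random.randrange(len(o)) for o in OPTIONS] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(OPTIONS))
                child = a[:cut] + b[cut:]              # one-point crossover
                if random.random() < 0.2:              # mutation
                    i = random.randrange(len(OPTIONS))
                    child[i] = random.randrange(len(OPTIONS[i]))
                children.append(child)
            pop = survivors + children
        best = max(pop, key=fitness)
        return best, fitness(best)

    print(evolve())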

 

sai: itssa.2010.11.204

Application of Systemic Analysis and Fuzzy Logic on a Grain Silo

M. N. Lakhoua

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 271-285

Abstract: After a presentation of current methods used to enhance participation in information system planning and requirements analysis, we present, following a systemic analysis approach, the grading system for cereals in Tunisia. A model describing the functioning of this complex system was established and allowed us to identify the information that governs it. The paper identifies one of the most important aspects to consider in the development of an information system: the use of an adequate methodology and tools. An application of systemic methods to the management of a grain silo is presented. The analysis and modelling of the cereal grading system, which determines cereal transaction prices, are based on two methods: OOPP (Objectives Oriented Project Planning) and SADT (Structured Analysis and Design Technique). To approach the problem of classifying cereal samples, we present an application of fuzzy logic to the grading system.

Keywords: systemic analysis, information system, upgrading, SADT, OOPP, fuzzy logic.

 

sai: itssa.2010.11.044

On the Throughput of Multicasting with Rateless Erasure Codes

I-Chung Lee

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 286-297

Abstract: Recently several rateless erasure codes were developed to speed up the multicasting process successfully. In this paper, we take a theoretical approach to justify that methodology. We consider two multicasting models that use rateless erasure codes. In these models, the sender uses an ideal rateless erasure code to map a group of n message packets to an arbitrarily larger set of check packets so that any collection of kn check packets received by a receiver can be used to recover the original n message packets (k ≥ 1). There is one sender and r^n receivers (r > 1). In the direct multicasting model, packets to the receivers are lost independently with probability q (0 < q < 1). For this model, we prove a strong law of large numbers for the asymptotic throughput as n → ∞. The asymptotic throughput is characterized by the unique solution of an equation in terms of k, q and r. The strong law shows how the number of message packets n scales with the number of receivers r^n while keeping a nonzero throughput. For the model with one common shared link, we use the fact that, conditioning on the transmission result of the shared link, the spatial loss correlation among the receivers can be removed. As such, we can extend the previous strong law to this model.

Keywords: large deviations, law of large numbers, multicast, rateless erasure code, throughput.
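
A small Monte Carlo sketch of the direct multicasting model as described: the sender keeps emitting check packets from an ideal rateless code, each receiver loses each packet independently with probability q, and a receiver finishes once it holds kn packets; throughput is n over the number of transmissions needed by the slowest receiver. Parameter values are illustrative only.

    import random

    def throughput(n=50, k=1.2, q=0.3, r=1.05, trials=20):
        receivers = max(2, round(r ** n))  # r^n receivers, per the model
        need = int(k * n)                  # any k*n check packets suffice
        total = 0.0
        for _ in range(trials):
            have = [0] * receivers
            sent = 0
            while min(have) < need:
                sent += 1
                for i in range(receivers):
                    if have[i] < need and random.random() > q:
                        have[i] += 1       # packet survives the loss channel
            total += n / sent
        return total / trials

    print(throughput())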

 

sai: itssa.2010.11.083

Registration-Free Paging for Multiaccess

Haitao Tang

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 298-308

Abstract: This work designs a registration-free paging scheme for multiaccess networks and devices, in which specific radio accesses with and without their own paging functions can coexist and benefit. It presents a scalable paging approach for multiaccess networks and addresses the privacy concerns of multiaccess devices, especially when they are within the service scope of foreign networks. It also benefits those radio accesses without their own paging functions, enabling further power saving in the multiaccess environment. Numerical analysis is then performed to investigate the degrees of battery lifetime saving under different communication usages and network and device settings.

Keywords: paging, multiaccess, power management, battery lifetime, numerical analysis.

 

sai: itssa.2010.11.084

A New Authentication Considering Convergence of Wireless Networks

SuJung Yu and JooSeok Song

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 309-316

Abstract: The converged network of existing wireless networks is evolving toward all Internet Protocol (IP)-based services. Specifically, this paper focuses on the converged network of Universal Mobile Telecommunications System (UMTS) and Digital Video Broadcasting-Handheld (DVB-H) networks. UMTS networks provide broadcasting service, as DVB networks do, to users with various return channels, as well as access to the external world and a high-mobility, two-way multimedia service at a medium bit rate. DVB-H provides high bit rate mobile reception, but is restricted to unidirectional transmission. Converged telecommunication and broadcasting networks face the problem of how to support mutual authentication between a user and a broadcasting network. In this paper, we propose a new authentication scheme in which the two sides authenticate each other. The access terminal checks the user identity (Uid) against Program Specific Information/Service Information (PSI/SI), which is transmitted through interface Ii. When the user moves into a new cell in the converged network, the DVB-H network checks the confirm message in the PSI/SI table. If the PSI/SI table is correct, the DVB-H network transmits data without unnecessary authentication. The new scheme provides efficient authentication and re-authentication using the return channel in the converged network. A session key is used in every session to encapsulate the data sent to the user. Our proposed authentication scheme provides fast mutual authentication for the converged network.

Keywords: authentication, DVB-H, mobile converged networks, PSI/SI table, re-authentication, security, UMTS.

 

sai: itssa.2010.11.131

Building Knowledge Networks Using Panoramic Images

Stefano Valtolina, Stefano Franzoni and Pietro Mazzoleni

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 317-325

Abstract: This paper presents a system in which 360° panoramic images are used to disseminate cultural heritage information by accessing open networks of knowledge. The system has been designed following the patterns of Interaction Design. During the development of the system, two new patterns especially useful for the dissemination of cultural heritage content were recognized. The first, called “Knowledge Network”, offers a solution to the problem of re-contextualizing collections of artifacts according to a theme the user can choose. The second, called “Virtual Visit”, guides the development of a seamless virtual space composed of a network of panoramic images. The two patterns can be naturally combined: the items organized by a Knowledge Network can be displayed to the user in a virtual exhibition arranged as a Virtual Visit. The paper argues that this approach can facilitate the development of applications that are easily customizable by the user and characterized by a high level of interactivity.

Keywords: cultural heritage, interaction patterns, panoramic images.

 

sai: itssa.2010.11.142

An Agent Based Simulation for Testing the Emergence of Meaning

Toomas Kirt

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 326-332

Abstract: Understanding the essence of meaning is crucial for building intelligent systems. It is proposed that meaning emerges when an agent starts to distinguish objects or events that have a positive or negative impact on survival, and to prefer desirable states and avoid undesirable ones. In this paper a simulation is proposed to evaluate whether, from a random initial configuration and with the help of an evolutionary process, an evaluation system can emerge that helps an agent distinguish and gather energy-rich resources and avoid dangerous matter.

Keywords: meaning, agents, artificial life, evolutionary computation.

 

sai: itssa.2010.11.144

An Optimal Approach to Determine the Minimum Architecture for Real-Time Embedded Systems Scheduled by EDF

Jean-François Hermant and Laurent George

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 333-338

Abstract: This paper presents a sensitivity analysis on the Worst-Case Execution Times of sporadic tasks for the dimensioning of real-time embedded systems in which tasks are executed according to the preemptive Earliest Deadline First (EDF) scheduling policy. The timeliness constraints of the tasks are expressed in terms of late termination deadlines. A general case is considered, where the task deadlines are independent of the task sporadicity intervals (also called periods). New results for EDF are shown, which enable us to determine the minimum architecture, that is, the minimum processing speed such that all the task deadlines are met. This minimum architecture is obtained from the analysis of EDF on a reference architecture over a time interval whose length equals the least common multiple of the task periods. From this analysis, it is then straightforward to determine whether the sporadic task set is feasible at another processor speed.

Keywords: real-time scheduling, embedded systems, earliest deadline first, sensitivity analysis, C-space, feasibility domain, minimum architecture.
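
In the spirit of the analysis described above, the minimum processing speed can be sketched with the standard processor-demand test for preemptive EDF with deadlines independent of periods (a textbook formulation, not necessarily the authors' exact derivation): the demand bound function dbf(t) is the execution demand that must complete by t, and the minimum speed is the maximum of dbf(t)/t over deadline instants up to the least common multiple of the periods.

    from math import floor, lcm

    def dbf(tasks, t):
        # demand of sporadic tasks (C = WCET, D = deadline, T = period) by time t
        return sum(max(0, floor((t - D) / T) + 1) * C for C, D, T in tasks)

    def min_speed(tasks):
        horizon = lcm(*(T for _, _, T in tasks))
        points = sorted({D + j * T for _, D, T in tasks
                         for j in range(horizon // T + 1) if D + j * T <= horizon})
        return max(dbf(tasks, t) / t for t in points)

    tasks = [(2, 5, 10), (3, 7, 15), (1, 4, 6)]   # illustrative (C, D, T) triples
    print(min_speed(tasks))  # any speed at least this meets all deadlines under EDF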

 

sai: itssa.2010.11.146

An Architectural Refinement Model for Group-wide Communications with Priorities Applied to the Rosace Project Scenario

Ismael Bouassida Rodriguez, Khalil Drira, Christophe Chassot and Mohamed Jmaiel

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 339-349

Abstract: In this paper, we propose a refinement-based adaptation approach for the architecture of distributed group communication support applications. Unlike most previous work, our approach yields implementable, context-aware and dynamically adaptable architectures. To model the context, we simultaneously manage four parameters that influence the QoS provided by the application: the available bandwidth, the communication priority of the exchanged data, the energy level and the available memory for processing. These parameters make it possible to refine the choice between the various architectural configurations when passing from a given abstraction level to the lower level that implements it. Our approach allows the importance degree associated with each parameter to be adapted dynamically. To implement adaptation, we switch between the various configurations of the same level and modify the state of the entities of a given configuration when necessary. We adopt the direct and mediated Producer/Consumer architectural styles and graphs for architecture modelling. To validate our approach, we elaborate a simulation model.

Keywords: software architecture, producer/consumer style, adaptation, context-aware, graphs.

 

sai: itssa.2010.11.148

Reactive Common Sense Reasoning for Knowledge-Based Self-Optimization

Michael Cebulla

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 350-356

Abstract: We discuss a membrane-based calculus for the combination of conceptual spaces at runtime. We claim that properties like self-optimization and context-adaptive behavior can be supported by the runtime combination of such situational models. Since our goal is to support emergent properties of behavior (and since it is not possible to define a complete calculus for all situations), we introduce terms that are capable of self-modification. Terms from situational descriptions can evolve according to simple rules, thus providing various possibilities for reaction. This strategy is well suited to supporting a decentralized approach to the modeling of distributed behavior.

Keywords: self-optimization, common-sense reasoning, situation awareness.

 

sai: itssa.2010.11.150

A Simulation Study of Grid Scheduling

Petros Papadopoulos, Huaglory Tianfield and Mike Mannion

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 357-363

Abstract: This paper conducts an initial study of existing Grid scheduling solutions and performs simulations to test common Grid scheduling algorithms.

Keywords: grid, grid scheduling, grid simulation.

 

sai: itssa.2010.11.152

Engineering Self-Management into Legacy Systems

Jens Steiner and Ursula Goltz

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 364-369

Abstract: For a few years now, the ever-increasing complexity of technical systems has been one of the major obstacles to further advancement. Inspired by nature, concepts like self-management and self-organization have found their way into artificial systems, which in turn exhibit self-optimization, self-healing or other so-called self-* properties. While first engineering approaches for such systems exist, there is as yet no methodology capable of infusing self-management into legacy systems that can also prove that functional and non-functional requirements are still met after the reengineering. This paper proposes a methodical approach to this task, emphasizing the use of model-based lightweight and formal methods for validation and verification.

Keywords: self-management, autonomic computing, self-* properties, methodology, validation, verification, model-based development.

 

sai: itssa.2010.11.153

Flexible Application and Context Aware Adaptation in a Pervasive File System

Gustavo C. Frainer, Luciano da Silva, Iara Augustin, Adenauer Yamin and Cláudio Geyer

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 370-375

Abstract: This paper presents the Pervasive File Space (PFS), a service that provides pervasive access to files, using context- and application-aware adaptation to provide better service without burdening the user. The PFS introduces a new model for application-aware adaptation that enables it to support a larger number of adaptive behaviors and to deal with any context element the application deems important.

Keywords: application-aware adaptation, file system, pervasive computing.

 

sai: itssa.2010.11.155

Market-Based Coordination Strategies for Large-Scale Multi-Agent Systems

MyungJoo Ham and Gul Agha

International Transactions on Systems Science and Applications, Vol. 6, No. 4, November 2010, pp. 376-386

Abstract: This paper studies market-based mechanisms for dynamic coordinated task assignment in large-scale agent systems carrying out search and rescue missions. Specifically, the effects of different auction mechanisms and of swapping are studied. The paper describes results from a large number of simulations of homogeneous agents, where by homogeneous we mean that all agents in a given simulation use the same strategy. The information available to agents and their bidding strategies are used as simulation parameters. The simulations provide insight into the interaction between the strategy used by individual agents and the market mechanism. Performance is evaluated using several metrics: mission time, distance traveled, communication and computation costs, and workload distribution. Among the results obtained: limiting information may improve performance, different utility functions may affect performance in non-uniform ways, and swapping may help improve the efficiency of assignments in dynamic environments.

Keywords: auction, market-based approach, multi-agent system, task assignment.
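
As a generic illustration of the mechanisms compared above (not the authors' exact protocols), a single-round sealed-bid auction can assign tasks by lowest travel distance, followed by a pairwise swap pass that trades assignments whenever doing so shortens total travel:

    import math, random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def auction(agents, tasks):
        assignment = {}
        for t in tasks:  # announce tasks one at a time; lowest bid wins
            bids = {a: dist(p, t) for a, p in agents.items()
                    if a not in assignment.values()}
            assignment[t] = min(bids, key=bids.get)
        return assignment

    def swap_pass(agents, assignment):
        tasks = list(assignment)
        for i in range(len(tasks)):
            for j in range(i + 1, len(tasks)):
                t1, t2 = tasks[i], tasks[j]
                a1, a2 = assignment[t1], assignment[t2]
                if (dist(agents[a1], t2) + dist(agents[a2], t1) <
                        dist(agents[a1], t1) + dist(agents[a2], t2)):
                    assignment[t1], assignment[t2] = a2, a1  # beneficial swap
        return assignment

    agents = {"a%d" % i: (random.random(), random.random()) for i in range(4)}
    tasks = [(random.random(), random.random()) for _ in range(4)]
    print(swap_pass(agents, auction(agents, tasks)))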

 

 

 

-----------------------------------------------------------------------------------------------

 

Copyright © 2010 Systemics and Informatics World Network