It is particularly important to investigate the users' mental model, i.e., their notion of how the restaurant system works. When observing and interviewing users, it is also important to study them in their own environment, the so-called usage context.

Figure 3 shows the persona "cook", Tommaso Zanolla. Demographic variables: age, marital status (married, 2 children), and income. Prepare dish: the cook prepares the ordered dish. Plan vacation: vacation requests must be entered in the home branch's calendar three weeks in advance.

Place orders: orders that do not concern food are entered on a predefined form. Educational background: the cook has completed vocational training. Physical environment / usage context: for minor technical problems he does not need outside help.

These are divided into a food area and a soiled area. As a result artifact of this step, a stakeholder list is produced. The stakeholder list now contains, for example, the cook; there may also be a stakeholder role (in the example, the guest) that comprises many clearly distinguishable subgroups. The guest uses the restaurant's products and services and is therefore treated as a generalized role.

Once the corresponding personas have been created, they are added to the stakeholder list. This matters because there is a real risk of focusing too strongly on the users identified during the stakeholder analysis. Our research on the methodical combination of the disciplines requirements engineering and usability engineering is not yet complete; we are currently working on topics such as form and behavior specification as well as usability tests.

We identify the essential requirements that arise from the functional structure and from the life-cycle constraints of automated systems. Software engineering in automation technology: automation technology is concerned with the automation of systems that consist of hardware and a growing share of software. A technical system is either a technical product or a technical plant.

Technical products are mass products with few sensors and actuators and a high degree of automation. Moreover, the development of these systems requires the cooperation of different disciplines, whose goal-directed coordination is a decisive success factor [Ma10]. In this paper, starting from the two categories mentioned, we identify the requirements on software engineering in automation technology as they exist today. Plant automation denotes the corresponding automation functions of technical plants.

If the degree of automation is less than one, a system or technical plant is called partially automated, otherwise fully automated. An important element in automation is the user, or rather the user groups, who develop, deploy, maintain, and service the system (Figures 1 and 2).
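To make this classification concrete, the following is a minimal sketch in Java; it assumes the common textbook definition of the degree of automation as the ratio of automated functions to all functions, a formula the text itself does not spell out:

```java
/** Degree-of-automation classification as quoted above. The ratio
 *  (automated functions / all functions) is an assumed textbook definition. */
class DegreeOfAutomation {
    static String classify(int automatedFunctions, int totalFunctions) {
        double degree = (double) automatedFunctions / totalFunctions;
        // degree < 1: partially automated; degree == 1: fully automated
        return degree < 1.0 ? "partially automated" : "fully automated";
    }

    public static void main(String[] args) {
        System.out.println(classify(7, 10));   // partially automated
        System.out.println(classify(10, 10));  // fully automated
    }
}
```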

From a functional point of view, usability forms the hierarchically topmost subcategory in which requirements arise within the functional-vertical category. Manual, fully automated, and partially automated systems: the automation functions are the central element of an automated system and have a substantial influence on the requirements on software engineering.

The requirements on the automation functions form the second subcategory of the functional-vertical category. The automation functions realized in software run on special platforms. Figure 3 shows an excerpt of typical platform components. The requirements arising from these platforms form the third subcategory of the functional-vertical category.

The requirements arising from this closed loop form the fourth and last subcategory of the functional-vertical category. These challenges are exacerbated by the differing lifespans and innovation cycles of the components involved. The requirements on life-cycle management form the first subcategory of the temporal-horizontal category. (Figure: interplay of the various disciplines in plant and product automation, following Li et al.)

Automation technology is fundamentally based on models. In practice, in addition to system-theoretic models, numerous models from the engineering disciplines that design the systems to be automated are used. This concerns measuring, actuating, and controlling, to which displaying, archiving, alarming, influencing, and securing are added. When implementing these functions, constraints have to be observed; examples are given in Table A2 in the appendix.

There are no unification tendencies in this field of automation technology. In operation, the automation functions are accessed in parallel by different host systems with different temporal and functional constraints, e.g., for control, parameterization, and service or diagnostics.

In automation technology, the focus is not on the behavior of the software to be designed but on the behavior of the automated system. The life and innovation cycles of the involved components differ considerably, so new versions with different properties keep emerging. Each manufactured unit of a type is an instance of that type and can be identified by a unique identifier. Automation technology requires the following sub-models: 1. a model of the system to be automated, which describes its structure and its behavior (dynamics);

2. a model of the requirements on structure and dynamics, covering functional and non-functional requirements; 3. models of procedures for the conception, design, and realization of automation systems; 5. … In mechanical and plant engineering, subsystem testing in the factory is gaining importance as a way to reduce commissioning times on site. Essential in the maintenance phase are the monitoring of current variables and the online manipulation of the program, such as forcing variables or the safe exchange of program parts.

Multi-user mode is also essential here. The requirements are considered from two perspectives. On the one hand, the functional properties, constraints, and conditions are considered, and individual subcategories of requirements are derived from the vertical structure of the systems, from the operator down to the technical process.

Within these subcategories (the requirements on usability, on the automation functions, the requirements arising from the platforms used, and those arising from the closed loop), the individual requirements are described in detail. On the other hand, requirements arise from the conditions at different points in time in the individual phases of the life cycle of automated systems.

Modern software systems adapt themselves to changing environments in order to meet quality-of-service requirements, such as response time limits.

Engineering the system's self-adaptation logic requires not only new modeling methods but also new analyses of transient phases. Model-driven software performance engineering methods already allow design-time analysis of the steady states of non-adaptive system models. In order to validate requirements for transient phases, new modeling and analysis methods are needed.

In this paper, we present SimuLizar, our initial model-driven approach to modeling self-adaptive systems and analyzing the performance of their transient phases. Our evaluation of a proof-of-concept load balancer system shows the applicability of our modeling approach. In addition, a comparison of our performance analysis with a prototypical implementation of our example system provides evidence that the prediction accuracy is sufficient to identify unsatisfactory self-adaptations.

Dynamics range from unpredictably changing numbers of concurrent users asking for service, to virtualized infrastructure environments with unknown load caused by neighboring virtual machines or varying response times of required external services. Despite such dynamics, these systems are expected to fulfill their performance requirements.

In the past, designers achieved this by overprovisioning hardware, which is neither cost-effective nor energy-efficient. Self-adaptation is a primary means developed over recent years to cope with these challenges. The idea is that systems react to their dynamic environment by restructuring their components and connectors, exchanging components or services, or altering their hardware infrastructure. To deal with the performance requirements of classical, non-adaptive systems, researchers developed model-driven software performance engineering approaches [CDI11].

These approaches allow early design-time performance predictions based on system models to validate performance requirements. However, self-adaptive behavior is not considered in the prediction; only the performance of a single system configuration, i.e., a steady state, can be predicted. This limitation to steady-state performance prediction also limits the range of analyses. For example, consider a web server system with multiple servers which is able to adapt its load balancing strategy to its actual workload. Whether this system is able to recover from an overload situation within an acceptable time cannot be answered when neglecting the transient phases, i.e., the phases in which the system adapts from one configuration to the next.

Nor can it be answered whether the workload is balanced over only as many servers as actually needed, and is hence cost-efficient. The contribution of this paper is SimuLizar, a modeling and model-driven performance engineering approach for self-adaptive systems. It extends Palladio's modeling approach with a self-adaptation viewpoint and a simulation engine. The latter enables the performance prediction of self-adaptive systems over their various configurations, allowing the analysis of transient adaptation phases.

To evaluate our approach, we have applied our modeling approach to a proof-of-concept load balancer system. The performance analysis of the system's self-adaptation logic shows characteristics sufficiently similar to measurements taken from a performance prototype, and allows us to identify unsatisfactory self-adaptation logic. The remainder of this paper is structured as follows. We first provide a specification for a small self-adaptive load balancer in Section 2 as a motivating example.

We use this load balancer system to illustrate and to evaluate the applicability of our approach. In Section 3, we briefly introduce the foundations of our work. Our SimuLizar approach is detailed in Section 4. We evaluate and discuss SimuLizar in Section 5. In Section 6, we compare our approach to related work. Finally, we conclude and discuss future work in Section 7. The load balancer we describe is not a real-life example, but serves as an understandable and easily implementable running example of a self-adaptive system.

We want to design a load balancer system, as illustrated in Figure 1a, that distributes workload across two single-core application servers sn 1 and sn 2. The load balancer accepts client requests and delegates them to one of two application servers. Responses are sent directly from application servers to the clients.

A request causes a constant load of 0. We assume that we have to pay an additional load-dependent fee for utilizing the second application server sn 2. Even though being charged load-dependently is not yet common for cloud services, our industry partners predict that this will become an option in the near future. The following requirements R1-R3 shall be fulfilled by the load balancer system: R1: The system must keep response times for user requests low.

The response time for a user request must not be greater than 0. R2: The system must keep the infrastructure costs as low as possible. R3: If R1 is not fulfilled, the system must recover as fast as possible. A software architect wants to design the load balancer such that it works properly even in high-load situations. She provides initial design ideas and documents design decisions. In order to fulfill R1, the software architect could design the load balancer to randomly delegate user requests to the application servers sn 1 and sn 2.

Since this conflicts with R2, the load balancer needs to preferably delegate user requests to application server sn 1 as long as R1 holds. R1 also requires that the response time be less than 0. As defined in R3, and to save costs, the load balancer delegates user requests to the second application server sn 2 only if requirement R1 cannot be fulfilled otherwise. Regardless, it must not delegate more user requests to application server sn 2 than to application server sn 1. A sketch of this strategy is given below.
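As referenced above, here is a hedged sketch of the delegation strategy in Java; the class name, the probabilistic mechanism, and the step size of 0.1 are our assumptions, not part of the paper's specification:

```java
import java.util.Random;

/** Sketch: prefer sn 1; once R1 is violated, shift a share of requests to
 *  sn 2, but never more than to sn 1 (i.e., probability is capped at 0.5). */
class AdaptiveLoadBalancer {
    private final Random rnd = new Random();
    private double probSn2 = 0.0;   // initial configuration: everything goes to sn 1

    /** Returns the server the next request is delegated to. */
    String delegate() {
        return rnd.nextDouble() < probSn2 ? "sn2" : "sn1";
    }

    /** Called when a batch mean violated the response-time threshold. */
    void onR1Violated() {
        probSn2 = Math.min(0.5, probSn2 + 0.1);  // step size 0.1 is an assumption
    }
}
```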

To decide whether the mean response time is greater than the threshold, the system has to aggregate the measured response times. However, it is not specified over which time span the mean should be calculated. This is again a design trade-off: choosing a longer time span means outlier measurements are more likely to be ignored; choosing a shorter time span means the system detects earlier that the mean response time exceeds the threshold. The software architect chooses to calculate the mean response time from response batches within short time spans of 20 seconds.
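A minimal sketch of this 20-second batching decision follows; all names are hypothetical, and the threshold parameter stands in for the response-time bound whose value is elided in the text:

```java
import java.util.ArrayList;
import java.util.List;

/** Collects response times into fixed 20-second batches and flags a
 *  violation when the mean of a closed batch exceeds the threshold. */
class BatchedResponseTimeMonitor {
    private final double windowSeconds = 20.0;   // batch length chosen by the architect
    private final double thresholdSeconds;       // R1 threshold (value elided in the text)
    private double windowStart = 0.0;
    private final List<Double> batch = new ArrayList<>();

    BatchedResponseTimeMonitor(double thresholdSeconds) {
        this.thresholdSeconds = thresholdSeconds;
    }

    /** Records one measurement; returns true if the batch that just closed violated R1. */
    boolean record(double timestamp, double responseTime) {
        boolean violated = false;
        if (timestamp - windowStart >= windowSeconds && !batch.isEmpty()) {
            double mean = batch.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            violated = mean > thresholdSeconds;   // trigger self-adaptation if true
            batch.clear();
            windowStart = timestamp;
        }
        batch.add(responseTime);
        return violated;
    }
}
```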

However, the software architect cannot yet predict whether R3 can be fulfilled, i.e., whether the system recovers from an overload situation fast enough. Based on our example, we will create a formal system model from our initial design ideas in Section 4. We will refer to these foundations when we introduce our modeling approach and our Palladio-based performance engineering approach for self-adaptive systems in Section 4.

Self-adaptive systems are able to adapt their structure, behavior, or allocation in order to react to changing environmental situations. This is commonly organized as a MAPE-K feedback loop. First, the managed element, i.e., the self-adaptive system, is monitored via defined sensors. Second, the monitored data is analyzed. Third, if the analysis reveals that a self-adaptation is required, it is planned. Finally, the self-adaptation is executed via effectors of the managed element. In all steps, a knowledge base containing system information is involved. For example, monitoring data can be stored in the knowledge base, or self-adaptation rules can be accessed from the knowledge base of the MAPE-K feedback loop.
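The monitor-analyze-plan-execute steps just described can be condensed into a skeleton. This is a generic MAPE-K sketch over assumed interfaces, not SimuLizar's actual implementation:

```java
/** Generic MAPE-K skeleton; Sensor, Effector, and the action name are assumptions. */
interface Sensor { double read(); }
interface Effector { void apply(String action); }

class KnowledgeBase {
    double lastMeasurement;   // written by Monitor, read by Analyze
    String plannedAction;     // written by Plan, read by Execute
}

class MapeKLoop {
    private final Sensor sensor;
    private final Effector effector;
    private final KnowledgeBase kb = new KnowledgeBase();

    MapeKLoop(Sensor sensor, Effector effector) {
        this.sensor = sensor;
        this.effector = effector;
    }

    /** One iteration of the feedback loop against a given threshold. */
    void iterate(double threshold) {
        kb.lastMeasurement = sensor.read();                          // Monitor
        boolean adaptationNeeded = kb.lastMeasurement > threshold;   // Analyze
        if (adaptationNeeded) {
            kb.plannedAction = "shiftLoadToSecondServer";            // Plan (hypothetical action)
            effector.apply(kb.plannedAction);                        // Execute
        }
    }
}
```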

The design and analysis of these systems is made difficult by two factors: first, the broad variety of environmental situations to which self-adaptive systems can adapt; second, the wide range of possible self-adaptation tactics. There is an ongoing trend in software engineering to introduce a new modeling viewpoint in order to address the increased complexity of self-adaptivity [Bec11].

This new modeling viewpoint enables a separation of concerns, i.e., the self-adaptation logic can be specified separately from the managed system. Thus, a dedicated analysis of the requirement fulfillment of the self-adaptation logic is enabled. Our SimuLizar approach enables us to model self-adaptive systems with this new modeling viewpoint. In the next section, we provide the foundations of the performance analysis as extended by SimuLizar. Model-driven software performance engineering. Model-driven software performance engineering is a constructive software quality assurance method to ensure performance-related quality properties [CDMI06].

Software performance properties can be quantified using several metrics, such as response time or utilization. For this purpose, a software design model is annotated with performance-relevant resource demands, such as CPU time demands.

For example, in our load balancer system we know that a user request has a constant CPU demand of 0. Subsequently, the annotated model is translated into analysis models or a performance prototype. Analysis models can either be simulated or solved using analytical methods.

Performance prototypes can be deployed to the target runtime environment. A correct transformation ensures that the transformed model or performance prototype is a correct projection of the software design model. Which of the methods is applied, analytical solving, simulation, or performance prototyping, is mainly a trade-off between the assumptions made and the accuracy of the performance prediction.

In general, analytical solving provides accurate predictions if strict assumptions hold. Simulation allows more relaxed assumptions, but provides accurate predictions only with a high number of simulation runs. Since a performance prototype is deployable and runnable software that fakes resource consumption, the fewest assumptions have to hold for it.
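To make the analytical case concrete, consider the classic M/M/1 queue (Poisson arrivals, exponentially distributed service times); this queue model is our illustrative assumption, not something prescribed by the paper. Its mean response time has the closed form R = 1/(mu - lambda):

```java
/** Closed-form M/M/1 mean response time, valid only under the strict
 *  assumptions mentioned above (lambda = arrival rate, mu = service rate). */
class MM1 {
    static double meanResponseTime(double lambda, double mu) {
        if (lambda >= mu) throw new IllegalArgumentException("unstable queue");
        return 1.0 / (mu - lambda);
    }

    public static void main(String[] args) {
        // e.g. 8 requests/s against a server that handles 10/s -> 0.5 s mean response time
        System.out.println(meanResponseTime(8.0, 10.0));
    }
}
```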

However, a performance prototype needs to be deployed and executed on the target execution environment, which makes it the most expensive of the three methods. Finally, the results from analytical solving, simulation, or the performance prototype shed light on whether the performance requirements can be satisfied. Interpreting the results also helps to revise the software design and to eliminate performance flaws.
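The idea of faking resource consumption can be sketched as a busy-wait loop; ProtoCom's real calibration and load generation are more elaborate, so treat this only as an illustration of the principle:

```java
/** Hedged sketch: burn CPU until the requested demand has elapsed. */
class CpuDemandGenerator {
    static void consume(double seconds) {
        long deadline = System.nanoTime() + (long) (seconds * 1e9);
        double sink = 0.0;
        while (System.nanoTime() < deadline) {
            sink += Math.sqrt(sink + 1.0);   // keep the CPU busy
        }
        if (Double.isNaN(sink)) System.out.println(sink);  // keep the loop observable
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        consume(0.1);   // the demand value here is purely illustrative
        System.out.printf("consumed %.3f s%n", (System.nanoTime() - t0) / 1e9);
    }
}
```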

One of Palladio's key features is its integrated support for the design and performance analysis of component-based software. For this purpose, Palladio introduces its own component model, the Palladio Component Model (PCM), which allows us to annotate performance-relevant information in a software design model. A PCM model consists of several artifacts, as illustrated in Figure 2. Component developers provide software components, including performance-relevant properties and behavior, in a repository.

A software architect defines the assembly by combining components provided in the repository into a complete software system. A deployer specifies the available resource infrastructure.


Furthermore, she specifies the concrete deployment of the system in an allocation view. Finally, domain experts specify typical workloads in a usage model.

For this purpose, the PCM model is automatically transformed into simulation code, performance prototypes, or analysis models. SimuCom is Palladio's simulation engine, which enables the simulation and performance analysis of systems modeled with the PCM. Our SimuLizar approach reuses the functionality provided by the SimuCom engine and extends it to simulate self-adaptive systems. ProtoCom is a tool to transform PCM instances into performance prototypes. We use ProtoCom to generate a performance prototype.

Subsequently, we extend the generated prototype with self-adaptive functionality in order to validate the correctness of the performance predictions of our SimuLizar approach. SimuLizar provides a modeling approach for self-adaptive systems based on ideas presented in [Bec11]. In SimuLizar, a self-adaptive system model consists of two viewpoints with several views each: first, a system type viewpoint consisting of three views; second, a runtime viewpoint including the initial state view and the state transition view. Where appropriate, we have mapped the required views to existing artifacts of the PCM, as illustrated in Figure 3a.

Until now, the PCM did not offer modeling views for the monitoring view or the state transition view. To fill these gaps, we introduce two new artifacts in SimuLizar, as illustrated in Figure 3b: first, the performance measurement specification (PMS) for the monitoring view; second, self-adaptation rules for specifying the state transition view. With the PMS, sensors for a self-adaptive system can be specified. A monitor specification consists of four characteristics. The sensor location specifies the place of the sensor within the system, for example a service call. The performance metric type determines which of the supported metrics is measured. Each of these metrics can be measured at the specified sensor location in one of three time interval types. Finally, for each sensor a statistical characterization can be specified to aggregate the monitored data.

We support the aggregation modes none, median, arithmetic mean, geometric mean, and harmonic mean. Self-adaptation rules consist of a condition and a self-adaptation action. A condition has to reference a sensor and provide a boolean term. If the boolean term evaluates to true, the self-adaptation action is triggered. The self-adaptation action part references elements in the PCM model. A sketch of both artifacts is given below.
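As referenced above, here is a possible encoding of the two artifacts in Java. Only the aggregation modes are taken verbatim from the text; the metric and interval identifiers, the threshold value, and all names are placeholders:

```java
import java.util.function.DoublePredicate;

/** Aggregation modes are named in the text; everything else is an assumption. */
enum Aggregation { NONE, MEDIAN, ARITHMETIC_MEAN, GEOMETRIC_MEAN, HARMONIC_MEAN }

/** Four characteristics of a PMS monitor: location, metric, interval, aggregation. */
record MonitorSpec(String sensorLocation, String metricType,
                   String intervalType, Aggregation aggregation) {}

/** A rule: a condition referencing a sensor plus a boolean term, and an
 *  action that rewrites the architecture model when the condition fires. */
class SelfAdaptationRule {
    final MonitorSpec sensor;
    final DoublePredicate condition;   // boolean term over the monitored value
    final Runnable action;             // model transformation to execute

    SelfAdaptationRule(MonitorSpec sensor, DoublePredicate condition, Runnable action) {
        this.sensor = sensor;
        this.condition = condition;
        this.action = action;
    }

    void onMeasurement(double value) {
        if (condition.test(value)) {
            action.run();   // e.g. shift more load towards sn 2 (hypothetical)
        }
    }

    public static void main(String[] args) {
        MonitorSpec m = new MonitorSpec("LoadBalancer.service", "responseTime",
                                        "batch", Aggregation.ARITHMETIC_MEAN);
        SelfAdaptationRule r = new SelfAdaptationRule(m,
                v -> v > 0.1,   // threshold value is an assumption; the text elides it
                () -> System.out.println("trigger reconfiguration"));
        r.onMeasurement(0.25);  // fires: prints "trigger reconfiguration"
    }
}
```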

Figure 4 illustrates a model of our motivating example using our SimuLizar modeling approach. We model the system according to our initial design ideas in Section 2. The system type viewpoint, Figure 4a, uses the PCM allocation view annotated with measurement specifications. In this example, the component LoadBalancer is deployed on a node lbn. The LoadBalancer component is connected to two Server components via its required interface. The two Server components are deployed on two different ServerNodes, sn 1 and sn 2. On the one hand, batching the measurements should reduce the monitoring overhead and prevent the system from reconfiguring every time a single response exceeds the target response time of 0.

On the other hand, it must still be possible to detect whether R1 is fulfilled. The initial state view, Figure 4b, illustrates the initial configuration of the system. The state transition view reflects the system's reconfiguration to satisfy R3. In SimuLizar, the managed element is the modeled, simulated self-adaptive system. The simulated system is monitored, the monitoring results are analyzed, a reconfiguration is planned, and the reconfiguration is executed on the simulated system if required.

Reconfigurations are model transformations of this PCM model. SimuLizar utilizes the simulation engine SimuCom for simulating the user and system behavior, including the simulation of resources. During the simulation, the simulated system is monitored and measurements are taken as specified via the PMS. Whenever the PCM model interpreter arrives at a monitor location, it simulates a measurement as specified in the PMS model.

Once new measurements are available, the runtime model of the system, the Palladio Runtime Measurement Model (PRM), is updated with the newly taken measurements. An update of the PRM triggers the evaluation of the self-adaptation rule conditions. If the condition of a rule holds, the corresponding self-adaptation action is executed: during the execution phase, the translated model transformation is applied to the PCM model. In the following, we first explain how we conducted the evaluation. Second, we present the results of SimuLizar's predictions and discuss their quality compared to the measurements taken from the performance prototype.

Subsequently, we discuss the possibilities to reason about transient states of self-adaptive systems. Finally, we point out the limitations of our SimuLizar approach. We evaluate SimuLizar according to four criteria. First, we validate whether our modeling approach is sufficient to model the important aspects of self-adaptive systems (C1). Our second criterion is whether SimuLizar allows us to predict the performance of a self-adaptive system in the transient phases (C2).

Third, we evaluate whether SimuLizar's predictions deviate significantly from the performance of a self-adaptive system performance prototype (C3). The fourth criterion for our evaluation is whether SimuLizar's performance prediction helps to identify unsatisfactory self-adaptation logic, which traditional MDSPE did not aim at (C4). In order to evaluate C1, we model the load balancer system using our SimuLizar modeling approach as described in the previous section.

Specifically, we simulate the system to initially start in a high-load situation in order to trigger the system's self-adaptation. We expect the system to self-adapt immediately after it detects the high-load situation, until the response times decrease. In order to evaluate C3, we compare SimuLizar's performance prediction to the measurements from a generated and manually extended performance prototype. For this, we generate the performance prototype from the load balancer model using Palladio's ProtoCom tool. We calibrated our prototype system's resource demands according to the model.

We expect that the self-adaptive system is able to recover from the high-load situation within the measurement period. To evaluate C4, we compare the performance predictions of SimuLizar to a series of steady-state analyses. For this, we designed a queuing network and simulated it with JSIMgraph.

Our expectation is that our modeled load balancer system eventually self-adapts such that its configuration reaches a state for which a steady-state analysis implies that R1, a response time below the threshold, is fulfilled. We first present SimuLizar's performance prediction results and the measurements taken with the performance prototype. Second, we present JSIMgraph's steady-state analysis results for all possible configurations of the evaluation system. Figure 6 shows the interpolated time series of response times of our specified evaluation usage scenario. The time series show that initially all requests are answered by application server sn 1.

Both the SimuLizar prediction and the performance prototype measurements show steadily increasing response times. Hence, application server sn 1 is in a high-load situation. After approximately 20 seconds, the first requests are answered by application server sn 2 in both series. This indicates that the load balancer has triggered a self-adaptation, which is further confirmed by plateaus in the increase of the response times from application server sn 1.

In both time series, we can finally observe that the system triggers self-adaptation five times. Only after several self-adaptations, approximately after 80 seconds, do the response times of ServerNode sn 1 and ServerNode sn 2 have similar values. The deviations between the predicted and measured response times are due to the random load generation and the random balancing strategy of our evaluation system.

We conclude from these results that self-adaptive systems can be modeled (C1) and that their performance within transient phases can be predicted (C2). Furthermore, the two time series do not deviate significantly; in the performance engineering community such a deviation is considered sufficient to differentiate between design alternatives (C3). In the queuing network, the source spawns new users at the same rate as in the load balancer example model. Server sn 1 and Server sn 2 are both queues with unlimited queue size and a constant service time of 0.

The mean response times and mean utilizations from the JSIMgraph simulation are given in Figure 7b; each row represents one configuration of the system. The utilization of server sn 1 shows that it is overutilized in the initial configuration.

Figure 6: Interpolated time series of the response times for (a) SimuLizar's simulated system and (b) the measured performance prototype. The vertical axes represent the response times of single user requests; each curve represents the response times of a series of single user requests. The dashed grey curve shows response times for user requests delegated to application server sn 1; the solid black curve shows response times for user requests delegated to application server sn 2.

Figure 7: (a) Evaluation system represented as a queuing network and (b) steady-state analysis for each state.

Even in the next mean response time batch the threshold is exceeded and another self-adaptation is triggered.


This means that our adaptation rules violate R3, because the system does not recover as fast as possible. Hence, we identified an unsatisfactory self-adaptation logic (C4). SimuLizar as well as our evaluation are still subject to several assumptions and limitations. First, SimuLizar cannot yet be considered sufficient for a comprehensive performance prediction of self-adaptive systems. Second, our evaluation is limited by some threats to validity. Self-adaptive behavior is usually triggered to cope with environmental changes, e.g., changing workloads.

However, the PCM usage view we are using here is static, i.e., the modeled workload does not change over time. Thus, we are forced to model usage scenarios in which the condition of a self-adaptation rule holds from the start. This limits the scope of a simulation to only the set of rules that are triggered for the modeled static usage scenario. Because SimuLizar simulates self-adaptive systems, it makes some assumptions with respect to the self-adaptation of a system: in SimuLizar and its self-adaptive system models, we neglect the resource consumption of self-adaptations.

Furthermore, we do not handle exceptions that might occur during the self-adaptation. Our evaluation is mainly limited by the implementation of our performance prototype. First, the measurements taken from the performance prototype rely on a prior calibration of the machine it runs on. Although this calibration has been tested and evaluated before, the actually generated load is subject to minor deviations. This leads to inaccurate measurement results and may bias our evaluation. Second, since our performance prototype is a distributed system, asynchronous clocks are another threat to the validity of the taken measurements and our evaluation.

Third, the usage we specified for our evaluation example has exponentially distributed inter-arrival times; hence, the arrivals are random and the results are consequently not generalizable. Related approaches to modeling and analyzing self-adaptive systems exist; we have surveyed them in [BLB12]. In [FS09], Fleurey and Solberg propose a domain-specific modeling language and simulation tool for self-adaptive systems.

With the provided modeling language, self-adaptive systems can be specified in terms of variants using an EMF-based meta-model. Furthermore, functional properties of the specified system can be checked and simulations can be performed. SimuLizar's focus is on the performance aspect of self-adaptation, in contrast to the approach by Fleurey and Solberg, which focuses on functional aspects. The D-KLAPER approach does not analyze software design models directly; instead, design models annotated with performance-relevant resource demands have to be transformed into D-KLAPER's intermediate language.

D-KLAPER then provides the necessary tools to analyze a self-adaptive system specification provided in the intermediate language. However, in contrast to our focus on transient-phase analysis, systems analyzed with D-KLAPER are considered to be in a steady state. Another related line of work is integrated within the Descartes project, which envisions self-adaptive cloud systems enhanced with runtime performance analysis.

The focus of the Descartes project is on runtime adaptation, in contrast to our aim of design-time performance analysis of self-adaptation.

The system designer has to manually derive an analysis model from the architectural model. The manually derived analysis model serves as an initial input for the online performance analysis, and its parameters are updated at run-time. In contrast, we provide a full tool chain for design-time performance analysis of self-adaptive systems that are also able to adapt their structure. We implemented our previously introduced modeling approach and provide a simulation tool for performance prediction.

Our evaluation shows that SimuLizar is applicable to model self-adaptive systems and to predict their performance. Furthermore, SimuLizar enables the analysis of transient phases during a system's self-adaptation and allows us to identify unsatisfactory self-adaptation logic. We are working towards further enhancements of our approach.

First, we plan to provide a domain-specific modeling language for specifying dynamic system workloads, i.e., workloads that change over time. Second, we are currently implementing the automatic generation of a performance prototype with self-adaptation capabilities from models specified with SimuLizar.

Finally, we plan to enhance our tool to support developers in modeling and evaluating self-adaptive systems.

References:
Model-Driven Generation of Performance Prototypes. In: … Sachs (eds.), Performance Evaluation.
The Palladio component model for model-driven performance prediction. Journal of Systems and Software 82(1).
… Di Marco, …: Model-Based Software Performance Analysis.
Software performance model-driven architecture.
Story Diagrams: Syntax and Semantics.

Focusing only on those parts of a metamodel that are of interest for a specific task requires techniques to generate metamodel snippets. Current techniques generate only strictly structure-preserving snippets, although restructuring would make it possible to generate less complex snippets. Therefore, we propose metamodel shrinking to enable type-safe restructuring of snippets that are generated from base metamodels. Our approach allows shrinking a selected set of metamodel elements by automatic reductions that guarantee type-safe results by design. Based on experiments with 12 different metamodels from various application domains, we demonstrate the benefits of metamodel shrinking supported by our prototypical implementation built on top of the Eclipse Modeling Framework (EMF).

Large metamodels such as the current UML metamodel typically rely on complex structures that are challenging to grasp. For instance, manually identifying the effective classifiers and features of a certain diagram type in the metamodel requires much effort. The UML classifier Class transitively inherits from 13 other classifiers and provides 52 structural features, which shows that even focusing on a single classifier can already be challenging. Being able to snip out a subset of a metamodel would relieve one of the full complexity imposed by the base metamodel.

However, if the extraction of an effective metamodel subset is the goal, we are confronted not only with the selection of classifiers and features of the base metamodel, but also with their reduction, to actually shrink the number of classifiers and generalizations.

Such reductions can be useful because the design structure of the base metamodel may not be adequate for a snippet. It has to be noted that a naive reduction of classifiers may lead to inconsistencies such as (i) broken inheritance hierarchies, (ii) missing feature containers, and (iii) dangling feature end points, which require special attention in the shrinking process. In this work, we propose an approach to automatically shrink metamodels. The result of shrinking a metamodel is what we call a metamodel snippet. A metamodel snippet is a set of metamodel elements, i.e., classifiers and features.

We provide refactorings to restructure initially extracted metamodel snippets. Thereby, we enable the reduction of metamodel elements that may become obsolete through restructuring, and we enhance the understandability of metamodel snippets. For instance, consider the reduction of deep inheritance hierarchies that are not necessarily required for a metamodel snippet.

Reductions enable metamodel snippets with a lower number of classifiers and features, and with flatter inheritance hierarchies. Our proposed reductions are type-safe in the sense that extensional equivalence between the extracted and the reduced metamodel snippet is guaranteed by design. Our approach relies on 4 operators: Select, Extract, Reduce, and Package. The structure of this paper is as follows. In Section 2, we introduce our metamodel shrinking approach. We critically discuss the results of applying our prototype to 12 metamodels in Section 4.

A comparison of our approach to related work is presented in Section 5, and finally, lessons learned and conclusions are given in Section 6. A metamodel snippet MM snippet is produced by applying our approach to a base metamodel MM base as shown in Fig. Overview of metamodel shrinking approach 1 A metamodel defines a collection of models, i. Each step is accompanied by a dedicated operator. The Select operator identifies based on a set of models M input all metamodel elements MEs, i. However, a selection of metamodel elements driven by collecting only directly instantiated classifiers may not be sufficient.

Indirectly instantiated classifiers, and thus the classifier taxonomy, need to be considered additionally to end up with a valid MM_snippet. This is exactly the task of the Extract operator. The operator produces a set of connected metamodel elements that strictly preserves the structure of the base metamodel; a sketch of the underlying closure computation follows.
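The closure computation can be sketched as a simple worklist traversal; this toy version uses strings instead of Ecore classifiers and is an assumed illustration, not the actual EMF-based implementation:

```java
import java.util.*;

/** Sketch of the inheritance-closure part of the Extract step. Classifiers
 *  are plain strings and the supertype relation is a map. */
class ExtractOperator {
    static Set<String> inheritanceClosure(Set<String> selected,
                                          Map<String, List<String>> supertypes) {
        Set<String> closure = new LinkedHashSet<>(selected);
        Deque<String> work = new ArrayDeque<>(selected);
        while (!work.isEmpty()) {
            for (String sup : supertypes.getOrDefault(work.pop(), List.of())) {
                if (closure.add(sup)) work.push(sup);   // newly reached superclass
            }
        }
        return closure;
    }

    public static void main(String[] args) {
        // Mirrors the running example: Class is explicitly selected, its
        // (simplified) superclass chain is added implicitly.
        Map<String, List<String>> sup = Map.of(
            "Class", List.of("EncapsulatedClassifier"),
            "EncapsulatedClassifier", List.of("StructuredClassifier"),
            "StructuredClassifier", List.of("Classifier"));
        System.out.println(inheritanceClosure(Set.of("Class"), sup));
    }
}
```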

Subsequently, the Reduce operator shrinks the result of the extraction step. To achieve a reduction of metamodel elements, we apply well-known refactorings [Opd92, HVW11] to the extracted MM_snippet. In this way, deep inheritance hierarchies without distinct subclasses are reduced. Indicators for refactorings are often referred to as bad smells. For instance, a class whose subclasses are not distinct is such a smell; by removing the class and linking its subclasses directly to its superclass, the smell is eliminated. Finally, the Package operator serializes the reduced set of metamodel elements into a persistent metamodel.

In the following subsections, we discuss these 4 steps in more detail. The Select operator determines an explicit set of metamodel elements; this explicit set shall by all means be part of the metamodel snippet. We support the selection step by allowing models as input for the Select operator: they serve as a basis to automatically identify the metamodel elements required to represent them. A potential model for selecting metamodel elements, the PetStore data model, is sketched in the accompanying figure. The idea is to create a metamodel snippet of the UML metamodel that contains only what is effectively required for this model. We use this scenario throughout the remaining sections as a running example.

We call these elements implicit, as they are computed from the explicit set of metamodel elements produced by the Select operator. The Extract operator traverses the base metamodel and produces an enhanced set of metamodel elements by addressing (i) explicitly selected metamodel elements, (ii) the inheritance closure of explicitly selected classes, (iii) classes that serve as containers of explicitly selected inherited features, and (iv) features contained by implicitly selected classes.

As a result, an initial metamodel snippet is produced. For instance, EncapsulatedClassifier and StructuredClassifier were implicitly added in addition to the explicit selection, as they are in the inheritance closure of Class. They are considered as a means to provide a connected set of metamodel elements decoupled from the base metamodel. The decoupling is achieved by removing features of classes which are contained by classes not contained in the set of selected metamodel elements.

In our example, 32 features were removed in the extraction step. In the reduction step, implicitly selected metamodel elements are potential candidates for being removed again. (Figure 3: Extracted metamodel elements of our example; note that the features role, part, and ownedAttribute are contained by StructuredClassifier.)

Manually identifying useful reductions is cumbersome when the number of involved metamodel elements is overwhelming and interdependencies between these reductions need to be considered. For instance, in our example, metamodel elements were extracted of which 34 were reduced by applying 27 refactorings as a means to achieve a type-safe restructuring. The Reduce operator indicates extracted metamodel elements for reduction according to a given reduction configuration. Such a configuration can be adapted to control the result of the Reduce operator.

We introduce two concrete reduction configurations (RCs): the exact and the extensive reduction configuration. The reduction of deep inheritance hierarchies in a metamodel snippet is the rationale behind the exact configuration. Implicitly extracted classifiers are all well justified in the context of the base metamodel, but may not be as important for metamodel snippets. In our example, the selected context was narrowed to UML's data modeling capabilities.

As a result, EncapsulatedClassifier is indicated for reduction when applying the exact reduction configuration. (Figure: metamodel elements indicated for reduction with the extensive RC of our example.) Since we did not apply UML's generalization concept for classes in our example, the superclass feature was reduced by the extensive reduction configuration.

Rather than keeping implicitly selected abstract classes that serve as feature containers, the intention in the extensive reduction configuration is to reduce them without exceptions. Both EncapsulatedClassifier and StructuredClassifier are thus indicated for reduction in our example. However, indicating metamodel elements for reduction is only half the way to a useful metamodel snippet, since naively reducing classes may lead to inconsistencies. We encountered three possible inconsistencies in our approach: broken inheritance hierarchies, missing feature containers, and dangling feature end points. In our example, the generalization relationship of Class needs to be relocated, and the feature role requires a new container and a new type when the indicated classes are actually reduced.

To overcome these unintended effects, we conduct a type-safe restructuring by relying on well-known object-oriented refactorings [Opd92] adapted to the area of metamodeling [HVW11] (table: refactoring techniques for type-safe metamodel restructuring). These refactorings achieve (i) relocating generalization relationships by pulling up the relationship ends to super-superclasses, (ii) moving features whose containers were reduced by pushing down the features from superclasses to subclasses, and (iii) reconnecting dangling feature end points by specializing feature types from superclasses to subclasses.

Refactorings are considered as events triggered by the need to usefully conduct the indicated reductions on the metamodel snippet. Pull up inheritance. This refactoring enables relocating generalization relationships to superclasses of the reduced class. In our example, the generalization relationship of Class needs to be relocated, as Class indirectly specializes Classifier and both classes are kept after the reduction.

As a result, Class inherits from Classifier, which serves as the replacement for the more specific classes EncapsulatedClassifier and StructuredClassifier. The indicated reduction for Classifier is relaxed because it has several subclasses; a downward reduction of Classifier would lead to duplicated features in the corresponding subclasses. The sketch below illustrates the pull-up step on the running example.
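As referenced above, the pull-up step can be illustrated on a toy encoding of the running example's hierarchy; again, this is an assumed sketch, not the actual EMF-based implementation:

```java
import java.util.*;

/** Toy 'reduce class' step: remove a middle class and reattach its
 *  subclasses to its superclass, keeping the hierarchy intact. */
class ReduceOperator {
    static void reduceClass(String victim, Map<String, String> superOf) {
        String parent = superOf.remove(victim);            // victim's own superclass
        for (Map.Entry<String, String> e : superOf.entrySet()) {
            if (victim.equals(e.getValue())) e.setValue(parent); // pull up generalization
        }
    }

    public static void main(String[] args) {
        Map<String, String> superOf = new HashMap<>(Map.of(
            "Class", "EncapsulatedClassifier",
            "EncapsulatedClassifier", "StructuredClassifier",
            "StructuredClassifier", "Classifier"));
        reduceClass("EncapsulatedClassifier", superOf);
        reduceClass("StructuredClassifier", superOf);
        System.out.println(superOf);   // {Class=Classifier}, as in the running example
    }
}
```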

We decided to prevent such duplication, as it is not desirable from an object-oriented design perspective. (Figure: refactored metamodel elements of our example.) Push down feature. This refactoring supports moving features from one container to another by going down the inheritance hierarchy.

Features for which a new container is required are moved down to the most generic subclass. This can lead to reverting to a previously reduced container in order to avoid duplicated features; reduced containers are reintroduced in such a situation. In our example, the features part, ownedAttribute, and role are moved to a container compatible with StructuredClassifier, since this class was reduced. Specialize feature type. This refactoring addresses reconnecting dangling references of associations or compositions between classes. Similar to the push down feature refactoring, the most generic subclass is selected for the type specialization.

In our example, the feature role needs to be reconnected to a type compatible with ConnectableElement. As a result, the type of the feature role is changed from ConnectableElement to Property. Searching for the most generic subclass may lead to a situation similar to the push down feature refactoring, i.e., a previously reduced class may have to be reintroduced. The Package operator takes the result of the Reduce operator and reconciles the shrunken set of metamodel elements into a serialized MM_snippet. We applied the extensive reduction configuration, which resulted in 22 classifiers and 45 features. Overall, 8 different classes were actually instantiated, as indicated by the dashed-framed classifiers in Fig. 8.

Using only this set of classes would require injecting the same features multiple times into different classes, which would lead to a metamodel snippet with poor design quality. (Figure 8: Metamodel snippet of our example; the dashed frame indicates explicitly selected metamodel elements of the PetStore data model.) Prototypical implementation. To operationalize our proposed operators, we implemented them based on a pipeline architecture.

While the Select and Extract operators have been straightforwardly implemented on the basis of EMF, the realization of the Reduce operator required more care, because potential side effects of the applied metamodel refactorings needed to be handled. An example in this respect is the reintroduction of previously reduced classes, because they may have effects on the inheritance hierarchies.

For that reason, we heavily exploited EMF's change notification mechanism to trigger precalculated relaxations of refactorings that become obsolete as metamodel shrinking progresses. The Package operator generates, independently of its position in the pipeline, valid metamodel snippets conforming to Ecore. This was helpful for validating and interpreting the results of our operators. We used automatic validation by executing well-formedness constraints and manual validation by inspecting the generated snippets in the graphical modeling editor for Ecore models. Implementation code for metamodel snippets. Customizations in a metamodel's implementation code that also relate to a metamodel snippet require special consideration.

In our running example, the value of the feature ownedElement in Element is a derived value. For that reason, we additionally realized, based on EMF's adapter concept, generic adapter factories that allow the integration of customized implementation code into the generated implementation code of a metamodel snippet, as far as model manipulation operations are concerned. Whenever model elements are created with a metamodel snippet, corresponding model elements are created in the background as instances of the base metamodel. The sketch below illustrates this delegation idea.
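As referenced above, the synchronization idea can be sketched with plain Java property-change listeners standing in for EMF's Adapter/Notification mechanism; all names here are assumptions:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

/** A minimal observable model element; EMF's EObject/Adapter pair plays
 *  this role in the real prototype. */
class ModelElement {
    private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    private String name;

    void setName(String newName) {
        String old = this.name;
        this.name = newName;
        changes.firePropertyChange("name", old, newName);
    }
    String getName() { return name; }
    void addListener(PropertyChangeListener l) { changes.addPropertyChangeListener(l); }
}

class SnippetBaseSync {
    public static void main(String[] args) {
        ModelElement snippetElement = new ModelElement();
        ModelElement baseElement = new ModelElement();
        // Whenever the snippet instance changes, mirror the change into the base instance.
        snippetElement.addListener(evt -> baseElement.setName((String) evt.getNewValue()));
        snippetElement.setName("PetStore");
        System.out.println(baseElement.getName());   // prints: PetStore
    }
}
```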

As a result, the model elements adapt each other in the sense of a delegation mechanism and are kept synchronized via change notifications. Further details regarding our implemented prototype can be found online. (Table: quantitative experiment results in absolute numbers.) The rationale behind our selection of metamodels is mainly based on three criteria. Total classifiers and Total features refer to the size of a metamodel, whereas Extracted classifiers and Extracted features represent the result of the extraction step in the respective experiments.