
Jaeger [ 1 ] mentions that G2C interactions offer the widest range of information and services. The main purpose is improved relations between a government and its citizens. West [ 2 ] argues that the exchange is no longer one-way: instead of citizens being forced to go to government offices, the government now has the possibility of reaching citizens actively via the Internet. At the same time, citizens benefit from online access at any time they want. In a survey undertaken by the European Commission [ 3 ], the most frequently mentioned reasons for citizens to use e-services are saving time and gaining flexibility.

A considerable number of publications report the development and testing of ways to enhance the usability of e-Government services, among them De Meo et al. Other authors also propose ways to support the easier selection of e-services by users. To ensure e-Government success, it is important to evaluate the effectiveness of the information and services offered (see Reddick [ 9 ], Janssen et al., and Peters et al.). There are models for evaluating e-Government in a user-centric approach [ 12 ], and user satisfaction is recognized as an important factor in most models for developing and maintaining e-Government projects.

Still, there is little research that shows how this user satisfaction can be measured in the context of e-Government (see, e.g., Huang et al.), and there is little empirical research that shows under which conditions user satisfaction arises (though see [ 20 ]). To measure user satisfaction, it is theoretically necessary to know all expectations. Web users have formed certain expectations regarding efficiency and effectiveness [ 22 ], but also regarding, for example, web design [ 23 ]. If these expectations are fulfilled, it can be assumed that users will be satisfied with the system.

In this concept of satisfaction, user satisfaction is regarded as the outcome of a comparison between expectations and the perceived performance level of the application [ 24 , 25 ]. It is therefore expected that user satisfaction with e-Government sites is achieved if the expectations of their users are fulfilled. In this section, a brief overview of some questionnaires developed to measure user satisfaction in different contexts is given.

Bailey and Pearson [ 26 ] developed a tool with 39 items to gather information about user satisfaction with computers. The questionnaire is, however, more than 20 years old. At the time it was constructed, computers had very limited capabilities and were mostly used for data processing; therefore, several items deal solely with the satisfaction of the data-processing personnel.

Technological advancements and the development of interactive software led to the need to provide usable interfaces. Doll and Torkzadeh [ 27 ] developed a questionnaire with 12 items designed to measure ease of use with specific applications. They postulate that user satisfaction is composed of five factors: content, accuracy, format, ease of use, and timeliness. Harrison and Rainer [ 28 ] confirmed the validity and reliability of this tool and showed that it could be used as a generic measuring instrument for computer applications. He regards usability as the prime factor influencing user satisfaction, hence the name.

The analysis of his data revealed three factors influencing user satisfaction. The focus of most satisfaction scales lies in the development of application-independent instruments to enable their use in various contexts. Considering the broad range of tools and applications, this is often a very difficult task. However, it can be assumed that users have context-specific expectations that arise depending on the information system used. In recent years, satisfaction scales and tools for specific areas such as online shopping [ 30 – 33 ], company websites [ 34 ], business-to-employee systems [ 19 ], mobile commerce interfaces [ 35 ], knowledge management systems [ 36 ], Intranets [ 37 ], the mobile Internet [ 38 ], mobile banking [ 39 ], ERP systems [ 40 ], and the information systems of small companies [ 41 ] have been developed.

Based on the screening of theoretical approaches and empirical data, a first item pool was generated by the authors and a full-time e-Government manager. These items were screened and consolidated into a first draft of ZeGo. The core element is a short questionnaire of 15 items designed to measure user satisfaction with e-Government portals.

The version in Table 1 was used for the first validation of ZeGo. Four questions of ZeGo (1, 11, 12, and 15) are open ended, and question 14 is binary. These items are disregarded in the analysis due to their scale. For all other questions, Likert scales were used (see Table 1). In this method, respondents specify their level of agreement or disagreement with a positive statement.

A higher number expresses a higher level of agreement and thus satisfaction. Interval measurement is assumed for the rating scale, allowing the corresponding statistical validations. The assumption of interval measurement for a rating scale without prior empirical validation is a widely used research practice [ 42 ]. To ensure high reliability and validity, the use of between five and seven categories for a Likert scale is recommended [ 43 ].


With the five-point scale, participants have two options for a positive attitude, two for a negative one, and one neutral option. According to Mummendey [ 44 ], participants choose the middle option for multiple reasons. It is crucial that participants do not have to spend more than 10 to 15 minutes answering the survey; this is in line with recommendations made by Batinic [ 45 ]. Referring to similar instruments [ 27 , 29 ], a maximum of 15 items was chosen. To validate ZeGo, it was implemented as an online survey and tested in cooperation with the portal of the canton of Basel, Switzerland.

The e-Government website of Basel had about 30, unique visitors per month. Recruitment of participants was initiated by a banner placed on the corresponding website. On the following pages, all 15 questions (see Table 1) were presented on separate screens. When incomplete answers were submitted, posterior error messages [ 46 ] forced users to choose one of the given options. After the user satisfaction questions had been answered, the survey ended with nine demographic items.

The demographic part was put at the end to avoid the user satisfaction answers being influenced by concerns that feedback could be traced back [ 47 ].


The first version of ZeGo was conducted in January for one month. In total, citizens participated in the survey, leading to valid responses (14 responses had to be excluded for various reasons; see Section 3). Regarding gender distribution, there was a slight overrepresentation of male participants. Before the data analysis, 14 participants had to be excluded.

Six people were excluded because they answered all 13 items exclusively with the best or the worst item score; in these cases, we assume that the participants had no real interest in the survey and just wanted to take part in the raffle. The sample size for the item analysis therefore consists of the remaining participants.
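This exclusion step is essentially a straight-lining filter. The following is a minimal sketch in Python/pandas of how such a filter can be applied; the file name, column names, and the coding of 1 (worst) to 5 (best) are assumptions for illustration, not details taken from the study.

```python
import pandas as pd

# Hypothetical layout: one row per respondent, Likert items coded 1 (worst) to 5 (best).
likert_cols = [f"q{i:02d}" for i in range(1, 14)]      # 13 Likert-scaled items (names assumed)
responses = pd.read_csv("zego_responses.csv")          # file name assumed

# Straight-lining: every item answered with the same extreme score (all 1s or all 5s).
all_worst = responses[likert_cols].eq(1).all(axis=1)
all_best = responses[likert_cols].eq(5).all(axis=1)

cleaned = responses[~(all_worst | all_best)].copy()
print(f"Excluded {int((all_worst | all_best).sum())} straight-lining respondents; "
      f"{len(cleaned)} remain for the item analysis.")
```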


Table 2 gives an overview of the total missing values for each analyzed item after exclusion of the 14 participants. Item 15 shows a particularly high number of missing values. Missing values were replaced using the expectation-maximization (EM) algorithm: EM is an iterative method which derives the expectation of the missing data based on estimates of the variables and computes parameters maximizing this expectation. The replacement of missing values with EM has proven to be a valid and reliable method and outperforms listwise and pairwise deletion in many respects [ 48 , 49 ].
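To make the EM replacement step concrete, here is a strongly simplified sketch of EM-style imputation under a multivariate normal model: it alternates between estimating the mean and covariance from the completed data and replacing missing cells with their conditional expectation. The function name and defaults are mine, the covariance correction of full EM is omitted for brevity, and the study itself presumably used a standard statistics package rather than code like this.

```python
import numpy as np

def em_impute(X, n_iter=50, tol=1e-6):
    """Simplified EM-style imputation: missing cells (np.nan) are replaced by their
    conditional expectation given the observed cells under a multivariate normal model."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    X_filled = np.where(missing, np.nanmean(X, axis=0), X)   # start from column means

    for _ in range(n_iter):
        mu = X_filled.mean(axis=0)                  # parameter estimates from completed data
        sigma = np.cov(X_filled, rowvar=False)
        X_old = X_filled.copy()
        for i in range(X.shape[0]):                 # conditional expectation per row
            mis = missing[i]
            if not mis.any():
                continue
            obs = ~mis
            s_oo = sigma[np.ix_(obs, obs)]
            s_mo = sigma[np.ix_(mis, obs)]
            X_filled[i, mis] = mu[mis] + s_mo @ np.linalg.solve(s_oo, X_filled[i, obs] - mu[obs])
        if np.max(np.abs(X_filled - X_old)) < tol:
            break
    return X_filled
```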

As mentioned before (see Section 3), there are open-ended, binomial, and demographic items; therefore, only 11 questions are included in the item analysis (see Table 3). For interval-scaled item responses, it is advisable to calculate the discriminatory power as the product-moment correlation of the item score with the test score [ 50 ].

The discriminatory power of an item describes the item's correlation with the total score of the test. Cronbach's α describes to which extent a group of indicators can be regarded as measurements of a single construct (here: user satisfaction). Table 4 lists the discriminatory power and Cronbach's α for each item. The discriminatory coefficients range between. Three items show a coefficient below. According to Borg and Groenen [ 51 ], the lowest acceptable discriminatory power is.

No item falls in this category. All of the items are in an acceptable to good range. If the items of a test correlate with each other, it can be assumed that they measure similar aspects of the common construct. This topic can be explored in the intercorrelation matrix. It shows significant correlations for all items with no negative correlations.

The intercorrelations of the 11 items are relatively moderate. The average homogeneity index for the scale is at. Cronbach's α for the first version of ZeGo is relatively high, indicating good reliability for this instrument. Table 4 shows that the internal consistency would increase most by the exclusion of item 6.
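The quantities reported in this item analysis are straightforward to compute. Below is a minimal Python/pandas sketch assuming a respondents-by-items DataFrame of Likert scores; the function names are mine, and the discriminatory power is computed here as the corrected item-total correlation (each item against the sum of the remaining items), whereas the original analysis may have used the uncorrected item-test correlation.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def discriminatory_power(items: pd.DataFrame) -> pd.Series:
    """Corrected item-total correlation: each item vs. the sum of the remaining items."""
    total = items.sum(axis=1)
    return pd.Series({c: items[c].corr(total - items[c]) for c in items.columns})

def homogeneity_index(items: pd.DataFrame) -> float:
    """Mean inter-item correlation (average off-diagonal entry of the correlation matrix)."""
    r = items.corr().to_numpy()
    return r[~np.eye(r.shape[0], dtype=bool)].mean()

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha of the remaining scale when each item in turn is dropped; a value clearly above
    the full-scale alpha (as reported for item 6 in Table 4) flags a problematic item."""
    return pd.Series({c: cronbach_alpha(items.drop(columns=c)) for c in items.columns})
```

Applied to the 11 Likert-scaled ZeGo items, these functions correspond to the discriminatory power and α values reported in Table 4, the mean inter-item correlation used as homogeneity index, and the α-if-item-deleted values that flag item 6.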

The first validation of ZeGo shows promising results. At the same time, it becomes clear that there are some problematic items that need to be modified or deleted. The discussion of the scale and the items of ZeGo forms the basis for the second version of ZeGo (see Section 4).

There is a clear tendency for responses to lie in the upper part of the five-point scale.


This finding is not surprising. For ZeGo, this means that the instrument must differentiate well in the upper scale range. Here, only the problematic items will be discussed. An item can be regarded as problematic if its statistical parameters show insufficient values (see Section 3). Item 6. The statistical parameters of item 6 are somewhat weaker than those of the rest: it shows low homogeneity, discriminatory power, and reliability.

It can be argued that design and colors do not necessarily have to be connected to the other user satisfaction questions: a participant can be satisfied with an e-Government site while disliking its design and colors. However, this item is useful if design and colors are so poor that user satisfaction decreases. Because of this, and despite the weaker statistical values, the item will be temporarily retained for the second validation. Item 8. This item shows relatively low homogeneity and discriminatory power.

Furthermore, it seems that the completeness of a website is difficult for users to assess. For the same reasons as item 6, the question will remain temporarily in the survey for the second validation. Items 9 and 10. The statistical parameters of these two items are good. A qualitative analysis of the registered comments raised the issue of whether the questions are sufficient for examining content quality.


More information about the content would facilitate the work of webmasters in improving content-specific website aspects. Therefore, two new items concerning the content were added for the second version. With an average completion time of about 6 minutes, the questionnaire stays well below the 10 to 15 minute limit mentioned above. Items 14 and 15. These two items stand out due to the high number of users who did not answer them.

The two items were intended to investigate whether other e-Government sites are better. It seems, however, that many users do not know other e-Government portals. Additionally, both questions cover nearly the same issue as item 13. Due to this similarity and the high percentage of missing answers, these items were discarded from the second version of ZeGo. The item order was also reconsidered: to facilitate filling out the survey, all Likert-scaled questions were grouped together, and all open-ended questions were placed at the end.

Only item 11 was placed at the beginning to ease the start of ZeGo. The assumption that every user pursues a goal on a website led to the decision that writing down these goals would be an easier task to begin with than reflecting on qualitative aspects of the website.

The first validation of the 15 ZeGo items led to the deletion of two items (14 and 15) and the development of two new questions concerning the content.

We considered the scree plot and the Kaiser criterion, which specifies that factors must have eigenvalues greater than one. Furthermore, a Minimum Average Partial (MAP) test was conducted as a more reliable, statistically grounded criterion.

Components are retained as long as they represent more systematic than unsystematic variance. In this respect, the criterion is very close to the actual goal of factor analysis itself, which makes it a convenient method in this context. Construct validity was analyzed using Pearson correlation coefficients, following the criteria recommended by Fisseni. Table 1 shows descriptive statistics for the samples. The study sample is comparable to other chronic low back pain populations with respect to the distribution of men and women, the spread of age, duration of pain, degree of impairment, and pain intensity.
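The two retention criteria mentioned above, the Kaiser criterion and Velicer's Minimum Average Partial test, can be sketched as follows. This is a minimal NumPy illustration operating on a respondents-by-items data matrix; the function name is mine, and the original analysis was presumably carried out in a standard statistics package rather than with this code.

```python
import numpy as np

def kaiser_and_map(items: np.ndarray):
    """Number of components suggested by the Kaiser criterion (eigenvalue > 1) and by
    Velicer's Minimum Average Partial (MAP) test for a respondents-by-items matrix."""
    R = np.corrcoef(items, rowvar=False)                 # item intercorrelation matrix
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    kaiser = int(np.sum(eigvals > 1.0))

    k = R.shape[0]
    off_diag = ~np.eye(k, dtype=bool)
    avg_sq_partial = [np.mean(R[off_diag] ** 2)]         # m = 0: plain correlations
    for m in range(1, k - 1):
        loadings = eigvecs[:, :m] * np.sqrt(eigvals[:m])
        partial_cov = R - loadings @ loadings.T          # variance left after m components
        d = np.sqrt(np.diag(partial_cov))
        partial_corr = partial_cov / np.outer(d, d)
        avg_sq_partial.append(np.mean(partial_corr[off_diag] ** 2))

    # Retain the m that minimizes the average squared partial correlation: beyond that
    # point, the remaining variance is mostly unsystematic.
    return kaiser, int(np.argmin(avg_sq_partial))
```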

Missing data for individual items ranged from zero to three missing values per item. No participant obtained the minimum (0) or the maximum possible score. Therefore, the data were considered appropriate for factor analysis. Principal axis factoring revealed a four-factor solution, with eigenvalues between 9. The communalities of the items ranged from 0. Both the Kaiser criterion and the scree plot supported a four-factor solution.
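As an illustration of this extraction step, a principal axis factoring can be run with the third-party Python package factor_analyzer; the file name, column layout, and the promax rotation in the sketch below are assumptions, since the excerpt does not state the software or rotation that was actually used.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer   # third-party package: pip install factor_analyzer

# Hypothetical file: one row per participant, one column per QBPDS item.
items = pd.read_csv("qbpds_items.csv")

# Principal axis (principal factor) extraction with four factors, as suggested by the
# Kaiser criterion and the scree plot; the oblique promax rotation is an assumption.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))
print("Communalities:", fa.get_communalities().round(2))
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"F{i + 1}" for i in range(4)])
print(loadings.round(2))
```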

In contrast, the Minimum Average Partial test revealed only three factors. Statistical criteria provide good guidance, but ultimately the retained components should also be convincing in terms of content and clinical meaningfulness. After taking into consideration the results of each method as well as theoretical considerations, we decided on a four-factor solution. The first factor includes everyday activities involving bending. The second factor includes moving and walking. The third factor involves brief effortful activities, such as lifting, carrying, reaching, overhead movements, pushing, or pulling objects.

The fourth factor includes the three main postures (sitting, standing, and lying in bed), and these items imply longer time periods. Accordingly, we labeled the factors along these lines. The item-total correlations ranged from 0.

The present study successfully performed the first cross-cultural adaptation of one of the most important disability scales, the QBPDS, into German. The major goals were the evaluation of its psychometric properties and the provision of additional empirical support for the underlying factors.

Internal consistency for the four subscales was slightly lower, although in keeping with the results of other studies. Based on the quality criteria of Terwee et al,26 even the smallest value, found for the fourth factor, is still within a good range. The higher these values, the better the scores can also be interpreted on an individual level. In this respect, it is not certain whether the fourth subscale is reliable enough to allow individual diagnostics in addition to group comparisons.

Future studies including confirmatory factor analysis could clarify this issue. Convergent validity was confirmed by a high correlation with the PDI. However, the correlation between the QBPDS and the RMDQ was weaker than expected, raising the question of whether the two scales measure the same aspects of functional disability. A comparison of the two questionnaires suggests that the RMDQ uses a broader concept of disability, including items related to avoidance, protection behavior, pain, and appetite. Divergent validity of the QBPDS can be assumed to be good, with a moderate correlation with pain intensity suggesting that it measures disability relatively independently of pain.

The investigation of the factor structure of the QBPDS was of particular interest, as only two studies aside from the original study by Kopec et al have examined the factor structure of this measure, and they reported heterogeneous results. A high sum score means that the patient is generally disabled in everyday activities, but it is not clear which particular activities are impaired and could be practiced during treatment. Furthermore, the subscales indicate the degree of impairment, so that the clinician can estimate whether a short or a longer version of the treatment is necessary. In the context of a cultural adaptation and validation study, where possible cultural differences can emerge, we preferred to first perform an exploratory factor analysis, which tries to find a factor structure in the actual data.

In addition, two alternative subscales are suggested. This distinction gives the therapist proper guidance as to which activities could be used for exposure sessions. Moreover, other measures such as the Photograph Series of Daily Activities, which assesses the perceived harmfulness of different movements, also differentiate between intermittent loads and long-lasting postures.

The four factors explain a proportion of variance that deviates from other studies. One possible explanation for this finding could be differences between the study samples. Compared to other studies, our sample may be subject to a selection bias, as many participants showed elevated scores on specific measures of fear avoidance such as the Tampa Scale of Kinesiophobia. Accordingly, it is possible that our sample represents a subgroup of back pain patients. Thus, the generalizability of our findings might be limited. Based on the current results, further studies with bigger sample sizes are required to compare the different factor structures, also with confirmatory factor analysis.

Adequate measurements, for example of pain-related disability, are crucial for the selection of patients and the evaluation of treatment effects. Future studies should analyze the predictive value of the QBPDS and its sensitivity in order to identify the patients who benefit most from exposure-based treatments. In summary, this psychometric study supports the use of the German version of the QBPDS as a reliable and valid self-report instrument for the assessment of functional disability.


We recommend the QBPDS when the assessment of activity limitations is of particular interest, because its items focus solely on constraints in basic daily activities. This may be of particular relevance in studies evaluating psychological treatments, especially trials analyzing cognitive behavioral treatments such as graded in vivo exposure, which include engaging in previously avoided daily activities.



Abstract

Study design: Cross-cultural translation and psychometric testing.

Objective: The purpose of the present study was to examine the reliability and validity of a cross-cultural adaptation of the German Quebec Back Pain Disability Scale (QBPDS) in the context of a randomized controlled trial evaluating the effectiveness of graded in vivo exposure in chronic low back pain patients.

Methods: The cross-cultural adaptation followed international guidelines.

Keywords: Chronic pain, back pain, questionnaire, functional disability, German, validation.

Methods

Data collection: A total of persons with chronic back pain (defined as back pain that has persisted for 3 months or more) and German mother tongue were recruited via the Internet.

Translation and cross-cultural adaptation: The translation and cross-cultural adaptation process followed the guidelines of Beaton et al.

Measures: The QBPDS measures functional disability related to basic daily activities, which can be classified into six domains.

Statistical analysis: Floor and ceiling effects were analyzed by calculating the number of individuals obtaining the lowest (0) or the highest possible QBPDS score.
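As a small illustration of this floor and ceiling analysis, the counts can be obtained as sketched below; the file name is hypothetical, and the scoring (items rated 0 to 5 and summed to a total score) follows the standard QBPDS rather than details given in this excerpt.

```python
import pandas as pd

# Hypothetical file: one row per participant, one column per QBPDS item (each scored 0-5).
items = pd.read_csv("qbpds_items.csv")
total = items.sum(axis=1)                    # QBPDS total score

n = len(total)
max_score = 5 * items.shape[1]               # 100 for the standard 20-item version
floor_n = int((total == 0).sum())            # participants at the lowest possible score
ceiling_n = int((total == max_score).sum())  # participants at the highest possible score
print(f"Floor effect: {floor_n}/{n} ({100 * floor_n / n:.1f}%), "
      f"ceiling effect: {ceiling_n}/{n} ({100 * ceiling_n / n:.1f}%)")
```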