Several principles need to be considered when designing question items for a 360° assessment questionnaire. To be useful, items must be constructed carefully. A simple way to test each of your items is to ask whether it can be described as the following:
Unidimensional. This means that the item measures only one thing. Perhaps the most common survey-item error is the double question, for example, “To what extent does this person coach and counsel subordinates effectively?” Coaching and counseling are two different activities, of course, and they require separate measurement. The item should be recast as two; otherwise, how does the rater respond if the person being rated is effective on one but not the other?
Free of qualifiers. Words like “usually,” “always,” “never,” and “good” can invalidate the rating scale. For example, if you use a “Strongly agree–Strongly disagree” rating scale, and an item reads, “This person always involves others in decisions that affect them,” how would a rater respond if the person being rated sometimes does this, but not always? Such qualifiers should simply be eliminated.
Observable. The rater must have had the opportunity to see what is being rated. Asking raters to infer and judge is asking for contaminated data. The item, “To what degree is this person skilled in making his or her points clear under stress?” is preferable to “To what degree is this person good at handling conflict situations?” The former can be directly observed, whereas the latter represents an inference or summary judgment. The difficulty with items that require inferences and judgments is that raters will respond to them, but the recipient of the feedback may not be able to interpret the data or find ways to improve on those items.
Tied to the scale. If you use a frequency scale, the items must be written to conform to it. The item “This person shows competency in being able to locate prospective customers effectively” requires a degree scale rather than a rating of frequency.
Clear and understandable. Ideally, every rater needs to interpret each item the same way. That means avoiding any language that is ambiguous, jargonistic, or at too high a level of comprehension. Good rules are to use easy, common words and to write at the lowest educational level of the personnel who will respond to the instrument.
Ratable by the data sources. Since 360° feedback interventions often involve such disparate sources of ratings as self, bosses, peers, subordinates, internal customers, external customers, trained observers, and friends and family, it is critical that the items cover what these raters know something about; otherwise there will be many holes in the feedback. Our practice is to determine the sources of ratings before constructing the instrument, so that we include only those items that respondents are qualified to rate.
Developmental (“So what? Now what?”). Instrument items should center on things that the feedback recipient can do something about. The goal is to generate reliable, valid data that will be useful for developing self-directed action plans for improvement. Getting feedback on “To what degree does this person display an attitude of optimism?” would probably not be so useful to feedback recipients as one that asks, “To what degree is this person able to assist you in analyzing your problem situations at work?”
Aligned with the organization’s vision. The item set should concentrate on what is critical to realizing the hoped-for future of the organization. The connection between what the instrument measures and what the organization stands for and is headed toward should be obvious to all respondents. In a sense, 360° feedback instruments are strong organizational messages about what is important: “If you’re going to stay here and flourish, you need to become highly competent in these areas.” This approach obviates the need for “importance” ratings; every item has already been decided to be critical to carrying out the organization’s intentions.
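The item-writing rules above amount to a checklist that can be partially automated. The following sketch is purely illustrative (the qualifier list and the double-question heuristic are assumptions, not from the source); it flags draft items that contain scale-invalidating qualifiers or that look double-barreled:

```python
import re

# Qualifier words the text says can invalidate a rating scale
# (illustrative list; extend as needed).
QUALIFIERS = {"usually", "always", "never", "good"}

def lint_item(item: str) -> list:
    """Return a list of warnings for a draft survey item."""
    warnings = []
    words = set(re.findall(r"[a-z]+", item.lower()))
    for q in sorted(QUALIFIERS & words):
        warnings.append("qualifier '%s' may invalidate the rating scale" % q)
    # Crude double-question heuristic: "and"/"or" joining two activities
    # often signals that the item measures two things at once.
    if " and " in item.lower() or " or " in item.lower():
        warnings.append("possible double question; consider splitting into two items")
    return warnings

print(lint_item("To what extent does this person coach and counsel subordinates effectively?"))
print(lint_item("This person always involves others in decisions that affect them."))
```

A screen like this cannot replace human review (it cannot judge whether an item is observable or tied to the scale), but it catches the two most mechanical errors before items reach a pilot group.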
Developing instruments for 360° feedback entails choosing appropriate content. If you choose to measure other dimensions of individuals, here are categories to consider:
Skills: Sets of behaviors that are shaped toward “objective” standards. Examples: designing meetings, writing objectives, listening, repairing a machine.
Competencies: Developed abilities. Competencies are more general than skills, and often subsume sets of skills. Examples: intervening in conflict situations, providing performance feedback, elucidating vision, strategic planning.
Traits/Characteristics: Descriptions of the feedback recipient as an individual, his or her “character.” Examples: trustworthiness, intelligence, creativity/innovativeness, decisiveness.
Attitudes/Feelings: Inferred from behavior. Attitudes are predispositions to behave predictably toward an object or class of objects. Feelings are emotions that are experienced in reaction to, or anticipation of, situations and events. Examples: “isms,” optimism, patience, anger.
Behaviors/Leadership Practices: Observable: what the person actually does. Examples: involving people in planning, interceding for subordinates, making “command” decisions, expressing caring.