Students' Attitudes

Chapter 1: Introduction

Daniel Katz and Floyd H. Allport


1. THE REACTION STUDY AND ITS APPLICATION

THE material upon which this volume is based was derived from an investigation of the attitudes, opinions, and practices of students at Syracuse University. In the research there was employed, for the purpose of eliciting the responses of students, a comprehensive questionnaire, called The Syracuse University Reaction Study, which was administered under controlled conditions.[1] In the fall of 1925 a group of students representing various campus interests and activities joined with a faculty committee in the work of gathering attitudes that pertained to college studies and college life. A record was kept by the students on this joint committee of the views and attitudes which they had heard discussed among their fellow students. To this record the observer added his own opinions on campus and curricular questions. After a discussion of the relative importance of this material by the committee, it was turned over to the writers, who spent six months in analyzing it and transforming it into a questionnaire.

A number of preliminary studies were conducted to determine the best techniques to be employed in the final form of the Reaction Study. It was decided that many items should be put into the form of "attitude scales," containing five or seven statements and ranging from a conservative to a radical position upon the issue in question. Opportunity was thus given for a greater range of expression than would have been afforded by an arbitrary "yes" or "no" answer. Preliminary investigation disclosed that a presentation of these scales in the same logical order to every student might result in a space error; that is, there was a slight tendency on the part of some individuals to check the first attitude in the scale, merely because it was placed first. This space error was equalized by employing two final questionnaire forms, each form being given to one-half the students. In one form (A) the material was the same as in the other (B); but the items in it which were in scale form had the order of their statements reversed.
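The counterbalancing described above can be sketched in a few lines of Python; the item wording is hypothetical, invented only to illustrate how reversing the statement order in form B balances the space error across the two extremes of the scale:

```python
# A minimal sketch (hypothetical wording) of the two-form counterbalancing:
# form B presents the statements of each scale item in the reverse of
# form A's order, so a tendency to check the first statement falls on
# opposite ends of the scale for the two halves of the student body.
form_a_item = [
    "strongly conservative position",
    "moderately conservative position",
    "neutral position",
    "moderately radical position",
    "strongly radical position",
]
form_b_item = list(reversed(form_a_item))
# Half the students receive form A, half receive form B.
```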

The co-operation of the daily student paper and the administration of the University were two indispensable aids in the conduct of the study. The Daily Orange aroused the interest of the students in the project, but did not reveal any of the content of the questionnaire. Students understood that it was a fact-finding survey sponsored by their own leaders and knew a little of its general nature; of the specific questions that were to be put to them they knew nothing. In this way there could be secured the purely individual opinions of each student, free from the effects of group discussion, rumor, or popular agitation. The university administration enlisted the support of the faculty and dismissed all classes for two hours on May 10, 1926. Each student had been notified by postal card of the exact building and room to which he was to report at a specified hour. A "double cut" was announced as the penalty for failure to report, although the co-operative nature of the project was emphasized rather than its compulsory aspect. Faculty members and seniors from the student committee took charge of the rooms in which a total of 4,248 students filled out the questionnaires.

In order to obtain a frank expression of opinion, students were asked not to sign their names. To insure complete anonymity, no handwriting was called for in any part of the questionnaires. Students merely indicated with a check mark the particular statement out of a number of possibilities which most nearly approximated their opinion. Another purpose was served by this device, namely, the standardizing of the stimulus-response situation for all the students.

It has been found that the mere presence of others affects the reaction of the individual in the group. Individuals, for example, probably tend to avoid extremes in judgments made in the presence of others, even though such judgments are not known to the other members of the group.[2] In administering the Reaction Study, therefore, the precaution was taken of dividing students into groups of varying sizes in order to determine whether the size of the group had any marked effect upon the reaction of the individual. A control group of three hundred students was also selected, the individuals of which were to work in places where they would be as nearly alone as possible. This latter group were instructed to take the questionnaires to their rooms and fill them out alone, at the same time that their fellow students were working in the various classrooms. In this way the writers had a check upon the factor of social stimulation.

The criticism is often made that, in the use of the questionnaire technique, it is difficult to obtain the serious co-operation of all the subjects, and especially to know whether or not such co-operation has been given. Recognizing the truth of this objection, the writers tried to reduce its force to a minimum by a number of precautions, some of which may be noted here. In the first place, the questionnaire was so formulated as to avoid any phrasing which might lend itself to humorous interpretation. At the same time the material employed was of genuine interest to the students, since it dealt for the most part with their problems. Students, moreover, had an opportunity to express their opinions, on questions concerning which they had opinions, anonymously and without fear of discipline or censure. The students also understood that the results of the study would be used toward the improvement of any undesirable conditions then existing. Another factor making for seriousness was the absence of crowd influences. The groups for the most part were small, and no talking or communication was allowed while the study was in progress.

In spite of these precautions, the objection might still be raised: How do you know students did take the study seriously? May they not have gone through the questionnaire in listless fashion, checking more or less at random? Probably there were some students who did so; but we may venture that the great majority did not check their questionnaires in random fashion. Sufficient evidence from the actual data is at hand to substantiate this claim. For one thing, students in certain schools and classes showed a marked consistency among themselves in the attitudes which they checked, as contrasted with other groups of students. So great a consistency probably could not have been produced by chance checking. Business Administration students, for example, consistently differed in their opinions and reports of their behavior from students in the College of Fine Arts. These differences, moreover, were substantiated by the opinions of members of the faculty who were conversant with the local situation. Then, too, a random or chance checking would have resulted in a flat distribution curve, except perhaps for the extreme positions. When the results were thrown into graphic form such curves were found to be very infrequent. On the other hand, on certain issues vital to all students, the same distribution of opinion (a skewed or irregular one) was found in every college on the campus. If students had been disposed toward a random checking, we should scarcely expect one certain, irregularly shaped curve to be characteristic of every college.[3] Apparently factors much more definite than chance or careless checking operated among the students to give this particular distribution.

In the formation of the questionnaire, considerable attention was given to the principle of "indirection." Questions were not hurled point blank at the students. Often the real object of a question was not apparent. In considerable measure, the favorite rationalizations of students themselves were also included as possibilities to check, in order to elicit a genuine and unguarded reaction.[4]



2. TREATMENT OF THE DATA

The data from the 4,248 questionnaires were compiled by the use of a Powers Accounting Corporation punch and counting sorter. By means of a code the answers contained in the questionnaire were punched in a series of cards, and these cards were then run through a counting sorter. To check the accuracy of the various steps in the procedure, 300 Liberal Arts questionnaires were selected at random and the original checking carefully compared with the records used in the mechanical process. Mistakes made in coding the 300 questionnaires or in punching cards were added for each of the 75 items separately, and the proportion of the total checks they constituted for that item was computed. This proportion of error is given in the following tables under the caption of mechanical error. Where no mechanical error is listed, no mistakes were found for that particular item.

In Table I the mechanical error of Item 1 is 0.45 per cent. This means that of the 4,409 replies made in Liberal Arts to this question (Item 1 being a multiple-check item), 20 were tabulated incorrectly. Inasmuch as Liberal Arts was the first college for which the data were compiled, the clerical workers were probably not then as expert in coding questionnaires and in punching cards as they later became. Hence the assumption seems justified that the mechanical error was at least no greater in the other colleges than in Liberal Arts. It should be noted also that, as far as our results are concerned, many of the mistakes were compensating. In addition, the totaling of mistakes which appeared in the coding and in the punching meant that in some cases an error was counted twice. This would make the mechanical errors less than the figures given in the following tables.
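As a quick check of the arithmetic behind the figure quoted above, the error proportion can be recomputed from the two counts given in the text:

```python
# Mechanical error for Item 1 in Liberal Arts, using the counts stated
# in the text: 20 tabulation mistakes out of 4,409 replies.
mistakes = 20
total_replies = 4409
mechanical_error = mistakes / total_replies  # proportion of erroneous checks
# Expressed as a percentage this is about 0.45 per cent,
# agreeing with the figure reported for Table I.
```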

Out of the 75 items included in the Reaction Study, the results of about 57 are presented in this volume. The data for most of the remaining items have been compiled or can be made available to anyone who may be interested in further possibilities of research.

Thirty-three of the 75 items included in the Reaction Study were in the form of scales, with steps carefully graded to bring out the range and incidence of opinion on a specific issue. It was assumed that these scales constituted a rough psychological continuum. Professor L. L. Thurstone has shown, however, that such devices are not true scales, since we have no real unit of measurement and hence no way of equating our scale steps.[5] Frequency diagrams erected on the base line of the scales used in the Reaction Study cannot be interpreted legitimately as frequency surfaces. In the treatment of the data obtained from the scales of the Reaction Study, the writers, therefore, have not used the measures ordinarily applied to frequency distributions in comparing the distributions of two groups on any given scale. It is true that the median has been employed, but only to show the shifts in direction of various groups toward one or the other extreme of the scale, and not to measure the distance of those shifts. This explains why the probable errors of the medians have not been computed in the tables in which the medians are included. The essential difficulty with the scales of the Reaction Study is our lack of knowledge concerning the equality or inequality of the class intervals. Hence, in comparing two groups on their checking of the same scale, aside from the use of the median just noted, no attempt is made to give any single measure or index for each group as a unit; but instead the groups are compared position for position in their distribution upon the scale. Although the step intervals of the scales employed cannot be guaranteed to be equal in magnitude, it is fairly certain that they are placed in correct order from one logical extreme to the other. The fewness of the steps, combined with care in selecting some one common variable for them all and in wording the degree of that variable, has practically assured this result.
Differences between two groups in the proportion of the numbers checking a particular step or position will be regarded as significant and will be discussed in the following chapters only when they are at least three times their probable error (P.E.). Wherever differences are obviously many times their P.E., no P.E. is given. Wherever the discussion deals with a number of differences between groups on the same item, the P.E. cited is the P.E. which is the largest in relation to the differences to which it refers. The formula employed [6] is that of the probable error of differences in percentages:

   PE(p1 − p2) = √(PE²(p1) + PE²(p2)),  where PE(p) = .6745 √(pq/n)

In the use of this formula no assumption is made that the scales of the Reaction Study yield results which, when plotted, conform to a normal distribution. The only assumption made is that, if a large number of samples of Syracuse students were taken to ascertain what proportion of them hold one certain attitude (one particular step on the scale), the resulting measures would give a normal curve. For example, if thirty groups of two hundred students each were selected at random and account taken of the number in each group who hold a certain attitude, the large differences from the mean of the thirty groups would be fewer than the small differences.
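The probable-error formula and the three-times-P.E. criterion described above can be sketched in a short Python computation. The group percentages and sample sizes below are hypothetical, supplied only to illustrate the rule; they are not figures from the Reaction Study:

```python
import math

def probable_error(p, n):
    """Probable error of a proportion p observed in a sample of n:
    PE(p) = .6745 * sqrt(pq / n), with q = 1 - p."""
    q = 1.0 - p
    return 0.6745 * math.sqrt(p * q / n)

def pe_of_difference(p1, n1, p2, n2):
    """Probable error of the difference between two independent
    proportions: sqrt(PE(p1)^2 + PE(p2)^2)."""
    return math.sqrt(probable_error(p1, n1) ** 2
                     + probable_error(p2, n2) ** 2)

# Hypothetical figures: 40 per cent of 600 students in one group check
# a given scale step, against 30 per cent of 500 in another group.
p1, n1 = 0.40, 600
p2, n2 = 0.30, 500
diff = p1 - p2
pe = pe_of_difference(p1, n1, p2, n2)
# The book's criterion: the difference is treated as significant only
# when it is at least three times its probable error.
significant = diff >= 3 * pe
```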

In some of the items of the Reaction Study students were asked to check only one of a number of possible answers; in other items they were not so limited. For this reason, some of the tables which present the results of the study include two rows of totals, the first of which refers to the total number of checks and the second to the total number of students checking. In all tables, however, percentages are computed on the total number of students who have answered the item in question, and not on the total number of checks recorded for that question. In single-check items these two figures, of course, are identical.

For convenience, the writers have been compelled to use certain expressions rather inexactly. The following meanings should be understood for such expressions. The word "University" does not refer strictly to all the colleges of Syracuse University, but only to certain ones, seven in number, for which the data were tabulated. These colleges, referred to as the "University," are Liberal Arts, Business Administration, Fine Arts, Applied Science, Forestry, Home Economics, and the Graduate School. Whenever a cross comparison is made between the results on two items, only Liberal Arts students are being dealt with, unless special reference is made to other groups. Comparisons between groups are made on the basis of the percentage of each group checking a certain attitude. That is, groups are not compared as wholes or entities, but only as to their relative numbers who check a given position. Wherever the term "students" is used it should be understood that the reference, unless otherwise stated, is to the students of Syracuse University, and not to college students in general. Finally, the findings of the Reaction Study apply only to the students of Syracuse University in May, 1926. Although for ease of description the present tense has in a few cases been used, it should be borne in mind that it is the situation of 1926 that is really being described. What changes in attitudes may have occurred between that date and the present, we have no reliable means of knowing. A further discussion of this matter will, however, be given at the end of Chapter XVIII.

Notes

  1. Arrangements have been made for the publication of this form, under the title, A Reaction Study for the Measurement of Student Opinion, with C. H. Stoelting Co., Chicago, Ill. In that publication the form has been so revised as to suit the needs of colleges and universities in general; but the original numbering of its items has been retained so as to agree with the numbers used in the present work.
  2. F. H. Allport, Social Psychology, pp. 274-78.
  3. For example, see Table LX where the same general bimodal distribution is found in every one of the six undergraduate colleges. It is unlikely that chance checking would have produced such a curve in six cases without exception. Or, turn to Table LXVII, where Catholic men duplicate almost exactly the checking of Catholic women. It seems reasonable to infer that this similarity is due to the factor of a common religion rather than to random checking.
  4. Other, more general, criticisms of the questionnaire method have been considered in the Foreword, pp. vii-x.
  5. L. L. Thurstone, "Attitudes Can Be Measured," American Journal of Sociology, Vol. XXXIII (1928), 529-54; "A Mental Unit of Measurement," Psychological Review, Vol. XXXIV (1927), No. 6.
  6. G. U. Yule, Introduction to the Theory of Statistics, Chapter VIII, "Simple Sampling of Attributes."
