Version 01 Codebook
-------------------

                    CODEBOOK INTRODUCTION FILE
                    1988 SUPER TUESDAY STUDY (1988.S)

USER NOTE:  This file has been converted to electronic format via OCR
scanning.  As a result, the user is advised that some errors in character
recognition may have resulted within the text.

            AMERICAN NATIONAL ELECTION STUDIES
            1988 PRESIDENTIAL NOMINATION PROCESS STUDY
            (1988 'SUPER TUESDAY' STUDY)
            CODEBOOK

            ICPSR ARCHIVE NUMBER 9093

TABLE OF CONTENTS

Note: >> sections in the codebook introduction and codebook appendix can be
navigated in the machine-readable files by searching ">>".

INTRODUCTORY MATERIAL (file intstues.cbk)
---------------------
   >> SUPER-TUESDAY GENERAL INFORMATION
   >> SUPER-TUESDAY STUDY DESCRIPTION
   >> SUPER-TUESDAY SAMPLING AND FIELD ADMINISTRATION
   >> CODEBOOK INFORMATION

CODEBOOK
--------

APPENDICES (file appstues.cbk)
----------
   >> SUPER-TUESDAY BENCHMARK FREQUENCIES
   >> SUPER-TUESDAY MOST IMPORTANT PROBLEM CODE


>> SUPER-TUESDAY GENERAL INFORMATION

The NES/CPS 1988 "Super Tuesday Study" was conducted by the Center for
Political Studies of the Institute for Social Research, under the general
direction of Warren E. Miller.  Santa Traugott is the Director of Studies.
Zoanne Blackburn managed the study for the Survey Research Center's
Telephone Facility.  Giovanna Morchio of the NES project staff prepared the
data for release.

The study was funded by National Science Foundation grant #SES-8341310,
providing long-term support for the National Election Studies.  Since 1978,
these studies have been designed by a National Board of Overseers, the
members of which meet several times a year to plan the content and
administration of the major study components.  Board members during the
planning of the Super Tuesday study included: Morris P. Fiorina, Harvard
University, Chair; Richard A. Brody, Stanford University; Stanley Feldman,
University of Kentucky; Edie N. Goldenberg, University of Michigan; Gary C.
Jacobson, University of California, San Diego; Stanley Kelley, Jr.,
Princeton University; Donald R. Kinder, University of Michigan; Thomas
Mann, The Brookings Institution; Douglas Rivers, University of California,
Los Angeles; Ray Wolfinger, University of California, Berkeley; and Warren
E. Miller, Arizona State University, ex officio.

The study was planned by a committee consisting of Professors Feldman
(Chair), Goldenberg, Brody, and Rivers from the Board of Overseers, and
Henry Brady, University of Chicago; Larry Bartels, University of Rochester;
and Steven J. Rosenstone and Donald R. Kinder, University of Michigan.


>> SUPER-TUESDAY STUDY DESCRIPTION

The NES 1988 Study of the Presidential Nomination Process (the
Super-Tuesday study) was conducted in two waves.  The pre-election wave of
the study was in the field between January 17 and 6:00 p.m. on March 8,
1988.  A total of 2076 interviews were distributed over the seven weeks; a
larger number of interviews were taken in the week between the Iowa caucus
and the New Hampshire primary, and during the week preceding Super Tuesday
itself.  The response rate was 59.3%.

(The file actually contains 2117 records; 41 "minimum partials" are
included.  These are cases where the respondents broke off the interview
for good before the G section.  While these cases are included in the
dataset, they are not included in the calculation of the response rate.
Nine of these respondents gave us a post-election interview.  All of the 41
respondents can be deleted from analysis by use of variable 5, the Result
Code.)
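Because the minimum partials are flagged by the Result Code, dropping them
amounts to a simple filter before analysis.  The sketch below, in Python,
is only an illustration: the file name, the column name V5, and the
specific Result Code value(s) marking minimum partials are placeholders --
consult the variable 5 documentation in the codebook for the actual codes.

   import pandas as pd

   # Placeholder file name; load the Super Tuesday data however you normally do.
   df = pd.read_csv("supertuesday1988.csv")

   # Variable 5 is the Result Code.  MINIMUM_PARTIAL_CODES is a placeholder --
   # substitute the value(s) the codebook lists for the 41 minimum partials.
   MINIMUM_PARTIAL_CODES = {0}

   analysis_cases = df[~df["V5"].isin(MINIMUM_PARTIAL_CODES)]

   # With the correct codes, this should leave 2117 - 41 = 2076 cases.
   print(len(analysis_cases))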
The pre-election questionnaire, administered by telephone, was 40 minutes
long.  Questions included candidate recognition and evaluations; feeling
thermometers and traits; assessments of each candidate's chances of winning
their party's nomination and the November general election; attitudes on
public issues; vote intention; vote choice; approval voting; age; race;
education; occupation; labor union membership; income; and religious
affiliation.

There were four forms of the pre-election questionnaire; the form
determined the order in which candidate names were read to respondents in
the recognition and feeling thermometer series, the traits, the chances for
nomination and election, and the liberal-conservative placements.  See
Table 1, Questionnaire Forms.

Among the questions on the survey was one that asked for the names of
newspapers that the respondent read for information about politics.  This
item is intended to aid those who wish to match the content of newspaper
coverage of the campaign with the respondent's views on candidates and
issues.  In the present dataset, information is available only about those
newspapers read by 10 or more respondents (about 50% of the Super-Tuesday
respondents).  However, data on newspapers read by fewer respondents, and
presumably with smaller circulations, are available from the National
Election Studies, following standard request procedures for the release of
confidential information.  Please contact NES project staff for details of
this procedure.

Brief reinterviews were conducted with 1688 respondents in the two and a
half weeks immediately after Super Tuesday.  Two-thirds of the recontact
interviews were taken in the week after Super Tuesday.  The response rate
was 79.4%.  Recognition and feeling thermometers on all candidates, as well
as traits on selected candidates, were asked.  A full range of voting
questions was included: whether R voted; in which primary and for whom;
whom R prefers to see each party nominate for President; whom R would most
like to see elected President; and, if R could have cast more than one
vote, for whom he/she would have voted.


>> SUPER-TUESDAY SAMPLING AND FIELD ADMINISTRATION

The NES 1988 Study of Presidential Nominations is based on a two-stage
random digit dialing (RDD) sample of telephone households in the sixteen
states which held a presidential primary election on "Super Tuesday," March
8, 1988.  This is a sample of the states which held a primary on March 8th,
but it is important to note that each state is not self-representing, i.e.,
this is not a collection of separate state samples.

The RDD methodology used in this study was unique in that all primary and
secondary screening of telephone households took place prior to the start
of the actual survey field period.  (Typically, the second stage of
screening takes place concurrently with the interviewing process.)  The
study was designed so that a specified number of interviews was to be taken
per week through the seven weeks leading up to Super Tuesday.  Past
experience has shown that the conventional method of integrating the second
stage RDD screening with the interviewing of a time-allocated sample
produces considerable "down time" toward the end of the time interval, as
sample is tied up without clear dispositions.

Following standard RDD procedure, a sample of 2,132 Area Code/CO
combinations was selected from 12,867 possible Area Code/CO combinations
for the sixteen states which held primaries on "Super Tuesday."
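As a quick arithmetic check on this first-stage selection (the counts above
imply the 1/6.035 overall sampling rate reported in the next paragraph), a
minimal sketch:

   # 2,132 Area Code/CO combinations selected from 12,867 possible combinations.
   selected, possible = 2132, 12867
   print(possible / selected)   # approx. 6.035, i.e. a sampling rate of about 1 in 6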
SRC's January AT&T stratified file was used as the sampling frame.  The
overall sampling rate for Area Code/CO combinations was 1/6.035.  To form
the sample primary numbers, random 4-digit suffixes were attached to each
of the 2,132 selected Area Code/CO combinations.  Each of the n=2,132
sample primary numbers was screened to determine its status as a working
household telephone number.  If the primary number was a working household
telephone number, its "one hundred series" was retained for further
sampling.  Table 2 shows the number of Area Code/CO combinations selected
and the number of "working primaries" for each of the sixteen states.

Table 3 outlines the design specifications and assumptions for the complete
1988 NES Study of Presidential Nominations, and the number of interviews
actually completed.  Based on the sample design assumptions outlined in
Table 3, an RDD second stage sample of n=3412 household numbers was
determined to be needed to meet the completed interview targets for the
study.  However, in order to provide the flexibility needed to adjust the
sample size for small departures from the design response and eligibility
rate assumptions, a reserve of n=400 additional working household numbers
was also identified in the second stage prescreening phase of the study.
Use of this reserve was in fact required during the study.  To achieve a
prescreened sample of n=3400 + 400 working household numbers, the second
stage cluster size for each of the working primary stage clusters was set
to equal 7.677 working household numbers.

The prescreening of the RDD second stage sample was conducted during the
period November 1987-January 1988.  The brief screening interview
ascertained not only the household status of the RDD second stage sample
number but also whether one or more eligible voters resided in the selected
household.  Late in the study period, an additional sample of 248 numbers
was drawn from the original 495 clusters, bringing the total sample up to
4,048 numbers (and increasing the cluster size to 8.177).  No prescreening
was done on these 248 numbers.  The additional sample was necessary
because, between the prescreening and the actual start of the field period,
about 350 numbers became nonworking or other nonsample.

During the prescreening of the RDD second stage sample, some small fraction
of the phone contacts refused to provide the information needed to
determine the household status of the sample number and/or household
members' voter eligibility.  Other screening contacts identified language
problems or circumstances which would prevent the household residents from
providing an interview.  For the most part, these prescreen refusals and
noninterviews were passed forward and contacted again during the production
phase of the study.  If not, they were treated as study refusals rather
than nonsample.  Our overall evaluation of this procedure is not complete.
There is some reason to believe that it was a costlier procedure than
completing the screening during the field period and, moreover, that it had
some negative impact on the response rate.

The research design for the Super Tuesday study required a specific time
allocation of the study sample across the seven-week period preceding the
Super Tuesday election.  Table 4 identifies the dates of the one-week
reference periods, the completed interview targets, and the number of
interviews actually taken.
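The mechanics just described -- random 4-digit suffixes attached to
selected Area Code/CO combinations to form primary numbers, retention of
the "one hundred series" for working primaries, and second-stage cluster
sizes of 7.677 (later 8.177) working household numbers -- can be
illustrated with a short sketch.  The Python code below is only a
schematic; the functions are stand-ins for illustration and are not the SRC
production procedures.

   import random

   def primary_number(area_code_co: str) -> str:
       """Attach a random 4-digit suffix to a selected 6-digit Area Code/CO
       combination to form a first-stage (primary) sample number."""
       return area_code_co + f"{random.randint(0, 9999):04d}"

   def hundred_series(working_primary: str) -> str:
       """The 'one hundred series' of a working primary number: the bank of
       100 numbers sharing its first eight digits."""
       return working_primary[:8] + "XX"

   # Second-stage cluster sizes implied by the figures in the text:
   working_primaries = 495
   print(3800 / working_primaries)   # approx. 7.677 household numbers per cluster
   print(4048 / working_primaries)   # approx. 8.177 after the additional 248 numbers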
Actual release of the sample was distributed over the days of the week in
such a way as to ensure a continuous and productive flow of both new and
old work through the SRC Telephone Facility.  The replicate number and the
day the replicate was released are both variables in the datafile.  Once
released, sample cases were immediately eligible for contact, respondent
selection, and interview.  A case was permitted to remain in the "active"
sample until an interview disposition was obtained or until 14 days had
elapsed.  At the end of 14 days, cases were reviewed, and those which
seemed to hold out any real promise of an interview were retained as active
sample from day to day.  In general, few cases were promising after 14
days, and if there was no interview, the case was coded either refusal or
noninterview (possible nonsample) and removed from the active pool of
sample numbers.

To facilitate the day-by-day release of new sample listings, the entire
sample of n=3800 was divided into replicates, or subsamples, of
approximately 25 numbers each.  Each replicate was a small probability
subsample.  Differences in the number of interviews desired per week were
handled by adjusting the number of replicates issued per day.  The sample
release process took into account both the start-up effect that occurs in
week one -- that is, there was no carryover of sample from the previous
week -- and the carryforward of old numbers into the last week's sample.

The pre-election study used Computer Assisted Telephone Interviewing
(CATI).  It is very difficult to follow the flow of this questionnaire
using only a hard copy of the computer screens which the interviewers saw
as they were interviewing.  Consequently, the project staff has prepared a
"Questionnaire" for the pre-election wave, which accompanies this
documentation and should be viewed as a "flow chart" of the interview.
Question numbers on the questionnaire will match, with very minor
exceptions, the question numbers in the codebook documentation.  The
post-election wave was a standard paper-and-pencil questionnaire.  In the
interest of reaching respondents very quickly after Super Tuesday, half of
the interviews (roughly) were done using the SRC Telephone Facility in Ann
Arbor, and the other half were done, also by phone, by SRC's field staff of
interviewers from their homes in different parts of the country.


TABLE 1: SUPER-TUESDAY CANDIDATE ORDER BY FORM OF QUESTIONNAIRE

     FORM 1       FORM 2       FORM 3       FORM 4
     Gore         Simon        Bush         DuPont
     Dukakis      Gephardt     Haig         Kemp
     Hart         Jackson      Robertson    Dole
     Babbitt      Babbitt      Dole         Robertson
     Jackson      Hart         Kemp         Haig
     Gephardt     Dukakis      DuPont       Bush
     Simon        Gore         Gore         Simon
     Bush         DuPont       Dukakis      Gephardt
     Haig         Kemp         Hart         Jackson
     Robertson    Dole         Babbitt      Babbitt
     Dole         Robertson    Jackson      Hart
     Kemp         Haig         Gephardt     Dukakis
     DuPont       Bush         Simon        Gore

Questions C1-C3, E1-E4, G1-G14, and J1 were administered using form
variations as above.  In contrast, there is only one version of the
recontact questionnaire.  Please note also that as candidates dropped out
of the race, they were dropped from questions E1-E4, G1-G14, and J1,
although the C1-C3 series continued to be asked.
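For analysts who want to model form effects, the rotation in Table 1 can be
carried as a simple lookup structure.  The Python sketch below merely
transcribes the table into a dictionary keyed by form number; it is a
convenience mapping for illustration and is not part of the released data
definition files.

   # Candidate reading order by questionnaire form, transcribed from Table 1.
   CANDIDATE_ORDER = {
       1: ["Gore", "Dukakis", "Hart", "Babbitt", "Jackson", "Gephardt", "Simon",
           "Bush", "Haig", "Robertson", "Dole", "Kemp", "DuPont"],
       2: ["Simon", "Gephardt", "Jackson", "Babbitt", "Hart", "Dukakis", "Gore",
           "DuPont", "Kemp", "Dole", "Robertson", "Haig", "Bush"],
       3: ["Bush", "Haig", "Robertson", "Dole", "Kemp", "DuPont", "Gore",
           "Dukakis", "Hart", "Babbitt", "Jackson", "Gephardt", "Simon"],
       4: ["DuPont", "Kemp", "Dole", "Robertson", "Haig", "Bush", "Simon",
           "Gephardt", "Jackson", "Babbitt", "Hart", "Dukakis", "Gore"],
   }

   # Example: the first candidate read to a Form 3 respondent.
   assert CANDIDATE_ORDER[3][0] == "Bush"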
TABLE 2: Distribution of Primary Numbers by State

     State             No. Selected    No. Working    % Working
     Alabama                 90             23           25.6
     Arkansas                81             13           16.0
     Florida                225             70           31.1
     Georgia                131             33           25.2
     Kentucky                93             16           17.2
     Louisiana               98             26           26.5
     Maryland                95             30           31.6
     Massachusetts          112             35           31.2
     Mississippi             61             11           18.0
     Missouri               174             27           15.5
     North Carolina         136             36           26.5
     Oklahoma               121             21           17.4
     Rhode Island            20              5           25.0
     Tennessee              106             24           22.6
     Texas                  450             94           20.9
     Virginia               139             31           22.3
     Total                 2132            495           23.2


TABLE 3: Sample Design Specifications and Assumptions

                                       TARGET      ACTUAL
     Completed Interviews                2052        2076
     Response Rate                        .62         .59
     Eligible Tele. HHs                  3309        3504
     Eligibility Rate                     .97         .97
     Sample Telephone HHs                3412        3628
     ______________________________________________________
     Reserve Sample Tele. HHs             400         648
     Total RDD Sample of HH #s           3812        4048


TABLE 4: Sample Allocation to Study Weeks and Completed Interviews

     WEEK      DATES            TARGET     ACTUAL
      1        01/17-01/23        228        212
      2        01/24-01/30        228        227
      3        01/31-02/08        228        304
      4        02/09-02/16        402        403
      5        02/17-02/23        322        274
      6        02/24-03/01        322        265
      7        02/28-03/08        322        391
     TOTAL                       2052       2076


>> CODEBOOK INFORMATION

The following example from the 1948 NES study provides the standard format
for codebook variable documentation.  Note that NES studies which are not
part of the Time-Series usually omit marginals and the descriptive content
in lines 2-5 (except for the variable name).

Line
   1   ==============================
   2   VAR 480026     NAME-R NOT VT-WAS R REG TO VT
   3   COLUMNS 61 - 61
   4   NUMERIC
   5   MD=0 OR GE 8
   6
   7   Q. 17. (IF R DID NOT VOTE) WERE YOU REGISTERED (ELIGIBLE)
   8   TO VOTE.
   9   ...........................................................
  10
  11        82   1. YES
  12       149   2. NO
  13
  14         0   8. DK
  15         9   9. NA
  16       422   0. INAP., R VOTED

Line 2 - VARIABLE NAME.  Note that in the codebook the variable name
(usually a 'number') does not include the "V" prefix which is used in the
release SAS and SPSS data definition files (.sas and .sps files) for all
variables, including those which do not have 'number' names.  For example,
the variable "VERSION" in the codebook is "VVERSION" in the data definition
files.

Line 2 - "NAME".  This is the variable label used in the SAS and SPSS data
definition files (.sas and .sps files).  Some codebooks exclude this.

Line 3 - COLUMNS.  Columns in the ASCII data file (.dat file).

Line 4 - CHARACTER OR NUMERIC.  If numeric, and the variable is a decimal
rather than an integer variable, the number of decimal places is also
indicated (e.g. "NUMERIC DEC 4").

Line 5 - Values which are assigned to missing by default in the Study's SAS
and SPSS data definition files (.sas and .sps files).

Line 7 - Actual question text for survey variables, or a description of
non-survey variables (for example, congressional district).  Survey items
usually include the question number (for example "B1a.") from the Study
questionnaire; beginning in 1996, non-survey items also have unique item
numbers (for example "CSheet.1").

Line 9 - A dashed or dotted line usually separates question text from any
other documentation which follows.

Line 10 - When present, annotation provided by Study staff is presented
below the question text/description and preceding the code values.

Lines 11-16 - Code values are listed with descriptive labels.  Valid codes
(those not having 'missing' status in line 5) are presented first, followed
by the values described in line 5.  For continuous variables, one line may
appear providing the range of possible values.  A blank line usually
separates the 'valid' and 'missing' values.

Lines 11-16 - Marginals are usually provided for discrete variables.  The
counts may be unweighted or weighted; check the Study codebook introductory
text to determine weight usage.
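Because the COLUMNS, NUMERIC, and MD= lines fully describe how each
variable is stored in the ASCII data file, a codebook entry translates
directly into a data-reading step.  The Python sketch below uses the 1948
example entry above (VAR 480026, COLUMNS 61-61, MD=0 OR GE 8); the .dat
file name is a placeholder, and the snippet is only an illustration of the
convention, not part of the released data definition files.

   import pandas as pd

   # COLUMNS 61 - 61 in the codebook entry is 1-based; pandas' read_fwf takes
   # zero-based, half-open column intervals, so column 61 becomes (60, 61).
   # The file name below is a placeholder for the study's ASCII .dat file.
   df = pd.read_fwf("example.dat", colspecs=[(60, 61)],
                    names=["V480026"], header=None)

   # Note the "V" prefix: the codebook calls this variable 480026, but the
   # SAS and SPSS data definition files (and this sketch) name it V480026.

   # Apply the default missing-data rule from line 5 of the entry: MD=0 OR GE 8.
   md = (df["V480026"] == 0) | (df["V480026"] >= 8)
   df["V480026"] = df["V480026"].mask(md)

   print(df["V480026"].value_counts(dropna=False))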