ARIS 2001 Methodology


As mentioned previously, these studies were conducted as part of established, ongoing national telephone omnibus programs.  The nature of commercial omnibus surveys provides both advantages and disadvantages from the perspective of preferred survey research practices.  Omnibus surveys provide a means of reaching and interviewing extremely large household samples in relatively short periods of time while sharing the high costs of survey research among clients.  This economic advantage is offset somewhat by the exigencies of the periodic nature of these surveys: the relatively short field periods, together with the minimal geographic sample stratification and controls needed to ensure minimum sample sizes among population subgroups within those strata, tend to depress response rates somewhat.

A brief description of each of these omnibus services will aid in understanding these issues:

·        EXCEL is the research industry’s largest telephone omnibus service and has been in continuous operation for over fifteen (15) years.  EXCEL surveys are fielded at least twice each week, with each survey having a minimum of 1,000 interviews.  Approximately one-half of these are Male and one-half Female.  The sample employs basic geographic stratification at the Census Division level, with target sample sizes allocated proportionately.  Although there is some flexibility in terms of final sample size, it is necessary to adhere fairly closely to the established targets of 50% Male/Female within each geographic stratum.  Respondents are randomly designated using the Last Birthday Selection Method.  The RDD sample utilized is provided by GENESYS Sampling Systems.  The field period for each survey is five (5) days – one wave of EXCEL runs Tuesday through Sunday each week, the other Friday through Tuesday, so both include weekends.  The call rule is an original attempt plus four callbacks.

·        ACCESS has a more restricted set of question topics than the more general and varied EXCEL.  ACCESS was designed primarily as an omnibus vehicle focusing on residential telecommunications, entertainment and technology issues.  Both are national in representation, although ACCESS targets only about 1,000 completed interviews per week.  The other major difference between the two omnibus surveys is in the execution of data collection.  ACCESS is an ongoing rather than a periodic survey, with flexible daily and more rigid weekly sample size targets identical to those of EXCEL.  The everyday, ongoing nature of the data collection makes it possible to utilize a single large replicated sample, with additional replicates added as required.  Sample stratification and respondent selection procedures are handled identically.  Similarly, the RDD sample was supplied by GENESYS Sampling Systems.

In summary, both of the telephone omnibus programs utilize National RDD samples.  They were both designed by the same research group and are operated and overseen by ICR personnel.  Moreover, EXCEL was the vehicle used in the 1990 NSRI, and GENESYS also provided the sample in that survey effort.  In addition, the demographic batteries embedded within the two omnibus surveys are virtually identical, and both incorporate questions to determine the number of voice lines each residence maintains in order to develop probability-of-selection adjustments to the individual sample household records.

The underlying RDD samples used in both omnibus programs are provided by GENESYS Sampling Systems.  These epsem (equal probability selection mechanism) RDD samples are designed using the latest list-assisted methods and are identical to those used almost exclusively by governmental (e.g., the Census Bureau and CDC), social science and academic researchers.   The GENESYS RDD sample frame is completely redefined and rebuilt every quarter and incorporates a precisely defined, extremely fine implicit stratification that underlies every individual sample selection, thus minimizing sample variance.  The sample frame was consistently defined as two-digit working blocks in residential exchanges (NPA-NXXs) containing two or more directory listed telephone households.
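The frame definition above can be sketched in a few lines of Python.  This is only an illustration: the prefixes below are hypothetical, and a production frame would carry the fine implicit stratification described here rather than a simple random choice of block.

```python
import random

# Hypothetical two-digit working blocks (NPA-NXX plus the first two digits
# of the suffix), each assumed to contain two or more listed households.
working_blocks = ["860-555-12", "860-555-34", "203-555-07"]

def draw_rdd_number(blocks, rng):
    """Draw one list-assisted RDD number: select a working block, then
    append two random final digits so that unlisted households within the
    block can also be reached.  Because every block spans exactly 100
    numbers, selection is equal-probability (epsem) across numbers."""
    block = rng.choice(blocks)
    return f"{block}{rng.randint(0, 99):02d}"
```

For example, `draw_rdd_number(working_blocks, random.Random(0))` yields one full ten-digit number whose first eight digits come from the frame.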


The survey and data collection incorporated three phases corresponding to the gathering of information for distinct sub-samples and questionnaire segments:

1)      The overall effort was fielded from 2 February 2001 through 7 June 2001.  During this five (5) month period a total of 34,295 interviews were conducted in the EXCEL omnibus and 15,987 were conducted through ACCESS.  All respondents were screened to determine their religious identification and, if applicable, that of their spouse.

2)      Between 19 April and 7 June, the Comparative Belief/Secularity (CB/S) battery was administered to a total of 14,155 non-Catholics.

3)      From 19 April to 16 May the CB/S component was administered to 2,043 self-identified Catholic respondents.

Note: The CB/S battery is not archived at present.

The individual sub-samples and corresponding questionnaire segments were designed in such a manner that they can be combined in a straightforward way.  The sample of Catholic respondents is a representative subset of all those asked the CB/S questions, and the sub-sample asked the CB/S questions is representative of the entire sample.  The following section describes the process of combining these samples and the manner in which each subset can be used analytically.


As in most surveys designed to fulfill multiple objectives, the research team found it necessary to make a series of trade-offs.  In this case, there were two critical components of the research design.  The first was the overall sample of religious identification gathered as part of the screening process.  The second was the sample of respondents to whom the CB/S questions were administered, comprising a sub-sample of Catholics, with all other respondents sampled at 100%.  This was actually a very straightforward design, with the corresponding weighting and estimation carried out in a few simple steps.

The initial phase of the estimation process dealt with the entire sample of 50,282 respondents.  One of the primary objectives of this survey was to provide estimates of the population by religious identification.  Consequently, it was deemed desirable to reduce the role of geographic variation in these estimates, as many adherents to specific religions are highly concentrated geographically.  To accomplish this, the data set was post-stratified into the following geographic components:

·        The largest seventy-seven (77) MSAs/PMSAs by central city and non-central city – this resulted in a total of 153 county defined strata (Note: Nassau-Suffolk PMSA contains no central city).

·        Forty-eight (48) strata, each comprising the residual geography of individual States not defined as part of the MSA/PMSA strata.

For each of the 201 geographic strata, estimates of demographic distributions were then derived from CLARITAS for the following categories: (1) Age within Sex, (2) Race/Ethnicity, and (3) HH income.  In addition, estimates of Total HHs and the population 18+ were also secured for each of the geographic strata.

An initial HH weight for each respondent was computed based on the number of voice lines serving their household.  This weight is actually the inverse of the number of phone lines, as it adjusts for the greater probability of selection that households with two, three or more phone lines have relative to a HH with just one line.  (One can easily envision that a sample of random telephone numbers will include twice as many two-line households as one would otherwise expect, as this class has twice the probability of selection.)

A second weight, corresponding to the selection of the adult member, is then computed: a household with one adult has a weight of 1.0; two adults, 2.0; and so on.
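These two initial design weights can be expressed directly.  The combination of the two into a single person-level base weight is the usual practice and is shown here as an illustration, not as the exact production code.

```python
def initial_weights(voice_lines, adults):
    """Return the two initial design weights described above: the household
    weight is the inverse of the number of voice lines (correcting for the
    higher selection probability of multi-line households), and the adult
    selection weight equals the number of adults (correcting for choosing
    one adult at random within the household)."""
    hh_weight = 1.0 / voice_lines
    adult_weight = float(adults)
    # Person-level base weight combines the two selection adjustments.
    person_weight = hh_weight * adult_weight
    return hh_weight, adult_weight, person_weight
```

For instance, a two-line household with three adults receives a household weight of 0.5, an adult selection weight of 3.0, and a combined person-level weight of 1.5.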

With these initial weights computed, the interviews were segregated into the 201 post-strata and a sample balancing (i.e., raking) routine was conducted within each stratum.  This is an iterative process that utilizes the marginal distributions of each of the target demographic variables and the corresponding weighted sample variable categories to compute a series of adjustment factors, which successively bring the sample and population demographic distributions into close alignment.  The final step in this process is the calculation of simple expansion factors to bring the weighted sample totals within each of the 201 strata to the Total HH and Population 18+ estimates derived previously.  Following this process each respondent record contains two weights: one for Household estimates, the other for estimates of the Adult Population.
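Within a single post-stratum, the sample balancing (raking) step can be sketched as iterative proportional fitting.  This is a simplified sketch: the convergence check and the final expansion to the Total HH and Population 18+ estimates are omitted.

```python
def rake(weights, records, targets, iterations=25):
    """Iteratively scale the input weights so the weighted totals in each
    category of each demographic variable match the target marginals.
    `records` gives each respondent's category per variable; `targets`
    maps each variable to its desired weighted category totals."""
    w = list(weights)
    for _ in range(iterations):
        for var, margin in targets.items():
            # Current weighted total within each category of this variable.
            totals = {}
            for wi, rec in zip(w, records):
                totals[rec[var]] = totals.get(rec[var], 0.0) + wi
            # Scale each record by target/current for its own category.
            w = [wi * margin[rec[var]] / totals[rec[var]]
                 for wi, rec in zip(w, records)]
    return w
```

For example, raking four equal-weight respondents to marginals of 50/50 by sex and 40/60 by age group reproduces both margins simultaneously.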

The next phase in the weighting process involved adjusting for the sub-sampling of respondents for the CB/S comparative study.  One alternative would have been simply to treat these sub-samples as an independent survey and replicate the weighting process used for the full survey.  Although straightforward, given that the process and procedures were already in place, this would have produced estimates of religious groups and demographic distributions at variance with the total sample.  Although the Comparative Study sample was a random subset of the larger one, it would still have been subject to sampling variance.  It was decided that this was a complication that should be avoided.

It was decided that a better approach would be to use the larger sample to produce estimates to which the sub-sample could then be adjusted; this would also enable one to treat the Catholic sub-sample directly.  The full sample and the CB/S sub-sample were post-stratified into seventeen (17) groups based on religious identification as determined in the questionnaire.  These strata included the largest religious groups individually (Baptist, Catholic, Lutheran, etc.) as well as categories corresponding to None, Refused, etc.  Based on the total sample, weighted estimates of Household and Population totals for each religion stratum were created, as well as distributions of age, gender, income, race/ethnicity, census region and metropolitan status.

These estimates as well as the CB/S sub-samples were used as input to a similar sample balancing routine as used for the full sample.  However, in this case, the input weight for each record corresponded to the final weight developed during the full sample weighting process.  The process was repeated for each of the seventeen strata created based on religious identification.  As noted above, the post stratification process treated the Catholic sub-sample directly and independently, correcting for the intended under-sampling.
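The first step of this sub-sample adjustment, bringing each religion stratum's weighted total into line with the full-sample estimate, is a simple ratio adjustment; the subsequent demographic raking follows the same routine as before.  The stratum totals below are hypothetical.

```python
def adjust_to_stratum_totals(weights, strata, full_sample_totals):
    """Ratio-adjust sub-sample weights so each religion stratum's weighted
    total matches the estimate derived from the full sample, correcting in
    one step for the intended under-sampling of Catholics."""
    current = {}
    for w, s in zip(weights, strata):
        current[s] = current.get(s, 0.0) + w
    return [w * full_sample_totals[s] / current[s]
            for w, s in zip(weights, strata)]
```

Each record in a stratum is scaled by the same factor, so relative weights within a stratum are preserved while stratum totals agree with the full sample.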

The final data record for each respondent includes Population and Household weights in the full sample file.  For those included in the smaller CB/S comparative study, the data file also includes a set of Household and Population weights developed to produce estimates of Totals and distributions.  For reference purposes, the approximate relative sampling rates for the CB/S study are as follows: non-Catholic HHs, 40%; Catholic HHs, 20%.


The correct application of household and population weights can oftentimes be confusing.  The choice of one or the other may be determined by the question or variable under consideration, or by the analytic intent.  Just a few guidelines and examples may be instructive.

The Population weights produce estimates of people – specifically, people over 18 years of age.  In the CB/S comparative study a question is asked to determine the length of time respondents have been married.  Using the Population Weight will produce an estimate of the number of people married for any given length of time.  However, this is not the same as the number of couples, which would be produced by using the HH weight.

Similarly, the number of adults with a specific religious identification can be computed by applying the Population Weight.  However, there are theoretical problems with using the Household Weight in combination with religious identification, because that is a respondent-level variable.  Using the Household Weight in this case would be the equivalent of classifying a household based solely on the gender of the respondent, ignoring the fact that HHs can contain members of different religions just as most contain both males and females.

Demographics present similar difficulties.  Income is a household level variable and intuitively one would use the HH Weight to produce a distribution of HH incomes.  But one could use the Population Weight to show the distribution of adults with certain HH incomes.  These will not be the same because HH income is not perfectly correlated with HH size.
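A toy example makes the difference concrete.  Here a single-adult household and a three-adult household fall in different income brackets, so the household-weighted and population-weighted shares diverge; all figures are hypothetical.

```python
# Each record: (hh_weight, adults, income bracket).  The population weight
# is taken here as hh_weight * adults, mirroring the paired weights on file.
records = [
    (1.0, 1, "<50k"),   # one-adult household, lower income
    (1.0, 3, ">=50k"),  # three-adult household, higher income
]

def bracket_share(records, bracket, use_population_weight):
    """Weighted share of an income bracket under either weight."""
    def w(rec):
        hh_weight, adults, _ = rec
        return hh_weight * adults if use_population_weight else hh_weight
    total = sum(w(r) for r in records)
    return sum(w(r) for r in records if r[2] == bracket) / total
```

In this example half of households, but only a quarter of adults, fall in the lower bracket – precisely because income is not perfectly correlated with household size.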

Classifying the sample into subgroups by using a HH level variable (e.g., number of children) does not mean that one then needs to use the HH weight to examine religious identification.  By using the Population Weight one could then produce estimates of the religious identification of adults in HHs with None, One, Two, etc., numbers of children.

In summary, it is critical to consider the relationship between the context of the variable or variables being used and the resultant base produced by a given weight.


Surveys are subject to a wide variety of errors.  Some of these are related to the sampling process itself and the inherent variation one expects from the process of selecting samples of households at random – two samples are never identical, but one can predict the distribution of differences one can reasonably expect.

Other errors are of a non-sampling nature: limitations in the sampling frame, non-response biases, etc.  These are generally more difficult to quantify since the difference due to non-response, for example, is only directly quantifiable if one has interviewed the non-respondents.

During the weighting and estimation phase one attempts to incorporate and compensate for biases in the sample selection and data collection phases.  In some cases this is fairly straightforward, as with HHs with multiple telephone lines – those with fewer lines are under-represented relative to those with more lines.  Of course, this does not directly address whether the proportions of single-line, two-line, etc., HHs in the resultant weighted sample are the exact proportions that exist in the general population.  In other words, there may be secondary bias contributions because, in the data collection process itself, it may be more difficult to reach HHs with a single voice line, as these HHs tend to have fewer adults.

One would hope that the weighting and estimation process compensates for both sampling and non-sampling errors, and that is the objective, under the assumption that there are no systematic biases introduced by either set of errors.  The difference one finds between a sample distribution and what one might expect – say, the number of interviews completed in a given State – can be a result of either sampling or non-sampling errors.  One can easily correct for the variation, but to the extent that the shortfall is due to a failure to complete interviews among a distinct subgroup within the State, there remains a risk of potential bias in the overall results.

The combined final sample disposition for all weeks of the survey effort is shown below.  It should be understood that the total number of sample records utilized is substantially understated due to the pre-screening of the RDD sample prior to the actual field period.  Although the actual number of Non-Residential sample records eliminated is unavailable, based on the expected eliminations it can be estimated that the original sample total was approximately 910,000.

This disposition has been constructed to take into account the limitations of limited-field-period omnibus surveys by placing callbacks beyond the interviewing period into the Not Eligible category.  Using the most conservative approach, with a base of all residences, the estimated Response Rate is 16.1%.  Eliminating HHs deemed Not Eligible raises the response rate somewhat, to 18.2%.

Table 1
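The two response-rate variants can be computed directly from the disposition counts.  The counts below are hypothetical placeholders, not the actual Table 1 figures.

```python
def response_rates(completes, residences, not_eligible):
    """Return the conservative response rate (base: all residences) and the
    adjusted rate after removing households deemed Not Eligible."""
    conservative = completes / residences
    adjusted = completes / (residences - not_eligible)
    return conservative, adjusted
```

With hypothetical counts of 160 completes out of 1,000 residences, 120 of them Not Eligible, the two rates would be 16.0% and about 18.2%.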

We have taken care here to ensure that the final weighted sample is accurately proportioned across critical geographic and demographic variables, but the risk of response and other biases cannot be fully reflected in a simple estimate of sampling variability.


All sample surveys are subject to sampling errors.  Samples always differ from what one would expect if one had measured the entire population.  The expected size of that error is a function of both the sample design and the ultimate sample size.  The size of this error is also influenced by the specific weighting process and variation in resultant weights designed to compensate for non-sampling errors such as non-response.

In addition, we have two samples here: one of approximately 50,000 and one of about 17,000.  The accompanying standard error tables provide estimates of the sample variability for each data set, along with instructions on constructing confidence intervals based on the estimate and the size of the subgroup being examined.  These estimates were computed from the weighted sample itself using a balanced repeated replication (BRR) routine across a number of survey variables.  Any such table of standard errors is a compromise and an estimate, since each survey variable theoretically has its own specific error of measurement and variability.

By examining a range of variables, however, one is able to produce an average error, which is then utilized to produce the accompanying Table 2.

Table 2: Estimates of Survey Standard Errors
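A 95% confidence interval is then formed from an estimate and the corresponding standard error read from a table such as Table 2; the percentages below are illustrative only, not values from the table.

```python
def confidence_interval(estimate_pct, standard_error_pct, z=1.96):
    """95% confidence interval for a survey percentage: the estimate plus
    or minus z times the tabled standard error."""
    half_width = z * standard_error_pct
    return estimate_pct - half_width, estimate_pct + half_width
```

For an illustrative estimate of 24.5% with a standard error of 0.5 points, the interval would run from roughly 23.5% to 25.5%.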

For further details and information, consult Barry A. Kosmin and Ariela Keysar, Religion in a Free Market (Ithaca, NY: Paramount Market Publishing, 2006).

ARIS 2001©