Chapter 1 – Introduction

The goals of the 2017 Greater Pittsburgh Jewish Community Study were to understand the size and character of the local Jewish population and to provide the community with high-quality data to drive decision-making for policy and planning. Multiple methods were used to generate population estimates of the Jewish community and to assess the attitudes and behaviors of those who identify as Jewish. The central component of the study was a survey that asked a broad set of questions about Jewish identity, attitudes, and engagement with the community. The survey was administered both as a telephone interview and as an online instrument.

The study was designed to help the Greater Pittsburgh Jewish community and its communal agencies learn about the size and demographic characteristics of their community, synagogue and other affiliations, interest in and utilization of programs and services, and ways that Pittsburgh Jews relate to one another. The findings are intended to inform communal planning and resource allocation.

About This Study

This study follows an enduring tradition of efforts to describe and understand the Greater Pittsburgh Jewish community. Earlier demographic studies were conducted in 1938, 1963, 1984, and 2002 (reports are held at the Berman Jewish Databank). Since 2014, the Pittsburgh Jewish Community Scorecard has tracked key metrics in Jewish engagement, affiliation, social services and philanthropy, connection to the surrounding community, and capacity building. All communities change considerably over time and these studies have provided essential data for planning purposes. The 2017 study, initiated and funded by the Jewish Federation of Greater Pittsburgh (Federation), established the following goals:

  • To estimate the size and geographic distribution of the Jewish population
  • To develop a portrait of the socio-demographic characteristics, affiliations, attitudes, behaviors, needs, and interests of the Jewish community as a whole and of subgroups within the community
  • To identify emerging needs and changes in the community over time
  • To help the community make data-driven decisions for communal planning

The study was conducted by researchers from the Cohen Center for Modern Jewish Studies/Steinhardt Social Research Institute (CMJS/SSRI) at Brandeis University. Informed by previous research and in consultation with Federation, its community study technical committee, and representatives of Jewish organizations in the Greater Pittsburgh area, CMJS/SSRI developed a research strategy and survey instrument to address the community’s needs.

Methodology

Community studies rely upon scientific methods to collect information from selected members of the community and, from those responses, extrapolate a generalized portrait of the community as a whole. Over time, it has become increasingly complicated to conduct such studies, and particularly to obtain an unbiased, representative sample of community members. The 2017 Greater Pittsburgh Jewish Community Study used innovative methods developed by CMJS/SSRI1 to overcome these challenges.

The central obstacle is that Jews are a relatively small group and traditional methods for identifying a representative sample of Jews are no longer feasible. The classic methodology, random-digit dialing (RDD), relies on telephone calls to randomly selected households in a specified geographic area and phone interviews with household members. Changes in telephone technology (e.g., caller ID) and fewer people answering the phone for unknown callers have reduced response rates for such surveys below 10%.2 An even greater challenge is that over half of all households no longer have landline telephones and rely exclusively on cell phones.3 Because of phone number portability,4 cell phones frequently have an area code, exchange, and billing address that are not associated with the geographic location in which the user resides. In Jewish community studies, this has proven to be especially problematic for ensuring that the survey reaches young adults and newcomers to the community. It is no longer possible to select a range of phone numbers and assume the owners of those numbers will live in the specified area and be willing to answer the phone and complete a survey.

This study addresses these challenges by using several methods, described in detail in Appendix A:

  • Enhanced RDD. The enhanced RDD method synthesized hundreds of national surveys conducted by government agencies and other organizations that include questions about religious identification. The synthesis used the data from these surveys along with information collected from Pittsburgh-area residents to estimate the size of the Jewish population in the region.
  • Comprehensive list-based sample. The study selected respondents primarily based on their appearance on the membership and contact lists of dozens of Pittsburgh-area Jewish organizations. This approach ensured that anyone in the Greater Pittsburgh area who has had even minimal contact with any area Jewish organization was represented. (A simplified sketch of how such lists might be consolidated appears after this list.)
  • Ethnic names sample. The comprehensive list-based sample was supplemented with a list of households in the area composed of individuals who have a distinctly Jewish first or last name. Such households typically make up 20-25% of Jewish households in a community but are not significantly different from Jewish households that do not have distinctively Jewish names.5
  • Multiple survey modes. CMJS approached survey participants by postal mail, phone, and email. Multiple attempts were made to reach each respondent and update contact information and the respondent’s status when initial efforts were unsuccessful.
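
To make the list-based approach concrete, the following sketch illustrates how membership lists from multiple organizations might be consolidated into a single frame of unique households, with the ethnic-names list added only where it does not duplicate an existing record. It is illustrative only: the field names, file layout, and matching key are assumptions, not the actual procedure, which is documented in Appendix A.

    # Illustrative sketch: consolidate organizational lists into one sampling frame
    # and add ethnic-name households not already on a list. Field names and the
    # matching key are hypothetical; the actual procedure is described in Appendix A.
    import csv

    def load_households(path, source):
        """Read one organization's list and tag each record with its source."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield {"name": row["name"].strip().lower(),
                       "address": row["address"].strip().lower(),
                       "source": source}

    def build_frame(list_paths, ethnic_names_path):
        """Merge all organizational lists, then append ethnic-name households
        that do not already appear on any list (naive name-and-address match)."""
        frame = {}
        for source, path in list_paths.items():
            for hh in load_households(path, source):
                frame.setdefault((hh["name"], hh["address"]), hh)  # keep first match
        for hh in load_households(ethnic_names_path, "ethnic_names"):
            frame.setdefault((hh["name"], hh["address"]), hh)      # add only if new
        return list(frame.values())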

In consultation with Federation, the geographic focus of the 2017 Greater Pittsburgh Jewish Community Study included households in Allegheny, Beaver, Butler, Washington, and Westmoreland Counties. This area is distinct from the Pittsburgh Metropolitan Statistical Area, which also includes Armstrong and Fayette Counties. Although this study focused solely on the five-county area defined by Federation, anyone who lived in an adjacent county and was associated in any way with a local Jewish organization was still eligible to participate in the survey.

The study was based on a sampling frame of over 81,000 households. From this frame, two samples were drawn: a primary sample of 14,562 households that were contacted by postal mail, phone, and email, and a supplemental sample of 14,997 households that were contacted by email only. The primary sample was designed to be representative of the entire community and was used as the basis for population estimates and analyses of the community as a whole. The response rate6 for this sample was 28.6% (AAPOR RR3) and the cooperation rate7 was 75.3% (AAPOR CR1). In total, over 2,000 Jewish households were interviewed (Table 1.1). Because households in the supplemental sample were contacted only by email, highly engaged households were expected to be more likely to complete the survey. Accordingly, statistical adjustments were used to account for the different likelihood of response in the two samples. Survey weights were developed to ensure that the full sample—primary and supplemental combined—represented the entire community in terms of key factors including age, Jewish denomination, and synagogue membership.
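
The reported response rate follows the AAPOR Response Rate 3 (RR3) definition, in which completed interviews are divided by all known eligible cases plus an estimated share of the cases whose eligibility could not be determined. The sketch below shows that calculation in general form; the disposition counts in the example are placeholders, not the study's actual case counts.

    def aapor_rr3(I, P, R, NC, O, UH, UO, e):
        """AAPOR Response Rate 3: complete interviews (I) divided by completes,
        partials (P), refusals (R), non-contacts (NC), other eligible non-responses (O),
        plus an estimated proportion (e) of the unknown-eligibility cases (UH, UO)."""
        return I / ((I + P) + (R + NC + O) + e * (UH + UO))

    # Placeholder dispositions for illustration only (not this study's counts):
    print(round(aapor_rr3(I=1000, P=50, R=300, NC=600, O=50, UH=2000, UO=500, e=0.4), 3))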

Throughout this report, estimates about the entire community were derived solely from the primary sample. The combined, or full, sample was used for analyses of subgroups—such as families with children—where the larger number of respondents supported more robust analysis.

Table 1.1 Summary of survey respondents

                                          Primary n   Supplemental n   Total n
Completed eligible households             1,215       896              2,111
  From lists                              1,200       896              2,096
  Ethnic names sample (de-duplicated)     15          -                15
Total households on lists                 -           -                81,125
Drawn sample size                         14,562      14,997           29,559
Completed screeners                       3,778       1,906            5,684
Response rate (AAPOR RR3)                 28.6%       20.3%            -

Undercounted Populations

Although the goal of the study was to develop a comprehensive understanding of the Jewish community, some groups are nevertheless likely to be undercounted and/or underrepresented. In particular, residents of institutional settings such as college dormitories, hospitals, and nursing homes, as well as adults who have never associated with any Jewish organization in the Greater Pittsburgh area, are less likely to have been identified and contacted to complete the survey.

Although we cannot precisely estimate the number of these individuals, the undercounts are unlikely to introduce significant bias into the reported estimates. Where appropriate, we have noted the limitations of the methods.

How to Read This Report

Community studies are household surveys. They are designed to represent the views of the entire population by interviewing a randomly selected sample of households from the community. To extrapolate respondent data to the entire community, the data are adjusted (i.e., “weighted”) by assigning each respondent a weight so that his/her responses represent the proportion of the overall community that has similar demographic characteristics. The weighted respondent thus stands in for that segment of the population, and not only the household from which the data were collected. (See Appendix A for more detail.) Unless otherwise specified, this report presents weighted survey data in the form of percentages or proportions. These data should be read not as the percentage or proportion of respondents who answered each question in a given way, but as the percentage or proportion of the population that, it is estimated, would answer each question in that way had each member of the population been surveyed.
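
In other words, a weighted percentage is a sum of respondent weights rather than a simple count of respondents. The following minimal sketch shows the calculation; the data and weights are invented for illustration and do not come from the study.

    def weighted_proportion(responses, weights, answer):
        """Estimated share of the population giving `answer`, using survey weights."""
        matching = sum(w for r, w in zip(responses, weights) if r == answer)
        return matching / sum(weights)

    # Three hypothetical respondents; each weight is the number of households represented
    responses = ["yes", "no", "yes"]
    weights = [2.0, 1.0, 3.0]
    print(weighted_proportion(responses, weights, "yes"))  # 5/6, about 0.83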

No estimate should be considered an exact measurement. The reported estimate of any value, known as a “point estimate,” is the most likely value for the variable in question given the available data, but the true value may be slightly higher or slightly lower. Because estimates are derived from data collected from a representative sample of the population, there is a degree of uncertainty. The amount of uncertainty depends on multiple factors, the most important of which is the number of survey respondents who provided the data from which a given estimate is derived. The uncertainty is quantified as a range of values extending from slightly below the reported estimate to slightly above it. By convention, this range, known as a “confidence interval,” is calculated to reflect 95% certainty that the true value for the population falls within the defined range. (See Appendix A for details about the magnitude of confidence intervals around estimates in this study.)
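
For a simple unweighted proportion, the conventional 95% confidence interval can be sketched as below. This is only the basic idea; the intervals reported for this study also account for the survey weights and design, as described in Appendix A.

    import math

    def ci_95(p, n):
        """Approximate 95% confidence interval for a proportion p based on n respondents."""
        margin = 1.96 * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - margin), min(1.0, p + margin)

    # Example: an estimate of 40% based on 400 respondents
    print(ci_95(0.40, 400))  # roughly (0.352, 0.448)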

When size estimates of subpopulations (e.g., synagogue members, intermarried families, families with children, etc.) are provided, they are calculated as the weighted number of households or individuals for which the respondents provided sufficient information to classify them as members of the subgroup. When data are missing, those respondents are counted as if they are not members of the subgroups for purposes of estimation. Accordingly, all subpopulation estimates may undercount those least likely to complete the survey or answer particular questions. Missing information cannot be imputed reliably in many such cases because the information that could serve as the basis for imputation is also missing. Refer to the codebook (Appendix D) for the actual number of responses to each question.
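
A subpopulation estimate is therefore the sum of weights for households that could be positively classified into the group, with missing classifications contributing nothing. The sketch below illustrates this rule with hypothetical records and field names.

    def weighted_subgroup_count(records):
        """Sum the weights of households classified as subgroup members.
        Records with a missing classification (None) are treated as non-members."""
        return sum(r["weight"] for r in records if r.get("is_member") is True)

    households = [
        {"weight": 12.5, "is_member": True},
        {"weight": 8.0,  "is_member": False},
        {"weight": 5.5,  "is_member": None},  # missing data: counted as a non-member
    ]
    print(weighted_subgroup_count(households))  # 12.5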

Some tables and figures that present proportions do not add up to 100%. In most cases, this is a function of rounding, with proportional estimates rounded to the nearest whole number. In some cases, however, this is a result of respondents having the opportunity to select more than one response to a question. In such cases, the text of the report will indicate that multiple responses were possible. When a table shows “0,” it means no respondents selected that option, “<1” indicates that the estimate rounded down to 0, and “–” indicates that there were insufficient responses to report reliable estimates.
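
These display conventions can be summarized in a small formatting rule, sketched below. The minimum number of responses used here is a placeholder; the report does not state an exact reporting threshold.

    def format_cell(estimate_pct, n_responses, min_n=10):
        """Format a percentage for display following the report's conventions.
        min_n is a placeholder threshold, not a figure from the report."""
        if n_responses < min_n:
            return "-"    # too few responses to report a reliable estimate
        if estimate_pct == 0:
            return "0"    # no respondents selected this option
        if estimate_pct < 0.5:
            return "<1"   # nonzero, but rounds down to 0
        return str(round(estimate_pct))

    print([format_cell(0, 200), format_cell(0.3, 200), format_cell(42.4, 200), format_cell(15, 4)])
    # ['0', '<1', '42', '-']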

For simplicity, not all groups will be displayed in all tables. For example, if the proportion of respondents who made a donation to a Jewish organization is shown, the proportion who did not donate will not be displayed.

Reporting Qualitative Data

The survey included several questions that called for open-ended responses. These were used to elicit more information about respondents’ opinions and experiences than could be provided in the multiple-choice or checkbox formats typical of survey questions. All such responses were categorized, or “coded,” to identify topics and themes mentioned by multiple respondents. Because a consistent set of response options was not offered to respondents, it would be misleading to report weighted estimates of responses to these questions. Instead, we report the total number of respondents whose answers fit a particular code or theme. This number appears in parentheses after the response, without a percentage sign, or in tables labeled as “n” or number of responses. In most cases, sample quotes are also provided, edited for clarity and with identifying information removed.
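
Because these figures are unweighted tallies of coded responses, they amount to simple frequency counts. A minimal sketch follows; the code labels are invented examples, not the study's actual coding scheme.

    from collections import Counter

    # Hypothetical coded open-ended responses; the labels are illustrative only
    coded_responses = ["cost", "location", "cost", "programming", "cost", "location"]

    for theme, n in Counter(coded_responses).most_common():
        print(f"{theme} (n = {n})")
    # cost (n = 3)
    # location (n = 2)
    # programming (n = 1)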

Comparisons Across Surveys

As part of the goal to assess trends, answers to a number of questions are compared with earlier local data (in particular, the 2002 study) and with data from national studies (in particular, Pew’s 2013 A Portrait of Jewish Americans). Although these analyses are informative, comparisons across studies are not as precise and reliable as the data from the present study. Exact comparisons are not possible for several reasons. The most important of these, noted above, is that the methods used to develop sampling frames in the present study differ from those used in 2002.