6.1 Outcomes and response for CBR sample
There were 15,060 children sampled from the Child Benefit Register (CBR) – 30 for each of the 502 Primary Sampling Units (PSUs). As detailed in Section 4 (Sampling), given that response during fieldwork was higher than initially assumed, a random 17 PSUs (510 children) were dropped from the Tranche 3 sample, enabling us to reach the target number of interviews with a higher response rate than would otherwise have been the case. Therefore, a total of 14,550 addresses in 485 PSUs were sent opt-out letters, leading to opt outs from 785 addresses. These addresses were removed from the sample, and a total of 13,765 addresses were issued to interviewers, who sent advance letters before starting their calls.
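As a quick arithmetic check on these counts, the following minimal Python sketch (illustrative only; the variable names are not taken from the survey datasets) reproduces the figures quoted above, from the initial CBR sample through to the addresses issued to interviewers.

```python
# Minimal sketch reproducing the CBR sample counts quoted above.
# All figures come from the text; variable names are illustrative only.

per_psu = 30            # children sampled per PSU
psus_sampled = 502      # PSUs initially sampled
psus_dropped = 17       # PSUs dropped from Tranche 3
opt_outs = 785          # addresses opting out after the opt-out letter

initial_sample = per_psu * psus_sampled                  # 15,060
sent_opt_out_letters = initial_sample - per_psu * psus_dropped  # 14,550
issued_to_interviewers = sent_opt_out_letters - opt_outs        # 13,765

print(initial_sample, sent_opt_out_letters, issued_to_interviewers)
```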
The overall response rate for the CBR sample was 48 per cent (shown in Table A.2). This figure reflects the proportion of productive interviews across all eligible addresses. The full fieldwork outcomes are shown in Table A.1. Table A.2 then presents various response metrics for the CBR sample, showing trend data since the 2009 survey.
The overall response rate increased from 38 per cent in 2021 to 48 per cent in 2022, but remained below its pre-pandemic level of 62 per cent in 2019. The response rate for 2022 was expected to be lower than in 2019 because, although many of the COVID-19 restrictions had been lifted by the time fieldwork commenced in April 2022, there was still some uncertainty around the virus, which may have made parents more cautious about participating in the study. We made allowances for this by having interviewers follow strict protocols and conditions within the participant’s home (as described in Section 5.3). We also allowed for remote interviewing (by telephone and Microsoft Teams) if parents did not feel comfortable with a face-to-face interview, although these remote modes were more likely to suffer from broken appointments and poorer response.
Table A.1: Survey response figures, Child Benefit Register sample
| Detailed outcomes | N | Outcome category | Of sampled (%) | Of issued (%) |
| --- | --- | --- | --- | --- |
| PSUs initially sampled | 502 | | | |
| PSUs subsequently dropped from Tranche 3 (given better than anticipated response) | 17 | | | |
| PSUs issued | 485 | | | |
| Addresses issued per PSU | 30 | | | |
| Total addresses issued, of which… | 14,550 | TS | 100% | |
| Opting out | 785 | R | 4% | |
| Addresses issued to interviewers, of which… | 13,765 | | 95% | 100% |
| Contact with responsible adult, of which… | 11,400 | | 78% | 83% |
| Child at address, of which… | 9,959 | | 68% | 72% |
| Refusal | 3,556 | R | 24% | 26% |
| Other unproductive | 447 | O | 3% | 3% |
| Interview – lone parent | 1,534 | I | 11% | 11% |
| Interview – partner interview in person | 0 | I | 0% | 0% |
| Interview – partner interview by proxy | 3,212 | I | 22% | 23% |
| Interview – unproductive partner | 1,210 | I | 8% | 9% |
| No child at address | 1,322 | NE | 9% | 10% |
| Unknown if child at address | 119 | UE | 1% | 1% |
| No contact with responsible adult, of which… | 1,741 | | 12% | 13% |
| Child at address | 231 | NC | 2% | 2% |
| Unknown if child at address | 1,510 | UE | 10% | 11% |
| Deadwood (address vacant, demolished, derelict, non-residential, or holiday home) | 624 | NE | 4% | 4% |

| Summary of outcomes | N | Calculation | Of sampled (%) | Of issued (%) |
| --- | --- | --- | --- | --- |
| Total sample (TS) | 14,550 | TS | 100% | |
| Eligible sample (ES) | 12,604 | TS - NE | 87% | 92% |
| Interview (I) | 5,956 | I | 41% | 43% |
| Non-contact (NC) | 231 | NC | 2% | 2% |
| Refusal (R) | 4,341 | R | 30% | 26% |
| Other non-response (O) | 447 | O | 3% | 3% |
| Unknown eligibility (UE) | 1,629 | UE | 11% | 12% |
| Not eligible (NE) | 1,946 | NE | 13% | 14% |
Note: From the 2019 survey onwards, the sampling unit for the CBR sample was the address. In cases where the selected child had moved from the sampled address, interviewers determined whether a child aged 0 to 4 currently lived at the address. If so, the address was considered eligible, and an interview was sought with a parent of the child (or children) aged 0 to 4 at the address; if not, the address was deemed ineligible. Prior to the 2019 survey, the sampling unit was the child. In cases where the selected child had moved from the sampled address, the child was still considered eligible, and the interviewer attempted to trace the child to his or her new address and conduct an interview there.
Table A.2: Survey response metrics, Child Benefit Register sample
| Response metric | Calculation | 2009 (%) | 2010-11 (%) | 2011-12 (%) | 2012-13 (%) | 2014-15 (%) | 2017 (%) | 2018 (%) | 2019 (%) | 2021 (%) | 2022 (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Overall response rate | I / (I + R + NC + O + (eu * UE)) | 52 | 57 | 58 | 59 | 57 | 52 | 51 | 62 | 38 | 48 |
| Eligibility rate (eu) | (I + NC + R + O) / (I + NC + R + O + NE) | 98 | 97 | 98 | 97 | 97 | 97 | 97 | 79 | 84 | 85 |
| Unadjusted response rate | I / TS | 51 | 55 | 57 | 57 | 55 | 50 | 49 | 49 | 32 | 41 |
| Co-operation rate | I / (I + R + O) | 67 | 76 | 72 | 73 | 70 | 68 | 71 | 73 | 53 | 60 |
| Contact rate | (I + R + O) / (I + R + NC + O + (eu * UE)) | 77 | 77 | 80 | 80 | 80 | 75 | 72 | 90 | 77 | 87 |
| Refusal rate | R / (I + R + NC + O + (eu * UE)) | 24 | 18 | 22 | 21 | 23 | 24 | 22 | 23 | 37 | 34 |
Notes:
The response categories used in the calculations of the response metrics are as follows: Total sample (TS); Interview (I); Non-contact (NC); Refusal (R); Other non-response (O); Unknown eligibility (UE); Not eligible (NE); Eligibility rate (eu). Details of the specific fieldwork outcomes contained within these response categories can be found in Table A.1.
From the 2019 survey onwards, the sampling unit for the CBR sample was the address. In cases where the selected child had moved from the sampled address, interviewers determined whether a child aged 0 to 4 currently lived at the address. If so, the address was considered eligible, and an interview was sought with a parent of the child (or children) aged 0 to 4 at the address; if not, the address was deemed ineligible. Prior to the 2019 survey, the sampling unit was the child. In cases where the selected child had moved from the sampled address, the child was still considered eligible, and the interviewer attempted to trace the child to his or her new address and conduct an interview there.
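To make the calculations in Table A.2 concrete, the sketch below (a minimal Python illustration, not code used to produce the survey estimates) recomputes the 2022 eligibility rate, overall response rate and unadjusted response rate from the 2022 summary outcomes in Table A.1. Note that R here follows Table A.1 in including opt-outs as refusals. The results reproduce the final column of Table A.2 within rounding.

```python
# Minimal sketch: recompute selected 2022 response metrics from Table A.1.
# Figures are the 2022 summary outcomes; variable names are illustrative only.

TS = 14_550   # total sample
I = 5_956     # interviews
NC = 231      # non-contacts
R = 4_341     # refusals (including opt-outs)
O = 447       # other non-response
UE = 1_629    # unknown eligibility
NE = 1_946    # not eligible

# Eligibility rate (eu): eligible cases as a share of cases of known eligibility
eu = (I + NC + R + O) / (I + NC + R + O + NE)             # ~0.85

# Overall response rate: interviews over estimated eligible cases, assuming a
# fraction eu of the unknown-eligibility cases were in fact eligible
overall_response_rate = I / (I + R + NC + O + eu * UE)    # ~0.48

# Unadjusted response rate: interviews over all sampled addresses
unadjusted_response_rate = I / TS                         # ~0.41

print(f"eu = {eu:.0%}, overall = {overall_response_rate:.0%}, "
      f"unadjusted = {unadjusted_response_rate:.0%}")
```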
6.2 Outcomes and response for FRS sample
There were 164 valid addresses sampled from the Family Resources Survey (FRS). Opt-out letters were sent to these addresses, leading to opt outs from 9 addresses. These addresses were removed from the sample, and a total of 155 addresses were issued to interviewers, who sent advance letters before starting their calls.
The overall response rate for the FRS sample was 42 per cent (Table A.4). This figure reflects the proportion of productive interviews across all eligible addresses. The full fieldwork outcomes are shown in Table A.3. Table A.4 then presents various response metrics for the FRS sample, showing trend data since the 2017 survey.
6.3 Analyses relating to the change of survey mode
Introduction
As described in Section 5.3, interviews were conducted in 2022 via three different modes, which had originally been adopted due to restrictions on face-to-face interviewing in 2021 in response to the COVID-19 pandemic. The three modes of interview were: face-to-face interviewing (where Government guidance permitted); telephone interviewing (with the respondent using single-use showcards, or viewing the showcards online); and Microsoft Teams interviewing (with the respondent viewing the interviewer’s survey script on their own computer, tablet, or other device, and choosing response codes from the screen for questions that would ordinarily use a showcard).
A ‘knock-to-nudge’ approach was used, whereby interviewers visited sampled addresses and invited parents to take part in the interview via one of these three modes. This design differs from previous waves in the Childcare and Early Years Survey of Parents series, for which interviews have been conducted wholly face-to-face, but it is the same as the design adopted for the 2021 wave.
The distribution of interviews by survey mode is shown in Table A.5. Just over two-thirds of interviews (70%) were conducted face-to-face (whether in the home or outside in gardens), 28 per cent were conducted by telephone, and very few (1%) were conducted by Microsoft Teams. A higher percentage of interviews were conducted face-to-face in 2022 than in 2021 (70% in 2022 compared with 39% in 2021).
One consequence of the change of survey design for 2021 and 2022 is that the overall response rate to the survey fell from 51 per cent in 2018, and 62 per cent in 2019, to 38 per cent in 2021 and 48 per cent in 2022, with the unadjusted response rate falling from 49 per cent in both 2018 and 2019, to 32 per cent in 2021 and 41 per cent in 2022 (for further details on the calculation of the survey response rates, see Section 6.1).
This decline means that there is greater scope for non-response bias to affect survey estimates in 2021 and 2022, compared to 2019 and earlier survey years. Non-response bias refers to biases that arise when those participating in a survey differ from those who do not participate in ways that are associated with the survey measures. It should be noted, however, that recent research has found only a weak association between response rates and levels of non-response bias, and that weighting can reduce (but not eliminate) non-response bias[9].
A second consequence of this change of design is that the survey modes themselves may influence the answers that parents provide. Such ‘mode effects’ can also introduce bias into survey estimates. Past research has shown that mode effects are most pronounced between interviewer-administered and non-interviewer-administered modes; for attitudinal rather than factual questions; and for questions of a sensitive nature[10].
It is not possible to provide direct assessments of either the extent of non-response bias, or the influence of mode effects, for the 2021 or the 2022 survey waves. A direct assessment of non-response bias would have required a wholly face-to-face survey to be run in parallel with the 2021 and 2022 waves, with survey estimates compared between the face-to-face only surveys and the mixed-mode surveys. While survey estimates from 2022 can be compared with earlier survey waves, any changes observed may reflect ‘real’ changes among the population, whether due to gradual change over time or due to acute change in response to the COVID-19 pandemic, so such differences cannot be attributed to non-response bias alone.
A direct assessment of the influence of mode effects in the 2021 or 2022 waves would have required an experimental design, with each address randomly assigned to one of the three survey modes. In the absence of such a design, mode effects cannot be disentangled from selection effects, whereby those choosing one survey mode differ from those choosing another survey mode in ways that are associated with the survey measures.
In this section, we instead look for indirect evidence to understand the extent to which the 2022 wave may be subject to these biases.
Analyses of the sample profile
An indirect assessment of the scope for non-response bias can be obtained by comparing the profile of the issued sample with that of the achieved sample, for geo-demographic measures known to be related to key survey estimates. These geo-demographic measures must be available for the whole issued sample – that is, including those addresses at which interviews were not obtained – to enable the comparisons to be made.
Table A.6 shows, for the 2018, 2021 and 2022 survey waves, the profiles of the issued and (unweighted) achieved CBR samples for region, area deprivation, and rurality. The 2018 wave is used as the comparator as it is the most recent comparable wave to the 2021 and 2022 waves in terms of the survey population (children aged 0 to 14).
The relative bias, defined as the percentage point difference between the issued and achieved samples for a given subcategory, is also shown. The relative bias describes the extent to which certain regions and area types are over- or under-represented in the achieved samples compared to the issued samples. The ‘absolute relative bias’ has also been computed for each of the three variables. The absolute relative bias is the sum of the absolute values of the relative biases across subcategories and provides a measure of the overall discrepancy between the issued and achieved samples.
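As an illustration of these two measures, the sketch below (Python, using made-up percentage profiles and category labels rather than the survey figures) computes the relative bias for each subcategory and the absolute relative bias for one variable.

```python
# Minimal sketch: relative bias and absolute relative bias for one variable.
# The percentage profiles and categories below are hypothetical, not survey data.

issued = {"North": 30.0, "Midlands": 25.0, "South": 45.0}    # % of issued sample
achieved = {"North": 28.5, "Midlands": 25.8, "South": 45.7}  # % of achieved sample

# Relative bias: percentage point difference (achieved minus issued) per subcategory,
# so a positive value indicates over-representation in the achieved sample
relative_bias = {k: achieved[k] - issued[k] for k in issued}

# Absolute relative bias: sum of the absolute values of the relative biases
absolute_relative_bias = sum(abs(v) for v in relative_bias.values())

print(relative_bias)           # e.g. {'North': -1.5, 'Midlands': 0.8, 'South': 0.7}
print(absolute_relative_bias)  # ~3.0 percentage points
```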
The data in Table A.6 demonstrate a high degree of consistency between the 2018, 2021 and 2022 survey waves. For region, the relative biases range between -2.8 and 1.9 percentage points for 2018, between -1.9 and 1.8 percentage points for 2021, and between -1.2 and 1.7 percentage points for 2022, with the absolute relative biases being 9.8, 9.2 and 6.3 percentage points for 2018, 2021 and 2022 respectively. For area deprivation, the relative biases range between -1.1 and 1.0 percentage points for 2018, between -1.6 and 0.9 percentage points for 2021, and between -0.5 and 1.0 percentage points for 2022, with the absolute relative biases being 3.3, 3.5 and 1.9 percentage points respectively. For rurality, the relative biases are -0.4 and 0.4 percentage points for 2018, -0.1 and 0.1 percentage points for 2021, and -0.6 and 0.6 percentage points for 2022, with the absolute relative biases being 0.8, 0.2 and 1.2 percentage points respectively.
Whilst there are some differences between the 2018 and 2022 response profiles, they are not large and do not provide sufficient evidence that the 2022 wave is subject to greater levels of non-response bias than the 2018 wave. It is, of course, possible that the 2022 wave is subject to greater levels of bias on variables other than region, area deprivation, and rurality, but the absence of such variables for the full issued samples means that these comparisons cannot be made.
[9] See e.g.: Patrick Sturgis et al., ‘Fieldwork Effort, Response Rate, and the Distribution of Survey Outcomes’, Public Opinion Quarterly 81, no. 2 (2017): 523–42, https://doi.org/10.1093/poq/nfw055; J. O. Teitler, N. E. Reichman and S. Sprachman, ‘Costs and Benefits of Improving Response Rates for a Hard-to-Reach Population’, Public Opinion Quarterly 67, no. 1 (2003): 126–38, https://doi.org/10.1086/346011.
[10] See e.g.: Roger Tourangeau, ‘Mixing Modes: Tradeoffs Among Coverage, Nonresponse, and Measurement Error’, in Total Survey Error in Practice, ed. Paul P. Biemer et al. (Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017), 115–32, https://doi.org/10.1002/9781119041702.ch6.