Methodology

Childcare and early years survey of parents

Published

1. Survey background and history

1.1 Aims of the study

This Technical Report describes the methodology of the 2022 survey in the Childcare and Early Years Survey of Parents (CEYSP) series. 

The survey was funded by the Department for Education (DfE) and carried out by Ipsos. The study has two key objectives. The first is to provide salient, up-to-date information on parents’ use of childcare and early years provision, and their views and experiences. The second is to continue the time series statistics – which have now been running for over ten years – on issues covered throughout the survey series. With respect to both of these objectives, the study aims to provide information to help monitor effectively the progress of policies in the area of childcare and early years education.

1.2 Time series of the Childcare and early years survey of parents

The current study is the 14th in the CEYSP series, which began in 2004. The time series in fact stretches back further than 2004, as the current series is the merger of two survey series that preceded it: i) the Survey of Parents of Three and Four Year Old Children and Their Use of Early Years Services, of which there were six waves between 1997 and 2004, and ii) the Parents' Demand For Childcare Survey, of which there were two waves, the first in 1999 and the second in 2001.

Previous waves of the CEYSP were conducted in 2004, 2007, 2008, 2009, 2010-11, 2011-12, 2012-13, 2014-15, 2017, 2018, 2019, 2020 and 2021. Fieldwork for the 2020 wave started in January 2020, but was forced to end in March 2020, after only around 1,300 interviews had been conducted, due to restrictions on face-to-face interviewing arising from the COVID-19 pandemic. To safeguard the 2021 survey against such restrictions, remote interviewing modes were employed in addition to face-to-face interviewing, namely: telephone interviewing, and interviewing via Microsoft Teams. These interviewing modes were all retained for the 2022 wave, fieldwork for which started in April 2022 and ended in February 2023.

Trend data for a number of questions extends back to the beginning of the time series, including the use of childcare by families and children, and parents’ perceptions of local childcare (the level of information about local childcare, and its availability, quality, and affordability). However, changes to the questionnaire over time mean that, in many instances, it is not possible to provide direct comparisons that extend to the beginning of the time series.

 

2. Overview of the study design

2.1 The sample

A total of 6,017 parents with children aged 0 to 14 in England were interviewed between April 2022 and February 2023. 

A probability sample of children aged 0 to 14 in England was drawn from the Child Benefit Register (CBR) maintained by Her Majesty’s Revenue & Customs (HMRC) which, given the high take-up of Child Benefit, provides very high coverage of dependent children in England. Interviews were sought with parents of these children. If the sampled child was no longer living at the address, an interview was sought with the current occupiers if they had a child aged 0 to 14; otherwise the address was deemed ineligible.

Prior to the 2019 survey, the sampling unit was the child rather than the address.[1] The sampling unit was changed to the address due to the increasing proportion of children found to have moved address during fieldwork (from 13% in 2010, to 22% in 2018).

A small additional sample of parents in England was drawn from respondents to the Family Resources Survey (FRS) commissioned by the Department for Work and Pensions, who had consented to be re-contacted for future research.[2]

 

[1] In cases where the sampled child had moved from the sampled address, the child was still considered eligible, and the interviewer attempted to trace the child to his or her new address to conduct an interview there.

[2] This was necessary because the eligibility criteria for Child Benefit changed in 2013, so that higher-income households (those where one or both partners earn £60,000 or more per year) ceased to gain financially from Child Benefit, resulting in them becoming disproportionately likely to be missing from the CBR. To avoid bias to survey estimates, higher-income households missing from the CBR were sampled from the FRS. For further details, see Department for Education (2017) Childcare and early years survey of parents: Sampling frames investigation, https://www.gov.uk/government/publications/childcare-and-early-years-survey-of-parents-sampling-frames

2.2 The interviews

Interviews were conducted via one of three modes: face-to-face in parents’ homes or gardens; by telephone; or by Microsoft Teams. The main respondent was a parent or guardian of the sampled child with main or shared responsibility for making childcare decisions, and in most cases (82%) was the child’s mother.

The study used an inclusive definition of childcare and early years provision. The respondent was asked to include any time their child was not with them (or their current or ex-spouse or partner), or at school. Ex-husbands/wives/partners were counted as a type of informal provider prior to the 2019 survey, but following the survey’s user consultation in 2018 were excluded from the definition of childcare from the 2019 survey wave onwards, for consistency with other national and international surveys about childcare.

The definition of childcare covered both informal childcare (for instance grandparents, an older sibling, or a friend or neighbour) and formal childcare (for instance nursery schools and classes, childminders, and before- and after-school clubs). Further detail about this definition is provided in Section 2.3.

In families with two or more children, broad questions were asked about the childcare arrangements of all children, before more detailed questions were asked about the randomly sampled child (henceforth referred to as ‘the selected child’).

Because childcare arrangements vary between school term-time and school holidays, most of the questions focused on the most recent term-time week (the ‘reference week’). Separate questions were asked about the use of childcare during times of the year when school children were on holiday.

The interview covered the following topic areas:

  • For all families:
    o use of childcare and early years provision in the reference term-time week, school holidays periods (if applicable) and last year;
    o payments made for childcare and early years provision (for providers used in the last week), the use of free hours of childcare, the use of Tax-Free Childcare, and the use of tax credits and subsidies;
    o sources of information about, and attitudes towards, childcare and early years provision in the local area; and
    o if applicable, reasons for not using childcare.

 

  • For one randomly selected child:
    o a detailed record of child attendance in the reference week;
    o reasons for using and views of the main formal provider; and
    o the home learning environment.

 

  • Classification details:
    o household composition;
    o parents’ education and work details; and
    o provider details.


Across all addresses eligible for interview – that is, all addresses containing a child aged 0 to 14 – an interview was achieved at 48 per cent of addresses. For further details on response see Chapter 6.

2.3 Defining childcare

The study uses an inclusive definition of childcare and early years provision. Parents were asked to include any time that the child was not with a resident parent or a resident parent’s current or ex-partner, or at school. 

This definition is consistent with the 2019-2021 survey waves but deviates from that used in earlier waves of the survey by excluding ex-partners. Prior to the 2019 wave, the definition of childcare and early years provision was “any time that the child was not with a resident parent or a resident parent’s current partner, or at school”. This change brought the definition of childcare in line with other research about childcare.

In order to remind parents to include all possible people or organisations that may have looked after their children, they were shown the following list:

Formal providers

  • nursery school
  • nursery class attached to a primary or infants’ school
  • reception class at a primary or infants’ school
  • special day school or nursery or unit for children with special educational needs
  • day nursery
  • playgroup or pre-school
  • childminder
  • nanny or au pair
  • baby-sitter who came to home
  • breakfast club
  • after-school clubs and activities
  • holiday club/scheme

Informal providers[3]

  • the child’s grandparent(s)
  • the child’s older brother/sister
  • another relative
  • a friend or neighbour

Other

  • other nursery education provider
  • other childcare provider

Definitions of main formal providers for pre-school children

A short definition for each of the main formal providers for pre-school children is included below. The definitions were not provided to parents in the survey but are included here to help the reader differentiate between the most common categories.

  • nursery school – this is a school in its own right, with most children aged 3 to 5. Sessions normally run for 2½ to 3 hours in the morning and/or afternoon;
  • nursery class attached to a primary or infants' school – often a separate unit within the school, with those in the nursery class aged 3 or 4. Sessions normally run for 2½ to 3 hours in the morning and/or afternoon;
  • reception class at a primary or infants' school – this usually provides full-time education during normal school hours, and most children in the reception class are aged 4 or 5;
  • special day school/nursery or unit for children with special educational needs – a nursery, school or unit for children with special educational needs;
  • day nursery – this runs for the whole working day and may be closed for a few weeks in summer, if at all. It may be run by employers, private companies, community/voluntary groups or the Local Authority, and can take children from a few months old to 5 years old; and
  • playgroup or pre-school – the term ‘pre-school’ is commonly used to describe many types of nursery education. For the purposes of this survey, pre-school is used to describe a type of playgroup. This service is often run by a community/voluntary group, parents themselves, or privately. Sessions last up to 4 hours.

Providers were classified according to the service for which they were being used by parents, for example day-care or early years education. Thus, providers were classified and referred to in analysis according to terminology such as ‘nursery schools’ and ‘day nurseries’, rather than as forms of integrated provision such as Children’s Centres. Reception classes were only included as childcare if attendance was not compulsory schooling, that is, if the child was aged under 5 (or had turned 5 during the current school term).

This inclusive definition of childcare means that parents will have included time when their child was visiting friends or family, at a sport or leisure activity, and so on. The term early years provision covers both ‘care’ for young children and ‘early years education’.

Deciding on the correct classification of the ‘type’ of provider can be complicated for parents. The classifications given by parents were therefore checked with the providers themselves in a separate telephone survey and edited where necessary. Detail about the provider edits can be found in Section 7.3.

 


[3] Prior to the 2019 wave, the list of informal providers included “my ex-husband/wife/partner/the child’s other parent who does not live in this household”.

2.4 Interpreting the data in the Official Statistics Report and Tables

The majority of findings in the Official Statistics Report and Tables relate to one of two levels of analysis:

  • the family level (e.g. proportions of families paying for childcare, parents’ perceptions of childcare provision in their local areas); and
  • the (selected) child level (e.g. parents’ views on the provision received by the selected child from their main childcare provider).

However, for most of the analyses carried out for the data tables contained in Chapters 9 and 10, the data was restructured so that ‘all children’ in the household formed the base of analysis. This was done to increase the sample size and enable the exploration of packages of childcare received by children in more detail. This approach is not used for other analyses because much more data was collected on the selected child than on the other children in the household.

Weights

A ‘family-level’ weight is applied to family-level analyses. This ensures the findings are representative of families in England with a child aged 0 to 14 in receipt of Child Benefit.

A ‘child-level’ weight is applied to analyses carried out at the (selected) child-level. This weight combines the family-level weight with an adjustment for the probability of the child being randomly selected for the more detailed questions.
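
As a minimal sketch of how such a weight can be constructed (illustrative only; the names are hypothetical and the production weighting is more involved): if one child is selected at random from the k children aged 0 to 14 in a family, the probability of selection is 1/k, so the child-level weight scales the family-level weight by the inverse of that probability.

    def child_level_weight(family_weight: float, n_children: int) -> float:
        # One child is chosen at random from the n_children eligible
        # children, so the selection probability is 1 / n_children.
        selection_probability = 1.0 / n_children
        return family_weight / selection_probability

For example, the selected child in a two-child family would carry twice the family-level weight, compensating for their 1-in-2 chance of selection.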

Bases

The data tables show the total number of cases that were analysed (e.g. different types of families, income groups). The total base figures include all the eligible cases (in other words all respondents, or all respondents who were asked the question where it was not asked of all) but, usually, exclude cases with missing data (codes for ‘don’t know’ or ‘not answered’). Thus, while the base description may be the same across several data tables, the base sizes may differ slightly due to the exclusion of cases with missing data.

Unweighted bases are presented throughout. This is the actual number of parents that responded to a given question for family-level questions, and the actual number of children about whom a response was provided by parents for child-level questions.

In some tables, the column or row bases do not add up to the total base size. This is because some categories might not be included in the table, either because the corresponding numbers are too small to be of interest or the categories are otherwise not useful for the purposes of analysis.

Where a base size contains fewer than 50 respondents, particular care must be taken, as confidence intervals around these estimates will be very wide, and hence the results should be treated with some caution. In tables, figures with base sizes below 50 are denoted by square brackets [ ].

Percentages

Due to rounding, percentage figures may not add up to 100 per cent. This also applies to questions where more than one answer can be given (‘multi-coded’ questions).

Continuous data

Some Official Statistics Tables summarise parents’ responses to questions eliciting continuous data; for instance, the number of hours of childcare used per week (see Table 1.12 in the Official Statistics Tables) and the amount paid for childcare per week (see Table 4.6 in the Official Statistics Tables). For these data, both median and mean values are included in the data tables, but median values are reported in the Report as they are less influenced by extreme values and are therefore considered a more appropriate measure of central tendency. It should be noted that ‘outlier’ values, those identified as being either impossible or suspect responses, were removed from the dataset prior to data analysis. As such, the extreme values which remain can be considered as valid responses which lie at the far ends of their respective distributions.

Where significance testing has been conducted on continuous data, this has been carried out using mean values rather than medians. This is because the continuous data is subject to ‘rounding’ by respondents, for instance where payments are rounded to the nearest ten pounds, or where times are rounded to the nearest half hour; this rounding can result in similar median values where the underlying distributions are quite different, and testing for differences between means is more appropriate in these instances as it takes the entire distribution into account. It should be noted however that although mean values are more influenced than median values by extreme values, significance testing on mean values accounts for extreme values by widening the standard error of the mean, which is used in the calculation of the test statistic, thereby reducing the likelihood of finding a significant result. As such, it is not the case that a significant change will be reported between years or between sub-groups simply due to a small number of respondents reporting an extreme value on a continuous variable.
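
The effect of respondent rounding on medians can be seen in a small, hypothetical example: after rounding to the nearest ten pounds, two groups of weekly payments share the same median, while their means still reflect the different underlying distributions.

    import statistics

    # Hypothetical weekly childcare payments (£), as rounded by respondents.
    group_a = [40, 50, 50, 50, 60]
    group_b = [30, 40, 50, 90, 120]

    print(statistics.median(group_a), statistics.mean(group_a))  # median 50, mean 50
    print(statistics.median(group_b), statistics.mean(group_b))  # median 50, mean 66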

Statistical significance

Where reported survey results have differed by sub-group, or by survey year, the difference has been tested for significance using the complex samples module in SPSS 28.0 and found to be statistically significant at the 95 per cent confidence level or above. This means that the chance that the difference is due to sampling error, rather than reflecting a real difference between the sub-groups or survey years, is 1 in 20 or less. The complex samples module allows us to take into account sample stratification, clustering, and weighting to correct for non-response bias when conducting significance testing. This means that ‘false positive’ results to significance tests (in other words, interpreting a difference as real when it is not) are far less likely than if the standard formulae were used.
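
The survey’s tests are run in SPSS’s complex samples module. Purely as an illustration of the principle that the weights feed into the standard error, and hence into the test statistic, a simplified weighted z-test between two sub-groups might look like the sketch below (which, unlike the complex samples module, ignores stratification and clustering).

    import math

    def weighted_mean_se(y, w):
        # Weighted mean with a linearization-style standard error.
        n, wsum = len(y), sum(w)
        mean = sum(wi * yi for wi, yi in zip(w, y)) / wsum
        resid = [wi * (yi - mean) for wi, yi in zip(w, y)]
        var = (n / (n - 1)) * sum(r * r for r in resid) / wsum ** 2
        return mean, math.sqrt(var)

    def significant_difference(y1, w1, y2, w2):
        # Two-sided test at the 95 per cent confidence level.
        m1, se1 = weighted_mean_se(y1, w1)
        m2, se2 = weighted_mean_se(y2, w2)
        z = (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)
        return z, abs(z) > 1.96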

Symbols in tables

The symbols below have been used in the tables and they denote the following:

n/a      this category does not apply (given the base of the table)

[ ]        percentage based on fewer than 50 respondents (unweighted)

*         percentage value of less than 0.5 but greater than zero[4]

0         percentage value of zero

 


[4] Where a cell in a table contains only an asterisk, this denotes a percentage value of less than 0.5 but greater than zero. Asterisks are also shown immediately to the left of certain figures in tables that present the results of logistic regression models. In these cases, asterisks denote the level of significance of the odds ratios in the table as follows: *p<0.05, **p<0.01, ***p<0.001.

3. Questionnaire development

3.1 Changes to the questionnaire

A small number of changes were made to the 2022 Childcare and Early Years Survey of Parents questionnaire (compared to the 2021 survey wave) to reflect changes in policy, and to improve the quality of data captured. 

The questionnaire changes are described in the bullet points that follow, in which question names are provided in brackets.

New questions

Questions about where after school activities and holiday childcare took place 

  • Two questions were included to measure where after school clubs or activities took place during term time (ASSchWhere) and where holiday clubs took place (HolWhere). 

A question about hiring a childminder 

  • (CMAgency) This question asked whether a named provider was hired through a Childminder Agency. This question was asked in 2017 but rotated out of the questionnaire in 2018.

Questions about government funded entitlement to early education (free hours)

  • A new question was added that asked parents with a 2 year old who were aware of the free hours of childcare for 2 year olds scheme, but had not used it, whether they thought they were eligible for the scheme (Free2yoEg). Parents who thought they were eligible were asked why their child had not received the free hours (Free2yoWhy).
  • Two questions were added which asked parents about the impact of the free hours they received for any of their children. The first (F30ImpHr) explored whether the hours had given them more flexibility in the number of hours they could work. The second (F30ImpFx) explored whether the hours had given them more flexibility in which jobs they could do. Both questions were asked in the 2019 wave but had been rotated out in the 2020 wave.

A question about awareness of childcare element of Universal Credit

  • (UCawarChild) This question asked those who were aware of Universal Credit if they were also aware that they may be able to claim back up to 85% of their childcare costs if they were claiming Universal Credit.

Questions about application and use of Tax-free Childcare 

  • (TaxFCSApUse) This question asked if parents had applied for or used a Tax-Free Childcare account. This question replaced TaxFCSAw and TaxFCSAp.
  • (TaxFCSWhat) This question asked parents who had paid a provider from their Tax-free Childcare account what they had used Tax-Free Childcare for.

A question about the steps parents took before choosing their main provider

  • (Diligence) This question asked parents using a formal provider what steps they took before choosing their main provider to look after their child. 

Questions about use of Children’s centre 

  • A question was added which asked all parents whether they had received any help or support for their child from their local Children’s centre, by visiting them in person, virtually, or by being visited at home (HLCCenSupp). Parents whose child had received any help or support were asked how often they received this (HLCCenSuppOf).

Question about the factors of importance for high quality childcare and early years education 

  • (QualFact) This question asked parents to identify factors that they thought were important for high quality childcare and early years education for pre-school children. This question was asked in the 2017 wave but rotated out of the questionnaire in the 2018 wave.

 

Deleted questions

The following questions were removed from the 2022 survey:

Question about when children would be entering Reception

  • (RecStrt) This question asked parents of children who were not at Reception in the reference week, and who were born between 01 September 2016 and 31 August 2017 and so would ordinarily enter Reception in September 2021, when they expected their child to enter Reception. This question was added for the 2021 wave.

Questions about paying providers using an online Payment Provider

  • A question which asked if a named provider had to be paid using an online Payment Provider or whether payment could be made in another way was removed for this wave (PayOnline). The follow-up question, asking which online Payment Provider was used to pay the named provider (PayOnlineWh) was also removed. Both questions had been included for the 2021 wave.

Questions about application and use of Tax-free childcare

  • TaxFCSAw, which provided some information about Tax-free Childcare and asked parents if they were aware of the scheme before their survey interview took place was removed. Similarly, TaxFCSAp, which asked parents if they or their partner (where applicable) had applied for or used a Tax-free Childcare account was also removed. Both questions were added in the 2017 wave. 

Questions about taking the selected child to a Children’s centre 

  • A question exploring whether anyone at home ever took the selected child to a Children’s centre, a Family Centre, a Sure Start Centre, or a Family Hub (HLCCen) was removed as was the follow-up question measuring how often someone at home took the selected child to a Children’s centre (HLCCenO). Both questions were asked in the 2020 wave but before that were last asked in 2017.

Question about helpfulness of sources of information about childcare / out of school activities 

  • (Chelp) This question asked parents about how helpful they found the information they had obtained from different sources about childcare / nursery education / out of school activities in the local area. This question was last asked in 2021. 

Questions about males in the early years workforce

  • (CCMales1) To help DfE understand parental attitudes towards men in the early years workforce, this question asked parents the extent to which they supported or opposed male staff caring for children at formal childcare providers.

 

Amended questions

There were some sections of the questionnaire that had substantive amendments, as detailed below.

Questions containing references to ‘furlough’

  • Due to the cessation of the furlough scheme (first introduced during the coronavirus pandemic), references to ‘furlough’ were removed from the lists of response options and/or interviewer instructions at several questions (ActA, ActAO, ActASp, WFHome and WFHomeSp).

Questions on Tax-Free childcare 

  • The routing at five questions (TaxFCSWhat, TaxFCSApNWh, TaxFCSAy, SuppImp and SuppImpS) was amended to reflect the response codes that were included for the new question TaxFCSApUse to ensure consistency in the responses at all these questions.
  • The description of the Tax-Free Childcare scheme was amended at TaxFCSAy to match the wording used by HMRC to describe the scheme.

Questions with references to Sure Start 

  • Due to the brand ‘Sure Start’ becoming less prevalent, references to ‘Sure Start Centres’ were removed from the answer options at two questions (LearnInfo and Sources).

Question on home learning

  • (ProvSupa) Interviewer feedback for the 2021 survey suggested the question wording, including the examples ‘reading stories together, singing songs, or playing at recognising letters and words’, was not appropriate when asked of providers of school-age children. For 2022 onwards, the examples are only read out to parents if the selected child is aged 0-4. 

Question on the impact of the COVID-19 pandemic

  • At the question about the extent to which the COVID-19 pandemic has harmed their child’s social and educational development (CVTotImp), the routing was changed from ‘ask all’ to asking only parents of children aged 4 and above.

3.2 Questionnaire content

The questionnaire was structured as follows: 

  • Household composition, and identification of the selected child.
  • Household’s use of childcare in the reference week, and the past year.
  • Household’s awareness and use of the 15 and 30 hours offers.
  • Household’s childcare costs, for providers used in the reference week.
  • Household’s receipt of Tax Credits, awareness of Universal Credit, and awareness and use of Tax-Free Childcare.
  • The impact of support received on employment and family finances.
  • Selected child’s attendance record (the day-by-day ‘diary’ of childcare use in the reference week).
  • Selected child’s experiences at their main provider, reasons for choosing the main provider, and reasons for the patterns of provision used.
  • Selected child’s use of childcare during school holiday periods.
  • Selected child’s home learning environment.
  • Respondent’s attitudes towards childcare in the local area.
  • Respondent’s and child(ren)’s demographic characteristics.
  • Respondent’s employment history.
  • Consent to data linkage; consent for follow-up research; contact details for pre-school providers.
  • Partner’s employment status and details.

4. Sampling

4.1 Survey population

The survey population was children aged 0 to 14 living in private residential accommodation[5] in England. 

This survey population mirrors previous survey waves, with the exception of the 2019 wave, for which the survey population was children aged 0 to 4, in order to allow more in-depth analyses of the childcare-related experiences of pre-school children. Future waves may shift the focus back to children aged 0 to 4, dependent on analytical and policy requirements.

Although the sampling units were children, the interview for each selected child was conducted with an appropriate adult (defined as an adult within the child’s household with ‘main or shared responsibility for making decisions about the child’s childcare’).


 

[5] Children living in communal establishments such as children’s homes are excluded.

4.2 Sample frames

Up until the 2014-15 wave of the Childcare and Early Years Survey of Parents, children were sampled exclusively from the Child Benefit Register (CBR). This was a highly efficient approach given the near universal take-up of Child Benefit among parents of children aged 0 to 14 in England, and hence the near total coverage of the sample population by the sample frame. In 2013 this coverage was reduced by the introduction of the High Income Child Benefit Charge (HICBC), the effect of which has been to decrease the likelihood that children born since 2013 to higher-income parents (those where one or both partners earn £60,000 or more per year) are listed on the CBR.

DfE commissioned Ipsos to write a report investigating the potential impact of this change, and to explore potential solutions.[6] The report found that persisting with the CBR as the sole sampling frame would introduce non-coverage bias that would reduce both the accuracy of survey estimates, and the ability to compare changes in estimates over time. The report recommended that a sample of children should be drawn from the CBR, as per previous survey waves, but should be supplemented with a sample of respondents to the Family Resources Survey (FRS) who had agreed to be recontacted for the purposes of future research. The FRS respondents were those with a child (or children) who had not made a claim for Child Benefit, or who had made a claim for Child Benefit but had subsequently opted-out of receiving Child Benefit due to having a high income. These families would have little or no chance of being selected in the CBR sample.

Since the 2017 wave, the survey has used a dual-frame approach, sampling from both the CBR and the FRS.

Selection of the CBR sample

The sample of children from the CBR was selected by HMRC from all children in England that would be aged 0 to 14 on the first day of fieldwork (25 April 2022) for whom a Child Benefit claim had been made.

A small number of children were excluded from the sampling frame before selection took place. The exclusions were made according to HMRC procedures and reasons included: death of a child, cases where the child has been taken into care or put up for adoption, cases where the child does not live at the same address as the claimant and cases where there has been any correspondence by the recipient with the Child Benefit Centre (because the reason for correspondence cannot be ascertained and may be sensitive).

The sample of children was selected in two stages: selection of Primary Sampling Units (PSUs) and selection of individual children within each PSU. Ipsos randomly selected 502 PSUs, plus an additional 502 PSUs that could be used as a reserve sample if needed. The PSUs were based on postcode sectors. HMRC provided a full list of postcode sectors in England with, for each sector, a count of the number of children aged 0 to 14 on Child Benefit records, rounded to the nearest five. In order to reduce clustering, postcode sectors containing fewer than 250 children were grouped with neighbouring postcode sectors. The list of grouped postcode sectors was stratified by region, population density, proportion of households in managerial, professional and intermediate occupations, and proportion of the population that were unemployed. A size measure was calculated for each PSU based on the population of children, and sample points were selected with probability proportionate to this size measure.
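
The report does not spell out the selection algorithm, but selection with probability proportionate to size (PPS) from a stratified, sorted list is conventionally done systematically. The sketch below illustrates the idea; the random start and the data layout are assumptions, not the documented procedure.

    import random

    def pps_systematic(psus, n_select, seed=2022):
        # psus: list of (psu_id, size) pairs, pre-sorted by the stratifiers,
        # where size is the count of children aged 0 to 14 in the PSU.
        # Assumes no single PSU's size exceeds the sampling interval.
        total = sum(size for _, size in psus)
        interval = total / n_select
        start = random.Random(seed).uniform(0, interval)
        targets = [start + i * interval for i in range(n_select)]
        selected, cumulative, i = [], 0.0, 0
        for psu_id, size in psus:
            cumulative += size
            while i < len(targets) and targets[i] < cumulative:
                selected.append(psu_id)
                i += 1
        return selected

Sorting the list by the stratifiers before taking a systematic sample is what makes the selection implicitly stratified.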

At the second stage, prior to the start of fieldwork, 30 children per PSU were selected randomly by HMRC from the selected PSUs (both the 502 main PSUs and the 502 reserve PSUs). A list of all eligible children aged 0 to 14 in each PSU was created and sorted by postcode and Child Benefit number, to help avoid children from the same household being selected.

The main sample of 502 PSUs x 30 children meant that 15,060 addresses were available from which to achieve the target of 6,000 interviews.

The sample was issued in three separate ‘tranches’ across the fieldwork period. For Tranche 3, as response during fieldwork was higher than initially assumed, a random subset of 17 PSUs were dropped, such that 485 (rather than 502) PSUs were issued in total. This enabled us to reach the target number of interviews with a higher response rate than would otherwise have been the case.

Each sampled child was the ‘selected child’ about whom the detailed child-specific questions in the interview were asked. In certain instances, the questionnaire script re-selected this child, from among all children in the household, at the start of the interview. This occurred in the following instances (a simplified sketch of the re-selection rule follows the list):

  1. Where the selected child was no longer living at the sampled address (for instance, where the family had moved address without informing HMRC, meaning that their address listed on the CBR was out of date). In these instances, as long as there was a child aged 0 to 14 living at the address at the point that the interviewer made contact, the interviewer sought an interview with one of the parents of this child (or children), with the questionnaire script randomly choosing one child aged 0 to 14 in the household to become the selected child (where there was more than one). This occurred at 393 households. Prior to the 2019 wave, the interviewer was instead required to attempt to trace the selected child to his or her new address and conduct the interview there.
  2. Where the selected child was living at the address, and a child had been born into the household between the date that the sample was drawn and the date of the interview. As there was approximately a gap of two months between the sample being drawn and the start of fieldwork, children that were born during this time were not represented in the sample of children drawn from Child Benefit records. To account for this, in households where a child had been born since the sample was drawn, the questionnaire script re-selected the child that was to be the focus of the child-specific questions from all children (including the newborn child) in the household. This re-selection occurred at 289 households.
  3. Where the selected child was living at the address, and where the number of children in the household (excluding children born since the sample was drawn) was found to be greater than the number of children living in the household according to Child Benefit records, and where Child Benefit was received by some but not all children in the household. In these instances, there was a (non-newborn) child in the household that did not have a chance of selection at the sampling stage, as said child was not on the Child Benefit database. Such instances may reflect a child in the household for whom the parents had decided not to claim, an error on the Child Benefit database, or a family event such as adoption. In these households, the questionnaire script re-selected the child that was to be the focus of the child-specific questions from all children in the household. This re-selection occurred at 73 households.
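
In outline, the three cases reduce to a single rule: re-select at random from all children aged 0 to 14 in the household whenever the household’s current composition differs from the sampling information. A simplified sketch (variable names are illustrative, not those of the actual script):

    import random

    def choose_selected_child(sampled_child_present, newborn_since_sampling,
                              child_missing_from_cbr, children_in_household,
                              originally_sampled_child):
        # Cases 1-3 above: the sampled child has moved away, a child has
        # been born since the sample was drawn, or a (non-newborn) child
        # in the household had no chance of selection via the CBR.
        if (not sampled_child_present or newborn_since_sampling
                or child_missing_from_cbr):
            return random.choice(children_in_household)
        return originally_sampled_child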

Selection of the FRS sample

The sample of FRS respondents (n = 164) was selected by DWP from households who had taken part in the 2020/21 FRS survey, who had consented to be re-contacted for the purposes of further research at the time of their FRS interview, and who had a child (or children) born since 7th January 2013 (the date that the High Income Child Benefit Charge was introduced) for whom they either:

  • had not made a claim for Child Benefit, or 
  • had opted out of receiving Child Benefit payments due to having a high income. 

Those opting out were included to ensure that all children in FRS households that could not be covered via the CBR were captured. Specifically, while families opting out of receiving Child Benefit remain listed on the CBR and are therefore available to be sampled, their contact details are more likely to be out of date as these families have little reason to inform HMRC of a change of address if they move, and as a result, they are likely to be under-represented in the CBR achieved sample. The FRS sample therefore boosts the sample of households that have opted-out of Child Benefit as they would otherwise be under-represented in a sample selected from the CBR alone. 


 

[6] https://www.gov.uk/government/publications/childcare-and-early-years-survey-of-parents-sampling-frames

5. Fieldwork

5.1 Briefings

Prior to the start of fieldwork, all interviewers received a full briefing by watching video modules recorded by the research team. These modules were available online, and interviewers could complete them in their own time. After each module, interviewers had to answer a series of ‘quiz’ questions, to ensure they understood the content. After the briefings had been completed, a series of online “Q&A” drop-in sessions were hosted, in which interviewers could ask questions of the field and research teams.

The briefings covered: an introduction to the study and its aims (including the importance of the survey, along with examples of how the survey data has been used to develop and understand the impact of childcare and early years policies); an explanation of the samples and procedures for contacting sampled parents, and receiving contact from sampled parents; detailed descriptions of the survey materials; full definitions of formal and informal childcare; a segment on how interviewers should determine the appropriate survey mode (i.e. telephone, Microsoft Teams or face-to-face) and schedule the interview; and a comprehensive section on the survey questionnaire (including differences in administration between the survey modes). 

A new module was introduced for 2022, which highlighted the main changes made to the survey compared to 2021. This emphasised face-to-face interviewing as the preferred mode of data collection and drew attention to the change in the age range of children sampled for the FRS addresses[7] as well as new questions that had been included in the questionnaire for this wave of the survey.

The briefing sessions covered all salient points on conducting research with parents and how best to secure participation, as well as potential sensitivities within the survey, and practical information for interviewers.


 

[7]All children born since 7th January 2013 (the date that the High Income Child Benefit Charge was introduced) were eligible for the FRS sample, therefore children aged 0 to 9 years were eligible in 2022, whereas the previous year only children aged 0 to 8 years were eligible.

5.2 Contact procedures

Letters and leaflet

An ‘opt-out’ letter introducing the survey was mailed prior to the start of fieldwork, in April 2022, addressed to (for the CBR sample) the named benefit recipient of the child sampled from the CBR, and (for the FRS sample) the adult who had taken part in the FRS survey and had consented to be recontacted for further research.

The opt-out letter described the survey and the remote interviewing modes available, including how each mode would work in practice. It contained information about Coronavirus protocols followed in the survey, including that the interviewer would not attempt to complete an interview in-home unless government advice allowed it.

The letter provided details about how the household could opt-out of the survey, should they not wish to participate. Opt-out methods included an online ‘participant portal’ through which parents could either opt-out of the survey or request a specific (remote) mode of interview. Parents who requested a mode of interview via the portal were asked to provide their telephone number (in the case of a telephone interview) as well as their email address (in the case of a Microsoft Teams interview). All households, except those opting out, were issued for interview. 

Interviewers sent a separate ‘advance letter’ to each household in their assignment shortly before making their calls. This letter let parents know they should shortly expect a visit from an interviewer. Enclosed with the advance letter was a ‘survey leaflet’, which provided further details about the study.

Interviewer visits

For the CBR sample, interviewers were provided with the selected child’s name, address, and the name of the person in the household listed as the recipient of Child Benefit for that child. An interview could be conducted with an adult with ‘main or shared responsibility for making decisions about childcare for the selected child’. This adult did not have to be the Child Benefit recipient.

In cases where the selected child had moved from the sampled address, interviewers sought to determine whether a child aged 0 to 14 currently lived at the address. If so, the address was deemed to be eligible, and the interviewer introduced the survey to the current residents, who would not have received any advance communications about the survey. Interviewers then sought to conduct an interview with a parent of the child (or children) aged 0 to 14 at the address. If the interviewer was unable to identify whether a child aged 0 to 14 lived at the address (for instance, where the current residents refused to provide this information), the address was deemed to be of unknown eligibility, and no interview was sought. If the interviewer determined that no child aged 0 to 14 lived at the address, the address was deemed to be ineligible, and no interview was sought.[8]
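
Expressed as a decision rule, this classification (which feeds the NE and UE outcome categories reported in Chapter 6) might be sketched as follows; the function name and return labels are illustrative.

    def classify_moved_address(child_0_to_14_at_address):
        # child_0_to_14_at_address is True, False, or None where the
        # interviewer could not establish this (e.g. information refused).
        if child_0_to_14_at_address is True:
            return "eligible: seek interview with current residents"
        if child_0_to_14_at_address is False:
            return "not eligible (NE)"
        return "unknown eligibility (UE)"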

For the FRS sample, interviewers were provided with the FRS respondent’s name, address, and telephone number (if available). An interview could be conducted with an adult with ‘main or shared responsibility for making decisions about childcare for the child or children aged 0 to 9 (born since 7th January 2013) in the household’.

For both the CBR and FRS sample a ‘knock-to-nudge’ approach was used, which involved interviewers calling on previously written-to households and encouraging them to participate in the survey. If a parent had requested a telephone or Microsoft Teams interview via the participant portal, the interviewer would make contact by telephone first to arrange the interview. However, if a parent had not been in touch via the participant portal, the interviewer visited the sampled address to try and arrange an interview and collect the relevant contact information from the parent to do this.

Interviewers were provided with an ‘Impact Card’ to use, at their discretion, to maximise co-operation across all issued addresses. This Impact Card laid out some of the ways in which the data from the survey series has been used to improve the services the Government provides to parents.

For both the CBR and FRS samples, an interview only took place where the responsible adult consented to be interviewed.


 

[8] These procedures mark a deviation from those followed prior to 2019, when, if the selected child had moved from the sampled address, the interviewer attempted to trace the child’s new address and conduct an interview there. Due to the rising proportion of children found to have moved from the address listed on the CBR (from 13% of addresses issued to interviewers in the 2010 survey wave, to 22% in the 2018 survey wave), combined with the difficulties of tracing new addresses in the field, from the 2019 wave onwards, the sampling unit became the address, rather than the child.

5.3 Interviewing

Interviews were conducted via three different modes, from which parents could choose. All three modes were conducted by Ipsos’s face-to-face interviewer panel:

  • Face-to-face interviewing – in-home or outside (where Government guidance permitted); 
  • Telephone interviewing (with the respondent using single-use showcards, or viewing the showcards online); and 
  • Microsoft Teams interviewing (with the respondent viewing the interviewer’s survey script on their own computer, tablet, or other device, and choosing response codes from the screen for questions that would ordinarily use a showcard).

Face-to-face in-home interviewing was carried out more frequently in 2022 as Coronavirus restrictions eased. The return to in-home data collection in the first few months of 2022 fieldwork involved strict protocols and conditions. These included interviewers wearing a facemask throughout in-home interviews, and considerations such as interviewers only entering properties once an initial risk assessment had been undertaken (such as availability of suitable room/space, ventilation) and then undertaking a continuous agile risk assessment once in-home. Following revised guidance from the Market Research Society (MRS), risk assessments and wearing facemasks continued for in-home interviewing but protocols relating to mask wearing for doorstep interviews and to ventilation for in-home interviews were relaxed. 

Telephone interviewing took place with the respondent using either single-use paper showcards or looking at showcards online. For the telephone single-use showcards mode, interviewers left the parent with a set of the single-use showcards, and then telephoned to do the interview while directing them as to which showcard to look at for each question. For the telephone online showcards mode, interviewers provided parents, in advance of the appointment, with the online address (URL) where the showcards were hosted. At the start of the telephone interview the interviewer directed the parent to bring the showcards up on a screen, for instance a PC, laptop, tablet, or even a smartphone, and to look at them on screen during the interview.

For Microsoft Teams interviewing, the parent needed to have internet access and the interviewer sent them an email invitation link to join the interview at the scheduled time. If the parent did not have a Microsoft Teams account, they could join as a ‘guest’. After initially showing themselves to the respondent at the start of the interview, the interviewer then deactivated the camera so the parent was not visible during the interview. The interviewer shared their computer screen with the parent over Microsoft Teams, so the respondent could see the possible answer options at relevant questions and choose the appropriate response.

In situations where respondents could not speak English well enough to complete the interview, interviewers could ask another household member to assist as an interpreter, or the interview could be passed to another interviewer in the area who spoke the respondent’s language. If translation was not possible, the interview was not carried out.

The interviews lasted a mean of 45 minutes and a median of 41 minutes. The length of the interview varied by survey mode, as follows:

  • Face-to-face in-home (where Government guidance permitted): mean of 45 minutes, median of 41 minutes;
  • Face-to-face outdoors (e.g. in gardens, where Government guidance permitted): mean of 34 minutes, median of 32 minutes;
  • Telephone: mean of 48 minutes, median of 44 minutes;
  • Microsoft Teams: mean of 55 minutes, median of 52 minutes.

6. Response

6.1 Outcomes and response for CBR sample

There were 15,060 children sampled from the Child Benefit Register (CBR) – 30 for each of the 502 Primary Sampling Units (PSUs). As detailed in Section 4 (Sampling), given that response during fieldwork was higher than initially assumed, a random 17 PSUs (510 children) were dropped from the Tranche 3 sample, enabling us to reach the target number of interviews with a higher response rate than would otherwise have been the case. Therefore, a total of 14,550 addresses in 485 PSUs were sent opt-out letters, leading to opt-outs from 785 addresses. These addresses were removed from the sample, and a total of 13,765 addresses were issued to interviewers, who sent advance letters before starting their calls.

The overall response rate for the CBR sample was 48 per cent (shown in Table A.2). This figure reflects the proportion of productive interviews across all eligible addresses. The full fieldwork outcomes are shown in Table A.1. Table A.2 then presents various response metrics for the CBR sample, showing trend data since the 2009 survey.

The overall response rate increased from 38 per cent in 2021 to 48 per cent in 2022, but remained below the pre-pandemic level of 62 per cent in 2019. The response rate for 2022 was expected to be lower than in 2019: although many of the COVID-19 restrictions had been lifted by the time fieldwork commenced in April 2022, there was still some uncertainty around the virus, which may have made parents more cautious about participating in the study. We made allowances for this by having interviewers follow strict protocols and conditions within the participant’s home (as described in Section 5.3). We also allowed for remote interviewing (by telephone and Microsoft Teams) if parents did not feel comfortable with a face-to-face interview, and these remote modes were more likely to suffer from broken appointments and poorer response.

 

Table A.1: Survey response figures, Child Benefit Register sample

Detailed outcomes | N | Outcome category | % of sampled | % of issued
PSUs initially sampled | 502 | | |
PSUs subsequently dropped from Tranche 3 (given better than anticipated response) | 17 | | |
PSUs issued | 485 | | |
Addresses issued per PSU | 30 | | |
Total addresses issued, of which… | 14,550 | TS | 100% |
    Opting out | 785 | R | 4% |
Addresses issued to interviewers, of which… | 13,765 | | 95% | 100%
    Contact with responsible adult, of which… | 11,400 | | 78% | 83%
        Child at address, of which… | 9,959 | | 68% | 72%
            Refusal | 3,556 | R | 24% | 26%
            Other unproductive | 447 | O | 3% | 3%
            Interview – lone parent | 1,534 | I | 11% | 11%
            Interview – partner interview in person | 0 | I | 0% | 0%
            Interview – partner interview by proxy | 3,212 | I | 22% | 23%
            Interview – unproductive partner | 1,210 | I | 8% | 9%
        No child at address | 1,322 | NE | 9% | 10%
        Unknown if child at address | 119 | UE | 1% | 1%
    No contact with responsible adult, of which… | 1,741 | | 12% | 13%
        Child at address | 231 | NC | 2% | 2%
        Unknown if child at address | 1,510 | UE | 10% | 11%
    Deadwood (address vacant, demolished, derelict, non-residential, or holiday home) | 624 | NE | 4% | 4%

Summary of outcomes | N | Calculation | % of sampled | % of issued
Total sample (TS) | 14,550 | TS | 100% |
Eligible sample (ES) | 12,604 | TS-NE | 87% | 92%
Interview (I) | 5,956 | I | 41% | 43%
Non-contact (NC) | 231 | NC | 2% | 2%
Refusal (R) | 4,341 | R | 30% | 26%
Other non-response (O) | 447 | O | 3% | 3%
Unknown eligibility (UE) | 1,629 | UE | 11% | 12%
Not eligible (NE) | 1,946 | NE | 13% | 14%

Note: From the 2019 survey onwards, the sampling unit for the CBR sample was the address. In cases where the selected child had moved from the sampled address, interviewers determined whether a child aged 0 to 14 currently lived at the address. If so, the address was considered eligible, and an interview was sought with a parent of the child (or children) aged 0 to 14 at the address; if not, the address was deemed ineligible. Prior to the 2019 survey, the sampling unit was the child. In cases where the selected child had moved from the sampled address, the child was still considered eligible, and the interviewer attempted to trace the child to his or her new address and conduct an interview there.

Table A.2: Survey response metrics, Child Benefit Register sample

All figures are percentages.

Response metric | Calculation | 2009 | 2010-11 | 2011-12 | 2012-13 | 2014-15 | 2017 | 2018 | 2019 | 2021 | 2022
Overall response rate | I / (I+R+NC+O+(eu*UE)) | 52 | 57 | 58 | 59 | 57 | 52 | 51 | 62 | 38 | 48
Eligibility rate (eu) | (I+NC+R+O) / (I+NC+R+O+NE) | 98 | 97 | 98 | 97 | 97 | 97 | 97 | 79 | 84 | 85
Unadjusted response rate | I / TS | 51 | 55 | 57 | 57 | 55 | 50 | 49 | 49 | 32 | 41
Co-operation rate | I / (I+R+O) | 67 | 76 | 72 | 73 | 70 | 68 | 71 | 73 | 53 | 60
Contact rate | (I+R+O) / (I+R+NC+O+(eu*UE)) | 77 | 77 | 80 | 80 | 80 | 75 | 72 | 90 | 77 | 87
Refusal rate | R / (I+R+NC+O+(eu*UE)) | 24 | 18 | 22 | 21 | 23 | 24 | 22 | 23 | 37 | 34

Notes:

The response categories used in the calculations of the response metrics are as follows: Total sample (TS); Interview (I); Non-contact (NC); Refusal (R); Other non-response (O); Unknown eligibility (UE); Not eligible (NE); Eligibility rate (eu). Details of the specific fieldwork outcomes contained within these response categories can be found in Table A.1.
From the 2019 survey onwards, the sampling unit for the CBR sample was the address. In cases where the selected child had moved from the sampled address, interviewers determined whether a child aged 0 to 14 currently lived at the address. If so, the address was considered eligible, and an interview was sought with a parent of the child (or children) aged 0 to 14 at the address; if not, the address was deemed ineligible. Prior to the 2019 survey, the sampling unit was the child. In cases where the selected child had moved from the sampled address, the child was still considered eligible, and the interviewer attempted to trace the child to his or her new address and conduct an interview there.
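
As a worked illustration, the headline metrics in Table A.2 can be reproduced from the 2022 summary counts in Table A.1. This is a sketch of the published formulae, not code used in producing the tables.

    def response_metrics(I, R, NC, O, UE, NE, TS):
        # Formulae as given in Table A.2.
        eu = (I + NC + R + O) / (I + NC + R + O + NE)   # eligibility rate
        adjusted_eligible = I + R + NC + O + eu * UE
        return {
            "eligibility_rate": eu,                           # 0.85
            "overall_response_rate": I / adjusted_eligible,   # 0.48
            "unadjusted_response_rate": I / TS,               # 0.41
            "contact_rate": (I + R + O) / adjusted_eligible,  # 0.87
        }

    # 2022 Child Benefit Register sample, summary counts from Table A.1:
    metrics = response_metrics(I=5956, R=4341, NC=231, O=447,
                               UE=1629, NE=1946, TS=14550)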

6.2 Outcomes and response for FRS sample

There were 164 valid addresses sampled from the Family Resources Survey (FRS). Opt-out letters were sent to these addresses, leading to opt-outs from 9 addresses. These addresses were removed from the sample, and a total of 155 addresses were issued to interviewers, who sent advance letters before starting their calls.

The overall response rate for the FRS sample was 42 per cent (Table A.4). This figure reflects the proportion of productive interviews across all eligible addresses. The full fieldwork outcomes are shown in Table A.3. Table A.4 then presents various response metrics for the FRS sample, showing trend data since the 2017 survey.

Table A.3: Survey response figures, Family Resources Survey sample

Detailed outcomes | N | Outcome category | % of sampled | % of issued
Total addresses sampled, of which… | 164 | TS | 100% |
    Opting out | 9 | R | 5% |
Total addresses issued, of which… | 155 | | 92% | 100%
    No child at address | 16 | NE | 10% | 10%
    Respondent moved | 13 | NC | 8% | 8%
    Contact made, but not with sampled parent | 2 | NC | 1% | 1%
    Refusal | 42 | R | 26% | 27%
    Other unproductive | 4 | O | 2% | 3%
    Unknown eligibility | 17 | UE | 10% | 11%
    Interview – lone parent | 2 | I | 1% | 1%
    Interview – partner interview in person | 0 | I | 0% | 0%
    Interview – partner interview by proxy | 50 | I | 30% | 32%
    Interview – unproductive partner | 9 | I | 5% | 6%

Summary of outcomes | N | Calculation | % of sampled | % of issued
Total sample (TS) | 164 | TS | 100% |
Eligible sample (ES) | 148 | TS-NE | 90% | 95%
Interview (I) | 61 | I | 37% | 39%
Non-contact (NC) | 15 | NC | 9% | 10%
Refusal (R) | 51 | R | 31% | 27%
Other non-response (O) | 4 | O | 2% | 3%
Unknown eligibility (UE) | 17 | UE | 10% | 11%
Not eligible (NE) | 16 | NE | 10% | 10%

Table A.4: Survey response metrics, Family Resources Survey sample

All figures are percentages.

Response metric | Calculation | 2017 | 2018 | 2019 | 2021 | 2022
Overall response rate | I / (I+R+NC+O+(eu*UE)) | 39 | 52 | 52 | 35 | 42
Eligibility rate (eu) | (I+NC+R+O) / (I+NC+R+O+NE) | 100 | 100 | 95 | 89 | 89
Unadjusted response rate | I / TS | 39 | 52 | 50 | 31 | 37
Co-operation rate | I / (I+R+O) | 55 | 66 | 75 | 52 | 53
Contact rate | (I+R+O) / (I+R+NC+O+(eu*UE)) | 69 | 78 | 69 | 67 | 79
Refusal rate | R / (I+R+NC+O+(eu*UE)) | 31 | 23 | 14 | 26 | 35

Notes:

The response categories used in the calculations of the response metrics are as follows: Total sample (TS); Interview (I); Non-contact (NC); Refusal (R); Other non-response (O); Unknown eligibility (UE); Not eligible (NE); Eligibility rate (eu). Details of the specific fieldwork outcomes contained within these response categories can be found in Table A.3.

6.3 Analyses relating to the change of survey mode

Introduction

As described in Section 5.3, interviews were conducted in 2022 via three different modes which had originally been adopted due to restrictions on face-to-face interviewing in 2021 in response to the COVID-19 pandemic. The three different modes of interview comprised: face-to-face interviewing (where Government guidance permitted); telephone interviewing (with the respondent using single-use showcards, or viewing the showcards online); and Microsoft Teams interviewing (with the respondent viewing the interviewer’s survey script on their own computer, tablet, or other device, and choosing response codes from the screen for questions that would ordinarily use a showcard).

A ‘knock-to-nudge’ approach was used, whereby interviewers visited sampled addresses and invited parents to take part in the interview via one of these three modes. This design differs from previous waves in the Childcare and Early Years Survey of Parents series, for which interviews have been conducted wholly face-to-face, but it is the same as the design adopted for the 2021 wave. 

The distribution of interviews by survey mode is shown in Table A.5. Just over two thirds of interviews (70%) were conducted face-to-face (whether in-home or outside in gardens), 28 per cent were conducted by telephone, and very few (1%) were conducted via Microsoft Teams. A higher percentage of interviews was conducted face-to-face in 2022 than in 2021 (70% in 2022 compared to 39% in 2021). 

One consequence of the change of survey design for 2021 and 2022 is that the overall response rate to the survey fell from 51 per cent in 2018, and 62 per cent in 2019, to 38 per cent in 2021 and 48 per cent in 2022, with the unadjusted response rate falling from 49 per cent in both 2018 and 2019, to 32 per cent in 2021 and 41 per cent in 2022 (for further details on the calculation of the survey response rates, see Section 6.1). 

This decline means that there is greater scope for non-response bias to affect survey estimates in 2021 and 2022, compared to 2019 and earlier survey years. Non-response bias refers to biases that arise when those participating in a survey differ from those who do not participate in ways that are associated with the survey measures. It should be noted, however, that recent research has found only a weak association between response rates and levels of non-response bias, and that weighting can reduce (but not eliminate) non-response bias[9].

A second consequence of this change of design is that the survey modes themselves may influence the answers that parents provide. Such ‘mode effects’ can also introduce bias into survey estimates. Past research has shown that mode effects are most pronounced between interviewer administered versus non-interviewer administered modes; for attitudinal rather than factual questions; and for questions of a sensitive nature[10].

It is not possible to provide direct assessments of either the extent of non-response bias, or the influence of mode effects, for the 2021 or the 2022 survey waves. A direct assessment of non-response bias would have required a wholly face-to-face survey to be run in parallel with the 2021 and 2022 waves, with survey estimates compared between the face-to-face only surveys and the mixed mode surveys. While survey estimates from 2022 can be compared with earlier survey waves, any changes observed may reflect ‘real’ changes among the population (whether gradual change over time, or acute change in response to the COVID-19 pandemic) rather than the effects of non-response bias. 

A direct assessment of the influence of mode effects in the 2021 or 2022 waves would have required an experimental design, with each address randomly assigned to one of the three survey modes. In the absence of such a design, mode effects cannot be disentangled from selection effects, whereby those choosing one survey mode differ from those choosing another survey mode in ways that are associated with the survey measures.

In this section, we instead look for indirect evidence to understand the extent to which the 2022 wave may be subject to these biases.

Analyses of the sample profile

An indirect assessment of the scope for non-response bias can be obtained by comparing the profile of the issued sample with that of the achieved sample, for geo-demographic measures known to be related to key survey estimates. These geo-demographic measures must be available for the whole issued sample – that is, including those addresses at which interviews were not obtained – to enable the comparisons to be made.

Table A.6 shows, for the 2018, 2021 and 2022 survey waves, the profiles of the issued and (unweighted) achieved CBR samples for region, area deprivation, and rurality. The 2018 wave is used as the comparator as it is the most recent comparable wave to the 2021 and 2022 waves in terms of the survey population (children aged 0 to 14).

The relative bias, defined as the percentage point difference between the issued and achieved sample for a given subcategory, is also shown. The relative bias describes the extent to which certain regions and area types are over- or under-represented in the achieved samples compared to the issued samples. The ‘absolute relative bias’ has also been computed for each of the three variables. The absolute relative bias is the sum of the absolute values of the relative biases and provides a measure of the overall discrepancy between the issued and achieved samples.
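As a concrete illustration, the sketch below reproduces the 2022 rurality figures from Table A.6; the function and variable names are illustrative.

```python
# Minimal sketch: relative bias (achieved minus issued, in percentage
# points) and absolute relative bias (sum of absolute values), using
# the 2022 rurality percentages from Table A.6.

def bias_profile(issued, achieved):
    relative = {k: achieved[k] - issued[k] for k in issued}
    absolute = sum(abs(v) for v in relative.values())
    return relative, absolute

issued_2022 = {"Rural": 12.9, "Urban": 87.1}
achieved_2022 = {"Rural": 12.3, "Urban": 87.7}
relative, absolute = bias_profile(issued_2022, achieved_2022)
print(relative)   # approximately {'Rural': -0.6, 'Urban': 0.6}
print(absolute)   # approximately 1.2, as in Table A.6
```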

The data in Table A.6 demonstrate a high degree of consistency between the 2018, 2021 and 2022 survey waves. For region, the relative biases range between -2.8 and 1.9 percentage points for 2018, between -1.9 and 1.8 percentage points for 2021, and between -1.2 and 1.7 percentage points for 2022, with the absolute relative biases being 9.8, 9.2 and 6.3 percentage points for 2018, 2021 and 2022 respectively. For area deprivation, the relative biases range between -1.1 and 1.0 percentage points for 2018, between -1.6 and 0.9 percentage points for 2021, and between -0.5 and 1.0 percentage points for 2022, with the absolute relative biases being 3.3, 3.5 and 1.9 percentage points respectively. And for rurality, the relative biases are -0.4 and 0.4 percentage points for 2018, -0.1 and 0.1 percentage points for 2021, and -0.6 and 0.6 percentage points for 2022, with the absolute relative biases being 0.8, 0.2 and 1.2 percentage points respectively.

Whilst there are some differences between the 2018 and 2022 response profiles, they are not large and do not provide sufficient evidence that the 2022 wave is subject to greater levels of non-response bias than the 2018 wave. It is of course possible that the 2022 wave is subject to greater levels of bias on variables other than region, area deprivation, and rurality, but the absence of such variables for the full issued samples means that these comparisons cannot be made.

 

[9] See e.g.: Sturgis, P. et al. (2017). Fieldwork effort, response rate, and the distribution of survey outcomes. Public Opinion Quarterly, 81(2), 523–542. https://doi.org/10.1093/poq/nfw055; Teitler, J. O., Reichman, N. E., & Sprachman, S. (2003). Costs and benefits of improving response rates for a hard-to-reach population. Public Opinion Quarterly, 67(1), 126–138. https://doi.org/10.1086/346011

[10] See e.g.: Tourangeau, R. (2017). Mixing modes: Tradeoffs among coverage, nonresponse, and measurement error. In P. P. Biemer et al. (Eds.), Total Survey Error in Practice (pp. 115–132). Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781119041702.ch6

Table A.5: Achieved interviews, by mode of interview

| Mode of interview | % |
| --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 6,017 |
| Face-to-face | 70 |
| Telephone | 28 |
| Microsoft Teams | 1 |

Table A.6: Issued and achieved CBR sample profiles for region, area deprivation, and rurality, 2018, 2021 and 2022

| | 2018 issued (%) | 2018 achieved (%) | 2018 relative bias (ppts) | 2021 issued (%) | 2021 achieved (%) | 2021 relative bias (ppts) | 2022 issued (%) | 2022 achieved (%) | 2022 relative bias (ppts) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 11,539 | 5,877 | n/a | 18,340 | 5,892 | n/a | 14,550 | 5,956 | n/a |
| Region | | | | | | | | | |
| North East | 4.6 | 4.8 | 0.2 | 4.6 | 4.0 | -0.6 | 4.5 | 5.4 | 0.9 |
| North West | 13.5 | 14.6 | 1.0 | 13.7 | 14.6 | 0.9 | 14.0 | 15.7 | 1.7 |
| Yorkshire and the Humber | 10.0 | 11.9 | 1.9 | 10.1 | 9.6 | -0.5 | 9.9 | 10.3 | 0.4 |
| East Midlands | 8.5 | 7.8 | -0.7 | 8.6 | 7.7 | -0.9 | 8.9 | 8.8 | -0.1 |
| West Midlands | 11.0 | 11.4 | 0.4 | 11.3 | 9.3 | -1.9 | 11.3 | 10.8 | -0.5 |
| East of England | 11.2 | 12.5 | 1.4 | 11.2 | 13.0 | 1.8 | 11.1 | 11.3 | 0.2 |
| London | 16.1 | 13.3 | -2.8 | 15.7 | 17.3 | 1.6 | 15.3 | 14.1 | -1.2 |
| South East | 15.8 | 14.6 | -1.2 | 15.5 | 14.8 | -0.6 | 15.5 | 14.4 | -1.1 |
| South West | 9.2 | 9.1 | -0.1 | 9.3 | 9.6 | 0.3 | 9.5 | 9.3 | -0.2 |
| ABSOLUTE RELATIVE BIAS | | | 9.8 | | | 9.2 | | | 6.3 |
| Area deprivation | | | | | | | | | |
| 1st quintile – most deprived | 28.0 | 29.0 | 1.0 | 27.7 | 26.1 | -1.6 | 27.1 | 28.1 | 1.0 |
| 2nd quintile | 20.0 | 20.3 | 0.3 | 20.8 | 20.6 | -0.2 | 22.4 | 21.9 | -0.5 |
| 3rd quintile | 18.0 | 16.9 | -1.1 | 18.2 | 18.7 | 0.5 | 19.6 | 19.2 | -0.4 |
| 4th quintile | 17.9 | 17.4 | -0.5 | 17.6 | 18.0 | 0.4 | 15.3 | 15.3 | 0.0 |
| 5th quintile – least deprived | 16.1 | 16.5 | 0.4 | 15.7 | 16.6 | 0.9 | 15.6 | 15.6 | 0.0 |
| ABSOLUTE RELATIVE BIAS | | | 3.3 | | | 3.5 | | | 1.9 |
| Rurality | | | | | | | | | |
| Rural | 13.7 | 14.1 | 0.4 | 13.9 | 14.0 | 0.1 | 12.9 | 12.3 | -0.6 |
| Urban | 86.3 | 85.9 | -0.4 | 86.1 | 86.0 | -0.1 | 87.1 | 87.7 | 0.6 |
| ABSOLUTE RELATIVE BIAS | | | 0.8 | | | 0.2 | | | 1.2 |


 

7. Data processing

7.1 Coding and editing of the data

The survey script ensured that the correct routing was followed throughout the questionnaire and applied range checks, which prevented invalid values from being entered. It also included consistency checks, which prompted interviewers to check answers that were inconsistent with information provided earlier in the interview. These checks allowed interviewers to clarify and query any data discrepancies directly with the respondent and were used extensively throughout the questionnaire.
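The script logic itself is not published. Purely as an illustration of the two kinds of check described above, a hard range check and a soft consistency check might look like the following sketch (the ages and the rule are invented for illustration):

```python
# Hypothetical sketch of CAPI-style checks; not the actual Ipsos rules.

def range_check(child_age):
    """Hard check: invalid values cannot be entered at all."""
    if not 0 <= child_age <= 14:
        raise ValueError("Out of range: the survey covers ages 0 to 14")

def consistency_check(child_age, attends_reception_class):
    """Soft check: prompt the interviewer to query an unlikely answer
    with the respondent rather than rejecting it outright."""
    if attends_reception_class and child_age > 5:
        return "Please check: reception class is unusual at this age"
    return None
```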

The data collected during interviews was coded and edited. The main task was the back-coding of ‘other’ answers. This was carried out when over 10 per cent of respondents at a particular question gave an answer other than the pre-coded options; these answers were recorded verbatim during the interview and coded at the coding stage against the original list of pre-coded responses, sometimes supplemented by additional codes available only to coders.

Coding was completed by a team of Ipsos coders who were briefed on the survey. If the coder could not resolve a query, this was referred to the research team.

After the dataset was cleaned, the analysis file of question-based and derived variables was set up in SPSS, and all questions and answer codes were labelled.

7.2 Analysis and significance testing

Data tables showing survey results were generated in SPSS, and significance testing was undertaken using SPSS version 28. The complex samples module in SPSS was used to take into account the impact of stratification, clustering and non-response on the survey estimates. This means that ‘false positive’ results to significance tests (in other words, interpreting a difference as real when it is not) are far less likely than if the standard formulae were used.

7.3 Provider edits

Checks were carried out on respondents’ classifications of the pre-school childcare providers they used, in order to improve the accuracy of the classifications. During the main survey, parents were asked to classify the childcare providers they used for their children into types (for example nursery school, playgroup and so on). Given that some parents may have misclassified the pre-school providers they used, Ipsos contacted providers by telephone, where possible, and asked them to classify the type of provision they offered to children of different ages. Telephone interviews with providers were carried out in three separate batches: the first two during the fieldwork period, and the third immediately after fieldwork had finished.

The following provider types (as classified by parents) were contacted:

  • nursery school
  • nursery class attached to a primary or infants’ school
  • special day school or nursery or unit for children with special educational needs
  • playgroup or pre-school

As in 2021 (but not in previous survey waves), reception classes at primary or infants’ schools and day nurseries were removed from the scope of the provider checks, to help reduce the length of the main survey given the move to remote interviewing modes. This was deemed appropriate given that parents correctly categorise these provider types in the great majority of instances (e.g. a 92% accuracy rate for each of these provider types in the 2019 wave).

The process of checking providers started by extracting from the survey data the providers used and the parents’ classifications of them. This was only done in cases where parents had agreed to Ipsos contacting their providers. Each provider remained linked to the parent interview so that the two classifications could be compared and later merged with the parent interview data.

Ipsos received information on 970 providers from the interview data. Because different parents may have used the same provider, the contact information for a provider could appear more than once. Ipsos therefore de-duplicated the list of providers, both manually and automatically; 119 providers were duplicates and were removed from the checks.
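Purely as an illustration of the automatic part of this de-duplication, assuming providers were matched on fields such as name and postcode (the actual matching rule is not published):

```python
# Hypothetical sketch: keep the first occurrence of each provider,
# matching on a normalised (name, postcode) key. Fields are assumed.

def deduplicate(providers):
    seen, unique = set(), []
    for p in providers:
        key = (p["name"].strip().lower(),
               p["postcode"].replace(" ", "").upper())
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique
```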

A full list of 851 providers was generated, and telephone interviewers were briefed. Interviews with providers were approximately six minutes long, and covered the services provided and the age range of the children who attended each service. Interviews were achieved with 594 providers, which constitutes a response rate of 70 per cent.

The classification of pre-school providers was compared between the parent interviews and the provider check telephone interviews, and final classifications were derived by following pre-agreed editing rules. Table A.7 compares parents’ classification of providers with the final classification of providers after the edits had been carried out.

While these data illustrate the net change in provider classifications before and after the provider edits, they do not show the gross changes; that is, how exactly each provider as classified by parents is ultimately reclassified after the provider edits are complete. This is shown for those provider mentions which were subjected to the provider edits (i.e. where provider contact details were provided and an interview with the provider was sought) in Table A.8.

This table shows that parents were most accurate when categorising special day schools or nurseries or units for children with special educational needs (94% accuracy), followed by nursery classes (82% accuracy). Parents were least accurate where they classified a provider as a nursery school – this proved accurate in 57 per cent of cases, with 34 per cent of these classifications ultimately proving to be a day nursery, and six per cent a nursery class.
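As an illustration, the accuracy figures quoted above can be recomputed from the Table A.8 counts; the dictionary below is simply a container for the published counts for providers that parents classified as nursery schools.

```python
# Final classifications of the 456 providers parents called "nursery
# school" (from Table A.8).
final_counts = {
    "Nursery school": 259, "Nursery class": 26, "Reception class": 1,
    "Special day school/nursery": 1, "Day nursery": 157,
    "Playgroup or pre-school": 12,
}
total = sum(final_counts.values())  # 456
print(round(100 * final_counts["Nursery school"] / total))  # 57: per cent accurate
print(round(100 * final_counts["Day nursery"] / total))     # 34: per cent day nurseries
```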

Table A.7: Summary classification of providers before and after provider checks

| | Parents’ classification (%) | Final classification after all checks (%) |
| --- | --- | --- |
| Base: All formal institutional providers identified by parents for whom contact details were provided by parents | 970 | 970 |
| Nursery school | 47 | 34 |
| Nursery class attached to a primary or infants’ school | 32 | 30 |
| Reception class | n/a | 1 |
| Special day school or nursery or unit for children with SEN | 2 | 2 |
| Day nursery | n/a | 18 |
| Playgroup or pre-school | 19 | 15 |

Table A.8: Detailed classification of providers before and after provider checks. Parents’ classifications (bold) and final classifications (not bold)

| Classification | N | Per provider (%) | Of total (%) |
| --- | --- | --- | --- |
| **Nursery school** | 456 | 100 | 47 |
| Nursery school | 259 | 57 | 27 |
| Nursery class | 26 | 6 | 3 |
| Reception class | 1 | * | * |
| Special day school/nursery | 1 | * | * |
| Day nursery | 157 | 34 | 16 |
| Playgroup or pre-school | 12 | 3 | 1 |
| **Nursery class** | 314 | 100 | 32 |
| Nursery school | 31 | 10 | 3 |
| Nursery class | 259 | 82 | 27 |
| Reception class | 11 | 4 | 1 |
| Special day school/nursery | 0 | 0 | 0 |
| Day nursery | 9 | 3 | 1 |
| Playgroup or pre-school | 4 | 1 | * |
| **Special day school/nursery** | 18 | 100 | 2 |
| Nursery school | 0 | 0 | 0 |
| Nursery class | 0 | 0 | 0 |
| Reception class | 1 | 6 | * |
| Special day school/nursery | 17 | 94 | 2 |
| Day nursery | 0 | 0 | 0 |
| Playgroup or pre-school | 0 | 0 | 0 |
| **Playgroup or pre-school** | 182 | 100 | 19 |
| Nursery school | 35 | 19 | 4 |
| Nursery class | 4 | 2 | 0 |
| Reception class | 0 | 0 | 0 |
| Special day school/nursery | 0 | 0 | 0 |
| Day nursery | 11 | 6 | 1 |
| Playgroup or pre-school | 132 | 73 | 14 |
| GRAND TOTAL | 970 | | 100 |

7.4 Weighting

Summary of the weighting

The sample was selected from two sources: the main component was sampled from the Child Benefit Register (CBR), as in previous years of the survey, with an additional sample of respondents to the Family Resources Survey (FRS) who were identified as not receiving Child Benefit because of the introduction of the High Income Child Benefit Charge. These two components of the survey were weighted separately.

The sample is analysed at both the family and child level, and hence there are two final weights: a family weight for family-level analyses, and a child weight for analyses of data collected about the selected child.

Child Benefit sample: Family weights

Family selection weight

The Child Benefit sample was designed to be representative of the population of children (aged 14 or younger) of parents receiving Child Benefit, rather than the population of parents or families themselves. This design feature means that larger families are over-represented in the sample[11]. In addition, the sampling was designed so that the sample of children aged 0 to 4 was boosted by a factor of 2.4.

The first stage of the weighting for the family weights corrects for these design features by calculating the appropriate selection weights; these selection weights also correct for families for which the number of children on the sampling frame differed from the number of children found in the family at interview.

The family selection weight is the inverse of the family’s selection probability, so larger families are weighted down:

W1 = 1/Pr(F), where

Pr(F) = (# children not aged 0 to 4) + 2.4 × (# children aged 0 to 4)

The counts of the children were based on the sampling frame information but were adjusted up (or down) if more (or fewer) children were found in the family at interview – this adjustment was trimmed to reduce the variance of the final child weights. 

Family calibration weight

The next stage of the weighting adjusted the sample using calibration weighting, so that the weighted distribution for region and the number of children in the household at the family level matched the family-level Child Benefit counts, and the weighted distribution for age groups at the child level matched child-level Child Benefit counts (Table A.9). HMRC provided Ipsos with a breakdown of the sampling frame (before exclusions) for different variables at family and child level (see Tables A.9 and A.10). 

The family selection weights (W1) were used as the starting weights for the calibration weighting stage. 
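The report does not state which calibration algorithm was used; iterative proportional fitting (‘raking’) is one standard way of making weighted sample margins match control totals such as those in Table A.9, and the sketch below illustrates it under that assumption. The data structures are illustrative.

```python
# Minimal raking sketch (an assumed implementation, not necessarily the
# one used). Each sample record carries a starting weight "w" (here W1)
# and the calibration variables; `controls` maps each variable to its
# target population shares, e.g. {"region": {"London": 0.157, ...}, ...}.

def rake(sample, controls, n_iter=50):
    for _ in range(n_iter):
        for var, targets in controls.items():
            # Current weighted share of each category of this variable
            total = sum(r["w"] for r in sample)
            shares = {c: 0.0 for c in targets}
            for r in sample:
                shares[r[var]] += r["w"] / total
            # Scale weights so this variable's weighted shares hit the targets
            for r in sample:
                r["w"] *= targets[r[var]] / shares[r[var]]
    return sample
```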

Child Benefit sample: Child weights

Child selection weight

At each sampled address from the Child Benefit sample, a single child aged 0 to 14 was selected at random to be the focus of the detailed childcare section of the questionnaire. Children aged 0 to 4 were given a higher chance of selection (by a factor of 2.4) in order to boost the sample in that age range.

The child selection weight (W3) is the inverse of the child selection probabilities applied within each household:

W3 = 1/Pr(C), where

Pr(C) = 2.4 / [(# children not aged 0 to 4) + 2.4 × (# children aged 0 to 4)] if the child was aged 0 to 4, and

Pr(C) = 1 / [(# children not aged 0 to 4) + 2.4 × (# children aged 0 to 4)] if the child was not aged 0 to 4
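Taken together, the two selection weights can be sketched as follows; the function names are illustrative, and the 2.4 boost factor is as defined above.

```python
# Minimal sketch of the W1 and W3 selection weights defined above.

BOOST = 2.4  # over-sampling factor for children aged 0 to 4

def family_selection_weight(n_aged_0_to_4, n_other_children):
    """W1: inverse of the family's (relative) selection probability."""
    return 1.0 / (n_other_children + BOOST * n_aged_0_to_4)

def child_selection_weight(n_aged_0_to_4, n_other_children, child_aged_0_to_4):
    """W3: inverse of the selected child's within-family probability."""
    denominator = n_other_children + BOOST * n_aged_0_to_4
    pr_c = (BOOST if child_aged_0_to_4 else 1.0) / denominator
    return 1.0 / pr_c

# Example: a family with one 3-year-old and one 9-year-old
print(family_selection_weight(1, 1))        # 1 / 3.4
print(child_selection_weight(1, 1, True))   # 3.4 / 2.4
```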

Child calibration weight

The next stage was to produce calibration weights that adjusted the sample of selected children so that the weighted distributions for age/sex groups, region and number of children in the household matched child-level Child Benefit counts (Table A.10). The starting weights for the calibration stage (W4) were obtained by combining the family weight (W2) with the child selection weights (W3): W4 = W2 x W3.
 

FRS Sample: Family and child weights

Because the number of interviews carried out with the sample selected from the Family Resources Survey was relatively small (61), a complex weighting strategy was not appropriate. Instead, the child and family weights for the FRS sample were both set to be three times the corresponding mean value for the Child Benefit sample weights.

The weights for the two sample components were combined and re-scaled to have a mean of 1, so that the weights sum to the sample size. 
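A minimal sketch of this combination step, assuming a simple list representation of the weights:

```python
# FRS weights are set to three times the mean Child Benefit (CB) weight,
# then all weights are re-scaled to a mean of 1.

def combine_weights(cb_weights, n_frs_interviews):
    mean_cb = sum(cb_weights) / len(cb_weights)
    combined = cb_weights + [3 * mean_cb] * n_frs_interviews
    mean_all = sum(combined) / len(combined)
    return [w / mean_all for w in combined]  # mean 1; sums to the sample size
```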

Effective sample size

Disproportionate sampling and sample clustering usually result in a loss of precision for survey estimates. All else being equal, the more variable the weights, the greater the loss in precision.

The effect of the sample design on the precision of survey estimates is indicated by the effective sample size. The effective sample size measures the size of an (unweighted) simple random sample that would have provided the same precision as the design being implemented. The efficiency of a sample is given by the ratio of the effective sample size to the actual sample size.

The estimated ‘average’ effective sample size and sample efficiency were calculated for both weights (Table A.11). Note that this calculation includes only effects of the weighting; it does not include clustering effects, which will be question-specific. In addition, this is an ‘average’ effect for the weighting – the true effect will vary from question to question. These figures provide a guide to the average level of precision of child-level and family-level survey estimates.
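The report does not give the formula used, but the standard Kish approximation for the weighting-only effective sample size is (Σw)² / Σw²; the sketch below assumes that approximation.

```python
# Kish approximation (assumed): effective sample size and sample
# efficiency from the analysis weights alone, ignoring clustering.

def effective_sample_size(weights):
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / s2

def sample_efficiency(weights):
    # e.g. about 0.765 for the child weight in Table A.11
    return effective_sample_size(weights) / len(weights)
```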

Confidence intervals

Confidence intervals (at the 95% level) for key estimates in the survey are shown in Table A.12. The confidence intervals have been generated from standard errors calculated using complex samples formulae. 
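As an illustration, the interval in the first row of Table A.12 can be approximated from its design-based standard error; the 0.95 percentage point standard error below is inferred from the published bounds.

```python
# Minimal sketch: a 95 per cent confidence interval from an estimate
# and its (design-based) standard error.

def confidence_interval(estimate, standard_error, z=1.96):
    return (estimate - z * standard_error, estimate + z * standard_error)

# Use of any childcare by family: 70.32%, SE inferred as ~0.95 points
print(confidence_interval(70.32, 0.95))  # approximately (68.45, 72.19)
```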

 

[11] This follows from children in England having an equal chance of selection, meaning that a family with two children has twice the chance of having a child selected as a family with one child, a family with four children has four times the chance of having a child selected as a family with one child, and so on.

Table A.9: Control totals for the family calibration weights

| | Population (N) | Population (%) | Selection weight (W1) (%) | Final weight (W2) (%) |
| --- | --- | --- | --- | --- |
| Region (families) | | | | |
| North East | 250,737 | 4.6 | 5.4 | 4.6 |
| North West | 733,837 | 13.6 | 15.2 | 13.6 |
| Yorkshire and the Humber | 542,185 | 10.0 | 10.0 | 10.0 |
| East Midlands | 469,606 | 8.7 | 8.6 | 8.7 |
| West Midlands | 593,858 | 11.0 | 10.7 | 11.0 |
| East of England | 609,019 | 11.2 | 11.5 | 11.3 |
| London | 848,479 | 15.7 | 14.6 | 15.7 |
| South East | 862,071 | 15.9 | 14.8 | 15.9 |
| South West | 505,425 | 9.3 | 9.3 | 9.3 |
| TOTAL | 5,415,215 | | | |
| Children’s age (children) | | | | |
| 0-1 | 655,618 | 7.5 | 9.4 | 7.5 |
| 2-4 | 1,597,146 | 18.3 | 19.1 | 18.3 |
| 5-7 | 1,807,950 | 20.7 | 20.8 | 20.7 |
| 8-11 | 2,650,131 | 30.3 | 28.9 | 30.3 |
| 12-14 | 2,026,923 | 23.2 | 21.7 | 23.2 |
| TOTAL | 8,737,768 | | | |
| Number of children aged 0 to 14 in household (families) | | | | |
| 1 | 2,873,402 | 53.1 | 46.4 | 53.0 |
| 2 | 1,930,956 | 35.7 | 40.4 | 35.7 |
| 3 | 470,160 | 8.7 | 10.1 | 8.7 |
| 4+ | 140,697 | 2.6 | 3.1 | 2.6 |
| TOTAL | 5,415,215 | | | |

Table A.10: Control totals for the child calibration weights

| | Population (N) | Population (%) | Pre-calibration weight (W4) (%) | Final weight (%) |
| --- | --- | --- | --- | --- |
| Region (children) | | | | |
| North East | 401,744 | 4.6 | 4.5 | 4.6 |
| North West | 1,198,717 | 13.7 | 13.8 | 13.7 |
| Yorkshire and the Humber | 887,741 | 10.2 | 10.1 | 10.2 |
| East Midlands | 755,419 | 8.6 | 8.9 | 8.6 |
| West Midlands | 986,970 | 11.3 | 11.1 | 11.3 |
| East of England | 973,419 | 11.1 | 11.3 | 11.1 |
| London | 1,353,945 | 15.5 | 15.0 | 15.5 |
| South East | 1,366,812 | 15.6 | 15.9 | 15.6 |
| South West | 813,001 | 9.3 | 9.5 | 9.3 |
| TOTAL | 8,737,768 | | | |
| Selected child’s gender / age (children) | | | | |
| Males: 0-1 | 336,483 | 3.9 | 3.4 | 3.9 |
| Males: 2-4 | 818,848 | 9.4 | 8.8 | 9.4 |
| Males: 5-7 | 926,530 | 10.6 | 11.2 | 10.6 |
| Males: 8-11 | 1,357,729 | 15.5 | 16.4 | 15.5 |
| Males: 12-14 | 1,037,229 | 11.9 | 12.1 | 11.9 |
| Females: 0-1 | 319,135 | 3.7 | 3.2 | 3.7 |
| Females: 2-4 | 778,298 | 8.9 | 8.7 | 8.9 |
| Females: 5-7 | 881,420 | 10.1 | 11.0 | 10.1 |
| Females: 8-11 | 1,292,402 | 14.8 | 14.0 | 14.8 |
| Females: 12-14 | 989,694 | 11.3 | 11.3 | 11.3 |
| TOTAL | 8,737,768 | | | |
| Number of children in household (children) | | | | |
| 1 | 2,867,992 | 32.8 | 32.2 | 32.8 |
| 2 | 3,854,641 | 44.1 | 44.3 | 44.1 |
| 3 | 1,407,825 | 16.1 | 16.5 | 16.1 |
| 4+ | 607,310 | 7.0 | 7.0 | 7.0 |
| TOTAL | 8,737,768 | | | |

Table A.11: Effective sample size and weighting efficiency

| | All |
| --- | --- |
| Base: All cases | 6,017 |
| Child weight: effective sample size | 4,602 |
| Child weight: sample efficiency | 76.5% |
| Family weight: effective sample size | 3,885 |
| Family weight: sample efficiency | 64.6% |

Table A.12: Confidence intervals (95%) for key estimates

| | Estimate | Standard error | Lower | Upper | Unweighted base |
| --- | --- | --- | --- | --- | --- |
| Use of any childcare by family | 70.32% | 0.01 | 68.45% | 72.19% | 6,017 |
| Use of formal childcare by family | 57.71% | 0.01 | 55.75% | 59.66% | 6,017 |
| Use of informal childcare by family | 27.29% | 0.01 | 25.55% | 29.02% | 6,017 |
| Hours of childcare used (pre-school children) (mean) | 26.56 | 0.44 | 25.70 | 27.43 | 1,753 |
| Hours of childcare used (school-age children) (mean) | 9.58 | 0.34 | 8.91 | 10.25 | 1,722 |
| Weekly amount (£) paid for childcare (mean) | 66.31 | 2.51 | 61.39 | 71.24 | 2,310 |
| Use of holiday childcare | 40.81% | 0.01 | 38.43% | 43.20% | 4,920 |

Annexes

Appendix: Socio-demographic profile

Respondent characteristics

Gender of responding parent

As in previous surveys in the series, the majority of parents who responded to the survey were female (83%).

Age

The mean age of respondents was 39.8 years. Table B.1 shows the age bands of respondents by family type; respondents in couple families tended to be slightly older than lone parent respondents.

Table B.1: Age of respondent, by family type

| Age of respondent | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 4,483 | 1,534 | 6,017 |
| 20 and under | * | * | * |
| 21 to 30 | 9 | 18 | 11 |
| 31 to 40 | 42 | 43 | 42 |
| 41 to 50 | 41 | 30 | 38 |
| 51+ | 8 | 9 | 8 |
| Mean (years) | 40.3 | 38.3 | 39.8 |

 

Marital status

The majority of respondents (69%) were married or in a civil partnership and living with their husband/wife/civil partner. Around one in five (21%) were single and never married (including persons who were cohabiting) (Table B.2).

Table B.2: Marital status

| Marital status | All (%) |
| --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 6,017 |
| Married or in a civil partnership and living with husband/wife/civil partner | 69 |
| Single (never married) | 21 |
| Divorced / civil partnership dissolved | 5 |
| Married or in a civil partnership and separated from husband/wife/civil partner | 4 |
| Widowed | 1 |

Qualifications

Respondents in lone parent families tended to have lower qualifications than respondents in couple families (Table B.3). Lone parents were less likely to hold Honours and Masters degrees as their highest qualification than were respondents in couple families, and were more likely not to hold any academic qualifications.

Table B.3: Highest qualification, by family type

| Qualifications | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 4,427 | 1,510 | 5,937 |
| GCSE grade D-G/CSE grade 2-5/SCE O Grades (D-E)/SCE | 6 | 12 | 8 |
| GCSE grade A-C/GCE O-level passes/CSE grade 1/SCE O | 10 | 15 | 11 |
| GCE A-level/SCE Higher Grades (A-C) | 12 | 14 | 13 |
| Certificate of Higher Education | 7 | 9 | 7 |
| Foundation degree | 4 | 5 | 5 |
| Honours degree (e.g. BSc, BA, BEd) | 28 | 17 | 25 |
| Masters degree (e.g. MA, PGDip) | 16 | 6 | 13 |
| Doctorate (e.g. PhD) | 2 | 1 | 2 |
| Other academic qualifications | 1 | 1 | 1 |
| None | 14 | 22 | 16 |

Family characteristics

Size of the family

The median family size was four people. The smallest families comprised two people (i.e. one parent and one child), and the largest comprised 12 people.

Number of children aged 0 to 14 in the family

Just over half of families (52%) had one child aged 0 to 14, 36 per cent had two children, and 12 per cent had three or more children (Table B.4). Lone parents tended to have fewer children than couple families.

Table B.4: Number of children in the family, by family type

| Number of children | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 4,483 | 1,534 | 6,017 |
| 1 | 49 | 61 | 52 |
| 2 | 39 | 28 | 36 |
| 3+ | 12 | 11 | 12 |

Around one in five families (18%) had only pre-school children, 17 per cent had both pre-school and school-age children, and around two thirds (65%) had only school-age children (Table B.5).

Table B.5: Number of pre-school and school-age children in the family, by family type

| Age of children in family | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 4,483 | 1,534 | 6,017 |
| Only pre-school children (0 to 4 years) | 20 | 15 | 18 |
| Both pre-school and school-age children | 17 | 14 | 17 |
| Only school-age children (5 to 14 years) | 63 | 71 | 65 |

Family annual income

Table B.6 shows the family annual income (before tax). Lone parents tended to have lower family annual incomes than did couple families.

Table B.6: Annual family income, by family type

| Family annual income | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 3,796 | 1,317 | 5,113 |
| Up to £9,999 | 2 | 9 | 4 |
| £10,000 - £19,999 | 6 | 38 | 15 |
| £20,000 - £29,999 | 12 | 30 | 17 |
| £30,000 - £44,999 | 19 | 13 | 17 |
| £45,000 - £64,999 | 26 | 6 | 20 |
| £65,000 or more | 36 | 4 | 27 |

Family type and work status

Table B.7 shows family type and work status. Just over half of respondents were from couple families where both parents worked (53%), and a further 17 per cent were in couple families where one parent worked. In 12 per cent of families no-one was working (9% were non-working lone parent families and 3% were couple families where neither parent was in work).

Table B.7: Family work status

| Family work status | All (%) |
| --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 6,017 |
| Couple – both working | 53 |
| Couple – one working | 17 |
| Couple – neither working | 3 |
| Lone parent working | 18 |
| Lone parent not working | 9 |

Tenure

The tenure of respondents’ families is shown in Table B.8. Families were most likely to be buying their home with the help of a mortgage or loan (49%) or to be renting (41%). Most couple families were in the process of buying their home with the help of a mortgage or loan (59%), while most lone parents were renting (70%). 

Table B.8: Tenure status, by family type

| Tenure status | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 4,443 | 1,517 | 5,960 |
| Buying it with the help of a mortgage or loan | 59 | 20 | 49 |
| Rent it | 30 | 70 | 41 |
| Own it outright | 9 | 5 | 8 |
| Live rent-free (in relative’s/friend’s property) | 1 | 3 | 1 |
| Pay part rent and part mortgage (shared ownership) | 1 | 2 | 1 |

Selected child characteristics

Gender of selected child

Just over half of selected children were boys (52%), and just under half were girls (48%).

Age of selected child

The age of the selected child was spread across all age categories (Table B.9).

Table B.9: Age of selected child, by family type

| Age of selected child | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All child(ren) aged 0 to 14 | 4,483 | 1,534 | 6,017 |
| 0 to 2 | 14 | 11 | 13 |
| 3 to 4 | 13 | 12 | 13 |
| 5 to 7 | 22 | 20 | 21 |
| 8 to 11 | 29 | 33 | 30 |
| 12 to 14 | 22 | 25 | 23 |

Ethnic group of selected child

The majority of selected children in the survey were White British (65%) (Table B.10).

Table B.10: Ethnicity of selected child, by family type

| Ethnicity of selected child | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All child(ren) aged 0 to 14 | 4,465 | 1,519 | 5,984 |
| White | | | |
| White British | 65 | 65 | 65 |
| White Irish | * | * | * |
| Other White | 7 | 4 | 7 |
| Mixed | | | |
| White and Black Caribbean | 1 | 4 | 2 |
| White and Black African | 1 | 2 | 1 |
| White and Asian | 2 | 1 | 2 |
| Other mixed | 2 | 3 | 2 |
| Asian or Asian British | | | |
| Indian | 4 | 1 | 4 |
| Pakistani | 5 | 3 | 5 |
| Bangladeshi | 3 | 1 | 2 |
| Other Asian | 2 | 1 | 1 |
| Black or Black British | | | |
| Caribbean | 1 | 3 | 1 |
| African | 4 | 9 | 5 |
| Other Black | * | * | * |
| Chinese | 1 | * | * |
| Arab | 1 | 1 | 1 |
| Other | 1 | 1 | 1 |

Special educational needs and disabilities of selected child

Thirteen per cent of selected children had a special educational need[12], and eight per cent had a long-standing physical or mental impairment, illness or disability (Table B.11). 

Table B.11: Special educational needs or disabilities of selected child, by family type

| Special educational needs or disabilities of selected child | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All child(ren) aged 0 to 14 | 4,483 | 1,534 | 6,017 |
| Child has SEN | 11 | 18 | 13 |
| Child has long-standing physical or mental impairment, illness or disability | 7 | 11 | 8 |

Among children with a special educational need, 36 per cent had an Education, Health and Care plan or a Statement of special educational needs, and 27 per cent received SEN support (Table B.12). A further nine per cent received one of these (an Education, Health and Care plan/Statement of special educational needs, or SEN support) but the parent did not know which.

Table B.12: Support received by selected child with special educational needs, by family type

| Special educational needs | Couples (%) | Lone parents (%) | All (%) |
| --- | --- | --- | --- |
| Base: All child(ren) with a special educational need or other special needs | 411 | 239 | 650 |
| Child has Education, Health and Care plan or Statement of special educational needs | 37 | 35 | 36 |
| Child receives SEN support | 28 | 25 | 27 |
| Child receives one of the above but parent does not know which | 7 | 12 | 9 |
| Child does not receive any of these | 28 | 27 | 28 |


Region, area deprivation and rurality

Table B.13 shows the geographical spread of the surveyed families according to region.

Table B.13: Region

| Region | All (%) |
| --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 6,017 |
| North East | 5 |
| North West | 14 |
| Yorkshire and the Humber | 10 |
| East Midlands | 9 |
| West Midlands | 11 |
| East of England | 11 |
| London | 16 |
| South East | 16 |
| South West | 9 |

Interviewed families lived in a broad range of areas in terms of deprivation levels, as defined by the Index of Multiple Deprivation in England (Table B.14).

Table B.14: Area deprivation according to the Index of Multiple Deprivation

| Area deprivation | All (%) |
| --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 6,017 |
| 1st quintile – most deprived | 25 |
| 2nd quintile | 22 |
| 3rd quintile | 19 |
| 4th quintile | 16 |
| 5th quintile – least deprived | 18 |

Table B.15 shows that 87 per cent of families lived in urban areas, with the remaining 13 per cent living in rural areas.

Table B.15: Rurality

| Rurality | All (%) |
| --- | --- |
| Base: All families with child(ren) aged 0 to 14 | 6,017 |
| Rural | 13 |
| Urban | 87 |
| Urban - major conurbation | 40 |
| Urban - minor conurbation | 3 |
| Urban - city and town | 43 |
| Rural - town and fringe | 6 |
| Rural - town and fringe in a sparse setting | * |
| Rural - village and dispersed | 7 |
| Rural - village and dispersed in a sparse setting | * |

 

[12] The selected child was categorised as having a special educational need (or not) during the interview via the parent’s response to the question “Does [child’s name] have any special educational needs or other special needs? [yes/no/don’t know/refused]”

Help and support

Contact us

If you have a specific enquiry about Childcare and early years survey of parents statistics and data:

Early Years Analysis and Research

Email: EY.ANALYSISANDRESEARCH@education.gov.uk
Contact name: Rachel Murphy

Press office

If you have a media enquiry:

Telephone: 020 7783 8300

Public enquiries

If you have a general enquiry about the Department for Education (DfE) or education:

Telephone: 037 0000 2288

Opening times:
Monday to Friday from 9.30am to 5pm (excluding bank holidays)