For the core ESS 2022 survey, a total of 72,918 interviews were undertaken between June 2022 and March 2023. Table 2‑1 provides a breakdown of completed interviews by nation.
Table 2‑1 Interviews achieved by nation
| Nation | No. of interviews |
| --- | --- |
| England | 59,486 |
| Scotland | 5,207 |
| Northern Ireland | 3,400 |
| Wales | 4,825 |
Sampling
Sampling unit
The sampling unit was the establishment, rather than the organisation / company. This recognises the influence that local labour markets have on skills issues and the fact that skills issues are felt most acutely at the site level. This mirrored the establishment-based approach adopted in previous UK Employer Skills Surveys and the UK Employer Perspectives Surveys, as well as the legacy skills surveys in each nation.
The individual sought at each establishment was the person who had most responsibility for staff issues such as training, recruitment or resourcing. For smaller establishments this was most often the general manager or owner, and for larger establishments this was most often the HR manager.
Survey scope / eligibility
In line with the approach adopted since ESS 2013, the survey population for ESS 2022 was establishments with 2+ employment: establishments were eligible if they had two or more people on the payroll at the site, regardless of whether these individuals were proprietors, and excluding the self-employed, outside contractors and agency staff.
The 2011 survey was the first year of the transition to a UK-wide Employer Skills Survey, and so it had 1+ employment coverage to allow comparisons with the preceding national skills surveys. Note that in the current and the 2013-2019 ESS reports, where comparisons are made with 2011, these are based on 2011 data re-weighted to a 2+ employment population. This means that results from the 2011 survey presented in the 2022 report will not necessarily match those published in the 2011 report. The rationale for the change in survey population and the 2011 re-weighting process is detailed in the ESS 2013 technical report.
ESS 2019 did not include employers based in Scotland. Instead, the Scottish Government commissioned its own Scotland-wide ESS in 2020 and EPS in 2019 and 2021. This means that in the published findings, UK-level results use 2017 as the last comparable data point (rather than 2019). However, where the time series is covered at nation level, it is possible to compare to ESS 2019 for England, Northern Ireland and Wales. Comparisons have not been made to the Scottish ESS 2020 or the Scottish EPS 2021 in the UK reports because fieldwork for these surveys was undertaken during the COVID-19 pandemic, meaning their results are not comparable to the England, Wales and Northern Ireland ESS 2019. In the main UK report we do compare Scottish EPS 2019 results where comparable questions were asked in ESS 2019.
Sampling process
Move to a random probability sampling (RPS) approach
The 2022 UK Employer Skills Survey was the first in the series to adopt a random probability sampling (RPS) approach. ESS 2011 to 2019, all previous iterations of the EPS, and the Scottish ESS 2020 and Scottish EPS 2019 and 2021, all used a quota sampling approach, where the goal is to obtain a target number of achieved interviews with a certain size, sector and geographic profile from the issued sample. There was no set process for the number of times each piece of sample should be called, and once a particular quota target had been reached (for example, Manufacturing firms in the East of England with 2-4 employees) the remaining sample of that type was withdrawn (unless other quota cells were likely to fall short of target, in which case the initial quota targets were adjusted to account for this).
While this approach has various benefits, including flexibility and cost-effectiveness, there are also some disadvantages. For example, it can lead to inefficient use of sample: records are abandoned once targets are hit, which in turn introduces potential non-response effects.
In contrast, under a random probability sampling (RPS) approach, all issued sample is processed according to agreed protocols until all leads are exhausted. There are no quotas or caps on achieved interviews. In theory, this means that units sampled from a given population cell have an equal and known probability of being sampled. The main advantages of this approach are outlined below:
- Being able to assign a known probability of being sampled to every unit in the sampling frame enhances estimates of statistical confidence.
- It leads to more efficient use of issued sample, with zero wastage and a high conversion rate per cell compared with quota sampling. This maximises the number of achieved interviews among the issued sample and gives better coverage of difficult-to-reach respondents.
- Given there is a set process for the minimum number of times each record is called, it provides a more consistent basis for projects where more than one field agency is involved.
- It ensures equal treatment of all units of sample within a cell, both in terms of probability of selection and the fieldwork processes administered in order to achieve interviews.
- It is considered the most robust method for national statistics and public sector surveys.
Following a trial of RPS in the West Midlands region in ESS 2019 (more information is available in the ESS 2019 technical report), the process was scaled up for all regions in the 2022 survey.
The following contacting rules were established:
2-4, 5-9 and 10-24 employees
A minimum of 10 call attempts for all sample records. If at any point during those 10 attempts a ‘definite appointment’ was registered (i.e. there was a firm interest in taking part), the minimum number of attempts for that record was increased to 12. If the last call outcome was a ‘definite appointment’, the number of attempts was increased by 1 (up to a maximum of 15 calls).
25-49 and 50-99 employees
As above, except if a definite appointment was registered in the initial 10 tries, the number of attempts was increased to 14. If the last call outcome was a ‘definite appointment’, the number of attempts was increased by 1 (up to a maximum of 17 calls).
100-249 employees
A minimum of 12 call attempts, which increased to 15 if a definite appointment was registered. If the last call outcome was a ‘definite appointment’, the number of attempts was increased by 1 (up to a maximum of 20 calls).
250+ employees
A minimum of 15 call attempts, which increased to 20 if a definite appointment was registered. If the last call outcome was a ‘definite appointment’, the number of attempts was increased by 1 (until a maximum of 25).
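These rules can be summarised algorithmically. The sketch below (Python) is our own illustrative encoding of the rules above, not code from the survey systems; the size bands with identical rules are grouped together and all names are assumed.

```python
# Illustrative encoding of the call-attempt rules above. Bands 2-4, 5-9 and
# 10-24 share one rule set, as do 25-49 and 50-99, so they are grouped here.
CALL_RULES = {
    # size band: (minimum attempts, minimum once an appointment is made, hard cap)
    "2-24":    (10, 12, 15),
    "25-99":   (10, 14, 17),
    "100-249": (12, 15, 20),
    "250+":    (15, 20, 25),
}

def keep_calling(size_band: str, attempts: int,
                 appointment_made: bool, last_outcome_appointment: bool) -> bool:
    """Return True if the record should receive another call attempt."""
    minimum, with_appointment, cap = CALL_RULES[size_band]
    target = with_appointment if appointment_made else minimum
    if attempts < target:
        return True
    # A 'definite appointment' on the last call extends the record by one
    # further attempt, up to the hard cap.
    return last_outcome_appointment and attempts < cap

# Example: a 2-4 employee record with a definite appointment registered on
# call 8 is tried at least 12 times and at most 15.
```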
Population data used for setting targets
Population statistics used to stratify the business population were established through the March 2021 Inter-Departmental Business Register (IDBR), which was the latest available at the time. The IDBR is administered by the Office for National Statistics (ONS) and holds records of all businesses registered for VAT and all businesses operating a pay as you earn (PAYE) income tax scheme. The IDBR is widely regarded as the most accurate and comprehensive ‘official’ source of business population data available and was used for sampling and weighting in all previous editions of the Employer Skills Surveys and in the nation-specific legacy skills surveys.[1]
Setting targets
Quotas are not set under an RPS approach: once a record is issued for contact, it must be called according to the protocols outlined above. However, notional ‘ideal’ targets were set in order to achieve a representative sample.
Targets were set by geography, size and sector, using interlocked size and sector targets within Northern Ireland, Scotland, Wales and each English region. This largely mirrored ESS 2019’s disproportionate stratified random sampling strategy, in that the targets intentionally oversampled some groups and undersampled others, rather than being set in direct proportion to the business population. In practice this means that some smaller sub-groups of employers (such as large establishments) are oversampled to ensure that a sufficiently large number of interviews is achieved to allow for robust sub-group analyses.
First, overall targets were set by nation, as shown in Table 2‑3.
The overall target for each nation was then distributed by size band according to the distribution in Table 2‑2. This follows the method of ESS 2019 (and, before that, EPS 2016) of striking a balance between over-sampling larger employers (relative to the population) and not skewing the size profile too far away from smaller establishments. Note that the 250+ target was initially set at 3%, but because the 250+ size group has historically been challenging to achieve (in 2019, 925 interviews were completed), a decision was made to cap the total number of interviews in this category at 1,500 (effectively reducing its share of the distribution to 2%). Target numbers falling beyond the 1,500 cap were redistributed proportionately among the other size bands, as illustrated in the sketch following Table 2‑2.
Table 2‑2 Target distribution of size at UK level
| No. of employees | 2-4 | 5-9 | 10-24 | 25-49 | 50-99 | 100-249 | 250+ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Distribution | 27% | 23% | 22% | 13% | 6% | 6% | 2% |
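To make the cap-and-redistribute step concrete, the sketch below (Python) applies it to pre-cap targets derived from a 3% share for the 250+ band; only the final distribution in Table 2‑2 is published, so the pre-cap figures are our own illustration.

```python
def cap_and_redistribute(targets: dict, band: str, cap: float) -> dict:
    """Cap one size band's target and share the surplus among the other
    bands in proportion to their existing targets."""
    surplus = max(0.0, targets[band] - cap)
    out = dict(targets)
    out[band] = min(targets[band], cap)
    other_total = sum(t for b, t in targets.items() if b != band)
    for b in out:
        if b != band:
            out[b] += surplus * targets[b] / other_total
    return out

# Illustrative pre-cap targets: 3% of the 85,400 UK total for 250+ gives
# 2,562 interviews; capping at 1,500 leaves a surplus of 1,062 to share.
initial = {"2-4": 23_058, "5-9": 19_642, "10-24": 18_788, "25-49": 11_102,
           "50-99": 5_124, "100-249": 5_124, "250+": 2_562}
final = cap_and_redistribute(initial, "250+", 1_500)
```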
Within England, the overall targets within each size band were initially distributed across the nine English regions (as shown in Table 2‑3), half in proportion to their distribution by region within size band in the IDBR population data and half equally across the regions, providing a balance between boosting smaller regions and mirroring the universe profile. Note that this is a slight modification from the ESS 2019 practice whereby this 50:50 ‘proportional / equal’ distribution occurred at overall region level, rather than independently within size band. The modified method in 2022 ensures that the target is distributed according to a more consistent process within size band, although in practice due to the similarities of IDBR regional distribution within size band, the differences between the resulting target by cell are very small compared with the 2019 method.
For each size band within region the targets were distributed across thirteen industry sectors in proportion to the corresponding IDBR distribution. The overall English targets by interlocking size band and sector were derived by summing the corresponding cells for the nine English regions.
For Northern Ireland and Scotland, size targets were set using the target UK size distribution shown in Table 2‑2. Then, within size band, the process was the same as for the English regions: distributing across thirteen industry sectors in proportion to the corresponding IDBR distribution.
For Wales, due to specific regional sample size requirements, purposive targets were initially set at regional level (as per Table 2‑3 below). The process for size and sector target-setting then mirrored that of the English region approach.
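As an illustration of the distribution logic described above, the sketch below (Python, using randomly generated placeholder counts rather than the real IDBR data) splits a size band's England target 50:50 between proportional and equal regional allocation, then distributes each regional target across sectors in proportion to the IDBR.

```python
import numpy as np

n_regions, n_sectors = 9, 13
rng = np.random.default_rng(0)
idbr = rng.integers(100, 10_000, size=(n_regions, n_sectors))  # placeholder units

def distribute(size_band_target: float, idbr_counts: np.ndarray) -> np.ndarray:
    """Return a region x sector grid of targets for one size band."""
    region_totals = idbr_counts.sum(axis=1)
    proportional = region_totals / region_totals.sum()
    equal = np.full(len(region_totals), 1 / len(region_totals))
    region_share = 0.5 * proportional + 0.5 * equal   # the 50:50 split
    region_targets = size_band_target * region_share
    sector_share = idbr_counts / idbr_counts.sum(axis=1, keepdims=True)
    return region_targets[:, None] * sector_share

# e.g. distribute the 2-4 band's share of an illustrative England target
grid_2_4 = distribute(0.27 * 63_000, idbr)
```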
Table 2‑3 Target number of interviews by region / nation
| Nation / region | Target no. of interviews |
| --- | --- |
| England | 70,000 |
| East Midlands | 6,798 |
| East of England | 7,744 |
| London | 10,349 |
| North East | 5,260 |
| North West | 8,174 |
| South East | 9,699 |
| South West | 7,586 |
| West Midlands | 7,313 |
| Yorkshire and the Humber | 7,077 |
| Northern Ireland | 3,900 |
| Scotland | 5,900 |
| Wales | 5,600 |
| North Wales | 1,325 |
| Mid Wales | 800 |
| South East Wales | 1,225 |
| South West Wales | 2,250 |
| Total UK | 85,400 |
Sector categorisation
The sector categorisation for sampling and weighting matched that of the 2019 survey. These sectors, defined using Standard Industrial Classifications (SIC), were:
- Primary Sector and Utilities (SIC 01-03, 05-09, 35-39)
- Manufacturing (SIC 10-33)
- Construction (SIC 41-43)
- Wholesale and Retail (SIC 45-47)
- Hotels and Restaurants (SIC 55-56)
- Transport and Storage (SIC 49-53)
- Information and Communications (SIC 58-63)
- Financial Services (SIC 64-66)
- Business Services (SIC 68-82)
- Public Administration (SIC 84)
- Education (SIC 85)
- Health and Social Work (SIC 86-88)
- Arts and other service activities (SIC 90-96)
Further information on the SIC definitions for these sectors can be found in Appendix A.
Adopting such an approach ensured that, as far as possible, the maximum standard error associated with findings by sector would be no greater than ±3% on average at the combined level. Critically, this also helped to ensure that within each broad sizeband and sector, key cuts of the data (such as the nature of skills gaps relating to individual occupations, or the causes and implications of specific types of skill-shortage vacancies) were associated with sufficiently robust base sizes.
Revisions to targets
Due to fieldwork challenges, including greater difficulty reaching employers than in previous years, the target number of interviews was reduced from 85,400 to 77,630: a 10% reduction on the original target for the English regions and a 5% reduction for the devolved administrations. The final notional targets by region and nation are shown in Table 2‑4.
Table 2‑4 Revised target number of interviews by region / nation
| Nation / region | Target no. of interviews |
| --- | --- |
| England | 63,000 |
| East Midlands | 6,118 |
| East of England | 6,970 |
| London | 9,314 |
| North East | 4,734 |
| North West | 7,357 |
| South East | 8,729 |
| South West | 6,827 |
| West Midlands | 6,582 |
| Yorkshire and the Humber | 6,369 |
| Northern Ireland | 3,705 |
| Scotland | 5,605 |
| Wales | 5,320 |
| North Wales | 1,259 |
| Mid Wales | 760 |
| South East Wales | 2,137 |
| South West Wales | 1,164 |
| Total UK | 77,630 |
Sample sources
As in 2017 and 2019, Market Location was used as the principal sample source for ESS 2022, supplemented with ‘top-up’ sample ordered directly from the ONS Inter-Departmental Business Register (IDBR). The IDBR was not used as the primary sample source for ESS 2022 (nor for any previous iteration of the survey) because the majority of IDBR records do not come with a telephone number, and the telematching exercise typically finds a telephone number for only a small proportion of the IDBR sample (the match rate was 19% for the 2022 survey).
To determine which IDBR sample records should be requested from ONS, a comparison was made between the 2007 4-digit SIC code counts for the sample received from Market Location and the corresponding IDBR population counts. In cases where specific 4-digit SIC codes were available in the IDBR, but not included in Market Location’s sample, either all, or a proportion (depending on volumes) of the IDBR records were requested from ONS. Similarly, in cases where there was a particularly low coverage of Market Location records relative to the IDBR, records were ‘topped up’ for specific 4-digit SIC codes.
More than 270 4-digit SIC codes were requested from ONS. For brevity, the 20 most common codes in terms of volumes of records requested have been shown below.
- 4941: Freight transport by road
- 0143: Raising of horses and other equines
- 5621: Event catering activities
- 8411: General public administration activities
- 5629: Other food service activities
- 0145: Raising of sheep and goats
- 4120: Construction of residential and non-residential buildings
- 5210: Warehousing and storage
- 6202: Computer consultancy activities
- 4339: Other building completion and finishing
- 5320: Other postal and courier services
- 0111: Growing of cereals (except rice), leguminous crops and oil seeds
- 0142: Raising of other cattle and buffaloes
- 4321: Electrical installation
- 5520: Holiday and other short stay accommodation
- 4711: Retail sale in non-specialised stores with food, beverages or tobacco predominating
- 4110: Development of building projects
- 8891: Child day-care activities
- 4322: Plumbing, heating and air-conditioning installation
- 0113: Growing of vegetables and melons, roots and tubers
Sample was initially ordered from Market Location at an average ratio of around 5.6:1 against the target number of interviews. A sample-to-target ratio of around 5.5:1 was sought in all regions except London, where a 6:1 ratio was targeted because response rates there have historically been challenging. For example, in 2019, using a quota sampling approach, the ratio of sample to completed interviews in London was 8:1, compared with 7:1 overall.
The 5.5:1 ratio was based on the 5:1 ratio required in the ESS 2019 West Midlands RPS trial, with some contingency added. In practice, it was not possible to obtain the desired ratios solely via Market Location (hence the need for IDBR sample); in Mid Wales, for example, the sample-to-target ratio was only 4.8:1. Due to the availability of sample, the ratio varied at sector level from 2.6:1 (Public Administration in Northern Ireland) to 8.4:1 (Construction in London), with the lower ratios reflecting the fact that for some cells the entirety of available sample was ordered.
As discussed above, due to fieldwork challenges, targets were reduced partway through fieldwork and additional sample drawn from Market Location. The final sample-to-target ratios are shown in Appendix B.
A total of c.518,000 records were ordered from Market Location for fieldwork, and a total of c.257,000 records were requested from the IDBR. The IDBR records were checked against the Market Location sample for duplicates using a combination of company name and postcode, leaving c.233,000 IDBR records eligible for inclusion. Since a large proportion of the IDBR records received did not include telephone numbers, these were sourced using automated directory look-ups (c.34,000 of the c.185,000 records run through the process were successfully telematched). A second round of duplicate checks against the Market Location sample was then carried out, this time also factoring telephone numbers into the checks, which left c.65,000 IDBR records.
All sample records were postcode-validated to ensure that geographical regions had been correctly assigned.
Checks were also undertaken in instances where duplicate telephone numbers existed within the sample. In certain sectors, such as retail and finance, it is common for different establishments to appear under the same centralised telephone number. Such establishments were marked up on the sample ‒ with the address of the sampled establishment displayed on-screen ‒ so that interviewers would be aware that the telephone number they were calling was a centralised switchboard and thus they would need to request to be transferred to a particular site.
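A minimal sketch of this de-duplication and flagging pipeline is shown below; the column names and normalisation rule are assumptions for illustration, not details from the actual sample processing.

```python
import pandas as pd

# Tiny illustrative sample frames (column names are assumed).
ml = pd.DataFrame({"company_name": ["Acme Ltd", "Beta Co"],
                   "postcode": ["AB1 2CD", "EF3 4GH"],
                   "telephone": ["01000 000001", "01000 000002"]})
idbr = pd.DataFrame({"company_name": ["ACME LTD", "Gamma Plc"],
                     "postcode": ["AB1 2CD", "IJ5 6KL"],
                     "telephone": [None, "01000 000003"]})

def norm(s: pd.Series) -> pd.Series:
    return s.str.upper().str.replace(r"[^A-Z0-9]", "", regex=True)

# Round 1: drop IDBR records matching Market Location on name + postcode.
ml_keys = set(zip(norm(ml["company_name"]), norm(ml["postcode"])))
idbr_keys = pd.Series(list(zip(norm(idbr["company_name"]), norm(idbr["postcode"]))),
                      index=idbr.index)
idbr = idbr[~idbr_keys.isin(ml_keys)]

# Round 2 (after telematching): drop IDBR records sharing a telephone number
# with a Market Location record.
idbr = idbr[~idbr["telephone"].isin(set(ml["telephone"].dropna()))]

# Flag duplicate numbers in the combined sample (e.g. centralised switchboards)
# so interviewers know to ask to be transferred to the sampled site.
combined = pd.concat([ml, idbr], ignore_index=True)
combined["central_switchboard"] = (combined["telephone"].duplicated(keep=False)
                                   & combined["telephone"].notna())
```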
In total, 477,069 records were loaded for fieldwork, from the 582,000 usable records that were drawn from Market Location and the IDBR.
Questionnaire design
Since ESS 2019, the Employer Skills Survey (ESS) and Employer Perspectives Survey (EPS) have been merged in order to provide greater efficiency and to enhance the potential for cross analysis. Merging the surveys in 2019 involved adding many of the questions used in the EPS series to those of the ESS series examining current skills position and skill needs of employers. More information on how this initial merging of ESS and EPS questionnaire content was achieved can be found in the ESS 2019 Technical Report.
Overall, the content of the 2022 questionnaire largely mirrored that of 2019 in order to maximise comparability and retain the time series for the survey. However, some changes were required for the following reasons:
- To reintegrate Scotland into the survey (see ‘Survey scope / eligibility’ for more information).
- To ensure topical issues were covered in outcome codes (e.g., relating to COVID-19)
- To cover new areas of policy interest (e.g., Higher Technical Qualifications)
The full ESS 2022 questionnaire has been published alongside this technical report. Questions are also summarised in Appendix C.
The full list of questions that were removed from ESS 2019 can be found in Appendix D.
The rest of this section outlines the new questions that were added to the survey, as well as any question areas that were significantly redeveloped, along with the reasoning behind such changes.
New topical questions / precodes
Throughout the questionnaire, new pre-codes relating to COVID-19 were added where appropriate, where the pandemic was likely to have had an impact on recruitment and training activities. For example, at SC11A, where information is collected on the causes of hard-to-fill vacancies, three new pre-codes were added: ‘COVID-19 has limited the availability of candidates’; ‘COVID-19 has hindered the recruitment process’; and ‘Brexit has made it more difficult to recruit EU nationals’. The full list of new (or amended) codes added to the questionnaire can be found in Table C-2.
A new question was also added asking employers that had provided online training or e-learning to staff whether this replaced some or all of the face-to-face training they would have provided had COVID-19 not happened, or whether it was training they would have undertaken anyway (SSF7C). This question was first asked in the Scottish ESS 2020 and was asked only in Scotland and Northern Ireland in the 2022 survey.
Scotland-specific questions
New questions were included asking employers whether they had heard of the Developing the Young Workforce (DYW) Regional Groups, and whether establishments had any engagement with these (SPC28 and SPC29). DYW and other Scottish initiatives such as the Young Person’s Guarantee, were also added as pre-coded responses, one example being at question PC20, which asked establishments their main reasons for offering work placements or internships, with ‘approached by a DYW Lead or Regional Group’ and ‘signed up to the Young Person’s Guarantee’ added as unprompted pre-codes.
There were also Scotland-specific versions of existing questions added to the questionnaire. This included a set of questions asking employers whether they had recruited people to their first job from a Scottish secondary school, a Scottish College, or a Scottish University and follow up questions about their preparedness for work and, if applicable, the ways in which they had been poorly prepared (SPC10E, SPC10F, SPC10G). These are similar to the existing questions for employers in England, Northern Ireland and Wales about their education leaver recruits (PC10B, PC10C, PC10D).
A further example is the addition of SSF13, which asked employers in Scotland the level of qualification that staff had trained towards. The Scottish-specific qualification descriptions were taken from the Scottish ESS 2020. In ESS 2017, employers in Scotland received the same question as England, Northern Ireland and Wales but with Scottish-specific text substitutions, though the descriptions and examples were less detailed than in the version used in the Scottish ESS 2020 and ESS 2022.
New policy areas
Questions were added to cover awareness of T-levels (PUH1); this question originated from the 2021 Employer Pulse Survey. Note, ESS 2019 had already asked employers whether they would be interested in providing work placements to T-level students and about their capacity to offer them (NC28 and NC29). These questions were retained for the 2022 survey. There was also a new question asking employers about their awareness of Higher Technical Qualifications (HTQs), a new quality mark for a subset of Level 4 and 5 technical qualifications approved by a panel of employers (NC30).
Questions about skills lacking among applicants and the existing workforce
While questions asking about skills lacking among applicants and the existing workforce were largely kept the same, the technical skills lists were updated with a new prompted code, ‘creative and innovative thinking’ (SC13A, SD11A, SE3). This code was first introduced in the Scottish ESS 2020.
A new question was also added to ESS 2022 asking all employers that had mentioned at least one specific skill lacking among staff with skills gaps whether any of the skills mentioned related to wanting to be more sustainable or carbon neutral (ND11NWA).
Welsh language apprenticeship delivery questions
Two new questions were asked of employers in Wales. The first asked those with current apprentices whether any of these apprentices undertook their qualifications through the medium of Welsh or bilingually. The second asked employers in Wales more generally whether their establishment would benefit from a wider range of qualifications being made available through the medium of Welsh or bilingually.
Visas for staff who ordinarily reside outside of the UK
A question was added asking employers in Wales whether they currently had any staff who were ordinarily resident outside the UK and working in the UK on a visa. Those who did were also asked whether their organisation was actively sponsoring visas for staff at their site (ND42, ND43).
High performance working practices
For employers in Northern Ireland, Scotland and Wales, questions were reintroduced asking about their high performance working practices, such as whether they have equal opportunities policies or conduct training needs assessments. These employers were also asked whether staff have variety in their work, discretion over how they do their work, and access to flexible or hybrid working arrangements (SG1, SG5). These questions were last asked UK-wide in ESS 2017 and more recently in the Scottish ESS 2020.
In total, employers were asked about 10 high performance working practices; the full list is given at SG1 and SG5 within Table C-1 in Appendix C.
Other notable changes
The wording of a long-running question asking employers whether they provide off-the-job training was adapted. The changes aimed to tighten the wording and align it with the UK Department for Education’s definition used to determine whether apprentices meet off-the-job training minimum funding requirements.[2] This includes specifying, for example, that off-the-job training should go beyond training received as part of an employee’s normal work duties and that it can take place anywhere. The revised question was first trialled in the 2021 Employer Pulse Survey.
Online occupational prompts
To allow assessment of skill needs at an occupational level, a key element of the Employer Skills Survey series is a set of questions asking employers to assign the employees at their establishment to nine occupational categories, ranging from Managers, Directors and Senior Officials through to Elementary Occupations. In 2022, as in recent waves of ESS, a set of occupational prompts, providing detailed examples of the types of job roles to be included in each occupational grouping, was created and hosted online by IFF Research under the domain name www.skillsurvey.co.uk/jobs. The occupational prompts were tailored to give pertinent, specific examples relevant to each broad sector classification (as provided by the respondent earlier in the interview). A link to the online prompt card was offered to establishments with 10 or more employees. For employers that opted against accessing the online occupational prompts live during the interview, the occupational descriptions and example job roles were read out by interviewers.
The example job lists for each occupation were slightly updated from 2019. The full list of prompts used can be viewed at the end of the ESS 2022 questionnaire which has been separately published on the DfE gov.uk website.
Interview length
The average overall interview length was 23 minutes (the same as 2019). This varied between different employers depending on their recruitment activities, experience of skill-shortage vacancies, internal skills gaps, and training activities, and whether they were selected for a module or not.
As shown in Table 2‑5, interviews with larger establishments took longer on average given that they were more likely to have trained their staff and to have experienced skill-shortage vacancies and/or skills gaps given their greater number of employees.
Table 2‑5 Average interview length by size of establishment
| Size of establishment | Average interview length[3] |
| --- | --- |
| 2-4 | 20 minutes |
| 5-9 | 21 minutes |
| 10-24 | 24 minutes |
| 25-49 | 26 minutes |
| 50-99 | 29 minutes |
| 100-249 | 31 minutes |
| 250+ | 33 minutes |

| Module | Average interview length |
| --- | --- |
| Module A | 23 minutes |
| Module B | 25 minutes |
| Module C | 24 minutes |
| Module D | 28 minutes |
| Module E (blank, i.e. those selected for this module received no additional questions) | 20 minutes |
| Overall | 23 minutes |
Combined pilot and cognitive testing phase
A pilot was conducted between 17 and 24 May 2022 to test the suitability of the changes made to the questionnaire, and to ensure it flowed well and was of an appropriate length and nature for CATI-based interviewing. A total of 102 pilot interviews were completed with employers across a range of size bands, sectors and modules. Overall, 41 were completed in England, 26 in Scotland, 25 in Wales and 10 in Northern Ireland. The sample was drawn from a list of establishments that had taken part in ESS 2019 or the Scottish ESS 2020 and had agreed at the end of those surveys to be re-contacted for research purposes.
Given the small-scale nature of the changes (mainly deletions), a full-scale cognitive testing phase was not deemed necessary; however, some cognitive questions were added at the end of the pilot survey to check overall comprehension and respondent understanding of the new questions. These included, for example, a question checking understanding of the description of HTQs.
One key area of testing during the pilot was interview length. The average pilot interview lasted 24 minutes and 53 seconds. Despite substantial reductions to the questionnaire in the initial design phase to bring the survey to its target length of 23 minutes, a contributing factor to the extended length was an apparent increase in respondents with vacancies, specifically skill-shortage vacancies (SSVs), which route respondents to a longer version of the survey exploring the causes of SSVs and the skills lacking among applicants. This was also borne out in the mainstage survey: weighted results show 23% had vacancies, 15% had hard-to-fill vacancies and 10% had skill-shortage vacancies, compared with 17%, 8% and 5% respectively in 2019 (2019 results do not include Scotland).
Cuts were required to the questionnaire before the mainstage fieldwork launch to bring it in line with the agreed interview length. These cuts were discussed with DfE and the steering group to ensure that the most critical elements of the survey were retained. Cuts were largely made to new questions that had been added for piloting rather than asked in 2019 (e.g., questions on business strategy). As question deletions alone could not deliver sufficient savings, further time savings were found by:
- Reallocating module proportions so that a larger proportion of respondents received no module
- Adjusting the base at some modular questions (for example, the subset of Module B respondents asked questions on upskilling was reduced further, having already been reduced prior to piloting)
- Reducing target sample sizes for the devolved administrations.
All of these changes brought the survey closer to the target length of 23 minutes; however, the questionnaire still ran over length.
Modularisation
As was the case for the 2019 ESS, large sections of the survey were modularised to ensure that ESS and EPS legacy questions could be covered without adding significantly to interview length. The 2022 survey largely retained the modularisation from the 2019 survey, though some new questions were added to existing modules. Establishments were randomly allocated to one of five modules, although only businesses in England were assigned to Module E. The five modules are detailed below:
- Module A – Apprenticeships and Traineeships
- Module B – EPS Training, under-utilisation of skills, upskilling
- Module C – Education leavers (including external training, training to vocational qualifications and information and advice on training); work experience placements and work inspiration activities; T Levels; awareness of HTQs
- Module D – EPS recruitment, under-utilisation of skills, upskilling and high-performance working practices
- Module E – no extra questions
The specific questions that each module was composed of are outlined in Appendix C.
As well as some additions to and deletions from modules, in order to meet the required interview length the distribution of sample across modules was updated in England, so that a greater proportion were in Module E (i.e. asked no modular questions), as shown in Table 2‑6. Table 2‑7 shows the comparative distribution in 2019.
Table 2‑6 Target module distribution for ESS 2022
| | Module A | Module B | Module C | Module D | Module E |
| --- | --- | --- | --- | --- | --- |
| Sample distribution (England) | 17% | 17% | 17% | 17% | 32% |
| Sample distribution (devolved administrations) | 25% | 25% | 25% | 25% | n/a |
Table 2‑7 Target module distribution for ESS 2019
| | Module A | Module B | Module C | Module D | Module E |
| --- | --- | --- | --- | --- | --- |
| Sample distribution (England) | 19% | 19% | 19% | 19% | 24% |
| Sample distribution (devolved administrations) | 25% | 25% | 25% | 25% | n/a |
The full final questionnaire with interviewer briefing notes has been separately published on the DfE gov.uk website.
The questionnaire was translated into Welsh by a professional translation agency.
Fieldwork
A total of 72,918 interviews were conducted by telephone using computer-assisted telephone interviewing (CATI) systems. Fieldwork was conducted by three research agencies (IFF Research, BMG Research and Ipsos). The number of interviews completed by nation were:
- England: 59,486
- Northern Ireland: 3,400
- Scotland: 5,207
- Wales: 4,825
As lead contractor, a member of the IFF team was involved in checking the CATI script programming undertaken by each agency to ensure consistency across all three contractors.
Establishments were not pre-notified that they would be called for the survey, partly due to financial considerations and partly because it was felt that pre-notification could reduce response rates if head offices opted all the establishments in their organisation out of the survey. An exception was made for certain large banks, where head offices were contacted by members of the DfE team before the survey commenced in order to obtain branch-level telephone numbers / interviewees who could answer at branch level for establishments included in the sample drawn from the Market Location database. This approach was taken because the telephone numbers originally supplied in the Market Location sample directed interviewers to call centres or a central telephone switchboard, from where, based on past experience of the Employer Skills Surveys and Employer Perspectives Surveys, it has proved particularly challenging to reach individual branches.
In line with the approach adopted in previous waves of ESS, large multisite organisations (defined as those with 300 or more sites for the 2022 survey), along with the large banks that were being contacted by DfE, were managed and only contacted by the lead contractor (IFF Research). This enabled contacts for multisite organisations to be split across a number of sample batches and released sequentially over the course of fieldwork to ensure that the various sites were not contacted within too short a time window.
As in previous studies, each agency was allocated separate regions for fieldwork. Due to resourcing challenges that emerged during fieldwork, IFF and Ipsos also conducted some interviews in the regions originally allocated to BMG Research. As noted above, IFF Research also interviewed large multisites across the whole of the UK. Table 2‑8, Table 2‑9 and Table 2‑10 detail how the interviews broke down between the three research agencies.
Table 2‑8 Interviews achieved by BMG per region
| Region | Number of interviews | Total number in region | Proportion completed |
| --- | --- | --- | --- |
| East Midlands | 18 | 6,198 | 0.3% |
| East of England | 3,625 | 6,743 | 53.8% |
| London | 13 | 7,428 | 0.2% |
| North East | 4 | 4,793 | 0.1% |
| North West | 3,133 | 6,321 | 49.6% |
| South East | 23 | 9,045 | 0.3% |
| South West | 3,949 | 6,766 | 58.4% |
| West Midlands | 3,081 | 5,817 | 53.0% |
| Yorkshire and the Humber | 9 | 6,375 | 0.1% |
| Northern Ireland | 0 | 3,400 | 0.0% |
| Scotland | 5 | 5,207 | 0.1% |
| Wales | 8 | 4,825 | 0.2% |
Table 2‑9 Interviews achieved by IFF per region
| Region | Number of interviews | Total number in region | Proportion completed |
| --- | --- | --- | --- |
| East Midlands | 182 | 6,198 | 2.9% |
| East of England | 2,421 | 6,743 | 35.9% |
| London | 7,394 | 7,428 | 99.5% |
| North East | 136 | 4,793 | 2.8% |
| North West | 2,334 | 6,321 | 36.9% |
| South East | 356 | 9,045 | 3.9% |
| South West | 2,279 | 6,766 | 33.7% |
| West Midlands | 1,877 | 5,817 | 32.3% |
| Yorkshire and the Humber | 120 | 6,375 | 1.9% |
| Northern Ireland | 3,399 | 3,400 | 100.0% |
| Scotland | 5,202 | 5,207 | 99.9% |
| Wales | 4,816 | 4,825 | 99.8% |
Table 2‑10 Interviews achieved by Ipsos per region
| Region | Number of interviews | Total number in region | Proportion completed |
| --- | --- | --- | --- |
| East Midlands | 5,998 | 6,198 | 96.8% |
| East of England | 697 | 6,743 | 10.3% |
| London | 21 | 7,428 | 0.3% |
| North East | 4,653 | 4,793 | 97.1% |
| North West | 854 | 6,321 | 13.5% |
| South East | 8,666 | 9,045 | 95.8% |
| South West | 538 | 6,766 | 8.0% |
| West Midlands | 859 | 5,817 | 14.8% |
| Yorkshire and the Humber | 6,246 | 6,375 | 98.0% |
| Northern Ireland | 1 | 3,400 | 0.0% |
| Scotland | 0 | 5,207 | 0.0% |
| Wales | 1 | 4,825 | 0.0% |
To ensure consistency between agencies, one comprehensive set of interviewer briefing notes was created for use by all contractors, and representatives of each contractor and the DfE team attended the initial interviewer briefing at IFF Research. Subsequently, a member of the IFF team attended the briefings conducted by each agency. Each briefing lasted around 90 minutes, and all interviewers due to work on the survey were required to attend one before starting work. Answers to any questions raised were shared with all three interviewing teams. Quality assurance on the interviewing was carried out by IFF at each of the research agencies, and DfE and IFF attended interviewer briefing sessions at all contractors, providing full feedback post-session which was shared with all contractors.
Interviews were conducted with the most senior person at the site with responsibility for recruitment, human resources and workplace skills. Reassurances were provided to respondents prior to the survey, including confirmation that data would be reported in a way that would not allow them or their organisation to be identifiable. If, after the first contact, the respondent or gatekeeper wanted more information about the survey, a reassurance email was sent (see Appendix F for a copy). This reassurance email included a link to the dedicated survey website created and hosted by IFF Research (https://www.skillssurvey.co.uk). This website provided further background information on the research, links to the 2019 results, and a list of frequently asked questions.
Following the COVID-19 pandemic, many employers have continued with homeworking, creating further practical challenges around administering the survey. With many switchboards unable to transfer interviewers to the correct respondents, processes were introduced by IFF Research to collect email addresses for homeworking respondents. Once the email address was collected, invites were sent offering respondents the opportunity to leave their telephone number to arrange a call back. The email invite is shown in Appendix G.
Fieldwork took place from 20th June 2022 to 30th March 2023. This included an extension of three months due to challenges faced during fieldwork. Weekly progress updates were produced by each fieldwork contractor and collated by the lead contractor, IFF Research, to monitor progress throughout the fieldwork period. Interim data runs were also provided from all agencies to IFF as lead contractor twice during fieldwork, which were checked to ensure consistency between agencies.
Response rate
High response rates are central to the success of the Employer Skills Survey. The overall response rate for the survey was 53%, calculated as achieved interviews as a proportion of all complete contacts. Table 2‑11 provides a detailed breakdown of survey outcomes.
Table 2‑11 Sample outcomes and response rate
| Outcome | Number of contacts | % of all sample | % of complete contacts |
| --- | --- | --- | --- |
| Total sample | 477,069 | 100% | |
| Ineligible establishments (e.g., just 1 working proprietor at site) | 14,033 | 3% | |
| ‘Live’[4] | 18,235 | 4% | |
| Unobtainable / invalid numbers | 74,957 | 16% | |
| Withdrawn after completing RPS protocol | 231,515 | 49% | |
| Total complete contacts | 138,329 | 29% | 100% |
| Achieved interviews | 72,918 | 15% | 53% |
| Respondent refusal | 61,948 | 13% | 45% |
| Quits during interview | 3,463 | 1% | 3% |
As is common with employer surveys, it was particularly difficult to achieve interviews with smaller establishments in sectors such as Construction and Agriculture, which are typically site- or outdoor-based rather than office-based. To mitigate this, fieldwork contractors also called these establishments outside normal business hours.
ESS 2019 achieved a response rate of 41% using a quota sampling approach, alongside a 50% response rate for the West Midlands RPS trial. In ESS 2022, the RPS approach achieved a 53% response rate. These response rates are all calculated as a percentage of complete contacts. Despite the increased response rate, the volume of sample required per completed interview did not differ greatly between years (6.5:1 in 2022, 7.4:1 in 2019).
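These headline figures can be reproduced directly from Table 2‑11:

```python
# Reproducing the headline figures from Table 2-11.
total_sample, complete_contacts, achieved = 477_069, 138_329, 72_918

response_rate = achieved / complete_contacts     # 0.527, i.e. 53%
sample_per_interview = total_sample / achieved   # 6.54, i.e. c. 6.5:1

print(f"Response rate {response_rate:.0%}; sample ratio {sample_per_interview:.1f}:1")
```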
Appendix B shows the actual sample ratio required to reach the achieved number of interviews by size, sector and region, from the total sample available during fieldwork. Appendix B also shows the areas where it was not possible to reach the original ideal targets in the fieldwork period.
Appendix H shows how the achieved response rate differed by nation, size and sector.
Data edits
It was recognised at the outset that the ESS questionnaire involved the collection of some complex data that respondents might struggle to answer. There was also, despite stringent quality control, the chance that interviewers might make typing errors, for example accidentally entering extra zeros at the end of numerical responses.
Data checks were built into the CATI script to ensure, for example, that the breakdown of the workforce by occupation summed to the number of people working at the site, and that the number of staff in each job role who were not proficient could not exceed the number of staff in that role. However, some data validation needed to occur after fieldwork had finished to ensure no errors were present in the final data. Guidelines were issued to all fieldwork contractors on how to edit data to ensure consistency; these guidelines can be seen in Appendix I.
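As an illustration, a post-field check of the two consistency rules mentioned above might look like the sketch below; the column names are assumed for illustration, not taken from the ESS dataset.

```python
import pandas as pd

OCC_COLS = [f"occ_{i}" for i in range(1, 10)]      # staff in each of 9 occupations
GAP_COLS = [f"notprof_{i}" for i in range(1, 10)]  # staff not fully proficient, per occupation

def failed_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Return interviews failing either rule: occupations must sum to site
    employment, and non-proficient staff cannot exceed staff in the role."""
    occupation_mismatch = df[OCC_COLS].sum(axis=1) != df["site_employment"]
    gap_exceeds_staff = (df[GAP_COLS].to_numpy() > df[OCC_COLS].to_numpy()).any(axis=1)
    return df[occupation_mismatch | gap_exceeds_staff]
```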
Coding
Open-ended responses to the survey were coded by each contractor’s coding team. To ensure consistency, the codeframes were developed in unison, and regularly compared and reviewed. As lead contractor, IFF Research took the final decisions on which codes to use after considering advice and outputs from the IFF, BMG and Ipsos coding teams.
Standard Industrial Classifications (SIC) were coded using 2007 standards (the most up to date at the time of the survey), and Standard Occupational Classifications (SOC) were coded using both 2010 and 2020 classifications (also the most up to date). SOC 2010 will be used for categorising occupations in the 2022 survey publications.
Weighting
The survey weighting was updated this wave in line with the transition to a random probability sample (RPS) for all regions. Instead of cell weighting to population targets as in previous years, best practice for RPS sample weighting was followed. The process of weight creation is cumulative,[5] with each stage falling into one of the following categories:
- Selection weighting, to correct for the different probabilities of business units being in the ‘issued’ sample, specific modules and specific question sets. Its purpose is to ensure that the profile of selected cases closely matches that of the population.
- Non-response weighting, to correct for different probabilities of business units completing an interview. Its purpose is to ensure that the profile of completed interviews closely matches that of the ‘selection weighted’ sample.
- Other adjustments, to ensure that estimates obtained accurately represent the profile of the known population. The adjustments made for employment and selective use of Random Iterative Method (RIM) weighting fall into this category.
Survey data were weighted and grossed up to the total population of establishments and total population of employees, according to the 2022 IDBR ‒ the latest available business population statistics published by ONS at the time that weighting was carried out.
Given that the ESS data were intended to be used in a variety of ways (from a combined UK unit and employment-based level, to similar measures at a regional and local level), a number of different weights were produced:
- Core weights, used to weight the combined UK dataset and used for the majority of analysis. This weighting set is the default to use.
- Modular weights, to be used when analysing data from questions within one of the four modules business units were allocated to.
- Local weights, for use when analysing England data by UTLA and LEP.
- Two-digit SIC weights for use when analysing at two-digit SIC level.
Weights were created in pairs: a ‘unit-based’ weight and an ‘employment-based’ weight. The unit-based weight was designed for analyses of number or proportion of establishments; the employment-based weight was designed for analyses of number or proportion of employees (including volume measures of vacancies, skills gaps and numbers trained). Data dictionary files were created listing each variable with notes and guidance on the correct weight to use.
Summary of weighting stages
The various stages of weighting described above are summarised in Figure 2‑1, with each new weight calculated using the previous weights as a composite starting weight.[6]
Figure 2‑1 Flow Diagram summarising weighting stages
Sample Data and Adjustments to IDBR Population
A database of the ‘issued sample’ was collated containing the key variables used in the weighting process: sector, sizeband, region, number of employees, fieldwork contractor (IFF, Ipsos, BMG), UTLA, local authority and two-digit SIC code. Where complete interviews were obtained, precedence was given to the versions of the sector, sizeband and region variables captured in the survey, rather than from the sample. These variables were coded using the categories in Appendix J.
The 2023 IDBR universe data, used for estimating the population of businesses by units and employment, was organised into cells by sector, sizeband and region. Of the 1,365 possible cells (13 sectors x 7 sizebands x 15 regions), 19 were not sampled, all in the Welsh regions and mainly in Mid Wales. To enable the creation of selection weights, the IDBR population of units and employment was adjusted for these missing cells: their populations were reallocated within the same sizeband by region of the universe grid, proportionally to its industry sector distribution.[7]
A further adjustment was needed to the IDBR to account for cells that were sampled but which, according to the IDBR, had zero population. This situation arose as a result of giving precedence to sector, sizeband and regional information from the survey, which sometimes resulted in establishments being reallocated to cells which according to the IDBR have no establishments. For these cells, a population of one business unit was assigned. No adjustment was made to the employment field, as in these cases the survey estimate of employment was used as the best estimate of employment per unit.[8]
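The sketch below illustrates both adjustments on a randomly generated 13 x 7 x 15 grid; the real inputs are the IDBR cell counts and the issued-sample flags, so everything here is placeholder data.

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.integers(1, 500, size=(13, 7, 15)).astype(float)  # sector x sizeband x region
sampled = rng.random(pop.shape) > 0.02                      # illustrative sampling flags

adj = pop.copy()
for s in range(adj.shape[1]):               # sizeband
    for r in range(adj.shape[2]):           # region
        missing = ~sampled[:, s, r]
        if missing.any() and not missing.all():
            surplus = adj[missing, s, r].sum()
            adj[missing, s, r] = 0.0
            share = adj[:, s, r] / adj[:, s, r].sum()
            adj[:, s, r] += surplus * share  # spread over the sampled sectors

# Sampled cells with zero IDBR population are assigned one business unit.
adj[sampled & (adj == 0)] = 1.0
```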
Population profiles were produced for sector, size and region which showed only negligible differences between the unadjusted and adjusted population universe for units and employment.
Selection weights
Different sampling probabilities were used for establishments in each sector by sizeband by region cell to enable minimum numbers of interviews to be achieved, according to the ideal targets agreed with DfE. The issued sample therefore over-represents cells that were over-sampled relative to their share of the population.[9] Selection weighting is designed to address this bias and ensure the distribution of the issued sample matches that of the adjusted IDBR population.
A selection probability for business units was calculated for each sector by sizeband by region cell by dividing its number of establishments in the issued sample by its number of establishments in the adjusted IDBR population. The selection weight was calculated as 1 divided by the selection probability and was scaled to sum to the total population of business units across the UK for profiling purposes. Profiles show the selection weighted sample exactly replicates the adjusted IDBR population by sector, sizeband and region.
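A minimal sketch of this calculation is shown below, assuming simple pandas data frames for the issued sample and the adjusted IDBR cell counts; the column names are our own.

```python
import pandas as pd

# issued: one row per issued sample record, with 'sector', 'sizeband', 'region'.
# pop_units: adjusted IDBR unit counts per cell, with a 'units' column.
def selection_weights(issued: pd.DataFrame, pop_units: pd.DataFrame) -> pd.Series:
    cell = ["sector", "sizeband", "region"]
    issued_counts = issued.groupby(cell).size().rename("issued")
    grid = pop_units.set_index(cell)[["units"]].join(issued_counts)
    grid["p_select"] = grid["issued"] / grid["units"]  # selection probability per cell
    weights = 1 / issued.join(grid["p_select"], on=cell)["p_select"]
    # Scale so the weights sum to the UK population of business units.
    return weights * grid["units"].sum() / weights.sum()
```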
Non-Response Weights
The probability of a sampled establishment completing the survey depends on its profile by variables such as sector, sizeband, region, fieldwork contractor and whether the unit is in the multisite sample. Without further adjustments beyond the sampling weight, the profile of complete interviews would over-represent establishments in categories of variables with higher completion rates and under-represent those in categories with lower completion rates. Non-response weights are designed to address this bias.
Regression modelling was used to predict the probability of business units completing an interview given these other variables. The model was created on the sample file, weighted by a special version of the sampling weight scaled to the issued sample base size.
Predictors for the model were selected over several iterations of screening. Categories of certain variables with low numbers of completes were combined to ensure sufficient sample sizes to model with. The final model contained a special version of the region variable (split by fieldwork contractor where base sizes allowed), sector, sizeband, whether the unit was part of the multisite job, and all two-way interactions of sector, sizeband and region.
Using the final model, a probability of completing the survey was scored for each establishment in the sample, conditional on its predictor categories. For cases in the completed sample, a non-response weight was calculated as 1 divided by the probability of completion. Trimmed versions of this weight were also created for evaluation.[10]
A composite weight was created from the untrimmed and trimmed versions of the non-response weight by multiplying each by the corresponding selection weight from the previous stage. The sum of the weights was rescaled to the population total, and profiles were run for sector, sizeband, region and all of their two-way combinations to evaluate against the adjusted universe data. The decision was made to adopt a version of the non-response weight trimmed by 1% at its highest and lowest ends.[11] A composite ‘core’ unit weight was created from the product of this trimmed weight and the selection weight from the previous stage.
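The sketch below illustrates this stage using logistic regression. It is a simplification of the model described above (for instance, it omits the two-way interactions), and the column names and 0/1 coding of completion are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def nonresponse_weights(sample: pd.DataFrame, trim_pct: float = 1.0) -> pd.Series:
    # Categorical predictors, dummy-coded (interactions omitted for brevity).
    X = pd.get_dummies(sample[["sector", "sizeband", "region_contractor",
                               "multisite"]], drop_first=True)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, sample["completed"],
              sample_weight=sample["selection_weight_scaled"])
    p_complete = model.predict_proba(X)[:, 1]
    # Non-response weight = 1 / probability of completion, completes only.
    w = pd.Series(1 / p_complete, index=sample.index)
    w = w.where(sample["completed"] == 1)
    lo, hi = np.nanpercentile(w, [trim_pct, 100 - trim_pct])
    return w.clip(lo, hi)  # trimmed by 1% at each end, as adopted

# Composite core unit weight = selection weight x trimmed non-response weight,
# rescaled to the population total.
```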
This ‘core’ weight is used as the default weight for analysing frequencies of business establishments for questions on the core section of the questionnaire.
Employment Adjustment Weight
When using the core unit weight for analysis, estimates of the total number of employees by sector, size and region do not always tally with the IDBR employment population. This is most apparent for business units with 250+ employees, which on average reported only around half the expected number of employees. The purpose of the employment adjustment weight is to correct this discrepancy by cell: its effect is to adjust the average number of employees per unit to a value that, combined with the core unit weight, approximately replicates the known universe numbers for employment by cell.
Prior to calculating this weight, further adjustments were needed to the population grids, as 85 of the 1,346 sampled cells by sector, sizeband and region did not have any complete interviews. The population for these missing cells was reallocated within the same sizeband by region proportionally across sectors according to their population.[12]
For each sector by sizeband by region cell, the employment adjustment weight was calculated by dividing the adjusted IDBR estimate of the population of employees in the cell by the estimated population of employees under the core unit weight. Trimming of this weight was not necessary. A composite core employment weight was created by taking the product of the employment adjustment weight and core unit weight.
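A minimal sketch of this cell-level calculation, with assumed column names, follows:

```python
import pandas as pd

# interviews: completed cases with 'employment' and 'core_unit_weight'.
# pop_emp: adjusted IDBR employee counts per cell, with an 'employees' column.
def employment_adjustment(interviews: pd.DataFrame, pop_emp: pd.DataFrame) -> pd.Series:
    cell = ["sector", "sizeband", "region"]
    estimated = (interviews["employment"] * interviews["core_unit_weight"]) \
        .groupby([interviews[c] for c in cell]).sum().rename("estimated")
    grid = pop_emp.set_index(cell)[["employees"]].join(estimated)
    grid["adjustment"] = grid["employees"] / grid["estimated"]  # IDBR / survey estimate
    return interviews.join(grid["adjustment"], on=cell)["adjustment"]

# Core employment weight = core unit weight x employment adjustment.
```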
This core employment weight is used as the default when analysing questions in the core questionnaire whose responses are numbers of employees. When applied to estimates of total number of employees provided by establishments in the survey, it naturally sums to the total employment population in the UK. This weight was shown in profiling to reduce bias in the profiles of employment by sector, sizeband and region and their two-way interactions down to negligible levels when compared with the adjusted IDBR population.
Core weights
To summarise, the two core weights provided are:
- The core unit weight, the default for analysing questions in the core questionnaire whose responses are numbers of business units. It is defined as the product of the selection weight and the non-response weight.
- The core employment weight, the default for analysing questions in the core questionnaire whose responses are numbers of employees. It is defined as the product of the core unit weight and the employment adjustment weight.
Module Adjustment weight
Module weights were also developed to be used as the default for analysing questions in any one of the four survey modules A through to D. To help minimise interview length, whilst maximising base sizes in the devolved nations, all establishments in Wales, Northern Ireland and Scotland were allocated to one of the modules (with equal probability between A, B, C and D from within the interview), whereas only approximately two-thirds of English establishments were allocated to modules.
The modules therefore over-represent the devolved nations and need an additional weight to represent them in their correct proportion. A probability of selection for any of the modules A to D was calculated separately for England and the devolved nations (with the core unit weight active), and an additional module adjustment weight was then calculated as 1 divided by this probability of selection.[13]
A composite module unit weight was calculated, just for cases allocated to modules, as the product of the core unit weight and the module adjustment weight. An additional module employment weight was calculated as the product of the module unit weight and the employment adjustment weight from the previous stage. Separate versions of the module unit weight and module employment weight were then calculated for each module A to D which were scaled to their correct population totals.
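The adjustment can be sketched as follows (assumed column names; in the devolved nations the probability of module allocation is 1, so the adjustment effectively rescales only the English module cases):

```python
import pandas as pd

def module_adjustment(df: pd.DataFrame) -> pd.Series:
    """df needs 'nation', 'module' ('A'..'E' or blank) and 'core_unit_weight'."""
    in_module = df["module"].isin(list("ABCD"))
    devolved = df["nation"].isin(["Scotland", "Wales", "Northern Ireland"])
    adjustment = pd.Series(index=df.index, dtype=float)
    for mask in (devolved, ~devolved):
        w = df.loc[mask, "core_unit_weight"]
        # Weighted probability of being allocated to any of modules A-D.
        p = w[in_module[mask]].sum() / w.sum()
        adjustment[mask] = 1 / p
    return adjustment.where(in_module)  # defined only for module cases
```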
Profiles showed that, when these weights were applied, the differences between the profile of the modules by sector, sizeband and region and the profile under the core unit weights were negligible. However, the differences against the profile under the core employment weights were, in some cases, large enough not to be considered negligible, and it was therefore decided to additionally RIM weight each of the modules A to D to be equivalent to the core sample profile by both unit and employment[14]. No trimming of the additional RIM weights was necessary, and final versions of the module unit and module employment weights were created for each module. Profiles under these weights precisely matched those of the core sample.
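RIM weighting is iterative proportional fitting over the marginal distributions of the control variables. The generic sketch below, with made-up targets for two variables, shows the shape of the procedure; it is not the production code used for the survey.

```python
# Generic RIM weighting (raking) sketch: repeatedly rescale weights so
# each control variable's weighted distribution matches its target.
import pandas as pd

def rim_weight(df, targets, weight_col="weight", iters=50):
    """Iteratively scale weights so that each variable's weighted
    distribution matches its target proportions."""
    w = df[weight_col].astype(float).copy()
    for _ in range(iters):
        for var, target in targets.items():
            dist = w.groupby(df[var]).sum() / w.sum()     # current shares
            w = w * df[var].map(pd.Series(target) / dist)  # correction factor
    return w

df = pd.DataFrame({
    "sector": ["A", "A", "B", "B"],
    "region": ["N", "S", "N", "S"],
    "weight": [1.0, 1.0, 1.0, 1.0],
})
targets = {                       # hypothetical target proportions
    "sector": {"A": 0.6, "B": 0.4},
    "region": {"N": 0.5, "S": 0.5},
}
df["rim_weight"] = rim_weight(df, targets)
print(df)
```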
Special module combination adjustment weight
An additional, special set of weights was needed for each of the Underutilisation and Upskilling sections. These questions were asked in both modules B and D. However, for Underutilisation, while all nations were asked this section in Module B, only Wales, Scotland and Northern Ireland were asked it in Module D. For Upskilling, while all nations were asked this section in Module B, only Wales and Scotland (not, in this case, Northern Ireland) were asked it in Module D. The special weights were calculated for analysing these sections combined across modules B and D; their purpose is to correct for the over-representation of the devolved nations resulting from the filtering used for these question sections.
A complex rescaling of the existing module weights, conditional on nation, was performed, and a combined weight for modules B and D was created for both units and employment, for both the Underutilisation and Upskilling sections. Profiles showed that these additional weights represented the nations in the correct proportions.
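The sketch below illustrates one plausible form of this rescaling, under the assumption that the target is for each nation's combined Module B and D weights to sum to that nation's total under the core weights; the exact scaling used in practice may differ, and all figures are invented.

```python
# Hypothetical rescaling for a combined Module B + D section weight:
# scale each nation's module weights so they sum to that nation's
# total under the core unit weight. All figures are invented.
import pandas as pd

combined = pd.DataFrame({            # B + D respondents to one section
    "nation":             ["England", "Wales", "Wales", "Scotland"],
    "module_unit_weight": [30.0, 4.0, 5.0, 6.0],
})
core_totals = {"England": 30.0, "Wales": 4.5, "Scotland": 3.0}

current = combined.groupby("nation")["module_unit_weight"].sum()
scale = {n: core_totals[n] / current[n] for n in current.index}
combined["bd_section_weight"] = (combined["module_unit_weight"]
                                 * combined["nation"].map(scale))
print(combined)
```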
Local geography weights
ESS data is also used for analysis at local geography level. Historically, ESS used Local Education Authority (LEA) categories for weighting the England sample. In 2022, Upper Tier Local Authorities (UTLAs) were used to categorise local subregions instead, owing to the discontinuation of LEAs. This gives 152 subregions, providing a similar level of granularity to the previously used LEAs. Collapsing UTLAs also allows analysis by other regional groupings, such as Mayoral Combined Authorities. Northern Ireland was grouped into five categories using District Council areas[15]; Scotland was grouped into a combination of Regional Outcome Agreement (ROA) regions and local authorities (the latter was required in some cases due to overlap between ROA regions)[16]. No further local weighting was required for Wales, as the core weights had already split Wales into four subregions.
In previous waves, RIM weights were imposed within the core weights to ensure the surveyed population of each local geography matched the employer population, without further correction for size and sector at this level. This approach was reviewed for the 2022 survey given the changes in weighting approach described above. Profiles were run of the UTLAs within each English and Welsh region, and within Scotland and Northern Ireland, using the core unit and employment weights. These profiles showed only negligible departures from the expected proportions of interviews in each UTLA (at most 1 to 2 percentage points, with most close to zero within broad region). Given that the previous weighting stages were designed to obtain a balance by sector, sizeband, region and two-way combinations of these variables, it was felt that further RIM weighting would not only be unnecessary but would re-introduce bias in other variables corrected by the previous weights.
Instead, separate weights were created for the purposes of analysis at local geography level. This was achieved by scaling to each local geographical group's population via target weighting, using the core unit and employment weights described above as input weights.
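Target weighting here amounts to a per-group rescaling of the input weights to known population totals. The sketch below shows this for hypothetical UTLA counts; the same pattern underlies the two-digit SIC weights described in the next section. Names and figures are illustrative only.

```python
# Illustrative target weighting: rescale the core weights so that each
# local geography's weights sum to its known business-unit population.
import pandas as pd

df = pd.DataFrame({
    "utla":        ["Leeds", "Leeds", "Kirklees"],
    "unit_weight": [15.0, 10.0, 20.0],              # core unit weight
})
utla_population = {"Leeds": 30000.0, "Kirklees": 14000.0}  # hypothetical counts

totals = df.groupby("utla")["unit_weight"].sum()
df["local_unit_weight"] = df["unit_weight"] * df["utla"].map(
    {u: utla_population[u] / totals[u] for u in totals.index})
print(df)
```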
Two-digit SIC weights
A new requirement for ESS was the capability to conduct analysis at 2-digit SIC level. To achieve this, target weighting was used to scale to the respective populations of each 2-digit SIC code, using the core unit and employment weights described above as input weights. The final SIC unit and employment weights should only be used for core (i.e. non-modular) questions.
Table 4‑1 in ‘Using the survey for analysis’ explains how the weights should be applied to analyse ESS data.
[1] Inter-Departmental Business Register (IDBR), Office for National Statistics (ons.gov.uk)
[2] Off-the-job training: myth busters, GOV.UK
[3] All average interview lengths provided refer to the mean and are rounded to the nearest minute.
[4] This row includes sample which was 'live' at the end of fieldwork – i.e. records for which a final outcome (refusal, completed interview etc.) was not reached.
[5] In that each new stage of weighting builds on and further weights previous stages, using the previous stage as an input weight.
[6] A composite weight at any stage is the product of the weight at that stage with the weights at all previous stages.
[7] The population for cells not sampled was distributed among the industry sectors with at least one unit of sample in the sizeband by region domain.
[8] See the section on the employment adjustment weight.
[9] For example, organisations with larger numbers of employees were over-represented relative to those with fewer employees; establishments in Wales, Scotland and Northern Ireland were over-sampled relative to their population.
[10] Trimming reduces the number of extreme weights while maintaining the weight's ability to remove much of the bias addressed by the original weight. The benefit of a trimmed weight is greater weighting efficiency, which translates into a higher effective sample size.
[11] Trimming at 1% at each end is the same as capping the weight at its 1st and 99th percentiles.
[12] The reallocation to other sectors only included sectors where at least one complete interview was achieved.
[13] Profiles showed that the random allocation between modules A, B, C and D produced very consistent profiles by sector, sizeband and region within each module individually, and it was therefore not necessary to calculate the probability for each module separately.
[14] Although the additional unit RIM weighting was not strictly necessary, it was felt that both the unit and employment profiles should be RIM weighted for consistency.
[15] Belfast, East, South, North and West. These five regions comprised the 11 District Council geographies of Northern Ireland: Belfast (the Belfast district council); East (the Ards and North Down, Mid and East Antrim, Antrim and Newtownabbey, and Lisburn and Castlereagh district councils); South (the Armagh City Banbridge and Craigavon, and Newry Mourne and Down district councils); North (the Causeway Coast and Glens, and Derry City and Strabane district councils); West (the Mid Ulster, and Fermanagh and Omagh district councils).
[16] This comprised Aberdeen and Aberdeenshire (ROA), Ayrshire (ROA), Borders (ROA), Dumfries and Galloway (ROA), Edinburgh and Lothians (ROA), Fife (ROA), East Dunbartonshire (LA), East Renfrewshire (LA), Glasgow (LA), Highlands and Islands (ROA), North Lanarkshire (LA), South Lanarkshire (LA), Tayside (ROA), Inverclyde (LA), Renfrewshire (LA), West Dunbartonshire (LA), and West Lothian (ROA).