VET21001 Guidelines


The infrastructure of educational organizations varies significantly from one organization to another, as many aspects impact the needed infrastructure – such as the scientific area of the learning programmes delivered, the delivery format used, the pedagogic model followed, etc. An organization that delivers natural science programmes might need a biology laboratory, while one that delivers information technology programmes might need a computer laboratory. An organization that delivers programmes in e-learning will need a learning management system, while an organization that delivers in-person programmes will need classrooms and adequate furniture. If the programmes are offered with a partial or full board approach, the educational organization might also need cafeterias and dormitories. In addition to the infrastructure requirements derived directly from the characteristics of the educational programmes offered, other infrastructure needs may arise from basic or supplementary services offered by the educational organization to its learners and other beneficiaries. These include libraries, sports facilities, medical facilities, resting and leisure spaces, among others.

Below we provide a list of common infrastructure elements that can be found in educational organizations:

Spaces associated with educational products and services

  • Classrooms
  • Laboratories
  • Libraries
  • Sports facilities (when pertaining to the scientific/technical area of the programme)
  • Vehicles (cars, boats, airplanes, etc., if pertaining to the scientific/technical area of the programme)
  • Etc.

Spaces associated with supplementary services provided to learners

  • WCs
  • Studying spaces
  • Leisure and resting spaces
  • Administrative services areas
  • Reprography services
  • Cafeterias and kitchens
  • Dormitories
  • Medical facilities
  • Sports facilities
  • Etc.

Other resources (equipment, devices, tangible and non-tangible tools and instruments, etc.)

  • Vehicles (school buses, cars, boats, airplanes, etc. – sometimes needed for pedagogical reasons according to the scientific/technical area of the education services delivered)
  • Laboratory equipment, including monitoring and measuring resources
  • Kitchen equipment, including monitoring and measuring resources
  • Non-tangible tools and instruments:
    • Psychology and career guidance tools, such as psychometric scales and vocational surveys
    • Management tools, such as learner satisfaction and alumni tracking surveys
    • Pedagogic tools, such as assessment of learning instruments – e.g. tests, exams and assignments
  • Etc.

The above list is not exhaustive and many other elements exist. It is the responsibility of each educational organization to identify the elements of infrastructure that are needed to adequately deliver the educational products and services it offers to learners and other beneficiaries, and to decide which ones it has the internal capacity to provide and which ones require partnerships to be established so they can be provided by external suppliers. In any case, both EQAVET and ISO 21001 require not only that this identification is done, but also that the adequate infrastructure is in place. ISO 21001, being more prescriptive, also requires that the infrastructure is adequately maintained. This includes preventive and corrective maintenance, as well as adequate configuration and validation of its fitness for purpose. And it applies both to the infrastructure and to the work environment related to it. Examples:

  • A biology laboratory with adequate equipment, adequately calibrated measuring devices and adequate conditions of hygiene and asepsis, so that the experiments performed can provide the expected results.
  • A classroom with sufficient furniture for the number of students that will attend classes in it and adequate conditions of lighting and noise reduction – achieved through double-glazed windows, but also by promoting the values of studying in silence and of keeping silent out of respect for colleagues when they speak.
  • A cafeteria with sufficient furniture to seat the number of students expected to have a meal in each time slot and adequate cooking equipment to prepare the meals, with an ISO 22000, HACCP (Hazard Analysis and Critical Control Points) or similar system implemented to assure food safety.
  • A dormitory with sufficient furniture to accommodate the number of students expected to live there and security measures in place – from fire alarms to door keys and security agents – to assure the safety and security of the boarding students.
  • Etc.

Given their complexity, dedicated Guidelines on non-tangible resources are provided in this toolkit.

When educational organizations offer psychology services to learners, psychologists use scales to support their work. A few common examples are:

  • Scales used to assess cognitive ability – e.g. Wechsler Intelligence Scale for Children (WISC)
  • Scales used to diagnose neurodivergences or functional impairments – e.g. Conners scales for ADHD, Weiss Functional Impairment Rating Scale, Autism Spectrum Rating Scales (ASRS)
  • Scales used to assess vocational identity – e.g. VIS, VIM
  • Etc.

Unlike physical equipment and devices, which can be calibrated to assure their fitness for purpose (as required by EQAVET implicitly and by ISO 21001 explicitly), these non-tangible tools require a different process: educational organizations will need to assure that only scales that have been scientifically validated are used. Particular attention should be given to the nature of each scale to determine whether it is culturally sensitive. If it is, re-validation in the culture of the user group might be necessary – to be noted that a simple translation of the tool does not assure its fitness for purpose in a culture different from the one in which the tool was initially validated. Another situation that constitutes a threat to the validity of these tools is their partial usage – for example, by selecting only some items to be used. Developing a new tool with parts of different (validated) tools will not create a new validated tool – the new tool will need to go through a scientific validation process.

Educational organizations use a multitude of management tools to monitor and measure different issues. For example:

  • Satisfaction – of learners and other beneficiaries, of staff, of partners
  • Alumni tracking – e.g. to check employability rates of study programmes

These tools are usually developed in-house in the form of surveys and, unlike physical equipment and devices, which can be calibrated to assure their fitness for purpose (as required by EQAVET implicitly and by ISO 21001 explicitly), these non-tangible tools require a different process: educational organizations will need to assure that only surveys that have been previously validated are used. Examples of methods that can be used to do so are:

  • Validation by expert panels – e.g. the Delphi method
  • Pre-testing on a small group with monitored reactions
  • Etc.

The most important monitoring and measuring activity in education is related to the assessment of learning – in other words, determining what learners have learned. The results of these activities provide relevant information to modulate the learning services – in the case of formative assessment – and to decide on the certification of qualifications – in the case of summative assessment. Given the impact of these decisions on the conformity of the learning services, educational organizations need to trust the results provided by the assessment of learning instruments they use. However, there are many threats to their reliability and, consequently, to the validity of their results:

  • Each learner may have a better or worse performance depending on the type of questions chosen for a given test/exam – e.g. due to individual talents, individual communication styles, personality, stress, etc.;
  • The same student may have a better or worse performance depending on the day on which they take the test/exam – e.g. due to mental and physical health conditions;
  • Different graders may grade the same content differently – e.g. due to previous experience, the halo effect, etc.;
  • The same grader may grade the same content differently, depending on the moment at which they perform the task – e.g. due to fatigue and health conditions, the halo effect, etc.

Considering the above, assessment of learning instruments should not be used without being validated. There are several statistical techniques that can be used to do so, such as the Kuder-Richardson coefficient for multiple-choice questions and Cronbach's alpha for open questions. Both measure internal consistency, providing results between 0 and 1 – the higher the internal consistency, the better the reliability of the test/exam. Additionally, non-statistical methods, such as pre-tests with monitored reactions, can be used.
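To make the two coefficients concrete, the sketch below computes them for a small, entirely hypothetical set of item scores (the data, function names and sample size are illustrative assumptions, not part of these Guidelines; real validation would use a proper pilot sample):

```python
# Illustrative sketch with hypothetical data: computing Cronbach's alpha and
# the Kuder-Richardson coefficient (KR-20) for a small set of item scores.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per question, each with one entry per learner."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per learner
    item_variance = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

def kr20(items):
    """KR-20: the Kuder-Richardson special case of alpha for 0/1 (right/wrong) items."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    # For a 0/1 item, its variance is p * (1 - p), with p the proportion correct.
    pq = sum((p := sum(s) / len(s)) * (1 - p) for s in items)
    return k / (k - 1) * (1 - pq / pvariance(totals))

# Hypothetical results: 4 multiple-choice items (0 = wrong, 1 = right), 6 learners.
answers = [
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1, 0],
]
print(round(kr20(answers), 3))  # → 0.809
```

For dichotomous items the two formulas coincide, which is why KR-20 is described as a special case of alpha; for open questions graded on a scale, `cronbach_alpha` would be applied to the per-question scores instead.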

In some circumstances, although technically possible, it might be financially unviable to use statistical techniques or pre-tests to validate the assessment of learning instruments, as the re-use of the instrument will not bring a return on investment. After all, there is a significant difference between an exam needed to achieve a Microsoft certification, which will be applied for years all over the world, and a test used once in a secondary school located somewhere in the world. In those cases, educational organizations should at least assure a systematic approach to the process of developing assessment of learning instruments. This can be done by establishing minimum requirements regarding competence, the process and the instruments. Examples are:

  • Competence requirements:
    • Educators understand the threats to the reliability of the assessment of learning instruments and, consequently, to the validity of their results; and
    • Educators know how to mitigate these threats while designing and developing formative and summative assessment of learning instruments
  • Process requirements:
    • A diversity of methods is used
    • A diversity of instruments is used
    • Results from repeated assessments are used to better approximate the “True Score” across time
  • Requirements for instruments:
    • Instruments are designed by multidisciplinary teams
    • The type of questions used is diversified
    • The number of questions per instrument is increased
    • Objective grading criteria are pre-defined
    • Blind and double grading are implemented
    • Etc.
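Where blind and double grading are implemented, the agreement between the two graders can itself be monitored. One common way to do this (an illustration, not a requirement of these Guidelines) is Cohen's kappa, which corrects the raw agreement rate for the agreement expected by chance; the grades below are hypothetical:

```python
# Illustrative sketch with hypothetical data: quantifying agreement between two
# independent graders of the same assignments using Cohen's kappa.
from collections import Counter

def cohen_kappa(grades_a, grades_b):
    """Observed agreement between two graders, corrected for chance agreement."""
    n = len(grades_a)
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    count_a, count_b = Counter(grades_a), Counter(grades_b)
    # Chance agreement: probability both graders assign the same grade at random,
    # given how often each grader uses each grade.
    expected = sum(count_a[g] * count_b[g] for g in count_a) / n**2
    return (observed - expected) / (1 - expected)

# Two graders independently grading the same 10 assignments (hypothetical grades).
grader_1 = ["A", "B", "B", "C", "A", "B", "C", "C", "B", "A"]
grader_2 = ["A", "B", "C", "C", "A", "B", "C", "B", "B", "A"]
print(round(cohen_kappa(grader_1, grader_2), 2))  # → 0.7
```

A kappa close to 1 indicates that the pre-defined objective grading criteria are being applied consistently; a low kappa signals that the criteria, or the graders' training, need revisiting.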