cut-e offers innovative, short and precise online ability tests

For years, research has shown that aptitude tests are powerful predictors of long-term professional success. Virtually no other tool provides as much added value for HR decision-making with comparably little time and effort.

All tests in the scales suite developed by cut-e are designed so that they can be administered without an administrator present. They can therefore be used not only for diagnostics, but also for online recruitment and selection processes.

Characteristics of cut-e ability tests that make them candidate friendly and robust

All scales aptitude tests have the following characteristics:

  • Self-explanatory through interactive example sequences.
  • Cheat-proof due to item generators. This technology generates an individual test for every participant, preventing cheating: no sample solutions exist, and learning effects from repeated testing are negligible.
  • Hardware-independent due to vector graphics. The way each task is displayed adapts to the user’s screen settings (resolution and proportions), avoiding negative effects caused by the hardware used or the user’s internet connection.
  • Language availability, guaranteeing comparability and fairness for participants with different mother tongues. The systems and the instruments are available in many languages.
  • Valid test results with a maximum test time of 15 minutes.
  • Scientifically sound – ensured through continuous standardisation and validation studies in cooperation with companies and universities.
  • Certified by Det Norske Veritas, according to the framework of the International Test Commission.
  • Seamless integration into existing recruitment workflow systems.
  • Barrier-free, in accordance with the provisions for accessible information technology based on the equal opportunities act.

New guide offers best practice on retesting job applicants

The cut-e Guidelines for retesting job candidates explain 'when' retesting is appropriate, 'what' to retest and 'how' to do it, as well as how to interpret the results and how to communicate effectively with candidates. "Candidates may feel less inclined to cheat if you make it clear during your application process that you'll conduct supervised retesting," said Dr Lochner.

Research supports our test development

cut-e works with a range of universities on student-driven research and has its own International Research Team that continually researches, maintains and improves our online assessments.

An example: Nina Galler, consultant with cut-e, researched the effect of cheating in online assessment for her Bachelor's thesis. “With organizations increasingly using online assessment in their recruitment activities, it is tempting for candidates to try to manipulate the testing situation and seek ways to cheat. We looked specifically at a short-term memory test, scales stm, and the impact on test results.” The results fed into our test development to help further cheat-proof our tests.

cut-e ability tests in use

Processes can be managed either by cut-e or by clients autonomously. Many different functions are available for entering and adding projects and candidates, for sending e-mails and creating various reports.

The results can be called up simply and easily online. They are clearly arranged in a profile chart, or alternatively narrative feedback reports can be generated.

The system, tests and reports are available in several languages. Additional language versions are available on request. The international and local set of norm groups as well as the language versions are continually updated.

What are the advantages of adaptive testing?

Adaptive testing offers a number of advantages over conventional testing, which was developed on the basis of classical test theory. Dr Katharina Lochner, Research Director at cut-e, explains how adaptive testing works.

What is adaptive testing?
Lochner: “In adaptive testing, the test or the questionnaire is adapted to the answers given by the candidate. We evaluate cognitive capacity like numeracy-based logic or concentration. In this case, the candidate is automatically presented with a task (item) that will give us maximum information. By way of the questionnaire, we aim to find out something about personality, skills, motives or values, and evaluate which aspects of these are more or less prominent in an applicant. We achieve this by comparing different elicited characteristics. With an adaptive questionnaire, you need not compare each concept with each other, but only those that are similar.”
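
The idea of presenting the task that "will give us maximum information" can be illustrated with a small sketch. This is not cut-e's actual algorithm; it is a minimal example assuming a one-parameter (Rasch) item response model, in which an item is most informative when its difficulty matches the candidate's current ability estimate:

```python
import math

def item_information(ability: float, difficulty: float) -> float:
    """Fisher information of a Rasch (1PL) item at a given ability level.

    Information p * (1 - p) peaks when the item's difficulty
    equals the candidate's ability.
    """
    p = 1.0 / (1.0 + math.exp(-(ability - difficulty)))
    return p * (1.0 - p)

def next_item(ability_estimate: float, unused_difficulties: list[float]) -> float:
    """Select the not-yet-presented item that yields maximum
    information at the current ability estimate."""
    return max(unused_difficulties,
               key=lambda d: item_information(ability_estimate, d))
```

For a candidate currently estimated at average ability (0.0), `next_item(0.0, [-2.0, -1.0, 0.5, 2.0])` picks the item with difficulty 0.5, the closest match; after each answer the ability estimate would be updated and the selection repeated.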

What advantages are there to adaptive testing?
Lochner: “In a test, the candidate does not need to process any tasks that are either too easy or too difficult for them. This reduces the processing time. Furthermore, the candidate is neither bored by items that are too easy nor overwhelmed by those that are too difficult. A questionnaire yields a similar effect: because the applicant does not need to grapple with unnecessary statements, the processing time is considerably reduced. This is not only more agreeable for applicants, but also better for the company to which they are applying. A study has shown that many applicants abandon a questionnaire when it takes more than 20 minutes to complete, and those who quit are to a large extent the very good candidates the company is interested in. It is precisely these candidates who should not be expected to work through tediously long questionnaires, as they also have good chances in the application procedures at other companies.

A further advantage is that adaptive methods make tests and questionnaires more secure. Each individual execution of a test is different – different items are presented. This makes it impossible for candidates to obtain any answers prior to the test and thereby cheat in the application process.”

Are there any disadvantages to adaptive testing?
Lochner: “Aside from the obvious advantages in application, there is extra effort and expense in developing adaptive tests and questionnaires. A very large pool of items is required, as well as large-scale sampling for testing and validation. This takes time and financial investment. Compiling an adaptive test is only worthwhile if enough candidates are expected. In addition, processing usually has to be done electronically. Before computers were in daily use, there were attempts to employ adaptive testing with paper and pencil; this proved cumbersome, and at times impossible.

For candidates, an adaptive method means not being able to go back to an item and modify a previous answer: answers, once given, cannot be changed. Moreover, items are processed one at a time, so only the current item is visible to the candidate, and no item can be skipped. Being able to move neither back nor forward may cause a certain degree of stress in the candidate taking the test. Furthermore, no two tests or questionnaires are identical, unlike in non-adaptive methods, in which all candidates receive exactly the same items.”

White Paper adaptive testing

With the adalloc method, cut-e offers a new and unique technology for the adaptive measurement of different types of concepts. The adalloc method is suitable for measuring competencies, personality dimensions, attitudes, interests and values, as well as for assessing job requirements.

Is there an easy way to match a candidate’s abilities to the requirements of any job?

The Occupational Information Network (O*NET) is an online database containing validated information on hundreds of different occupations, including the knowledge, skills, abilities and education level required in each role. It is widely used as a definitive reference source for occupations.

At cut-e, we've linked to the O*NET database and created an online cut‑e product finder, which allows you to search for any job role. For example, search for 'psych' and you'll see a list of possible roles including clinical psychologist, psychiatrist and psychiatric technician. You can then click to obtain more information about your chosen role, directly from the O*NET database, including the activities and tasks involved and the typical salaries available. We've also analysed the requirements of each job and created a list of recommended assessments from our portfolio which would help you to assess applicants for every role.

Usually, before utilising an assessment test, you'll want to conduct a pre-screening validation study to ensure that your chosen test reflects the requirements of the role (content validity) and that it measures the relevant traits (construct validity).

It can be challenging to undertake these validation studies for every job. However, because the O*NET database tells you the components required in the role, you can speed up this process by apportioning validity on the basis that your tests have already been validated elsewhere against those particular components and attributes. This is called 'synthetic validity'. In some countries, this counts as sufficient validation. In other countries, such as the US and the UK, you would still have to undertake a post-screening validation to ensure your test is predicting what it's meant to predict (criterion validity).

At cut-e, we have 30 different scales ability tests which cover verbal and numerical abilities, abstract logical abilities, special knowledge/skills and specific cognitive abilities such as reaction speed, multi-tasking capability and short-term memory.

You can combine our tests to assess for the specific knowledge, skills and abilities required in any job. Instead of having to go through a different report for each ability test, you'll get a single, validated report which provides a 'match score' for each candidate against the specific role. This makes it much easier and quicker for you to select the best candidates.

How do you combat fraud and verify results are from the correct candidate?

You need to create an ‘honesty contract’ with your candidates. Make it clear that they will be re-tested if/when they’re invited for interview and that any major discrepancies will be investigated. 

More importantly, you need to make sure that the instruments you use are stable and fake-proof in themselves, by using technologies such as item generators and adaptive testing, but also by designing item formats that are genuinely hard to break. At cut-e, we have developed all our measurement tools for unsupervised online use from the outset, always with an eye on how to make them as fake-proof as possible.

Ask the expert: How can online tests be developed to protect against cheating?

There are often reservations regarding online assessment, especially when it is applied in an unsupervised manner. cut-e, as an expert in online psychometrics, regularly receives questions from clients about this matter. We have answered some of the most frequently asked questions.

Respondent identity – who is taking the assessment?
When using online assessment instruments, companies are often uncertain whether candidates take the tests on their own, or whether someone else helps them or has been brought in to produce a better result. How do you know that the person who took the test is actually your candidate, and that they did the test on their own?

The use of online assessment is most effective when selecting people from a large pool of applicants. The purpose of using online assessment tools in pre-selection is to rule out applicants with insufficient test results (so-called negative selection). An applicant who does well online is then invited to a job interview and/or a more in-depth assessment. They can and should be tested again when they come on site (a so-called retest), in combination with additional measures of ability. At this point it will become obvious if the results are much lower than during the online screening, indicating some sort of irregularity.

One easy way to lower the motivation for cheating is to describe the retesting process before administering the online test. This usually eliminates the problem: practical experience shows that false-identity attempts are no longer an issue where this process is used.

Training for tests – can candidates practice?
Cognitive abilities can be trained. Therefore, the performance in online tests can be improved, or so goes the assumption.

While this is true in principle, it barely affects selection processes, because the abilities would have to be trained over a long period of time to produce significant improvements in performance.

An example: imagine you needed a good track runner. In order to select the best runner, you would measure their running time. Let's say a candidate works out intensely one day before the test run; this will have a small or even negative effect on their test performance. But if they trained their running skills for six months prior to the test, their performance would improve. You would be happy to see such a good performance and choose the candidate, and the candidate's sustained effort and practice would eventually make them a better runner. The same goes for cognitive abilities.

Sample solutions - Won’t answers for online tests be passed around?
Access to sample solutions on the internet is a genuine problem for traditional tests. One therefore has to be careful not to design online tests that are static. It is vital that there are many different versions of each test, drawing their items from an item pool via a so-called item generator. This way, every candidate gets an individualised test version of selected but comparable items, and sample solutions no longer help. Countless parallel versions can be generated from an item pool, and the probability of two candidates getting the same test is smaller than that of winning the lottery ten times in a row. By attributing a difficulty rating to the different items, it is guaranteed that the overall difficulty remains the same for each candidate.
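
As a rough sketch of the principle (the actual scales item generators are far more sophisticated), an individualised test can be drawn from difficulty-banded item pools using a per-candidate random seed, so every candidate sees different items but the same difficulty profile. The pool contents below are invented for illustration:

```python
import random

# Hypothetical item pool: items within a band are interchangeable in difficulty.
ITEM_POOL = {
    "easy":   ["2 + 3", "4 + 1", "5 - 2"],
    "medium": ["12 * 3", "48 / 4", "17 + 26"],
    "hard":   ["13 * 17", "391 / 17", "256 - 169"],
}

def generate_test(candidate_id: str, items_per_band: int = 2) -> list[str]:
    """Build an individual test: the same number of items is drawn from
    each difficulty band, so overall difficulty is constant, but which
    items appear (and in what order) depends on the candidate."""
    rng = random.Random(candidate_id)  # per-candidate seed: reproducible per person
    test = []
    for band in ("easy", "medium", "hard"):
        test.extend(rng.sample(ITEM_POOL[band], items_per_band))
    return test
```

Calling `generate_test` twice with the same candidate ID reproduces the same test (useful for auditing), while different candidates almost certainly receive different item selections.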

Support tools – can they be used to cheat?
What about the use of calculators or support tools to gain an advantage on the tests?

The challenge lies in the development of the test. Exercises have to be designed so that support tools confer no benefit at all. If, for example, a test measures calculation skills, it should be designed so that using a calculator adds nothing: candidates might be asked to fill blank spaces with operators or other calculation symbols, which requires logical thinking rather than mere arithmetic. An alternative is to state in the instructions that all candidates are permitted to use a calculator, but then somewhat different skills are measured.

Overview of scales aptitude tests

Search result for: Numerical abilities

  • scales numerical (industry) - Numerical reasoning

    This test measures the ability to draw logical conclusions from complex numerical information presented in tables and charts. Additionally, it measures the ability to retrieve relevant information when confronted with different types of questions. The handling of the test is analogous to modern office software.

Search result for: Verbal abilities

  • scales verbal (industry) - verbal reasoning

    The test measures the ability to draw logical conclusions from complex verbal information. Additionally, it measures the ability to retrieve relevant information when confronted with different types of questions. The handling of the test is analogous to modern office software.

Search result for: Abstract logical abilities

  • scales cls - Inductive-logical Thinking

    This test measures inductive logical reasoning. The task is to work out rules and interrelations which assign tables to two different categories. The discovered rules need to be applied in order to assign new tables to the relevant categories.

  • scales fx - Deductive-logical Thinking

    This test measures deductive logical reasoning. The task is to work out the operating mode of an element by means of exploration. The answer format and the additional data collected rule out correct answers by guessing.

  • scales ix - Inductive-logical Thinking

    This test measures logical thinking ability. The task is to detect the rule the objects have in common and to find the object that does not match this rule.

  • scales lst - Deductive-logical Thinking

    This test measures deductive reasoning ability. The test is based on a grid containing several objects. One cell in this grid is marked by a question mark. Each object appears only once per row and per column. The task is to work out what object should be in the cell marked with a question mark.

  • scales sx - Deductive-logical Thinking

    This test measures deductive logical reasoning. The task is to identify the correct operator based on a specific result. The answer format and the additional data collected rule out correct answers by guessing.

Search result for: Specific cognitive abilities

  • scales mt (sonic) - Multi-tasking Capability

    What does this assessment measure?

    Ability to multi-task

    What is the task?

    The test taker is presented with three different tasks and is required to work through them simultaneously. These tasks include hand-eye coordination, focused calculation and focused checking. The special characteristic of this test is the audio task: a sound sequence is played, comprising five randomly chosen letters of the phonetic alphabet, and the task is to identify whether any letter is played twice during the sequence. Immediate feedback is given on each answer.

    Owing to the audio task, scales mt (sonic) requires a controlled and standardised setting. Consequently, the test must be carried out in a supervised environment (for example Onsite-AC) and on a desktop PC.
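
    The audio sub-task boils down to duplicate detection in a short sequence, which can be sketched as follows (illustrative only, not the real implementation):

```python
def letter_repeats(sequence: list[str]) -> bool:
    """True if any phonetic-alphabet letter occurs more than once
    in the played five-letter sequence."""
    return len(set(sequence)) < len(sequence)

print(letter_repeats(["Alpha", "Bravo", "Charlie", "Bravo", "Echo"]))  # True
print(letter_repeats(["Alpha", "Bravo", "Charlie", "Delta", "Echo"]))  # False
```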

  • scales ndb - Spatial Orientation

    This test measures sense of orientation. The task is to specify the position and course of a plane relative to a non-directional beacon with the aid of a gyrocompass and a radio compass.

Search result for: Specific knowledge

  • scales lt-no - Language Skills - Norwegian

    This test measures Norwegian language ability adaptively by testing the three language aspects: fluency, vocabulary and spelling. Contains a speed as well as a power component.

  • scales lt-se - Language Skills - Swedish

    This test measures Swedish language ability adaptively by testing the three language aspects: fluency, vocabulary and spelling. Contains a speed as well as a power component.

Search result for: Personality

  • shapes (management) - Work-related Behavior

    This questionnaire particularly measures management behaviour and potential.

  • shapes (sales) - Work-related Behavior

    A considerably modified version of shapes (management), combining some shapes primary scales (standard version) with some views scales; some items differ. This questionnaire is appropriate for candidates without a university degree.

Search result for: Values, motives and culture

  • chatAssess – work-related interests and motivations

    What does this assessment measure?

    Behaviour, personality and ability in a client-specific, work-based scenario, using mobile, interactive technology to simulate real-time instant messaging.

    What is the task?

    The test taker is presented with incoming, realistic work messages from different people and responds by selecting one of the answer options given. Based on the test taker's response, subsequent questions or situations are presented. The simulated instant-messaging format makes for a more realistic and enjoyable experience than traditional situational judgement tests.

  • views - Work-related Interests and Motives

    views is a system of adaptive computerised questionnaires that helps understand individual values, motives and interests. With its unique measurement technique, it is an ideal tool for career guidance, coaching and team development.

Search result for: Integrity

  • squares - Situational Behavior

    The squares psychometric tool measures the probability of counterproductive behaviour in a work context. Thanks to an innovative item format, squares is very agreeable, intuitive to understand and easy to complete.

Reference reading: all about ability testing

Beauducel, A., & Kersting, M. (2002). Fluid and Crystallized Intelligence and the Berlin Model of Intelligence Structure (BIS). European Journal of Psychological Assessment, 18, 97–112.

Birney, D. P., Halford, G. S., & Andrews, G. (2006). Measuring the Influence of Complexity on Relational Reasoning. The Development of the Latin Square Task. Educational and Psychological Measurement, 66(1), 146-171.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, New York: Cambridge University Press.

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. doi:10.1037/h0046743 

Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. New York, NY: Basic Books.

Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.

Halford, G. S., Wilson, W. H., & Phillips, S. (1998). Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology. Behavioral and Brain Sciences, 21, 803-831.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253–270. doi:10.1037/h0023816 

Hornke, L. F., Küppers, A., & Etzel, S. (2000). Konstruktion und Evaluation eines adaptiven Matrizentests. Diagnostica, 46(4), 182–188. doi:10.1026//0012-1924.46.4.182 

Jäger, A. O. (1984). Intelligenzstrukturforschung: Konkurrierende Modelle, neue Entwicklungen, Perspektiven [Research on intelligence structure: Competing models, new developments, perspectives]. Psychologische Rundschau, 35, 21–35.

Jäger, A. O., Süß, H.-M., & Beauducel, A. (1997). Berliner-Intelligenz-Struktur-Test: BIS-Test, Form 4 [Berlin-intelligence-structure-test: BIS-test form 4]. Göttingen: Hogrefe.

Kersting, M., Althoff, K., & Jäger, A. O. (2008). Wilde-Intelligenz-Test 2. Göttingen: Hogrefe.

Liepmann, D., Beauducel, A., Brocke, B., & Amthauer, R. (2007). Intelligenz-Struktur-Test 2000 R. [Manual] (2nd ed.). Göttingen: Hogrefe.

McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1–10.

Raven, J., Raven, J. C., & Court, J. H. (2003). Manual for Raven's Progressive Matrices and Vocabulary Scales. San Antonio, TX: Harcourt Assessment.

Spearman, C. (1904). “General Intelligence,” Objectively Determined and Measured. The American Journal of Psychology, 15(2), 201–292. doi:10.2307/1412107 

Sternberg, R. J. (1985). Beyond IQ: A Triarchic Theory of Intelligence. Cambridge: Cambridge University Press.

Thurstone, L. L. (1938). Primary mental abilities. Chicago: University of Chicago Press.

Wechsler, D. (1939). The Measurement of Adult Intelligence. Baltimore, MD: Williams & Wilkins.

Wechsler, D., Coalson, D. L., & Raiford, S. E. (2008). Wechsler Adult Intelligence Scale—Fourth Edition (WAIS IV). San Antonio, TX: Pearson.

Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III. Itasca, IL: Riverside Publishing.

cut-e product finder

Search amongst over 40 different online psychometric assessments for the right test or questionnaire to suit your needs.