The World's Only Test Security Blog
Posted by David Foster, Ph.D.
Caveon has a history of, and a passion for, innovation. We believe that just because something has always been done a certain way doesn’t mean it should be done that way. We are constantly looking for ways to improve our processes, products, and outcomes. That drive led us to create two pretty darn impressive innovations: the SmartItem and Caveon AIG.
Automated Item Generation (AIG) is a process that leverages test item templates and computer algorithms to quickly create a large variety of item variants or similar test items. The result is hundreds or thousands of new test questions based on a single item template. As the name suggests, AIG automates much of the effort involved in item creation, a time-intensive and costly part of test development.
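To make the template-and-algorithm idea concrete, here is a minimal, hypothetical sketch of template-based item generation. The template, answer rule, and distractor rule are my own illustrations for a simple math objective, not Caveon's actual format or tooling:

```python
import itertools

# Hypothetical AIG template: variable slots plus a rule for the correct answer.
TEMPLATE = "A train travels {speed} km/h for {hours} hours. How far does it go?"

def generate_variants(speeds, hour_values):
    """Expand one template into many static items, each with a computed key."""
    items = []
    for speed, hours in itertools.product(speeds, hour_values):
        stem = TEMPLATE.format(speed=speed, hours=hours)
        key = speed * hours                          # answer rule for this template
        distractors = [key + d for d in (-10, 10, 25)]  # simple distractor rule
        items.append({"stem": stem, "key": key, "distractors": distractors})
    return items

# 9 speeds x 4 durations = 36 static items from a single template.
bank = generate_variants(speeds=range(40, 121, 10), hour_values=range(2, 6))
```

Real AIG templates encode far richer constraints (content models, distractor plausibility, difficulty controls), but the core mechanic is the same: one template, many static items written to the bank.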
I invite you to read this article and this ultimate guide to learn more about how AIG works, its benefits, and how you can implement AIG within your program.
While the concept and promise of AIG have been around for quite some time, AIG has always required advanced item modeling skills and/or coding skills to pull off. At Caveon, we have built a way to make AIG accessible and usable for testing programs of every size. We call this technology Caveon AIG.
A SmartItem is a general design concept (we often call it an “item treatment” because you can use it with any item type—from multiple choice to short answer, hot spot to essay items, etc.). A SmartItem uses special technology during exam development and test administration so that each individual SmartItem renders differently each time it is given on a test. Translation: no two examinees experience the same thing during testing, so “stealing answers” to gain an advantage is pointless. To encourage proper studying and instruction, the SmartItem is coded to completely cover the target standard or competency.
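A toy illustration of the delivery-time behavior described above, assuming a simple arithmetic objective. The function names and the rendering logic are hypothetical, written only to show the contrast with pre-generated items; they do not reflect Caveon's actual SmartItem implementation:

```python
import random

# Illustrative "SmartItem" for the objective "multiply two one-digit numbers".
# A fresh rendering is produced at delivery time; the key is computed, not stored.
def render_smartitem(rng):
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return {"stem": f"What is {a} x {b}?", "key": a * b}

def score(rendering, response):
    """Score against the key computed for this particular rendering."""
    return response == rendering["key"]

rng = random.Random()        # each examinee gets an independent draw
item = render_smartitem(rng)
# Two examinees rarely see the same stem, so a leaked answer key is useless.
```

The design point is that nothing reusable ever sits in a bank: the item exists only as the rendering an individual examinee saw, scored on the fly.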
We often get asked whether SmartItems are just another form of AIG. And while they may seem similar, they aren’t the same. Let’s start by looking at some of their similarities:
AIG impacts test security in two ways. First, it makes secure test design accessible. Most secure test designs such as CATs, LOFTs, or even a plethora of forms require a large pool of items. AIG makes these designs possible for even small programs. Second, AIG blocks pre-knowledge. By making secure item design possible, AIG makes buying and memorizing items from the internet pretty pointless. A large enough item bank utilized properly makes it unlikely that any exposed items will be on any person’s actual exam.
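The claim that a large enough bank makes exposed items unlikely to appear on any one exam can be made concrete with a small hypergeometric calculation. The bank sizes and leak counts below are illustrative numbers of my own, not figures from Caveon:

```python
from math import comb

def p_sees_exposed(bank_size, exam_length, exposed):
    """Probability that a randomly drawn exam contains at least one exposed item."""
    clean_draws = comb(bank_size - exposed, exam_length)
    return 1 - clean_draws / comb(bank_size, exam_length)

# Small bank: 20 leaked items out of 300, 50-item exam -> overlap is near certain.
print(p_sees_exposed(300, 50, 20))
# Large AIG-expanded bank: the same 20 leaked items out of 10,000 -> overlap is rare.
print(p_sees_exposed(10_000, 50, 20))
```

This is the mechanism behind the "pretty pointless" point above: expanding the bank by two orders of magnitude drives the chance of meeting a leaked item from near certainty down to a small fraction.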
Let’s change gears to the ways SmartItems improve test security. By presenting different renderings of each SmartItem to every test taker, testing programs can dramatically reduce cheating. A SmartItem makes it so a test taker can no longer cheat by sharing content, buying questions and answers, or asking a friend to take the test and tell them what they saw. In addition, a SmartItem cannot be stolen. (Well, it can, but it would be pointless.) Because a SmartItem covers the entire skill or learning objective, a SmartItem answer key that is posted online will benefit nobody.
A test taker cannot tell the difference between a test developed traditionally, one using AIG technology, or one made up entirely of SmartItems. Instead, the test taker will simply see an exam made up of multiple-choice, essay, matching, hotspot questions, or whatever item type and exam structure you choose for your test.
Neither AIG nor SmartItems would be possible without recent advancements in technology. While the dream of an ideal test that selects randomly from “a universe of items” has been around since Frederic Lord in the 1950s and Lee Cronbach in the 1970s, SmartItems made that dream a reality with 21st-century technology. (Learn more about Lord’s vision here and here, and Cronbach’s vision here and here.) Similarly, AIG was relegated to the obscure world of academia and computer programmers until recent advancements in testing software and graphical user interfaces (GUIs) made it accessible to all.
AIG and SmartItems are easy to create and implement using the GUI in Caveon’s test development and delivery software (learn more about this software, Scorpion, here). They can also be used with a testing program’s existing platforms. AIG items can be exported and used with any vendor’s test administration platform. With the SmartItem, a client can either use Scorpion to administer tests or integrate the SmartItem API with their existing vendor’s test administration technology.
There are seven distinct differences between AIG and SmartItem technology:
1. AIG generates static items during the test development process. Those items sit in item bank databases until placed on test forms, and the items on a given form are seen by every test taker who is administered that form. In contrast, a SmartItem renders a different version of itself to each test taker in real time during exam administration. While the test-taker data is captured, that particular rendering of the SmartItem is transitory and is unlikely to ever be seen again.
2. AIG is a process used to create many items during test development. A SmartItem is an actual item, albeit a non-static one, that is used on the test.
3. The goal of SmartItems is to reduce the size of an item bank to, ideally, one item (that is, one SmartItem) per objective or standard. The goal of AIG is to expand an item bank by automatically creating hundreds or thousands of items that can either be stored in an item bank or placed on forms. Many items created by AIG may never be used at all because there will be no need for them.
4. Renderings of a SmartItem do not need field testing or review beyond the quality-assurance process applied to the SmartItem as a whole. On the other hand, each item created by AIG is usually reviewed and/or field tested during the development process.
5. There is no risk of a SmartItem being over-exposed or disclosed, and a SmartItem never needs to be replaced unless the objective or skill being measured changes. However, if any items created by AIG are exposed (e.g., found on a braindump site), those items must be replaced immediately to protect the validity of the exam.
6. Using SmartItem technology eliminates the need for forms and many other features of traditional test design. AIG creates items that fit within existing test designs and development processes, such as the use of multiple equivalent or equated forms.
7. SmartItems are often referred to as “the future of testing” and of test security. The SmartItem is a step toward a new and better way of producing and administering tests, one that improves fairness, validity, and accessibility while reducing costs. AIG, by contrast, is a more straightforward process that supports current methods of testing.
AIG is a process. It leverages test item templates and computer algorithms to quickly create a large pool of item variants or similar test questions. The result is hundreds or thousands of new test questions, all based on a single item template. As the name suggests, AIG automates much of the effort involved in item creation—which is a time-intensive and costly part of test development.
A SmartItem is an item treatment. Each individual SmartItem completely covers a standard or objective. The SmartItem uses special technology during exam development and test administration to render differently every time it appears on a test. This makes it so that no two examinees experience the same item during testing, making efforts to “steal answers” or gain an advantage pointless.
While Automated Item Generation and SmartItem technology have many similarities, they are entirely different innovations that impact the world of testing (and the methods needed to ensure test security) in vastly different ways. You can learn more about automated item generation in this ultimate guide, and SmartItem technology in this booklet.
A psychologist and psychometrician, David has spent 37 years in the measurement industry. During the past decade, amid rising concerns about fairness in testing, David has focused on changing the design of items and tests to eliminate the debilitating consequences of cheating and testwiseness. He graduated from Brigham Young University in 1977 with a Ph.D. in Experimental Psychology, and completed a Biopsychology post-doctoral fellowship at Florida State University. In 2003, David co-founded the industry’s first test security company, Caveon. Under David’s guidance, Caveon has created new security tools, analyses, and services to protect its clients’ exams. He has served on numerous boards and committees, including ATP, ANSI, and ITC. David also founded the Performance Testing Council in order to raise awareness of the principles required for quality skill measurement. He has authored numerous articles for industry publications and journals, and has presented extensively at industry conferences.
For more than 18 years, Caveon Test Security has driven the discussion and practice of exam security in the testing industry. Today, as the recognized leader in the field, we have expanded our offerings to encompass innovative solutions and technologies that provide comprehensive protection: Solutions designed to detect, deter, and even prevent test fraud.