Short Form Assessments: Some thoughts by Cisco and Eggbert

John and I have been asked a number of times over the past year about our thoughts on the use of "short forms" of the WISC-III. Do they exist? How valid and reliable are they? Are there any easy charts or tables for creating them? Should they be used? Would we be willing to provide charts and tables for them? Because I am somewhat biased about short forms, having recently published an article about a particular WISC-III combination in Psychology in the Schools, John agreed to disagree with me for this article. (As many of you know, John and I have argued with each other, as well as occasionally agreed, at workshops a number of times, and have affectionately been labeled, and subsequently adopted as our moniker, the nicknames "Cisco and Eggbert" of testing.)

"Eggbert, have you got any thoughts on short forms IQ tests? We, or should I say "I", always seem to be complaining about the amount of time we spend testing and evaluating children. Wouldn't some short form be a useful time saver? The usual purpose for readministering intelligence tests to children already identified as learning disabled is usually to simply validate the individual's cognitive functioning as being at least "normal". This having been done, why do it again? Studies have found the results of IQ tests do not usually contribute to recommendations regarding remediation or intervention. They're often simply used to make decisions regarding the continuation of the handicapping condition. Wouldn't these limitation, coupled with the amount of time that we spend administering lengthy assessments, make us want to explore alternative procedures for validating a student's cognitive functioning during a three year evaluation? Don't we test too much and without reason?"

"Cisco, you know me, of course I have some thoughts about short form IQ tests. Since you researched short forms and then created and validated one, let me say that yours was the right way to do it, and doing it any less thoroughly would be reckless. However, there is a right way to fillet a guppy for dinner or to perform brain surgery with only two instruments, and if you are going to do either one you should do it right...but why would you be doing them? If the studies you referred to are right, and we aren't testing to discover current patterns of intellectual abilities or in possible changes in patterns, what are we interested in? Do we simply want a number?"

"Easy Eggbert, you're getting carried away. Special education teams deal with numbers every day. It's the nature of the beast. Some teams 'require' the re-assessment of a child's intellect every three years. That itself seems silly and may be the problem, but since they do, it turns into a question of the redundancy of the testing. We all know places where the motto is 'If it moves..WISC it.' How many times have we tested kids who know the test better than we do because they've been given a WISC 4 times. They ought to be giving the test to us and at times they know the material enough that they are answering before we ask. I love giving the DAS on reevaluation and really stumping the kids who are expecting a WISC. If the child is already identified as learning disabled, wouldn't assessing IQ using a short form be beneficial. Less time is taken on something that we probably won't use anyway, and more emphasis can be placed on areas of identified weakness. And maybe, philosophically, shouldn't we be de-emphasizing the whole idea of IQ as an exclusionary factor in our assessments?"

"Great idea but will it happen? Think about the meetings we've been in where the IQ has changed over a three year interval. There are few sights more pathetic than a team scrambling around trying to explain away a drop in IQ score on re-evaluation. Teams hesitate to admit that three years in a wonderfully created individualized educational setting has cost a child a significant number of IQ points. Teams also don't seem to know what to do with increases in IQ scores. Does this mean that the child has actually gotten smarter, but is learning even less in the special education program? Most of the time, the teams simply explain away unexpected IQ scores and pay no attention to expected ones. If we ignore the retest, why bother in the first place? And what of the dilemma for the teams that do take all the numbers seriously? If a learning disabled child's IQ score drops toward the already-low achievement scores, will the team decide the child is no longer disabled? The child used to have a high IQ with low achievement and was considered educationally challenged by a learning disability. Now the achievement is still low but the IQ has dropped as well, so the child is no longer disabled and no longer in need of service. Decisions like these make Joseph Heller's Catch-22 look like a triumph of rationality. "

"We agree on all this, Eggbert. I think! The testing as it stands is problematic because of the way we use them. But if the team requests an intellectual assessment, and the chances are that nothing more than the total score will be used, and you already have a complete valid assessment, I suggest that the use of a "good" short form is justified. Clearly some rationale for the choice of the short form is necessary, as well as determination of reliability, validity, and standard error is needed. I am not advocating IQ roulette. Examiners choosing or creating short forms must understand the properties of those forms."

"Yes, and the operative word is 'good'. If you do use a short form remember: a short form of a well constructed test is better than a short test with known shortcomings; three or more subtests tapping different abilities is better than a short test tapping only one or two; and it's clearly a professional, ethical, and legal requirement to use an instrument with specified reliability. Your short form procedure meets these requirements, whereas brief tests do not, and estimating, averaging, and prorating certainly do not."

Having said all that, for anyone interested in a short paper on how to use the Tellegen & Briggs formulas to create and validate short forms of major tests, use this link.
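
For readers who want the arithmetic itself, here is a minimal sketch of the two Tellegen & Briggs (1967) formulas that paper covers: the reliability of a composite of subtests, and the conversion of a sum of scaled scores to a deviation quotient on the familiar IQ metric. The subtest values below are hypothetical placeholders, not figures from the WISC-III manual:

    import math

    def composite_reliability(reliabilities, intercorrelations):
        # Tellegen & Briggs composite reliability:
        #   r_cc = (sum r_ii + 2 * sum r_ij) / (k + 2 * sum r_ij)
        # where r_ii are the subtest reliabilities and r_ij are the
        # correlations among all pairs of subtests in the composite.
        k = len(reliabilities)
        s_c = sum(intercorrelations)
        return (sum(reliabilities) + 2 * s_c) / (k + 2 * s_c)

    def deviation_quotient(sum_scaled, k, intercorrelations):
        # Convert a sum of scaled scores (each with mean 10, SD 3) to a
        # deviation quotient (mean 100, SD 15):
        #   DQ = 100 + 15 * (X - 10k) / (3 * sqrt(k + 2 * sum r_ij))
        composite_sd = 3.0 * math.sqrt(k + 2 * sum(intercorrelations))
        return 100 + 15 * (sum_scaled - 10 * k) / composite_sd

    # Hypothetical dyad: two subtests with reliabilities .85 and .88
    # that correlate .60 with each other.
    r_ii = [0.85, 0.88]
    r_ij = [0.60]                              # one pair, one correlation
    print(composite_reliability(r_ii, r_ij))   # about .92
    print(deviation_quotient(26, 2, r_ij))     # about 117

Plugging in the actual reliabilities and intercorrelations from a test's technical manual is what turns an arbitrary subtest combination into a defensible short form.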