Test Behaviors of Interest & Testing the Limits Ideas

John Willis, Jonas Taub, Ron Dumont and Nancy Marron

Examples are gleaned from the WISC-III and DAS but may be relevant to other tests that utilize similar measures:

Look for some excellent suggestions in Jerome Sattler's (1992) Assessment of Children (revised and updated 3rd ed.; San Diego, CA: Jerome M. Sattler, Publisher) and Audrey Myerson O'Neill's (1995) Clinical Inference: How to Draw Meaningful Conclusions from Tests (New York: Wiley).

Information: Is the student able to retrieve information fairly easily, or is retrieval inaccurate and inconsistent? Does the student have particular holes in his or her knowledge base? When there are holes, what about the difficult items the student does pass? Are there particular strengths or areas of keen interest and experience? An interview will often reveal this. I am always watchful for the kid who misses questions involving time concepts but has all this science knowledge.

Group items by category (number items, science items, general factual items). Does the child miss a certain type?
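
A minimal bookkeeping sketch (in Python) of that grouping; the category assignments here are purely hypothetical, since the test publishes no official breakdown:

    from collections import Counter

    # Hypothetical mapping from Information item numbers to content categories.
    ITEM_CATEGORY = {
        1: "general", 2: "number", 3: "general", 4: "science",
        5: "number", 6: "science", 7: "general", 8: "number",
    }

    def misses_by_category(missed_items):
        """Tally missed items by content category to expose patterns."""
        return Counter(ITEM_CATEGORY[i] for i in missed_items)

    # Example: a child whose misses cluster on number items.
    print(misses_by_category([2, 5, 8, 4]))  # Counter({'number': 3, 'science': 1})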

Testing limits: With some children who miss number items, I say, "Tell me the days of the week." Can they name the days, but out of sequence? Do they have a problem initiating? (A long pause with no answer; you say, "Let me start you - Sunday…" and the child jumps in and says the rest.) Then: "Tell me the months of the year."

Similarities (DAS - Similarities): The DAS format is like the WISC-III's, except that it provides three target words instead of two. This gives a child who does not know one of the words a chance at earning credit anyway. How often do you get asked to define Tribe?

Arithmetic: Do they ask for repetition? Are they able to hold and manipulate information without losing some of it, especially on Arithmetic and Digits Reversed? Do they remember the information correctly but get the task wrong, or is the memory itself inaccurate?

Testing the limits - try a paper-and-pencil task. First present the same problems in written form. Then allow scratch paper. Finally, by converting the word problems to a paper-and-pencil calculation test (see example), you may be able to discern whether this is a problem of attention/concentration and/or of mathematical skill.

Digit Span (DAS - Recall of Digits): Obviously, make a verbatim copy of the response, whether correct or not. This allows for item analysis. What errors were made? Did the child simply forget the items, remember the items but mis-sequence them, or remember the items but have intrusions of other numbers? Did the child on the WISC-III verbally compensate by repeating the numbers to him/herself while you read them at a rate of one per second? On the DAS the numbers are read at two per second, somewhat alleviating the verbal compensation and making the test a "purer" measure of short-term memory.
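
For that item analysis, a rough Python sketch can sort a failed response into the three error types named above; the function and labels are mine, not part of any scoring manual:

    from collections import Counter

    def classify_digit_span_error(stimulus, response):
        """Label a digit-span response: sequencing, intrusion, or omission."""
        if response == stimulus:
            return "correct"
        if Counter(response) == Counter(stimulus):
            return "sequencing"  # right digits, wrong order
        if set(response) - set(stimulus):
            return "intrusion"   # digits that were never presented
        return "omission"        # digits dropped (simply forgotten)

    print(classify_digit_span_error([5, 1, 8], [5, 8, 1]))  # sequencing
    print(classify_digit_span_error([5, 1, 8], [5, 1]))     # omission
    print(classify_digit_span_error([5, 1, 8], [5, 1, 9]))  # intrusion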

Picture Completion: Does the child respond with a word, a description of the missing detail or by pointing? When he has difficulty coming up with the specific labeling word, I begin to wonder about word retrieval difficulties.

I use the abbreviations PC and PIC for "pointed correctly" and "pointed incorrectly."

Testing the limits: I have used the Picture Completion subtest as an un-normed naming vocabulary test. Go back and, instead of "What is missing?", try "What is this?" If I sense poor one-word expressive vocabulary, I follow up with a better measure.

Coding: I observe their tracking skills and the errors they make. Do they have an orderly approach, or do they have difficulty finding their place? Do they begin to memorize symbols (you'll notice this if they don't look up at the code on top)?

Beware of making too much of speed and slowing. Some authors (Kaufman, 1995; Nicholson & Alcorn, 1994) have suggested that slowing of responses on the Coding subtest may be indicative of certain problems. We (Dumont, Willis, Farr, & Whelley, 1998) found that slowing on Coding was normal.
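
If you want to examine slowing yourself, a simple sketch in the spirit of that study's 30-second-interval tallies (the counts below are hypothetical) is enough:

    def interval_changes(counts):
        """Change in Coding output from each 30-second interval to the next."""
        return [later - earlier for earlier, later in zip(counts, counts[1:])]

    counts = [16, 14, 13, 13]        # symbols completed in each 30-second interval
    print(interval_changes(counts))  # [-2, -1, 0]: mild slowing across intervals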

Picture Arrangement: Time the standard procedure, but also record the time it takes the student to initiate action. Although the student receives credit based on the total time for completion, many children spend large amounts of time "figuring out" the stories. Timing how long it takes to make the very first move may help you understand how they are processing the information.

Testing the limits: Any story that is incorrect should be questioned as part of the testing-the-limits stage. I lay out the pictures exactly as the child arranged them and ask, "Tell me the story that this represents." That gives me a sense of the logic behind the arrangement.

Block Design (DAS - Pattern Construction): I am intrigued by students who misplace one of the diagonal blocks on Block Design. Seeing the error, the student rotates the block -- still wrong. A second rotation -- wrong again. The next turn will do it, but instead the student flips the block over to the identical, opposite face and starts over. I mark this behavior on the protocol as TTF (turn, turn, flip). Some kids rate several TTFs. I then wonder about frustration tolerance, ability to stick with a strategy, etc. As with all such hypotheses, I then seek real-life evidence to confirm or refute my guess. Sometimes it is a unique response to the artificial demands of the test. Occasionally it helps me learn something about the student's operating style in other situations.

I use CBO and CBR as abbreviations to indicate "Correct but overtime" and "Correct but rotated."

Look to see whether there are quantitative as well as qualitative differences as the task changes. Note that Items 1-2 use a model made by the examiner (three-dimensional), Items 3-5 use a pictured model with the dividing lines shown (two-dimensional), Items 6-9 have 4 blocks but no dividing lines in the picture, and Items 10-12 have 9 blocks and no dividing lines. Some children will make errors as the stimuli shift slightly; I look to see whether that is where the errors begin.
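
A small sketch of that error bookkeeping; the item groupings follow the paragraph above, and the code itself is only illustrative:

    # WISC-III Block Design items grouped by stimulus format, as described above.
    ITEM_FORMAT = {}
    ITEM_FORMAT.update(dict.fromkeys((1, 2), "examiner's model (3-D)"))
    ITEM_FORMAT.update(dict.fromkeys((3, 4, 5), "picture with dividing lines"))
    ITEM_FORMAT.update(dict.fromkeys((6, 7, 8, 9), "4 blocks, no lines"))
    ITEM_FORMAT.update(dict.fromkeys((10, 11, 12), "9 blocks, no lines"))

    def first_error_format(failed_items):
        """Report the stimulus format at which errors first appear."""
        return ITEM_FORMAT[min(failed_items)] if failed_items else None

    print(first_error_format([6, 8, 10]))  # '4 blocks, no lines'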

General - What is the student's response style? Quick or slow? Self-confident? Impulsive? Cautious? Is there a response delay, and what might it be due to?

When the student makes an obvious error -- e.g., an obviously senseless PA story; or a total mess of a BD item; or making a giraffe with a goiter instead of a horse; or a VW with two extra, floating parts instead of a large sedan; or a Picasso face -- does the student make a visible effort to ignore the error (hoping perhaps I will, too?), try to rationalize it ("That's how I make horses!" "It's a little car"), try again, or frankly admit failure and give up?

One of us measures latency before beginning a response, and silences during a response, by discreetly marking one dot each second while he waits [yes, he also uses his stopwatch to time the item correctly]. The protocol gives a visible record of the student's speed, fluency, and spontaneity, and allows you to determine whether an overtime response on, for instance, PC went overtime because of the initial delay or because of subsequent pauses.
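
That attribution can be sketched in code with hypothetical numbers; this is just the arithmetic of the dot record, not any standard scoring rule:

    def overtime_source(latency, pauses, limit, total):
        """Attribute an overtime response to initial latency vs. later pauses.

        latency: seconds of silence before the response began
        pauses:  silent stretches (in seconds) during the response
        """
        if total <= limit:
            return "within limit"
        return "initial latency" if latency >= sum(pauses) else "mid-response pauses"

    # Hypothetical item: 12 s before answering, two brief pauses, 15 s limit.
    print(overtime_source(latency=12, pauses=[2, 3], limit=15, total=22))
    # -> 'initial latency'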

Verbal Expression Tasks - Vocabulary, Similarities, Comprehension - I look for word retrieval, precision, and use of the categorical term vs. an explanation. Response delays. Organization and expression. Focused responses vs. circuitous responses vs. successive approximation of the correct response - adding information until the question is finally answered.

If the child asks for repetition of an item or a word, and the task is not timed, ask, "What did you think I asked?" This may help distinguish among auditory misperception, confusion, and memory issues.

Visual Construction Tasks - Picture Arrangement, Block Design, Object Assembly - I note the problem-solving approach. Are they intuitive, perceptive, able to anticipate connections, or do they try piece after piece until they find connections that work? Some kids will not try anything until they are pretty sure it will work; others will try endlessly illogical connections. How well do they recognize errors, and how well do they use this information to make corrections? Do they have difficulty seeing the whole and reproducing the patterns, or do they have difficulty finding the correct orientation of individual parts, even when they clearly know what the solution should look like? I always allow the child to continue beyond time limits until they finish (correctly or not) or give up. I score strictly by the time limits, but I want to know whether they can solve it correctly and how many items they can solve correctly beyond time limits. If the discontinue criterion has been exceeded, I stop when it is clear the child is beyond his or her ability level, but I have frequently seen kids who reach the discontinue criterion early on yet continue to solve most or all of the remaining items, often within time limits. When testing the limits, see if the child can make a previously impossible 9-block construction inside the box.
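
A minimal sketch of that double scoring (strict credit versus eventual solution); the record fields here are my own invention:

    from dataclasses import dataclass

    @dataclass
    class ItemRecord:
        correct: bool    # solved correctly, regardless of time
        seconds: float   # time to solution (or to giving up)
        limit: float     # published time limit for the item

        @property
        def standard_credit(self) -> bool:
            """Credit under standard administration: correct AND within the limit."""
            return self.correct and self.seconds <= self.limit

    items = [ItemRecord(True, 40, 60), ItemRecord(True, 95, 60), ItemRecord(False, 120, 60)]
    print(sum(r.standard_credit for r in items))  # 1: strict score
    print(sum(r.correct for r in items))          # 2: solved once limits are ignored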

On Achievement Tests, I try to analyze errors to determine which skills are strong, which are missing, what trips them up, and what keeps them from progressing.

On Reading Tests - I want to observe the strategies the student uses for decoding and comprehension. I take detailed notes of misreadings and miscues so I can analyze strategies used. I look for phonological confusion, phonetic skills and holes, the ability to segment and blend, knowledge of sight words, and the ability to apply these skills to reading, both in isolation and in context. I also observe fluency and automaticity.

On Math Tests – I later readminister items that were missed because of misreading operation signs or making simple fact errors. I give the student a calculator and a red pen and allow the student to correct missed and skipped items (but I try not to let the student know ahead of time that I will be doing this, to avoid creating an incentive to skip difficult items). With younger students, I make, or have them make, a number line before readministering missed items.

On Writing Tests – I also collect a writing sample done with a word processor. If you use a test with two forms (e.g., PIAT-R NU, WIAT, TOWL-3), you can even compare and contrast the two efforts. I run the student's writing (both handwritten and word-processed, but not formal spelling tests) through one or more spelling-check programs or machines. It is helpful to know what percentage of the student's errors are picked up, how many of those elicit the intended word from the machine, and how many of those offer it as the first choice. Térèse Murphy of the Jaffrey-Rindge, NH, School District found that different spelling-check programs and machines had very different degrees of effectiveness for different individuals.
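
Those three percentages are easy to compute if you log each error; a minimal sketch with hypothetical records:

    def checker_effectiveness(records):
        """Return (% errors flagged, % with intended word offered, % offered first)."""
        n = len(records)
        flagged = [r for r in records if r["flagged"]]
        offered = [r for r in flagged if r["intended"] in r["suggestions"]]
        first = [r for r in offered if r["suggestions"][0] == r["intended"]]
        return tuple(round(100 * len(g) / n, 1) for g in (flagged, offered, first))

    records = [  # hypothetical log: one entry per spelling error in the sample
        {"flagged": True,  "intended": "because", "suggestions": ["because", "becalm"]},
        {"flagged": True,  "intended": "friend",  "suggestions": ["fried", "friend"]},
        {"flagged": False, "intended": "their",   "suggestions": []},  # real-word error missed
    ]
    print(checker_effectiveness(records))  # (66.7, 66.7, 33.3)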

In verbal tests, does the student have word-finding difficulties -- does he "talk around" the answer? Does he have trouble phrasing sentences in correct grammatical order? Does she say she knows the answer but can't remember it right now? Does she give up too quickly, or does she refuse to give up even after you have offered to go on to the next item? How does he react to not knowing an answer? On performance items, does he have an organized trial-and-error method or a more random style? On Block Design, does he just place the blocks without trial and error? Can he rotate a row of blocks at the bottom when he sees they are facing the wrong direction, or does he start over? Does he continue to repeat the same mistakes over and over? Has he learned anything from the successful completion of earlier test items, such as in Block Design? Does he take apart puzzles or block designs that are correct? In spite of saying, "It's a soccer ball," does he continue to attempt to assemble the puzzle into "unround" shapes? Does he use only one hand or, worse, use his right hand for the right side of the design and his left for the left side? Does she show anxiety when she detects that speed is important? Does he make self-degrading remarks about his performance, or say "this is so easy" when he is actually doing very poorly? Does he make unusual movements (tremors, foot tapping, finger tapping, tics, hair twisting) or repetitive noises (throat clearing, whispering to himself, repeating phrases over and over)?

Dumont, R., Willis, J. O., Farr, L. P., & Whelley, P. (1998). 30-second intervals performance on the Coding subtest of the WISC-III: Further evidence of WISC folklore? Psychology in the Schools, 35(2).

Kaufman, A. S. (1995). Intelligent testing with the WISC-III. New York: Wiley-Interscience.

Nicholson, C. L., & Alcorn, C. L. (1994). Educational application of the WISC-III: A handbook of interpretive strategies and remedial recommendations. Los Angeles: Western Psychological Services.