
Do Curriculum Based Measures Compare Against Peers?

Curriculum-based measurement offers a crucial understanding of a student’s performance relative to specific learning objectives, but to fully grasp a student’s academic standing, it’s essential to consider how they measure up against their peers. COMPARE.EDU.VN is here to illuminate how norm-referenced assessments provide that vital comparative perspective.

1. Understanding Criterion-Referenced vs. Norm-Referenced Tests

The key to understanding a situation like Bruno’s (the student examined in detail in Section 3.1) lies in differentiating between criterion-referenced and norm-referenced tests. While the tests themselves might look similar, the interpretation of the results differs significantly. Some assessments even offer both types of results.

1.1 Criterion-Referenced Tests: Measuring Against a Standard

Criterion-referenced tests evaluate a student’s knowledge or skills against a pre-defined standard, learning goal, or performance level. Each student’s performance is compared directly to the standard, irrespective of how other students perform. These tests often use “cut scores” to categorize students into levels like “basic,” “proficient,” or “advanced.”

Alt Text: Measuring height against a criterion, similar to criterion-referenced testing.

Imagine a carnival ride with a sign stating, “You must be this tall to ride!” The indicated height functions as the criterion. The ride operator compares each person’s height against it, regardless of the height of others in line. Similarly, in criterion-referenced assessments, a student’s score is independent of the performance of other students.

As shown in the charts below, a student’s score and performance category (“below proficient”) remain constant, regardless of whether they are a top, average, or low-performing student.

Alt Text: Criterion-referenced test results showing consistent performance regardless of peer performance.

Alt Text: Another example of criterion-referenced test results demonstrating consistent performance.

Criterion-referenced tests reveal how a student performs in relation to a specific criterion but do not indicate whether they are performing below average, above average, or average compared to their peers.
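The cut-score logic described above can be sketched in a few lines. This is a minimal illustration, not any real test’s scoring rules; the cut scores of 60 and 80 are hypothetical.

```python
# Criterion-referenced scoring sketch: each score is compared only to
# fixed cut scores, never to other students' scores.
# The cut scores (80, 60) are illustrative, not from any real assessment.

def performance_category(score, cuts=((80, "advanced"), (60, "proficient"))):
    """Return the performance category for a score, given descending cut scores."""
    for cut, label in cuts:
        if score >= cut:
            return label
    return "basic"

# The category depends only on the score itself, not on the peer group:
for s in (45, 72, 91):
    print(s, performance_category(s))
```

Note that removing or adding students to the group would change nothing here, which is exactly the carnival-ride property described above.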

1.2 Norm-Referenced Tests: Comparing Against Peers

Norm-referenced measures compare a student’s knowledge or skills to those of a norm group. The composition of this norm group varies depending on the assessment. In student assessments, the norm group is often a nationally representative sample of students in the same grade. Norm groups can be further refined by age, ELL status, socioeconomic level, or race/ethnicity.

Alt Text: Teenagers measuring their height, representing norm-referenced benchmarks.

A familiar example is baby weight growth charts at the pediatrician’s office. These charts show a child’s weight percentile:

  • A child in the 50th percentile has an average weight.
  • A child in the 75th percentile weighs more than 75% of the babies in the norm group.
  • A child in the 25th percentile weighs more than 25% of the babies in the norm group.

These measures don’t indicate whether a baby’s weight is “healthy” or “unhealthy,” but rather how it compares with the norm group. For instance, a baby weighing 2,600 grams at birth would be in the 7th percentile, but 2,600 grams is considered a normal, healthy weight.

Norm-referenced assessments provide a student’s percentile rank, describing their performance relative to the norm group, without indicating whether they meet a specific standard or criterion.

As illustrated below, a student’s percentile rank changes based on the performance of the norm group, even if their score remains constant. High-performing students have high percentile ranks, while low-performing students have low percentile ranks. These charts do not reveal whether the student should be categorized as proficient or below proficient.

Alt Text: Norm-referenced test results showing varying percentile ranks based on peer performance.

Alt Text: Another example of norm-referenced test results demonstrating changing percentile ranks.

Knowing a student’s percentile rank on a norm-referenced test indicates how well they performed compared to the norm group but doesn’t reveal whether they met, exceeded, or fell short of a specific criterion.
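The contrast with criterion-referenced scoring can be made concrete with a small sketch of a percentile rank: the same score yields a different rank depending on the norm group. The norm-group score lists below are invented for illustration.

```python
from bisect import bisect_left

def percentile_rank(score, norm_group):
    """Percent of the norm group scoring below the given score (simplified)."""
    ordered = sorted(norm_group)
    return round(100 * bisect_left(ordered, score) / len(ordered))

# The same score of 62 against two different (hypothetical) norm groups:
print(percentile_rank(62, [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]))  # 50
print(percentile_rank(62, [70, 75, 80, 85, 90, 95, 60, 65, 72, 78]))  # 10
```

A score of 62 is the 50th percentile in a lower-performing group but only the 10th percentile in a higher-performing one, while a criterion-referenced category for 62 would be identical in both cases.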

2. Distinguishing Between Norm-Referenced and Criterion-Referenced Scores

Assessments often provide both criterion-referenced and norm-referenced results, which can lead to confusion.

Consider a student with a high percentile rank who doesn’t meet the criterion for proficiency. Are they performing well because they are outperforming their peers, or poorly because they haven’t achieved proficiency?

Alt Text: Student with high percentile rank but not meeting proficiency criterion.

Conversely, a student with a very low percentile rank might still meet the criterion for proficiency. Are they performing poorly because they aren’t performing as well as their peers, or well because they’ve achieved proficiency?

Alt Text: Student with low percentile rank but meeting proficiency criterion.

A more common scenario involves a “typical” student who doesn’t achieve proficiency because most students are not proficient. This pattern is evident in the 2017 NAEP scores, where the typical fourth-grade student (50th percentile) scores 226, and the average fourth-grade student scores 222, while proficiency requires a score of 238 or higher.

Alt Text: NAEP scores showing typical student not achieving proficiency.

In these cases, educators must use their professional judgment, knowledge of the student, familiarity with standards, understanding of available resources, and subject-area expertise to determine the best course of action. Assessments and data merely provide information to inform these decisions.

3. The Interplay of Norm-Referenced and Criterion-Referenced Interpretations

Many assessments combine norm-referenced scores with criterion-referenced performance categories. For example, universal screeners often report both risk categories (derived from empirical studies predicting outcomes like “proficient” on a state test) and national percentile scores. This combination makes universal screeners versatile for program evaluation and identifying students needing additional support.

3.1 Bruno’s Case: A Closer Look

Let’s revisit Bruno’s situation. In the fall, he scored 55 out of 100, exceeding the proficiency cut-score of 50. His score was higher than 88% of his peers in the norming group, giving him a percentile rank of 88.

Alt Text: Bruno’s fall assessment results showing proficiency and high percentile rank.

In the spring, Bruno scored 60, still proficient. However, the norm group’s scores increased significantly from fall to spring. As a result, Bruno’s spring score placed him at the 38th percentile.

Alt Text: Bruno’s spring assessment results showing proficiency but lower percentile rank.

This scenario is concerning for Bruno’s teacher. While proficient, Bruno isn’t keeping pace with his peers and may fall behind. If the proficiency criterion increases, he might struggle to meet expectations. This underscores the importance of monitoring growth in addition to gains.
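Bruno’s two testing windows can be laid out side by side to show how the two interpretations diverge. The scores and the cut score come from the example above; the percentile ranks are taken as given from the text rather than computed.

```python
# Bruno's results from the example above: the criterion-referenced
# status stays "proficient" while the norm-referenced rank drops.
PROFICIENCY_CUT = 50  # cut score from the example

bruno = {
    "fall":   {"score": 55, "percentile": 88},
    "spring": {"score": 60, "percentile": 38},
}

for window, result in bruno.items():
    status = "proficient" if result["score"] >= PROFICIENCY_CUT else "below proficient"
    print(f"{window}: score {result['score']} -> {status}, "
          f"{result['percentile']}th percentile")
```

Both windows report "proficient," yet the percentile rank falls from 88 to 38: the criterion-referenced view alone would hide the fact that Bruno is losing ground relative to peers.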

4. The Significance of Student Growth Percentiles (SGP)

Gains, calculated by subtracting a previous score from a current score, indicate increased knowledge or skill but don’t reveal whether a student is keeping up with peers. A growth measure, like a Student Growth Percentile (SGP), is needed.

SGP is a norm-referenced measure that compares a student’s gains to those of their academic peers nationwide. Academic peers are students in the same grade with similar score histories, meaning low-performing students are compared to other low-performing students. SGP helps educators quickly assess whether a student is making typical growth or growing more quickly or slowly than their peers. It also highlights differences between students with the same score but different learning needs.
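The idea behind SGP can be sketched as ranking a student’s gain within the gains of academic peers. Real SGP models use quantile regression over score histories; the simple rank-based version below, with an invented list of peer gains, is only an illustration of the concept.

```python
from bisect import bisect_left

def student_growth_percentile(gain, peer_gains):
    """Percent of academic peers (similar score histories) with a smaller gain.
    Simplified rank-based illustration; real SGP uses quantile regression."""
    ordered = sorted(peer_gains)
    return round(100 * bisect_left(ordered, gain) / len(ordered))

# Hypothetical fall-to-spring gains for academic peers with similar histories:
peers = [2, 3, 4, 5, 5, 6, 7, 8, 9, 12]
print(student_growth_percentile(6, peers))   # typical growth
print(student_growth_percentile(11, peers))  # faster growth than most peers
```

A gain of 6 points lands at the 50th growth percentile (typical), while a gain of 11 lands at the 90th, even though both students may have identical current scores.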

Alt Text: Student Growth Percentile (SGP) graph demonstrating growth compared to peers.

Alt Text: Another SGP graph illustrating different growth trajectories.

4.1 Enhancing Bruno’s Outcome with SGP

Knowing Bruno’s SGP in the fall would have alerted his teacher to his slower-than-expected growth. Proactive measures could have included:

  • Additional practice opportunities
  • Placement in a different instructional group
  • Targeted supports or scaffolding
  • Peer tutoring with a higher-performing student

More frequent assessments could have monitored his gains and growth more closely. These efforts might have helped Bruno maintain his top-performing status and better prepare for the next grade.

4.2 The Importance of SGP Consistency

SGP is available after a student has taken the assessment in at least two different testing windows. Depending on when the student was last assessed, the SGP reported with a fall assessment score shows growth from:

  • Spring to fall
  • Winter to fall
  • Fall to fall

A midyear assessment updates the SGP to reflect fall-to-winter growth. In all cases, SGP helps educators identify trends and predict future gains.

5. Guiding Intervention with Norm-Referenced Assessment

Beyond student growth, universal screeners, which use norm-referenced assessment, help identify students needing support, those on track, and those benefiting from accelerated instruction. Although universal screeners are brief and don’t always specify needed support, combining different assessment data creates a detailed picture of which skills need teaching. This data helps determine if intervention is needed at the class level (Tier 1), in a small group (Tier 2), or individually (Tier 3).

5.1 Universal Screening: The Role of Norm-Referenced Assessment

The primary goal of norm-referenced assessment is to compare a student’s performance to a peer group, indicating their level of risk or need. Universal screeners are norm-based assessments designed to identify students at risk for poor learning outcomes. These assessments are typically brief, reliable, and valid.

In universal screening, students nationwide take an assessment, and educators analyze the scores to understand how each student’s performance compares. For example:

  1. A student scoring at the 75th national percentile performs as well as or better than 75 percent of students nationally, which is above the national average.
  2. A student scoring at the 35th national percentile performs just below the national average.

This information is essential for determining how much extra help a student needs to be “on-track” for success by year’s end.

5.2 Renaissance’s Solution for Universal Screening

Renaissance’s FastBridge and Star Assessments are norm-referenced universal screening solutions offering curriculum-based measures (CBMs) and computer-adaptive tests (CATs). FastBridge and Star assess students in reading, math, and social-emotional behavior. This combined approach improves the reliability and validity of decisions, providing more accurate data faster than other systems.

5.3 Progress Monitoring: Continuing with Norm-Referenced Assessment

Another vital goal of norm-referenced assessment is regularly monitoring a student’s progress, typically weekly or biweekly. Progress monitoring tools provide information about students in targeted Tier 2 and Tier 3 interventions. Progress monitoring serves three main objectives:

  1. Evaluate student learning outcomes: Track gains and determine additional gains needed to catch up to peers.
  2. Consider instructional change: Evaluate teaching practices and revise interventions if needed.
  3. Determine eligibility for other educational services: Identify students needing additional services, including special education.

5.4 Renaissance’s Solution for Progress Monitoring

FastBridge and Star Assessments also support progress monitoring. They measure students’ rates of improvement and determine whether students are on-track to meet their goals. This helps determine whether each intervention should be maintained, modified, or intensified to close gaps and help students meet expectations.

6. Criterion-Referenced Assessment and Performance Categories

Criterion-referenced scores compare a student’s knowledge or skills against a criterion based on expert judgment or external indicators. State accountability assessments typically report results as proficiency levels established through carefully designed protocols. The criterion is usually a single score or a few scores resulting in several performance categories.

Therefore, criterion-referenced results indicate whether a student met a standard but not how far above or below the standard they performed. Academic tests often use “cut scores” to place students into categories such as basic, proficient, and advanced.

7. Achieving Successful Intervention with Research-Based Assessments

In conclusion, both norm-referenced and criterion-referenced scores are essential for a complete picture of student performance and needs, helping identify the best supports for learning. For a comprehensive solution for screening and progress monitoring for pre-K through grade 12, Renaissance can help.

Discover how curriculum-based measures compare against peers at COMPARE.EDU.VN. Find detailed, objective comparisons to help you make informed decisions about your child’s education or your own learning path. Visit us at 333 Comparison Plaza, Choice City, CA 90210, United States, or contact us via WhatsApp at +1 (626) 555-9090.

Connect with an expert today to get started.

FAQ: Understanding Curriculum Based Measures and Peer Comparison

1. What are curriculum-based measures (CBMs)?
Curriculum-based measures are assessment tools used to monitor student progress within the school curriculum. They are designed to be quick, efficient, and directly tied to what is being taught in the classroom.

2. How do CBMs help in understanding student performance?
CBMs provide educators with immediate feedback on student learning, allowing them to adjust their teaching strategies and interventions as needed. They assess specific skills and knowledge aligned with curriculum goals.

3. Do curriculum-based measures compare students against their peers?
While CBMs primarily assess individual student progress against curriculum standards, they do not inherently compare students against their peers. To understand how a student performs relative to others, norm-referenced assessments are needed.

4. What are norm-referenced assessments?
Norm-referenced assessments compare a student’s performance to that of a norm group, typically a representative sample of students in the same grade. This provides a percentile rank, indicating how the student performs compared to their peers.

5. Why is it important to compare students against their peers?
Comparing students against their peers provides context for their performance. It helps identify students who may need additional support or those who are excelling and could benefit from more advanced instruction.

6. Can a student be proficient but still perform poorly compared to peers?
Yes, a student can meet the criteria for proficiency on a curriculum-based measure but still have a low percentile rank on a norm-referenced assessment, indicating they are not performing as well as their peers.

7. How can educators use both CBMs and norm-referenced assessments effectively?
Educators can use CBMs to monitor individual progress and adjust instruction, while norm-referenced assessments can provide a broader context by comparing the student’s performance to their peers. This combined approach offers a comprehensive understanding of the student’s learning needs.

8. What is a Student Growth Percentile (SGP)?
A Student Growth Percentile (SGP) is a norm-referenced measure that compares a student’s academic growth to that of their academic peers nationwide, providing insight into whether the student is growing faster, slower, or at the same rate as their peers.

9. How do universal screeners use norm-referenced assessments?
Universal screeners use norm-referenced assessments to identify students who may be at risk for poor learning outcomes by comparing their performance to a norm group. This helps educators determine which students need additional support.

10. Where can I find objective comparisons of educational assessments?
You can find detailed, objective comparisons of educational assessments at compare.edu.vn, which provides resources to help you make informed decisions about education and learning.
