How Do You Compare Proportions Between Multiple Groups?

Comparing proportions between multiple groups is essential for identifying significant differences and drawing meaningful conclusions, a task that COMPARE.EDU.VN simplifies. By using statistical tests such as the Chi-Square test and the G-test, researchers and analysts can determine whether observed variations in proportions are statistically significant or merely due to chance. This comprehensive guide walks you through the process, providing insights and practical examples to enhance your understanding and decision-making. Dive in to explore hypothesis testing, statistical significance, and various comparative methods.

1. Understanding Proportions and Groups

1.1 What Are Proportions in Statistics?

Proportions represent the fraction of a population that possesses a specific attribute or characteristic. They are calculated by dividing the number of individuals or observations with the attribute by the total number in the group. Proportions are typically expressed as decimals or percentages, making them intuitive and easy to interpret. According to a study by the National Center for Education Statistics, understanding proportions helps in analyzing educational outcomes, such as graduation rates.

1.2 Defining Multiple Groups

Multiple groups refer to distinct segments of a population that are being compared. These groups can be categorized based on various factors such as age, gender, treatment type, or any other relevant criteria. Defining these groups clearly is essential for accurate comparison.

1.3 Why Compare Proportions?

Comparing proportions across multiple groups helps in identifying statistically significant differences that can inform decision-making and policy development. For instance, comparing the success rates of different marketing strategies or the effectiveness of various medical treatments can provide valuable insights. According to research published in the Journal of Applied Statistics, comparative studies of proportions are fundamental in evidence-based practices.

2. The Importance of Statistical Significance

2.1 What is Statistical Significance?

Statistical significance indicates that the observed differences between groups are unlikely to have occurred by chance. It’s determined using statistical tests that yield a p-value, which represents the probability of observing the data (or more extreme data) if there is no real effect. If the p-value is below a pre-defined significance level (commonly 0.05), the result is considered statistically significant.

2.2 Significance Level (Alpha)

The significance level, denoted as alpha (α), is the threshold for determining statistical significance. A commonly used alpha level is 0.05, meaning there is a 5% risk of concluding that a difference exists when it does not (Type I error). The choice of alpha depends on the context of the study and the acceptable level of risk.

2.3 P-Value Explained

The p-value is the probability of obtaining results as extreme as, or more extreme than, those observed, assuming the null hypothesis is true. A small p-value suggests strong evidence against the null hypothesis, leading to the conclusion that there is a statistically significant difference between the groups. A large p-value indicates weak evidence, suggesting that the observed difference could be due to random variation.

3. Hypothesis Testing for Proportions

3.1 Null and Alternative Hypotheses

In hypothesis testing, the null hypothesis (H0) states that there is no difference between the proportions of the groups being compared. The alternative hypothesis (H1) states that there is a significant difference. The goal of hypothesis testing is to determine whether there is enough evidence to reject the null hypothesis in favor of the alternative hypothesis.

3.2 Setting Up the Hypotheses

When comparing proportions across multiple groups, the hypotheses are set up as follows:

  • Null Hypothesis (H0): The proportions of all groups are equal.
  • Alternative Hypothesis (H1): At least one group has a different proportion from the others.

For example, if you are comparing the effectiveness of three different teaching methods, the hypotheses would be:

  • H0: The success rates of all three teaching methods are the same.
  • H1: At least one teaching method has a different success rate.

3.3 Choosing the Right Statistical Test

Selecting the appropriate statistical test is critical for accurate hypothesis testing. For comparing proportions across multiple groups, the Chi-Square test and the G-test are commonly used. The choice between these tests often depends on sample size and the specific characteristics of the data.

4. Chi-Square Test for Proportions

4.1 Basics of the Chi-Square Test

The Chi-Square test is a versatile statistical test used to determine if there is a significant association between two categorical variables. It compares the observed frequencies of categories with the frequencies that would be expected under the null hypothesis of no association.

4.2 Formula for Chi-Square

The Chi-Square statistic is calculated using the formula:

χ² = Σ [(Observed – Expected)² / Expected]

Where:

  • χ² is the Chi-Square statistic.
  • Σ means “sum of”.
  • Observed is the actual frequency in each category.
  • Expected is the frequency expected under the null hypothesis.

4.3 Calculating Expected Frequencies

The expected frequency for each cell in the contingency table is calculated as:

Expected Frequency = (Row Total × Column Total) / Grand Total

This calculation is performed for each cell in the table, and the results are used in the Chi-Square formula.

4.4 Degrees of Freedom

The degrees of freedom (df) for the Chi-Square test are calculated as:

df = (Number of Rows – 1) × (Number of Columns – 1)

Degrees of freedom are important because they determine the shape of the Chi-Square distribution, which is used to find the p-value.

4.5 Example: Chi-Square Test

Suppose you want to compare the proportion of satisfied customers across three different product lines (A, B, and C). You survey 200 customers for each product line and obtain the following results:

Product Line | Satisfied | Not Satisfied | Total
A            | 150       | 50            | 200
B            | 130       | 70            | 200
C            | 160       | 40            | 200
Total        | 440       | 160           | 600

First, calculate the expected frequencies:

  • Expected (A Satisfied) = (200 × 440) / 600 = 146.67
  • Expected (A Not Satisfied) = (200 × 160) / 600 = 53.33
  • Expected (B Satisfied) = (200 × 440) / 600 = 146.67
  • Expected (B Not Satisfied) = (200 × 160) / 600 = 53.33
  • Expected (C Satisfied) = (200 × 440) / 600 = 146.67
  • Expected (C Not Satisfied) = (200 × 160) / 600 = 53.33

Next, calculate the Chi-Square statistic:

χ² = [(150 – 146.67)² / 146.67] + [(50 – 53.33)² / 53.33] + [(130 – 146.67)² / 146.67] + [(70 – 53.33)² / 53.33] + [(160 – 146.67)² / 146.67] + [(40 – 53.33)² / 53.33]

χ² = 0.076 + 0.208 + 1.894 + 5.209 + 1.212 + 3.333 = 11.93

The degrees of freedom are (3 – 1) × (2 – 1) = 2.

Using a Chi-Square distribution table or statistical software, find the p-value associated with χ² = 11.93 and df = 2. The p-value is approximately 0.0026.

Since the p-value (0.0026) is less than the significance level of 0.05, you reject the null hypothesis. There is a statistically significant difference in the proportion of satisfied customers across the three product lines.
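For readers who prefer to verify these numbers in software, here is a minimal sketch in Python, assuming SciPy is installed; chi2_contingency reproduces the test statistic, degrees of freedom, p-value, and expected frequencies for the table above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: product lines A, B, C; columns: Satisfied, Not Satisfied.
observed = np.array([
    [150, 50],
    [130, 70],
    [160, 40],
])

chi2, p, df, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.4f}")
print("Expected frequencies:")
print(expected.round(2))
# Approximate output: chi-square = 11.93, df = 2, p = 0.0026
```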

5. G-Test (Likelihood Ratio Test) for Proportions

5.1 Introduction to the G-Test

The G-test, also known as the likelihood ratio test, is another statistical test used to compare proportions across multiple groups. It is particularly useful when dealing with small sample sizes or when the assumptions of the Chi-Square test are not fully met.

5.2 Formula for the G-Statistic

The G-statistic is calculated using the formula:

G = 2 × Σ [Observed × ln(Observed / Expected)]

Where:

  • G is the G-statistic.
  • Σ means “sum of”.
  • Observed is the actual frequency in each category.
  • ln is the natural logarithm.
  • Expected is the frequency expected under the null hypothesis.

5.3 Calculating the G-Statistic

To calculate the G-statistic, you first need to determine the observed and expected frequencies for each cell in the contingency table. Then, apply the formula above to find the G-statistic.

5.4 Example: G-Test

Let’s use the same data from the Chi-Square example to perform a G-test.

Product Line | Satisfied | Not Satisfied | Total
A            | 150       | 50            | 200
B            | 130       | 70            | 200
C            | 160       | 40            | 200
Total        | 440       | 160           | 600

The expected frequencies are the same as calculated in the Chi-Square example:

  • Expected (A Satisfied) = 146.67
  • Expected (A Not Satisfied) = 53.33
  • Expected (B Satisfied) = 146.67
  • Expected (B Not Satisfied) = 53.33
  • Expected (C Satisfied) = 146.67
  • Expected (C Not Satisfied) = 53.33

Now, calculate the G-statistic:

G = 2 × [(150 × ln(150 / 146.67)) + (50 × ln(50 / 53.33)) + (130 × ln(130 / 146.67)) + (70 × ln(70 / 53.33)) + (160 × ln(160 / 146.67)) + (40 × ln(40 / 53.33))]

G = 2 × [(150 × 0.022) + (50 × -0.065) + (130 × -0.121) + (70 × 0.272) + (160 × 0.087) + (40 × -0.288)]

G = 2 × [3.37 – 3.23 – 15.68 + 19.04 + 13.92 – 11.51] = 2 × 5.91 = 11.82

The degrees of freedom are (3 – 1) × (2 – 1) = 2.

Using a Chi-Square distribution table or statistical software, find the p-value associated with G = 11.82 and df = 2. The p-value is approximately 0.0027.

Since the p-value (0.0027) is less than the significance level of 0.05, you reject the null hypothesis. There is a statistically significant difference in the proportion of satisfied customers across the three product lines.
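The same table can be run as a G-test in Python. A minimal sketch, assuming SciPy is installed, passes lambda_="log-likelihood" to chi2_contingency, which computes the likelihood-ratio (G) statistic instead of Pearson's Chi-Square.

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [150, 50],
    [130, 70],
    [160, 40],
])

# lambda_="log-likelihood" switches the statistic from Pearson's chi-square
# to the likelihood-ratio (G) statistic; the p-value still comes from the
# chi-square distribution with the same degrees of freedom.
g, p, df, expected = chi2_contingency(observed, lambda_="log-likelihood")
print(f"G = {g:.2f}, df = {df}, p = {p:.4f}")
# Approximate output: G = 11.82, df = 2, p = 0.0027
```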

5.5 When to Use the G-Test

The G-test is often preferred over the Chi-Square test when dealing with small sample sizes or sparse data. It is also more robust when the expected frequencies are low. However, both tests generally yield similar results when the sample size is sufficiently large and the expected frequencies are adequate.

6. Post-Hoc Analysis: Pairwise Comparisons

6.1 The Need for Post-Hoc Tests

When the overall test (Chi-Square or G-test) indicates a significant difference among multiple groups, it is important to conduct post-hoc tests to determine which specific groups differ significantly from each other. These pairwise comparisons help pinpoint the sources of the overall significance.

6.2 Bonferroni Correction

The Bonferroni correction is a simple and conservative method for adjusting the significance level in multiple comparisons. It divides the overall significance level (alpha) by the number of comparisons being made. For example, if you are comparing three groups and performing three pairwise comparisons, the adjusted significance level would be 0.05 / 3 = 0.0167.

6.3 Example: Pairwise Comparisons

Using the customer satisfaction data from the previous examples, let’s perform pairwise comparisons with Bonferroni correction to determine which product lines differ significantly.

The three pairwise comparisons are:

  1. Product Line A vs. Product Line B
  2. Product Line A vs. Product Line C
  3. Product Line B vs. Product Line C

The adjusted significance level is 0.05 / 3 = 0.0167.

First, perform a Chi-Square or G-test for each pairwise comparison:

1. Product Line A vs. Product Line B

Product Line | Satisfied | Not Satisfied | Total
A            | 150       | 50            | 200
B            | 130       | 70            | 200
Total        | 280       | 120           | 400

Expected Frequencies:

  • Expected (A Satisfied) = (200 × 280) / 400 = 140
  • Expected (A Not Satisfied) = (200 × 120) / 400 = 60
  • Expected (B Satisfied) = (200 × 280) / 400 = 140
  • Expected (B Not Satisfied) = (200 × 120) / 400 = 60

Chi-Square Statistic:

χ² = [(150 – 140)² / 140] + [(50 – 60)² / 60] + [(130 – 140)² / 140] + [(70 – 60)² / 60]

χ² = 0.714 + 1.667 + 0.714 + 1.667 = 4.762

Degrees of Freedom = (2 – 1) × (2 – 1) = 1

P-value ≈ 0.029

Since the p-value (0.029) is greater than the adjusted significance level (0.0167), there is no significant difference between Product Line A and Product Line B.

2. Product Line A vs. Product Line C

Product Line | Satisfied | Not Satisfied | Total
A            | 150       | 50            | 200
C            | 160       | 40            | 200
Total        | 310       | 90            | 400

Expected Frequencies:

  • Expected (A Satisfied) = (200 × 310) / 400 = 155
  • Expected (A Not Satisfied) = (200 × 90) / 400 = 45
  • Expected (C Satisfied) = (200 × 310) / 400 = 155
  • Expected (C Not Satisfied) = (200 × 90) / 400 = 45

Chi-Square Statistic:

χ² = [(150 – 155)² / 155] + [(50 – 45)² / 45] + [(160 – 155)² / 155] + [(40 – 45)² / 45]

χ² = 0.161 + 0.556 + 0.161 + 0.556 = 1.434

Degrees of Freedom = (2 – 1) × (2 – 1) = 1

P-value ≈ 0.231

Since the p-value (0.231) is greater than the adjusted significance level (0.0167), there is no significant difference between Product Line A and Product Line C.

3. Product Line B vs. Product Line C

Product Line | Satisfied | Not Satisfied | Total
B            | 130       | 70            | 200
C            | 160       | 40            | 200
Total        | 290       | 110           | 400

Expected Frequencies:

  • Expected (B Satisfied) = (200 × 290) / 400 = 145
  • Expected (B Not Satisfied) = (200 × 110) / 400 = 55
  • Expected (C Satisfied) = (200 × 290) / 400 = 145
  • Expected (C Not Satisfied) = (200 × 110) / 400 = 55

Chi-Square Statistic:

χ² = [(130 – 145)² / 145] + [(70 – 55)² / 55] + [(160 – 145)² / 145] + [(40 – 55)² / 55]

χ² = 1.552 + 4.091 + 1.552 + 4.091 = 11.286

Degrees of Freedom = (2 – 1) × (2 – 1) = 1

P-value ≈ 0.0008

Since the p-value (0.0008) is less than the adjusted significance level (0.0167), there is a significant difference between Product Line B and Product Line C.

Conclusion:

After performing pairwise comparisons with Bonferroni correction, we found that there is a significant difference in customer satisfaction between Product Line B and Product Line C. There were no significant differences between Product Line A and Product Line B, or between Product Line A and Product Line C.
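A minimal Python sketch of these pairwise comparisons, assuming SciPy is installed, runs a 2×2 Chi-Square test for each pair and compares each p-value against the Bonferroni-adjusted alpha of 0.05 / 3.

```python
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency

# (satisfied, not satisfied) counts per product line.
counts = {"A": (150, 50), "B": (130, 70), "C": (160, 40)}

alpha = 0.05
pairs = list(combinations(counts, 2))
adjusted_alpha = alpha / len(pairs)  # Bonferroni: 0.05 / 3 ≈ 0.0167

for g1, g2 in pairs:
    table = np.array([counts[g1], counts[g2]])
    # correction=False matches the hand calculations above (no Yates correction).
    chi2, p, df, _ = chi2_contingency(table, correction=False)
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{g1} vs {g2}: chi-square = {chi2:.3f}, p = {p:.4f} -> {verdict}")
```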

7. Alternative Post-Hoc Tests

7.1 Tukey’s HSD (Honestly Significant Difference)

Tukey’s HSD is another post-hoc procedure, originally designed for pairwise comparisons of group means after ANOVA. It is less conservative than the Bonferroni correction while still controlling the familywise error rate, making it suitable when you want to minimize the risk of missing true differences.

7.2 Scheffe’s Method

Scheffe’s method is a versatile post-hoc test that can be used for both pairwise comparisons and more complex contrasts. It is more conservative than Tukey’s HSD but can be useful when testing a wide range of hypotheses.

7.3 Choosing the Right Post-Hoc Test

The choice of post-hoc test depends on the specific research question and the desired balance between Type I and Type II errors. Bonferroni correction is simple and conservative, while Tukey’s HSD offers more power. Scheffe’s method is useful for complex contrasts.

8. Practical Applications

8.1 Marketing Strategies

Comparing proportions is essential in marketing for evaluating the effectiveness of different campaigns. For example, you can compare the conversion rates of different ad creatives or the engagement rates of various social media strategies.

8.2 Medical Research

In medical research, comparing proportions helps in assessing the efficacy of different treatments. For instance, you can compare the success rates of various drugs or the incidence of side effects across different patient groups.

8.3 Educational Interventions

In education, comparing proportions is used to evaluate the impact of different teaching methods. For example, you can compare the pass rates of students using traditional methods versus those using innovative techniques.

8.4 Quality Control

In quality control, comparing proportions helps in monitoring the defect rates of different production lines. By identifying significant differences, you can implement corrective measures to improve quality.

9. Common Pitfalls and How to Avoid Them

9.1 Incorrect Test Selection

Choosing the wrong statistical test can lead to incorrect conclusions. Ensure you understand the assumptions of each test and select the one that best fits your data.

9.2 Ignoring Assumptions

Statistical tests have underlying assumptions that must be met for the results to be valid. Ignoring these assumptions can lead to unreliable results. Always check the assumptions of the test before interpreting the results.

9.3 Overinterpreting Results

Statistical significance does not always imply practical significance. It is important to consider the magnitude of the effect and the context of the study when interpreting the results.
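One simple way to gauge practical significance for a contingency table is an effect-size measure such as Cramér's V. The sketch below, assuming SciPy is installed, computes it for the customer-satisfaction table used earlier.

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [150, 50],
    [130, 70],
    [160, 40],
])

chi2, p, df, _ = chi2_contingency(observed)
n = observed.sum()
k = min(observed.shape)  # the smaller of (rows, columns)

# Cramér's V = sqrt(chi-square / (n * (k - 1))); 0 = no association, 1 = perfect.
cramers_v = np.sqrt(chi2 / (n * (k - 1)))
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, Cramér's V = {cramers_v:.3f}")
# Here V is roughly 0.14: statistically significant, but a modest effect.
```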

9.4 Multiple Comparisons Problem

Performing multiple comparisons without adjusting the significance level can inflate the risk of Type I error. Use appropriate post-hoc tests and correction methods to address this issue.

10. Advanced Techniques

10.1 Logistic Regression

Logistic regression is a powerful technique for modeling the relationship between a binary outcome (such as success or failure) and one or more predictor variables. It allows you to control for confounding variables and provides insights into the factors that influence the proportion of interest.
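As an illustration, here is a minimal sketch of a logistic regression on the customer-satisfaction data, assuming pandas and statsmodels are installed. Product line enters as a categorical predictor, and additional covariates could be added to the formula to control for confounding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Expand the aggregated counts into one row per surveyed customer.
rows = []
for line, satisfied, not_satisfied in [("A", 150, 50), ("B", 130, 70), ("C", 160, 40)]:
    rows += [{"line": line, "satisfied": 1}] * satisfied
    rows += [{"line": line, "satisfied": 0}] * not_satisfied
df = pd.DataFrame(rows)

# Logistic regression with product line A as the reference category.
model = smf.logit("satisfied ~ C(line)", data=df).fit(disp=False)
print(model.summary())
print("Odds ratios vs line A:", np.exp(model.params).round(3).to_dict())
```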

10.2 Cochran-Mantel-Haenszel Test

The Cochran-Mantel-Haenszel test is used to assess the association between two categorical variables while controlling for a confounding variable. It is particularly useful when dealing with stratified data.
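A minimal sketch using statsmodels' StratifiedTable is shown below, with hypothetical 2×2 tables of treatment outcome stratified by clinic (the confounder being controlled for); test_null_odds performs the Mantel-Haenszel test of a common odds ratio of one.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical strata: rows = treatment / control, columns = success / failure.
clinic_1 = np.array([[40, 10], [30, 20]])
clinic_2 = np.array([[25, 25], [15, 35]])

strata = StratifiedTable([clinic_1, clinic_2])
result = strata.test_null_odds(correction=True)  # Cochran-Mantel-Haenszel test
print("CMH statistic:", round(result.statistic, 3))
print("p-value:", round(result.pvalue, 4))
print("Pooled (Mantel-Haenszel) odds ratio:", round(strata.oddsratio_pooled, 3))
```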

10.3 Bayesian Methods

Bayesian methods provide a flexible framework for comparing proportions across multiple groups. They allow you to incorporate prior knowledge and quantify uncertainty in a more intuitive way.
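As one simple illustration, the sketch below compares two proportions with conjugate Beta priors and Monte Carlo sampling, using only NumPy; with a Beta(1, 1) prior, each group's posterior is Beta(successes + 1, failures + 1).

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 100_000

# Posterior draws for product lines B (130/200) and C (160/200) from above.
post_b = rng.beta(130 + 1, 70 + 1, n_draws)
post_c = rng.beta(160 + 1, 40 + 1, n_draws)

diff = post_c - post_b
print(f"P(C's satisfaction rate > B's) ≈ {(diff > 0).mean():.3f}")
print(f"95% credible interval for the difference: "
      f"[{np.percentile(diff, 2.5):.3f}, {np.percentile(diff, 97.5):.3f}]")
```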

11. Software and Tools

11.1 SPSS

SPSS is a widely used statistical software package that offers a range of tools for comparing proportions. It provides user-friendly interfaces and comprehensive documentation.

11.2 R

R is a powerful open-source statistical programming language. It offers a vast array of packages for statistical analysis and visualization.

11.3 Python with Statsmodels

Python, with the Statsmodels library, provides a flexible and versatile environment for statistical modeling and analysis.
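For example, a minimal sketch, assuming statsmodels is installed, uses proportions_chisquare to compare the three satisfaction proportions directly from counts and sample sizes:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_chisquare

count = np.array([150, 130, 160])  # satisfied customers per product line
nobs = np.array([200, 200, 200])   # customers surveyed per product line

result = proportions_chisquare(count, nobs)
chi2, p = result[0], result[1]
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```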

12. E-E-A-T and YMYL Considerations

12.1 Expertise

When discussing statistical methods, it’s important to demonstrate expertise. Provide clear explanations, cite credible sources, and offer practical examples to enhance understanding.

12.2 Experience

Share real-world applications and case studies to illustrate the practical use of the methods discussed. This adds credibility and relevance to the content.

12.3 Authoritativeness

Reference well-known researchers, academic institutions, and peer-reviewed publications to establish the authoritativeness of the content.

12.4 Trustworthiness

Present information in an objective and unbiased manner. Acknowledge limitations and potential sources of error. Ensure that all calculations and interpretations are accurate.

12.5 YMYL (Your Money or Your Life)

Statistical analysis can have significant implications in areas such as healthcare, finance, and public policy. Therefore, it’s crucial to provide accurate and reliable information that can be used to make informed decisions.

13. Conclusion

Comparing proportions between multiple groups is a fundamental statistical task with wide-ranging applications. By understanding the principles of hypothesis testing, statistical significance, and various comparative methods, you can draw meaningful conclusions and make informed decisions. Whether you’re evaluating marketing strategies, assessing medical treatments, or monitoring quality control processes, the techniques discussed in this guide will empower you to analyze data effectively. Explore COMPARE.EDU.VN for more in-depth comparisons and resources.

14. FAQ

14.1 What is the difference between the Chi-Square test and the G-test?

The Chi-Square test and the G-test (likelihood ratio test) are both used to compare proportions across multiple groups. The G-test is often preferred when dealing with small sample sizes or sparse data, as it tends to be more accurate under these conditions.

14.2 How do I choose the right post-hoc test?

The choice of post-hoc test depends on the specific research question and the desired balance between Type I and Type II errors. Bonferroni correction is simple and conservative, while Tukey’s HSD offers more power.

14.3 What is the Bonferroni correction?

The Bonferroni correction is a method for adjusting the significance level in multiple comparisons. It divides the overall significance level (alpha) by the number of comparisons being made.

14.4 What is a p-value?

The p-value is the probability of obtaining results as extreme as, or more extreme than, those observed, assuming the null hypothesis is true. A small p-value suggests strong evidence against the null hypothesis.

14.5 What does statistical significance mean?

Statistical significance indicates that the observed differences between groups are unlikely to have occurred by chance.

14.6 Can I use these tests for small sample sizes?

Yes, the G-test is particularly useful for small sample sizes. However, always consider the assumptions of the test and interpret the results cautiously.

14.7 How do I calculate expected frequencies?

The expected frequency for each cell in the contingency table is calculated as: Expected Frequency = (Row Total × Column Total) / Grand Total.

14.8 What are degrees of freedom?

Degrees of freedom (df) are a parameter that determines the shape of the Chi-Square distribution. They are calculated as: df = (Number of Rows – 1) × (Number of Columns – 1).

14.9 What is the null hypothesis?

The null hypothesis (H0) states that there is no difference between the proportions of the groups being compared.

14.10 How do I interpret the results of these tests?

If the p-value is less than the significance level (alpha), you reject the null hypothesis and conclude that there is a statistically significant difference between the groups. Always consider the context of the study and the magnitude of the effect when interpreting the results.

Ready to make more informed comparisons? Visit COMPARE.EDU.VN today to explore comprehensive comparisons and resources designed to help you make the best decisions. Our expert analyses and user-friendly tools empower you to evaluate options with confidence. Whether you’re comparing products, services, or ideas, COMPARE.EDU.VN is your trusted source for clear, objective, and detailed comparisons. Don’t leave your choices to chance—explore COMPARE.EDU.VN and make decisions that matter.

Address: 333 Comparison Plaza, Choice City, CA 90210, United States
WhatsApp: +1 (626) 555-9090
Website: compare.edu.vn

15. Partitioning the G-Statistic

15.1 Understanding the Partitioning Method

Partitioning the G-statistic allows for a more granular analysis of differences among multiple groups by breaking down the overall test into smaller, more manageable comparisons. This method is particularly useful when you want to identify specific group differences contributing to the overall statistical significance. Instead of running multiple separate tests, partitioning provides a structured approach to dissecting the overall G-statistic, ensuring that the sum of the individual tests equals the total statistic. According to “Categorical Data Analysis” by Agresti, this approach maintains statistical integrity while providing deeper insights into the data.

15.2 Step-by-Step Guide to Partitioning the G-Statistic

Partitioning involves conducting a series of 2×2 G-tests sequentially. Each test compares two groups, and after each comparison, the tested groups are combined to create a new comparison with the remaining groups. This process continues until all relevant group combinations have been tested.

15.3 Practical Example: Partitioning the G-Statistic

Consider three groups (A, B, and C) with return rates that need to be compared. The overall G-statistic for the entire table is 76.42, with 2 degrees of freedom, indicating a significant difference across the groups (p < 0.0001).

Step 1: Compare Group B vs. Group C

First, compare Group B and Group C using a 2×2 G-test. The observed and expected frequencies are as follows:

Observed | Rec     | Ret    | Total
B        | 17530   | 717    | 18247
C        | 42408   | 1618   | 44026

Expected | Rec     | Ret    | Total
B        | 17562.8 | 684.2  | 18247
C        | 42375.2 | 1650.8 | 44026

The G-statistic for this comparison is 2.29, with 1 degree of freedom (p = 0.1300), which is not significant.

Step 2: Combine Group B and Group C

Since the difference between Group B and Group C is not significant, combine these two groups into a single group (B+C).

Step 3: Compare Group A vs. Group B+C

Next, compare Group A with the combined group (B+C) using another 2×2 G-test.

Observed | Rec     | Ret    | Total
A        | 16895   | 934    | 17829
B+C      | 59938   | 2335   | 62273

Expected | Rec     | Ret    | Total
A        | 17101.4 | 727.6  | 17829
B+C      | 59731.6 | 2541.4 | 62273

The G-statistic for this comparison is 74.13, with 1 degree of freedom (p < 0.0001), which is highly significant.

Step 4: Verify the Partitioning

To verify the partitioning, ensure that the sum of the individual test statistics equals the overall test statistic: 2.29 + 74.13 = 76.42.
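A minimal sketch in Python, assuming SciPy is installed, reproduces the partition by running each step as a 2×2 G-test and summing the parts:

```python
import numpy as np
from scipy.stats import chi2_contingency

def g_test(table):
    # correction=False keeps the 2x2 steps additive, so the parts sum to the total.
    g, p, df, _ = chi2_contingency(np.asarray(table),
                                   lambda_="log-likelihood",
                                   correction=False)
    return g, p, df

# Received / returned counts for groups A, B, and C.
a = [16895, 934]
b = [17530, 717]
c = [42408, 1618]

g_total, p_total, df_total = g_test([a, b, c])  # overall 3x2 table
g_bc, p_bc, _ = g_test([b, c])                  # Step 1: B vs C
bc = [b[0] + c[0], b[1] + c[1]]                 # Step 2: pool B and C
g_a_bc, p_a_bc, _ = g_test([a, bc])             # Step 3: A vs B+C

print(f"Overall:  G = {g_total:.2f} (df = {df_total}), p = {p_total:.4g}")
print(f"B vs C:   G = {g_bc:.2f}, p = {p_bc:.4f}")
print(f"A vs B+C: G = {g_a_bc:.2f}, p = {p_a_bc:.4g}")
print(f"Sum of parts = {g_bc + g_a_bc:.2f}")    # ≈ 2.29 + 74.13 = 76.42
```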

15.4 Interpreting the Results

The results indicate that there is no significant difference between Group B and Group C. However, Group A has a significantly higher return rate than the combined Group B+C. This partitioning helps pinpoint where the significant differences lie among the groups.

15.5 Alternative Partitioning Strategies

The G-statistic can be partitioned in various ways, depending on the specific comparisons you want to make. For example, you could compare A to B first, then C to A+B, or compare A to C first, then B to A+C. The choice of partitioning strategy depends on the specific research question.

15.6 Expanding to More Than Three Groups

This method can be expanded to four or more groups. After each test, combine the rows of the two levels that were just tested. The maximum number of tests is equal to the degrees of freedom in the original table.

15.7 Benefits of Partitioning

  • Granular Analysis: Allows for identifying specific group differences.
  • Statistical Integrity: Maintains the overall statistical significance while providing deeper insights.
  • Flexibility: Can be adapted to various research questions by choosing different partitioning strategies.

15.8 Considerations

  • Choice of Partitioning: The choice of partitioning strategy should be guided by the research question.
  • Interpretation: Interpreting each test in the context of the partitioning strategy is crucial.
  • Complexity: As the number of groups increases, the number of possible partitioning strategies grows, potentially increasing complexity.

By partitioning the G-statistic effectively, researchers and analysts can gain a more nuanced understanding of group differences, leading to more informed decisions and targeted interventions.

16. Real-World Examples and Case Studies

16.1 Case Study 1: A/B Testing in E-Commerce

Scenario: An e-commerce company wants to compare two different website layouts (Layout A and Layout B) to determine which layout results in a higher conversion rate (proportion of visitors who make a purchase). They also want to see if there is a difference in conversion rates between mobile and desktop users for each layout.

Data: The company collects data for one week and records the number of visitors and conversions for each layout and device type.

Layout   | Device  | Visitors | Conversions
Layout A | Mobile  | 5000     | 250
Layout A | Desktop | 5000     | 350
Layout B | Mobile  | 5000     | 300
Layout B | Desktop | 5000     | 400

Analysis:

  1. Overall Comparison: First, compare the overall conversion rates for Layout A and Layout B, regardless of device type.

Layout   | Visitors | Conversions
Layout A | 10000    | 600
Layout B | 10000    | 700

Calculate the conversion rates:

  • Layout A: 600 / 10000 = 0.06 (6%)
  • Layout B: 700 / 10000 = 0.07 (7%)

Perform a Chi-Square test or G-test to determine if the difference is statistically significant.

  2. Device-Specific Comparison: Next, compare the conversion rates for each layout separately for mobile and desktop users.
  • Layout A:
    • Mobile: 250 / 5000 = 0.05 (5%)
    • Desktop: 350 / 5000 = 0.07 (7%)
  • Layout B:
    • Mobile: 300 / 5000 = 0.06 (6%)
    • Desktop: 400 / 5000 = 0.08 (8%)

Perform separate Chi-Square or G-tests for each layout to see if there is a significant difference between mobile and desktop users.
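A minimal sketch of this analysis in Python, assuming SciPy is installed, runs the overall layout comparison and then the device-specific comparison within each layout:

```python
import numpy as np
from scipy.stats import chi2_contingency

def test_2x2(label, table):
    chi2, p, _, _ = chi2_contingency(np.asarray(table), correction=False)
    print(f"{label}: chi-square = {chi2:.3f}, p = {p:.4f}")

# Each row is [conversions, non-conversions].
layout_a = {"Mobile": [250, 4750], "Desktop": [350, 4650]}
layout_b = {"Mobile": [300, 4700], "Desktop": [400, 4600]}

# 1. Overall comparison: Layout A vs Layout B, devices pooled.
test_2x2("Layout A vs Layout B (overall)", [[600, 9400], [700, 9300]])

# 2. Device-specific comparisons within each layout.
test_2x2("Layout A: Mobile vs Desktop", [layout_a["Mobile"], layout_a["Desktop"]])
test_2x2("Layout B: Mobile vs Desktop", [layout_b["Mobile"], layout_b["Desktop"]])
```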
