Compare: Analyzing Benchmark Tests With OpenSearch Benchmark

Comparison is a crucial step in software development, enabling a thorough analysis of the differences between benchmark tests. This analysis, facilitated by tools like OpenSearch Benchmark, is essential for understanding the performance impact of changes made since a previous test based on a specific Git revision. COMPARE.EDU.VN provides in-depth comparisons and insights to help users make informed decisions.

1. Understanding the Compare Command

The compare command within OpenSearch Benchmark is a powerful tool designed to analyze the differences between two benchmark tests. This is particularly useful when evaluating the impact of changes made to a system or application, allowing you to compare performance metrics before and after the modifications.

1.1. Identifying TestExecution IDs

Before you can use the compare command, you need to identify the TestExecution IDs of the tests you want to compare. These IDs are unique identifiers for each test run. You can obtain a list of tests run from a specific workload using the command:

opensearch-benchmark list test_executions

This command will provide an output similar to the following:

Recent test-executions:

TestExecution ID                      TestExecution Timestamp    Workload    Workload Parameters    TestProcedure          ProvisionConfigInstance    User Tags    workload Revision    Provision Config Revision
------------------------------------  -------------------------  ----------  ---------------------  ---------------------  -------------------------  -----------  -------------------  ---------------------------
729291a0-ee87-44e5-9b75-cc6d50c89702  20230524T181718Z           geonames                           append-no-conflicts    4gheap                                  30260cf
f91c33d0-ec93-48e1-975e-37476a5c9fe5  20230524T170134Z           geonames                           append-no-conflicts    4gheap                                  30260cf
d942b7f9-6506-451d-9dcf-ef502ab3e574  20230524T144827Z           geonames                           append-no-conflicts    4gheap                                  30260cf
a33845cc-c2e5-4488-a2db-b0670741ff9b  ...

This output provides a comprehensive list of recent test executions, including their IDs, timestamps, and associated workloads.

1.2. Executing the Compare Command

Once you have the TestExecution IDs for the baseline and contender tests, you can use the compare command to analyze the differences. The basic syntax for the command is:

opensearch-benchmark compare --baseline=[BaselineTestExecutionID] --contender=[ContenderTestExecutionID]

Replace [BaselineTestExecutionID] with the ID of the baseline test and [ContenderTestExecutionID] with the ID of the contender test. For example:

opensearch-benchmark compare --baseline=417ed42-6671-9i79-11a1-e367636068ce --contender=beb154e4-0a05-4f45-ad9f-e34f9a9e51f7

This command will compare the final benchmark metrics for both tests and display the results in a table format.

1.3. Interpreting the Results

The output of the compare command provides a detailed comparison of various metrics, including indexing throughput, query latency, CPU usage, and garbage collection statistics. Each metric is presented with the baseline value, the contender value, and the difference between the two.

Comparing baseline TestExecution ID: 729291a0-ee87-44e5-9b75-cc6d50c89702
TestExecution timestamp: 2023-05-24 18:17:18
with contender TestExecution ID: a33845cc-c2e5-4488-a2db-b0670741ff9b
TestExecution timestamp: 2023-05-23 21:31:45
------------------------------------------------------
    Final Score
------------------------------------------------------
Metric                                        Baseline    Contender   Diff
-------------------------------------------------------- ---------- ----------- -----------------
Min Indexing Throughput [docs/s]                    19501       19118       -383.00000
Median Indexing Throughput [docs/s]                 20232     19927.5       -304.45833
Max Indexing Throughput [docs/s]                      21172       20849       -323.00000
Total indexing time [min]                            55.7989      56.335        +0.53603
Total merge time [min]                               12.9766      13.3115        +0.33495
Total refresh time [min]                             5.20067     5.20097        +0.00030
Total flush time [min]                               0.0648667   0.0681833        +0.00332
Total merge throttle time [min]                      0.796417    0.879267        +0.08285
Query latency term (50.0 percentile) [ms]            2.10049     2.15421        +0.05372
Query latency term (90.0 percentile) [ms]            2.77537     2.84168        +0.06630
Query latency term (100.0 percentile) [ms]           4.52081     5.15368        +0.63287
Query latency country_agg (50.0 percentile) [ms]     112.049     110.385        -1.66392
Query latency country_agg (90.0 percentile) [ms]     128.426     124.005        -4.42138
Query latency country_agg (100.0 percentile) [ms]    155.989     133.797       -22.19185
Query latency scroll (50.0 percentile) [ms]          16.1226     14.4974        -1.62519
Query latency scroll (90.0 percentile) [ms]          17.2383     15.4079        -1.83043
Query latency scroll (100.0 percentile) [ms]         18.8419     18.4241        -0.41784
Query latency country_agg_cached (50.0 percentile) [ms] 1.70223     1.64502        -0.05721
Query latency country_agg_cached (90.0 percentile) [ms] 2.34819     2.04318        -0.30500
Query latency country_agg_cached (100.0 percentile) [ms] 3.42547     2.86814        -0.55732
Query latency default (50.0 percentile) [ms]         5.89058     5.83409        -0.05648
Query latency default (90.0 percentile) [ms]         6.71282     6.64662        -0.06620
Query latency default (100.0 percentile) [ms]        7.65307     7.3701        -0.28297
Query latency phrase (50.0 percentile) [ms]          1.82687     1.83193        +0.00506
Query latency phrase (90.0 percentile) [ms]          2.63714     2.46286        -0.17428
Query latency phrase (100.0 percentile) [ms]         5.39892     4.22367        -1.17525
Median CPU usage (index) [%]                           668.025      679.15       +11.12499
Median CPU usage (stats) [%]                           143.75       162.4       +18.64999
Median CPU usage (search) [%]                          223.1        229.2        +6.10000
Total Young Gen GC time [s]                            39.447       40.456        +1.00900
Total Young Gen GC count                                 10           11          +1.00000
Total Old Gen GC time [s]                              7.108        7.703        +0.59500
Total Old Gen GC count                                 10           11          +1.00000
Index size [GB]                                        3.25475     3.25098        -0.00377
Total written [GB]                                     17.8434      18.3143        +0.47083
Heap used for segments [MB]                             21.7504      21.5901        -0.16037
Heap used for doc values [MB]                            0.16436      0.13905        -0.02531
Heap used for terms [MB]                                20.0293      19.9159        -0.11345
Heap used for norms [MB]                                0.105469    0.0935669        -0.01190
Heap used for points [MB]                               0.773487    0.772155        -0.00133
Heap used for stored fields [MB]                        0.677795    0.669426        -0.00837
Segment count                                           136          121         -15.00000
Indices Stats(90.0 percentile) [ms]                    3.16053     3.21023        +0.04969
Indices Stats(99.0 percentile) [ms]                    5.29526     3.94132        -1.35393
Indices Stats(100.0 percentile) [ms]                   5.64971     7.02374        +1.37403
Nodes Stats(90.0 percentile) [ms]                      3.19611     3.15251        -0.04360
Nodes Stats(99.0 percentile) [ms]                      4.44111     4.87003        +0.42892
Nodes Stats(100.0 percentile) [ms]                     5.22527     5.66977        +0.44450
  • Positive Differences: For metrics where lower is better (times and latencies), a positive value in the “Diff” column indicates that the contender test performed worse than the baseline test. For example, a positive difference in “Total indexing time” means that the contender test took longer to index the data. For throughput metrics, where higher is better, the interpretation is reversed.
  • Negative Differences: Conversely, for times and latencies, a negative value in the “Diff” column indicates that the contender test performed better than the baseline test. For example, a negative difference in “Query latency” means that the contender test answered queries more quickly.

By analyzing these differences, you can gain valuable insights into the impact of your changes and identify areas for further optimization. This comparative analysis is a core function of COMPARE.EDU.VN, ensuring users get a detailed and objective view.

2. Customizing Comparison Results

The compare command offers several options to customize the output and tailor the results to your specific needs. These options allow you to control the format of the output, align the numbers, specify an output file, and include the comparison in the results file.

2.1. Results Format

The --results-format option allows you to define the output format for the command line results. It supports two formats:

  • markdown: This format is the default and presents the results in a human-readable table format suitable for documentation and reports.
  • csv: This format outputs the results in a comma-separated value format, which is useful for importing the data into spreadsheets or other data analysis tools.

To use this option, specify the desired format after the --results-format flag. For example, to output the results in CSV format, use the following command:

opensearch-benchmark compare --baseline=[BaselineTestExecutionID] --contender=[ContenderTestExecutionID] --results-format=csv

2.2. Number Alignment

The --results-numbers-align option defines the column number alignment for the compare command output. It supports the following alignments:

  • right: This is the default alignment, which aligns the numbers to the right within their respective columns.
  • left: This option aligns the numbers to the left within their columns.
  • center: This option centers the numbers within their columns.

To use this option, specify the desired alignment after the --results-numbers-align flag. For example, to align the numbers to the left, use the following command:

opensearch-benchmark compare --baseline=[BaselineTestExecutionID] --contender=[ContenderTestExecutionID] --results-numbers-align=left

2.3. Output to File

The --results-file option allows you to write the comparison results to a file. This is useful for archiving the results, sharing them with others, or further processing them with other tools.

To use this option, provide the file path after the --results-file flag. For example, to write the results to a file named comparison_results.txt, use the following command:

opensearch-benchmark compare --baseline=[BaselineTestExecutionID] --contender=[ContenderTestExecutionID] --results-file=comparison_results.txt

2.4. Include Comparison in Results File

The --show-in-results option determines whether or not to include the comparison in the results file. By default, the comparison is included in the results file. If you want to exclude the comparison, you can set this option to false.

To exclude the comparison from the results file, use the following command:

opensearch-benchmark compare --baseline=[BaselineTestExecutionID] --contender=[ContenderTestExecutionID] --show-in-results=false

These options provide a flexible way to customize the output of the compare command and tailor the results to your specific needs. This level of customization is what sets COMPARE.EDU.VN apart, offering users the ability to dissect data in a way that’s most meaningful to them.
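
These options can also be combined in a single invocation. As a sketch, the following command (using the same placeholder IDs as earlier) writes a left-aligned CSV comparison to a file and excludes the comparison from the results file:

opensearch-benchmark compare --baseline=[BaselineTestExecutionID] --contender=[ContenderTestExecutionID] --results-format=csv --results-numbers-align=left --results-file=comparison_results.csv --show-in-results=false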

3. Practical Applications of the Compare Command

The compare command in OpenSearch Benchmark is not just a tool for generating reports; it’s a vital component in several practical scenarios. Its applications range from performance regression detection to A/B testing, and even infrastructure optimization.

3.1. Performance Regression Detection

One of the most common use cases for the compare command is detecting performance regressions. After making changes to your code or configuration, you can run a benchmark test and compare the results with a previous baseline. If the new results show a significant decrease in performance, it indicates a regression that needs to be investigated.

For example, if you’ve updated your OpenSearch cluster to a newer version, you can use the compare command to ensure that the update hasn’t introduced any performance regressions. By comparing the performance metrics before and after the update, you can quickly identify any issues and take corrective action.
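
As a sketch of how such a check can be automated, the following shell snippet compares a stored baseline against the latest run and flags latency metrics that degraded by more than a chosen threshold. It assumes the comparison is exported as CSV with the metric name in the first field and the difference in the last field; verify the exact CSV layout produced by your OpenSearch Benchmark version before relying on it. The IDs shown are the ones from the earlier example output.

# Substitute the TestExecution IDs reported by `list test_executions`.
BASELINE_ID="729291a0-ee87-44e5-9b75-cc6d50c89702"
CONTENDER_ID="a33845cc-c2e5-4488-a2db-b0670741ff9b"

opensearch-benchmark compare --baseline="$BASELINE_ID" --contender="$CONTENDER_ID" \
  --results-format=csv --results-file=comparison.csv

# Flag any latency metric whose difference grew by more than 5 ms
# (assumed CSV layout: metric name first, diff in the last field).
awk -F',' '/latency/ && $NF+0 > 5 {print "Possible regression:", $0; found=1} END {exit found}' comparison.csv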

3.2. A/B Testing

The compare command can also be used for A/B testing. In this scenario, you run two different versions of your system or application and compare their performance using benchmark tests. This allows you to determine which version performs better and make informed decisions about which version to deploy.

For example, you might want to compare the performance of two different indexing strategies. By running benchmark tests with each strategy and comparing the results using the compare command, you can determine which strategy provides the best indexing throughput and query latency.
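
A minimal sketch of such an A/B test is shown below. It assumes the geonames workload and that the two indexing strategies can be expressed as workload parameters; the parameter name bulk_size is illustrative, and the execute-test subcommand and flag names may differ slightly between OpenSearch Benchmark versions.

# Run the two variants, tagging each so they are easy to find later.
opensearch-benchmark execute-test --workload=geonames --workload-params="bulk_size:5000" --user-tag="variant:A"
opensearch-benchmark execute-test --workload=geonames --workload-params="bulk_size:10000" --user-tag="variant:B"

# Look up the two TestExecution IDs, then compare variant A (baseline) against variant B (contender).
opensearch-benchmark list test_executions
opensearch-benchmark compare --baseline=[VariantATestExecutionID] --contender=[VariantBTestExecutionID]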

3.3. Infrastructure Optimization

The compare command can help in optimizing your infrastructure. By running benchmark tests with different hardware configurations and comparing the results, you can identify the optimal configuration for your workload.

For example, you might want to determine whether it’s better to use faster CPUs or more memory for your OpenSearch cluster. By running benchmark tests with different CPU and memory configurations and comparing the results using the compare command, you can identify the configuration that provides the best performance for your workload.

3.4. Configuration Tuning

The compare command is invaluable for configuration tuning. By changing various settings and comparing the performance impact, you can fine-tune your system for optimal performance.

Consider tuning garbage collection settings in your Java Virtual Machine (JVM). You can use the compare command to test different garbage collectors or adjust parameters like heap size. By comparing the results, you can identify the optimal settings for your specific workload.
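
A minimal sketch of such an experiment follows. It assumes an externally provisioned cluster whose JVM options are set through OPENSEARCH_JAVA_OPTS and that is restarted between runs; the restart-cluster.sh helper is hypothetical, so adapt it to however your cluster is actually managed.

# Baseline run with the default collector and a fixed heap.
export OPENSEARCH_JAVA_OPTS="-Xms8g -Xmx8g"
./restart-cluster.sh   # hypothetical helper that restarts OpenSearch with the new options
opensearch-benchmark execute-test --workload=geonames --pipeline=benchmark-only --user-tag="gc:default"

# Contender run with G1GC explicitly selected.
export OPENSEARCH_JAVA_OPTS="-Xms8g -Xmx8g -XX:+UseG1GC"
./restart-cluster.sh
opensearch-benchmark execute-test --workload=geonames --pipeline=benchmark-only --user-tag="gc:g1"

# Compare the two TestExecution IDs reported by `list test_executions`.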

3.5. Identifying Bottlenecks

The compare command can also help identify bottlenecks in your system. By comparing different performance metrics, you can pinpoint the areas that are limiting your system’s performance.

For example, if you notice that your query latency is high, you can use the compare command to compare the query latency with different indexing strategies or hardware configurations. This can help you identify whether the bottleneck is in the indexing process or the hardware.

These practical applications highlight the versatility of the compare command in OpenSearch Benchmark. Whether you’re detecting performance regressions, A/B testing different versions of your system, or optimizing your infrastructure, the compare command can provide valuable insights and help you make informed decisions. COMPARE.EDU.VN aims to empower users by providing a platform to deeply compare and understand such complex evaluations.

4. Understanding the Metrics

The compare command provides a wide range of metrics that can be used to assess the performance of your system. Each metric provides insights into different aspects of the system’s performance, from indexing throughput to query latency and resource utilization.

4.1. Indexing Throughput

Indexing throughput measures the rate at which data can be indexed into the system. It is typically measured in documents per second (docs/s). The compare command provides the minimum, median, and maximum indexing throughput for both the baseline and contender tests.

  • Min Indexing Throughput: The lowest indexing rate observed during the test.
  • Median Indexing Throughput: The middle value of the indexing rates observed during the test.
  • Max Indexing Throughput: The highest indexing rate observed during the test.

Higher indexing throughput indicates better performance, as the system can ingest data more quickly.

4.2. Query Latency

Query latency measures the time it takes to execute a query. It is typically measured in milliseconds (ms). The compare command provides the 50th, 90th, and 100th percentile query latency for different types of queries.

  • 50th Percentile (P50): The value below which 50% of the queries fall. This is also known as the median.
  • 90th Percentile (P90): The value below which 90% of the queries fall. This is a good indicator of typical worst-case performance.
  • 100th Percentile (P100): The maximum query latency observed during the test. This represents the absolute worst-case performance.

Lower query latency indicates better performance, as the system can respond to queries more quickly.
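
For intuition, a nearest-rank approximation of these percentiles can be computed from a plain list of latency samples. The sketch below assumes a file named latencies.txt with one millisecond value per line; note that OpenSearch Benchmark's own calculation may interpolate, so this is only illustrative.

sort -n latencies.txt > sorted.txt
total=$(wc -l < sorted.txt)

# Nearest-rank positions: ceil(total * p / 100) for p = 50 and 90.
p50_line=$(( (total * 50 + 99) / 100 ))
p90_line=$(( (total * 90 + 99) / 100 ))

sed -n "${p50_line}p" sorted.txt   # 50th percentile (median) latency
sed -n "${p90_line}p" sorted.txt   # 90th percentile latency
tail -n 1 sorted.txt               # 100th percentile (maximum) latency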

4.3. Resource Utilization

Resource utilization metrics provide insights into how the system is using resources such as CPU, memory, and disk I/O. The compare command provides metrics such as CPU usage, garbage collection time, and index size.

  • CPU Usage: The percentage of CPU time used by different processes.
  • Garbage Collection Time: The amount of time spent in garbage collection. High garbage collection time can indicate memory pressure or inefficient garbage collection settings.
  • Index Size: The amount of disk space used by the index.

Understanding resource utilization can help you identify bottlenecks and optimize your system’s configuration.

4.4. Merge Time

Merge time is the amount of time spent merging segments in the index. Merging is a process that combines smaller segments into larger ones to improve search performance. The compare command provides the total merge time for both the baseline and contender tests.

High merge time can indicate that the system is spending too much time merging segments, which can impact indexing and search performance.

4.5. Refresh Time

Refresh time is the amount of time it takes to make newly indexed data searchable. The compare command provides the total refresh time for both the baseline and contender tests.

Lower refresh time indicates better performance, as the system can make newly indexed data searchable more quickly.

4.6. Flush Time

Flush time is the amount of time it takes to flush data from memory to disk. The compare command provides the total flush time for both the baseline and contender tests.

Lower flush time indicates better performance, as the system can persist data to disk more quickly.

4.7. Garbage Collection (GC) Statistics

Garbage Collection (GC) statistics are crucial for understanding memory management in the JVM. The compare command provides statistics for both Young Generation and Old Generation GC.

  • Total Young Gen GC Time: The total time spent in Young Generation GC. This generation collects short-lived objects.
  • Total Young Gen GC Count: The number of Young Generation GC cycles.
  • Total Old Gen GC Time: The total time spent in Old Generation GC. This generation collects long-lived objects.
  • Total Old Gen GC Count: The number of Old Generation GC cycles.

Analyzing these metrics can help optimize JVM memory settings and identify potential memory leaks or inefficiencies.

4.8. Heap Usage

Heap usage metrics provide insights into how the heap is being used by different components of the system. The compare command provides metrics such as heap used for segments, doc values, terms, norms, and points.

Understanding heap usage can help you identify memory bottlenecks and optimize your system’s configuration.

By understanding these metrics, you can gain a comprehensive view of your system’s performance and identify areas for optimization. These insights are crucial for making informed decisions about system configuration, hardware selection, and software development. This comprehensive overview aligns perfectly with the goals of COMPARE.EDU.VN, which seeks to provide users with all the necessary data for comparative analysis.

5. Best Practices for Using the Compare Command

To get the most out of the compare command, it’s essential to follow some best practices. These practices will ensure that your comparisons are accurate, reliable, and provide meaningful insights.

5.1. Consistent Workloads

Ensure that both the baseline and contender tests use the same workload. This means using the same data set, the same queries, and the same indexing patterns. If the workloads are different, the comparison will not be meaningful.

5.2. Stable Environment

Run the tests in a stable environment. This means ensuring that there are no other processes running on the system that could interfere with the tests. It also means ensuring that the network is stable and that there are no external factors that could affect performance.

5.3. Sufficient Warm-up

Allow sufficient warm-up time before starting the tests. This will allow the system to reach a steady state and ensure that the results are not skewed by initial startup overhead.

5.4. Multiple Test Runs

Run the tests multiple times and average the results. This will help to reduce the impact of random variations and provide more reliable results.
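
A simple sketch of repeated runs is shown below, assuming the geonames workload and that the cluster is reset between iterations (the reset step is omitted here). Tagging each run makes the executions easy to find with list test_executions afterwards.

for run in 1 2 3; do
  opensearch-benchmark execute-test --workload=geonames --user-tag="repeat:${run}"
done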

5.5. Statistical Significance

Consider statistical significance when interpreting the results. Small differences in performance may not be statistically significant, meaning that they could be due to random variations rather than actual performance differences.

5.6. Document Changes

Document any changes that you make between the baseline and contender tests. This will help you to understand why the performance changed and make it easier to troubleshoot any issues.

5.7. Monitor Resources

Monitor resource utilization during the tests. This will help you to identify any bottlenecks and understand how the system is using resources.

5.8. Control Variables

Control as many variables as possible. For instance, ensure the same JVM version is used and that kernel settings are consistent.

5.9. Isolate Tests

Run benchmarks in an isolated environment. Avoid running benchmarks on production systems, as production traffic can skew results.

5.10. Use Realistic Data

Use realistic data sets that represent your production data. Synthetic data may not accurately reflect the performance of your system in a production environment.

5.11. Version Control

Use version control for your benchmark configurations. This will allow you to easily track changes and revert to previous configurations if necessary.

By following these best practices, you can ensure that your comparisons are accurate, reliable, and provide meaningful insights. These insights are invaluable for optimizing your system’s performance and making informed decisions about system configuration, hardware selection, and software development. COMPARE.EDU.VN promotes such diligence by offering thorough and reliable comparison tools for its users.

6. Troubleshooting Common Issues

While the compare command is a powerful tool, you may encounter some issues when using it. Here are some common issues and how to troubleshoot them.

6.1. Inconsistent Results

If you see inconsistent results between test runs, it could be due to a number of factors. First, make sure that you are running the tests in a stable environment and that there are no other processes running on the system that could interfere with the tests.

Second, make sure that you are allowing sufficient warm-up time before starting the tests. This will allow the system to reach a steady state and ensure that the results are not skewed by initial startup overhead.

Third, run the tests multiple times and average the results. This will help to reduce the impact of random variations and provide more reliable results.

6.2. Unexpected Performance Changes

If you see unexpected performance changes between the baseline and contender tests, it could be due to a number of factors. First, make sure that you have documented any changes that you made between the baseline and contender tests. This will help you to understand why the performance changed and make it easier to troubleshoot any issues.

Second, monitor resource utilization during the tests. This will help you to identify any bottlenecks and understand how the system is using resources.

Third, consider statistical significance when interpreting the results. Small differences in performance may not be statistically significant, meaning that they could be due to random variations rather than actual performance differences.

6.3. Command Fails to Execute

If the compare command fails to execute, check the following:

  • Correct Syntax: Ensure the command syntax is correct. Typos can prevent execution.
  • TestExecution IDs: Verify that the TestExecution IDs are valid and exist in the system; a quick check is sketched after this list.
  • Permissions: Ensure that you have the necessary permissions to execute the command.
  • Dependencies: Check that all dependencies are installed correctly.
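
A quick sanity check for the IDs, assuming the shell variables BASELINE_ID and CONTENDER_ID hold the values you intend to pass to compare:

# Both IDs should appear in the list of recent test executions; no output suggests a typo or a purged test.
opensearch-benchmark list test_executions | grep -E "$BASELINE_ID|$CONTENDER_ID"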

6.4. Metrics Not Available

If certain metrics are not available in the comparison, it could be due to the following:

  • Metric Collection: Ensure that the metrics are being collected during the benchmark tests.
  • Configuration: Verify that the benchmark configuration includes the necessary settings for collecting the metrics.
  • Compatibility: Check that the metrics are compatible with the version of OpenSearch Benchmark you are using.

6.5. JVM Issues

Problems with the JVM can impact benchmark results. Common issues include:

  • Memory Leaks: Use monitoring tools to detect memory leaks, which can degrade performance over time.
  • Garbage Collection: Tune garbage collection settings to minimize pauses and improve throughput.
  • Heap Size: Ensure that the heap size is appropriate for the workload. Too small, and you’ll encounter frequent GCs; too large, and GC cycles become longer.

6.6. Network Issues

Network latency and bandwidth can affect distributed benchmark tests. Ensure:

  • Stable Connection: The network connection is stable and reliable.
  • Bandwidth: Sufficient bandwidth is available to handle the benchmark traffic.
  • Latency: Network latency is minimized.

By following these troubleshooting steps, you can resolve common issues and ensure that the compare command provides accurate and reliable results. COMPARE.EDU.VN is dedicated to offering support and resources to help users navigate these challenges effectively.

7. Advanced Usage and Tips

Beyond the basics, there are several advanced techniques and tips that can help you leverage the compare command even more effectively.

7.1. Scripting Comparisons

Automate comparisons by scripting the compare command. This is especially useful for continuous integration and continuous deployment (CI/CD) pipelines.

#!/bin/bash
# Assumes the baseline and contender runs are labeled so that the words "baseline" and
# "contender" appear in their rows of the list output (for example via user tags);
# adjust the grep patterns to match how your test executions are actually identified.
BASELINE_ID=$(opensearch-benchmark list test_executions | grep "baseline" | awk '{print $1}')
CONTENDER_ID=$(opensearch-benchmark list test_executions | grep "contender" | awk '{print $1}')
opensearch-benchmark compare --baseline="$BASELINE_ID" --contender="$CONTENDER_ID" --results-file=comparison_results.txt

7.2. Custom Metrics

Define and collect custom metrics tailored to your specific workload. This can provide insights beyond the standard metrics provided by OpenSearch Benchmark.

7.3. Visualizing Results

Use data visualization tools like Grafana or Tableau to create charts and graphs from the comparison results. This can make it easier to identify trends and patterns.

7.4. Anomaly Detection

Implement anomaly detection algorithms to automatically identify unusual performance changes. This can help you catch regressions early.

7.5. Comparing Multiple Runs

Compare multiple test runs to identify trends and reduce the impact of random variations. This can be achieved by averaging the results of multiple runs and then comparing the averages.

7.6. Performance Baselines

Establish performance baselines for different workloads. This will make it easier to identify regressions and ensure that your system is meeting performance goals.

7.7. Cloud-Based Benchmarking

Leverage cloud-based benchmarking services to run tests in a scalable and reproducible environment. This can help you to ensure that your results are accurate and reliable.

7.8. Profiling

Use profiling tools to identify performance bottlenecks. This can help you to understand why your system is performing poorly and make it easier to optimize.

7.9. Load Testing

Combine benchmarking with load testing to simulate real-world traffic patterns. This can help you to identify performance issues that may not be apparent in isolated benchmark tests.

7.10. Continuous Benchmarking

Implement continuous benchmarking as part of your CI/CD pipeline. This will help you to catch regressions early and ensure that your system is always meeting performance goals.

By using these advanced techniques and tips, you can leverage the compare command to its full potential and gain valuable insights into your system’s performance, helping you make informed decisions about system configuration, hardware selection, and software development. COMPARE.EDU.VN is committed to providing users with the tools and knowledge to achieve these advanced capabilities.

8. Real-World Examples

To illustrate the practical applications of the compare command, let’s explore some real-world examples where it has been used to solve performance problems.

8.1. Case Study 1: Indexing Performance Regression

A company noticed a significant decrease in indexing performance after upgrading their OpenSearch cluster. They used the compare command to compare the performance before and after the upgrade.

The results showed that the total indexing time had increased significantly. Further investigation revealed that the garbage collection settings were not optimized for the new version.
