One of the most critical components of a Guidewire PolicyCenter (PC) implementation is integration with a rating engine, typically either the Guidewire (GW) Rate Engine or an external rating engine such as Oracle Insbridge or SoftRater. When testing this key component, validation should not be limited to table comparison and premium calculation; it should also ensure that data traverses correctly to the core system.

There are three key challenges most teams face during a rating implementation:

Third-party integration issues when external rate engines are used

Very often, a lack of communication between teams increases defect turnaround time. During triage it is hard to decide where to start debugging. For example, a premium mismatch could be due either to a miscalculation in the external rating engine or to a plugin issue: incorrect input passed from PC to the external rate engine, or the rating engine's result mapped incorrectly back to PC.

Lack of Automation for regression testing

Budget constraints typically limit a team’s ability to automate regression test cases during the initial release, forcing the team to perform manual regression testing. When automated regression testing is not put in place, any changes to rating logic during User Acceptance Testing (UAT) or late in the Stabilization phase will increase the effort to regression test and become a significant risk to the program.

Lack of proper documentation

Zero or minimal documentation is another key challenge that teams face on legacy replacement projects. The actuarial team generally completes premium validation during the Development phase. The testing scope is then later handed over to the QA team for the first time during the Stabilization phase. This dependence on the actuarial team (who are likely busy with other projects) can add to the learning curve for QA and adversely impact testing progress and the overall program timeline.

Cynosure QA follows proven best practices to validate rating. In an agile world, teams often develop rate books, tables and rating algorithms continuously during the Development phase. This is done even before rates are filed and approved by the state insurance department. A key benefit of this approach is the development of robust algorithms and rate routines, thereby stabilizing the overall implementation. Rate books are normally filed during the Stabilization phase when few changes to rate factors are expected. Any updated rate factors can be exported from the system and easily compared with the changed requirements.

The following best practices have helped teams mitigate the challenges listed above:

Collaboration between teams and better understanding of the system and interdependencies

When defects are identified, a testing team with a strong understanding of the system verifies data integrity between systems rather than limiting checks to premium comparison. Sharing the likely cause of a premium mismatch helps the development team debug and fix issues quickly.

Cynosure Test Automation Framework (CTAF) built using open source tools to test rating

Cynosure has developed a testing framework using open-source tools to automate rating validation. Key business and data combinations are automated. For example, liability coverage with different limit options (e.g. split limits versus a combined single limit) is validated, whereas an additional coverage like an excess electronics endorsement is not automated. Another example is covering discount types such as the Good Student Discount or an Affinity Group Discount, based on the prospect's profile.
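The combination-driven regression approach above can be sketched as a simple data-driven check. This is a minimal illustration only: the scenario ids, coverage names, premiums, and the `quoted_premium` helper are all hypothetical stand-ins, not actual Guidewire APIs or filed rates; in a real framework the helper would drive PolicyCenter through API or UI automation.

```python
from decimal import Decimal

# Hypothetical test matrix: each row pairs a key business combination
# with the expected premium signed off by the actuarial team.
RATING_SCENARIOS = [
    # (scenario id, coverage combination, limit option, expected premium)
    ("PA-001", "Liability", "SplitLimits 100/300/100", Decimal("412.00")),
    ("PA-002", "Liability", "CombinedSingleLimit 300k", Decimal("455.00")),
    ("PA-003", "Liability+GoodStudentDiscount", "SplitLimits 100/300/100", Decimal("370.80")),
]

def quoted_premium(scenario_id: str) -> Decimal:
    """Placeholder for a call into the rating engine (or a captured quote).
    In a real framework this would quote the scenario in PolicyCenter."""
    captured = {
        "PA-001": Decimal("412.00"),
        "PA-002": Decimal("455.00"),
        "PA-003": Decimal("370.80"),
    }
    return captured[scenario_id]

def run_regression() -> list:
    """Return the ids of scenarios whose quoted premium deviates from expected."""
    failures = []
    for sid, coverage, limits, expected in RATING_SCENARIOS:
        if quoted_premium(sid) != expected:
            failures.append(sid)
    return failures
```

Keeping the scenarios as plain data makes it cheap to add combinations as rate books evolve, which is exactly where manual regression effort balloons during UAT.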

One best practice the team follows is keeping coverage names consistent between requirement documents and the Cost Specialty Display Name in the GW Rating Engine tables (or the corresponding tables in an external rate engine). This makes it easy to export the tables and run quick comparisons. In addition, we leverage Guidewire's built-in 'Impact Testing' tool, which helps actuarial teams test quickly when rates change. An alternative approach the team often follows is automating data capture and comparing results with previously validated premiums until the rates are revised.
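With consistent names on both sides, the export-and-compare step reduces to a diff of two lookup tables. The sketch below assumes both the rate table export and the requirements spreadsheet have already been flattened into `{(table, key): factor}` dictionaries; the loaders are out of scope, and all table names and factors are illustrative.

```python
# Compare rate factors exported from the rating engine against the
# factors listed in the requirement document.

def diff_rate_tables(exported: dict, required: dict) -> dict:
    """Return {key: (exported_value, required_value)} for every discrepancy,
    including factors present on only one side."""
    diffs = {}
    for key in exported.keys() | required.keys():
        exp, req = exported.get(key), required.get(key)
        if exp != req:
            diffs[key] = (exp, req)
    return diffs

# Illustrative data: one factor matches, one was updated in requirements.
exported = {
    ("LiabilityBase", "100/300/100"): "1.00",
    ("LiabilityBase", "250/500/250"): "1.18",
}
required = {
    ("LiabilityBase", "100/300/100"): "1.00",
    ("LiabilityBase", "250/500/250"): "1.20",
}

mismatches = diff_rate_tables(exported, required)
# mismatches → {("LiabilityBase", "250/500/250"): ("1.18", "1.20")}
```

Reporting both sides of each discrepancy, including factors missing from one source, gives the triage team an immediate starting point instead of a bare premium mismatch.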

Reusable tools, templates and collaterals

The Cynosure QA team brings tools and templates for rating validation that can be customized to customer needs (e.g. to meet state rules). Rules around minimum premium adjustments, premium overrides, and penny-to-penny matching versus premium rounding can also affect the testing effort and are planned for accordingly. If legacy policies are being converted, separate premium comparisons are run against the migrated policies. Rate capping, where applicable, is also tested.

Validating chargeability logic, rate routines, rating algorithms, rate capping and the like is fun and refreshing for minds that enjoy numbers and calculations. Cynosure QA has extensive rating experience and is helping numerous personal and commercial lines carriers test the most vital modules in their insurance systems.

We would love to hear what has worked well for you and which rate-testing challenges we could help resolve. Feel free to reach out to Cynosure Director of Quality Assurance Puja Mukherjee at pmukherjee@cynosureince.com and we will be happy to assist. Together we can put a plan in place that best supports your testing requirements.



Posted by Puja Mukherjee
