How We Code and Test Rules
One of FundApps' distinguishing factors is our rigorous approach to rule testing. Here you can get a "behind-the-scenes" look at how FundApps codes the rules engine, and the internal controls we employ to reduce the risk of mistakes.
When do we create or change a rule?
At FundApps, we rely on aosphere and its interpretation of the regulation as our trusted legal information provider. A new rule may be created when a new regulation is introduced which affects a holder’s compliance obligations (e.g. EU SSR, Transparency Directive). Rules may also be updated either when there is an update to aosphere memos or when we improve our legal interpretation. In some circumstances, we may introduce new properties or change existing ones if required by regulation, in order to ensure rule accuracy.
We commonly engage in discussion with aosphere and our clients about regulatory matters. You can read more about that here.
How do we code our rules?
FundApps has a dedicated compliance team who maintain our entire library of rules as well as create new ones. Before any new rule versions are pushed into the cloud and deployed across client environments, we have a firmly established procedure that all new code must pass through:
- Legal memo analysis: Once the legal memo is received from aosphere, our content team will interpret the changes and outline how it applies to our rules.
- Interpretation review: The interpretation of the legal memo is reviewed by a second person to ensure accuracy.
- New rule version: The first "draft" of the rule is coded by an assigned member of the team.
- Testing: We test this code against individualised, automated test cases involving up to 100 real business scenarios, an approach known as Behaviour Driven Development (BDD). For example, we test for particular asset classes, characteristics, and other nuances the rule needs to pick up.
- Code improvement: Test failures are reviewed and used to improve the current draft of the code, after which the tests are run again.
- 4-eye review: After passing those stages, the code is assigned to a second person (the reviewer), who checks the regulatory justification for the rule and verifies through testing that the rule is coded in line with the interpretation.
- Deploy! If the rule passes review, it is ready to be deployed into the live version of Rapptr, affecting all client environments. More on this below.
Essentially, this is an automated, scenario-based testing method in which the initial assignee specifies a set of testing criteria in a "Given, When, Then" model. This has a number of benefits:
- Each rule is tested against a set of strict criteria, derived entirely from individual, specific use cases.
- The simple syntax is easy to understand, and it makes clear exactly what is being tested and, perhaps more importantly, exactly what is not.
- Expressive test names are useful when tests fail.
- Plain-sentence test cases keep each test focused.
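To make the "Given, When, Then" structure concrete, here is a minimal sketch of what such a test can look like in plain Python. Everything below is illustrative only: the `Holding` data structure, the `disclosure_required` rule, and the 5% threshold are hypothetical examples, not FundApps' actual rule code or test framework.

```python
# Illustrative Given/When/Then test sketch. The rule, asset class names, and
# threshold are hypothetical and do not reflect any real regulatory rule.
from dataclasses import dataclass


@dataclass
class Holding:
    asset_class: str
    percent_of_issued_shares: float


def disclosure_required(holding: Holding, threshold: float = 5.0) -> bool:
    """Hypothetical rule: equity holdings at or above the threshold must be disclosed."""
    return (
        holding.asset_class == "equity"
        and holding.percent_of_issued_shares >= threshold
    )


def test_equity_holding_above_threshold_triggers_disclosure():
    # Given an equity holding of 6% of issued shares
    holding = Holding(asset_class="equity", percent_of_issued_shares=6.0)
    # When the rule is evaluated
    result = disclosure_required(holding)
    # Then a disclosure is required
    assert result is True


def test_debt_holding_does_not_trigger_this_rule():
    # Given a debt holding of the same size
    holding = Holding(asset_class="debt", percent_of_issued_shares=6.0)
    # When the rule is evaluated, then no disclosure is required
    assert disclosure_required(holding) is False


test_equity_holding_above_threshold_triggers_disclosure()
test_debt_holding_does_not_trigger_this_rule()
```

Note how the expressive test names and the comment structure let a reviewer see at a glance which scenario is covered, and, just as importantly, which scenarios are not.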
Deploying a new rule
When deploying any new rules or properties, a Rapptr notification is sent out to inform clients if any special action is required of them, including any new data requirements.
Whether it's a completely new rule such as a new jurisdiction or a new version of an existing rule, it will appear on your task list and will need to be approved by you before it takes effect in your environment. When you are reviewing the new rule version, Rapptr will highlight the changes for comparison.