Integration Quality Control
Proactive and Reactive
Maintaining high-quality integrations across many endpoint systems and many users requires a two-pronged strategy. You should plan how you will proactively improve quality, mostly by performing various types of testing, and how you will reactively improve quality by implementing an effective monitoring strategy.
Proactive
Proactive quality control is primarily about how you test integrations and the components that are used to create them.
What do you test?
There are a number of things to test to make sure an integration does what it is meant to do for an end customer. The following summarizes what you can test:
- Component behavior
- Component behavior in the context of a flow
- JSONata maps and JS snippets used in flows
- End-to-end integration flows
To conserve resources, you generally do not want to include the behavior of the integrated systems or APIs themselves in the plan for testing an integration. However, if you are actively building your API on one side of the integration while building the integration, it’s easy to mix the two initiatives. Even if both parts are changing, try to treat the API as a contract and keep the development and testing separate.
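One way to keep that separation in tests is to stub the endpoint API with the responses the contract promises, so the integration logic is exercised without depending on the live (and possibly still changing) API. The sketch below uses the nock and axios npm packages; the URL, payload shape, and toCanonicalContact mapping function are invented for illustration.

// Sketch: testing integration logic against a stubbed endpoint API.
// The endpoint URL, payload shape, and mapping function are illustrative.
const nock = require('nock');
const axios = require('axios');
const assert = require('assert');

// Hypothetical mapping logic under test: turns an endpoint contact into the
// integration's canonical shape.
function toCanonicalContact(apiContact) {
  return { fullName: `${apiContact.first_name} ${apiContact.last_name}` };
}

async function run() {
  // Stub the response shape the API contract promises, so the test does not
  // depend on the real (possibly still in development) endpoint.
  nock('https://crm.example.com')
    .get('/api/contacts/42')
    .reply(200, { first_name: 'Ada', last_name: 'Lovelace' });

  const { data } = await axios.get('https://crm.example.com/api/contacts/42');
  assert.deepStrictEqual(toCanonicalContact(data), { fullName: 'Ada Lovelace' });
  console.log('contract-based test passed');
}

run().catch((err) => { console.error(err); process.exit(1); });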
Types of Testing
There are many types of tests to run on complex systems like an integration framework. The following are particularly useful when testing integrations on Open Integration Hub:
- Unit tests on components, JSONata maps, and JS snippets
- Component contract tests
- Integration tests
- End-to-end tests
The details of what these tests are and how to run them to support building integrations in Open Integration Hub are described later.
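In the meantime, a minimal sketch of the first type, a unit test for a JSONata map, might look like the following. It assumes the jsonata npm package (version 2 or later, where evaluate() is asynchronous) and Node’s built-in assert module; the expression and the sample record are invented for illustration.

// Unit test sketch for a JSONata map used in a flow.
const jsonata = require('jsonata');
const assert = require('assert');

// A small map that renames a field and builds a display name.
const contactMap = jsonata(`{
  "email": emailAddress,
  "name": firstName & " " & lastName
}`);

async function run() {
  const input = { emailAddress: 'ada@example.com', firstName: 'Ada', lastName: 'Lovelace' };
  const output = await contactMap.evaluate(input);
  assert.deepStrictEqual(output, { email: 'ada@example.com', name: 'Ada Lovelace' });
  console.log('JSONata map test passed');
}

run().catch((err) => { console.error(err); process.exit(1); });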
Manual or Automated
Many tests can be run automatically or manually by a quality assurance team. Your quality control plan should consider which tests will be run and whether they will be automated. Both manual and automated testing should be part of the plan.
Automated tests provide the advantages of scale and speed. You can make a change to an integration, run a test suite in seconds, and have confidence that all use cases (provided they have a test) will still execute properly. A strong test suite reduces the need to involve humans to test manually every time a change is made.
Automated tests are not perfect, though. They are limited by the knowledge and understanding of the tests’ authors. Some tests are also brittle, meaning there is a higher likelihood of false positives or negatives. There’s also a significant upfront cost to writing the tests, though the reduced need for manual QA staff typically offsets that cost over time.
Not all tests can easily be run manually, but manual testing is another very useful way to verify functionality. One advantage is that test types well suited to manual testing require very little setup. Another is that you don’t have to wrestle with a framework to make it do what you want: a person simply reads the test script and executes it.
Unit tests are difficult to run manually, so they rarely are. End-to-end tests are the best suited to manual testing, but at scale, having many people run many tests on many integrations becomes complex. Human talent is also expensive, so manual testing drives up your integration maintenance costs.
Reactive
It’s neither possible nor financially feasible to suss out every problem an integration could have through proactive testing. Your integration quality plan should also include reactive monitoring so you are alerted to problems that were not uncovered, or not foreseeable, during testing.
What do you monitor?
The four categories of problems you should monitor for are:
- An infrastructure failure (lack of CPU or memory, etc.)
- A service or component failure (part of the OIH application goes down)
- An endpoint API failure (one of the integrated systems is down)
- Data-related problems
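For the third category, endpoint API failures, a simple automated check can often catch an outage before users report it. The sketch below polls a hypothetical health URL and raises an alert when the endpoint is unreachable or unhealthy; the URL, polling interval, and notify() hook are placeholders for your own alerting mechanism.

// Sketch of an endpoint API health check. The URL, interval, and notify()
// hook are placeholders, not part of Open Integration Hub itself.
const axios = require('axios');

const ENDPOINT_HEALTH_URL = 'https://crm.example.com/api/health'; // hypothetical

function notify(message) {
  // Replace with your paging or chat tool of choice.
  console.error(`[ALERT] ${message}`);
}

async function checkEndpoint() {
  try {
    const res = await axios.get(ENDPOINT_HEALTH_URL, {
      timeout: 5000,
      validateStatus: () => true, // inspect the status ourselves
    });
    if (res.status !== 200) {
      notify(`Endpoint returned status ${res.status}`);
    }
  } catch (err) {
    notify(`Endpoint unreachable: ${err.message}`);
  }
}

// Poll once a minute.
setInterval(checkEndpoint, 60 * 1000);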
Types of Monitoring
The two most effective strategies for monitoring the activity in an Open Integration Hub instance are log monitoring and queue monitoring. The former provides low-level technical information logged by the individual components and services as they do their work. The latter provides detail about the business-level information flowing through integrations, and it also gives you good aggregate views of data flow across all integrations.
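If your Open Integration Hub installation uses RabbitMQ as its message broker, queue monitoring can be automated against the broker’s management HTTP API. The sketch below assumes that API is enabled and reachable; the host, credentials, and backlog threshold are placeholders.

// Sketch of queue monitoring via the RabbitMQ management HTTP API.
// Host, credentials, and threshold are placeholders.
const axios = require('axios');

const RABBITMQ_API = 'http://localhost:15672/api/queues';
const AUTH = { username: 'guest', password: 'guest' };
const BACKLOG_THRESHOLD = 1000; // messages waiting before we raise an alert

async function checkQueues() {
  const { data: queues } = await axios.get(RABBITMQ_API, { auth: AUTH });
  for (const queue of queues) {
    // A growing backlog usually means a consumer (component) is down or slow.
    if (queue.messages_ready > BACKLOG_THRESHOLD) {
      console.error(`[ALERT] Queue ${queue.name} has ${queue.messages_ready} messages waiting`);
    }
  }
}

checkQueues().catch((err) => console.error(err));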
Prioritizing Investments in Quality Control
Integration teams with finite time and money are unlikely to be able to build tests and monitors for every possible situation on every integration. Even with virtually unlimited resources, the complexity this would create still wouldn’t be justified.
Integration teams are encouraged to think about their integration portfolio in terms of value to the business (driven by value to end users), and to use that hierarchy to decide which integrations get the full complement of testing and monitoring and which get less.
A smart way to allocate your quality budget (time and money) is to prioritize using the same four tiers the product team that builds the integrations should use. These are:
- Ecosystem imperative integrations
- Ecosystem important integrations
- Ecosystem adjacent integrations
- Ecosystem irrelevant integrations
You should spend the most time and money testing and monitoring your ecosystem imperative integrations. These are the few integrations that represent the largest number of users, the greatest impact on the business, and so on. There are usually fewer than a dozen of these, likely fewer than half a dozen.
Your ecosystem important integrations should also receive a reasonable allocation of quality budget, but less so than imperative ones. These are still important for your business, but probably don’t have a large enough usage footprint to be impactful on their own.
Ecosystem adjacent integrations should receive close to the minimum responsible amount of testing and monitoring, because they simply don’t have much overall business impact. There should still be a minimum standard applied, though.
Ecosystem irrelevant integrations should not really be built, but if they are, apply the same minimum standards. If you build it, you are responsible for its quality.