Delivering a Quality Product to Enable Customer Success
SnapLogic is an integration platform as a service (iPaaS) that solves complex integration problems faced by many enterprises. The platform supports a wide range of integration needs including application integration, data integration, hybrid model support, big data, API management, B2B integration, and more – all with an easy-to-use, self-service user interface powered by AI.
The SnapLogic integration platform supports 500+ applications. Testing the integration of these applications means testing numerous combinations of them, which in turn means testing all the interactions of the SnapLogic platform with the underlying technologies of those applications. To cover these vast sets of requirements, the approach has to be smart rather than exhaustive, using the right set of automation tools and thorough test planning to reach the desired quality.
In addition, what differentiates SnapLogic from other integration platforms is its hybrid model support, which adds a data plane layer where customers can choose between cloud and on-premises deployments. This adds complexity to our testing process: QE (Quality Engineering) has to simulate customer environments and test various customer setups, be it proxies or OS-level configurations, to cover all the scenarios.
A laser focus on quality
A feature-rich, scalable, stable platform is what our enterprise customers demand from us. Before any new feature goes out the door, rigorous testing is done to ensure it meets SnapLogic’s quality standards.
So how does SnapLogic define a quality product?
Quality is about providing customers with the best software: easy to use, free of defects, and helping them achieve their desired results. At SnapLogic, testing is taken very seriously, and we achieve this through a dedicated team of experienced software testers who consistently and creatively explore and evaluate the application to ensure the software meets high quality standards before shipping.
Testing using agile methodology
There are many different testing models available in the software industry. Some organizations follow traditional waterfall models and some follow agile. At SnapLogic, we follow agile testing practices because of their adaptive nature and better predictability. Developers, the QE team, and the Product team work together to ensure the SnapLogic platform is best-in-class. The QE team is involved from the beginning of the release cycle and continues testing iteratively alongside the development team. We believe that by involving QE early in the process, any issues or defects get identified and fixed promptly and within the release cycle. Through our daily scrums, product owners engage constantly with developers and QE to track progress and identify risks. When product risks or delays are identified early on, they are easy to accommodate without impacting the quality and delivery of the product. Once the testing team completes its certification process, the product is signed off and released for beta testing and finally to production.
A systematic, multi-layered testing approach
SnapLogic’s QE testing process employs a systematic and multi-layered approach, and every new feature that is developed follows this model. Any change in behavior introduced in the code can affect existing functionality and regress the application. To identify regression issues, we constantly run a set of existing test cases against the changing build throughout the release cycle. Any defects opened as a result are continually triaged and prioritized, keeping the release board up-to-date. Once the application has reached a certain level of stability, we continue with other kinds of testing, such as performance and end-to-end customer tests, which further ensure that there is no degradation in other parts of the system.
At SnapLogic, we conduct several types of tests:
- Unit testing
- Integration testing
- Smoke testing
- Functional testing
- Exploratory testing
- Performance testing
- End-to-end customer scenario testing
Unit testing
This is the most critical part of testing since it is as close as testing can get to the source code of the application. Developers are responsible for writing their own unit tests, and these tests must pass before the build is handed over to QE. These are lightweight tests and are run very frequently as part of the release cycle.
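As a sketch of what such a test looks like, here is a minimal pytest-style example. The `merge_records` helper is hypothetical and stands in for any small unit of application code; it is not part of the SnapLogic codebase.

```python
# A minimal unit-test sketch (pytest convention). The function under test,
# `merge_records`, is a hypothetical helper used only for illustration.

def merge_records(base: dict, update: dict) -> dict:
    """Return a copy of `base` with the non-None values from `update` applied."""
    merged = dict(base)
    merged.update({k: v for k, v in update.items() if v is not None})
    return merged

def test_keeps_base_value_when_update_is_none():
    assert merge_records({"id": 1, "name": "a"}, {"name": None}) == {"id": 1, "name": "a"}

def test_applies_non_none_updates():
    assert merge_records({"id": 1}, {"name": "b"}) == {"id": 1, "name": "b"}
```

Because tests like these touch a single function with no external dependencies, they can run on every commit in seconds, which is what makes them suitable as the first quality gate.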
Integration testing
Integration tests validate the interactions between various services within the application code. They operate at the API layer, making them much faster to run than UI tests. These tests serve as a quick validation of the health and stability of the deployed system. Any failing integration test halts further testing of the application.
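A sketch of an API-layer health check in this spirit: the endpoint path, the response shape, and the offline `StubSession` stand-in are all illustrative assumptions, not SnapLogic APIs. In a real run, `StubSession` would be replaced by an HTTP client such as `requests.Session` pointed at a deployed environment.

```python
# Sketch of an API-layer integration check. StubSession stands in for a real
# HTTP client so the example runs offline; the /api/health endpoint and its
# response body are illustrative assumptions.

class StubSession:
    """Returns a canned healthy response, mimicking an HTTP client."""
    def get(self, url):
        class Response:
            status_code = 200
            def json(self):
                return {"status": "healthy", "services": ["auth", "scheduler"]}
        return Response()

def check_service_health(session, base_url):
    """Fail fast if the deployed system reports itself unhealthy."""
    resp = session.get(f"{base_url}/api/health")
    assert resp.status_code == 200, "health endpoint unreachable"
    body = resp.json()
    assert body["status"] == "healthy", f"system unhealthy: {body}"
    return body["services"]

services = check_service_health(StubSession(), "https://example.test")
```

Because the check raises on any failure, wiring it in as a pre-test gate naturally implements the "any failing integration test halts testing" rule.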
Smoke testing
Smoke tests cover the critical flows of the application. These basic tests confirm that there are no major blockers before QE proceeds with deeper testing. They are fully automated by QE and run each time a new build is deployed to any environment as a sanity check.
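One way such a gate can be sketched is a runner that executes the critical checks in order and stops at the first failure; the check names below are illustrative placeholders, not actual SnapLogic flows.

```python
# Hedged sketch of a smoke-check runner that gates further testing:
# each check is a named callable, and the first failure blocks the build.

def run_smoke_checks(checks):
    """Run named checks in order; return (passed, first_failure_message)."""
    for name, check in checks:
        try:
            check()
        except Exception as exc:
            return False, f"{name}: {exc}"
    return True, None

# Illustrative checks -- real ones would exercise login, pipeline creation, etc.
checks = [
    ("login-page-loads", lambda: None),
    ("pipeline-canvas-opens", lambda: None),
]
ok, failure = run_smoke_checks(checks)
```

Keeping the checks as plain callables makes it easy to run the same suite against every environment a build is deployed to.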
Functional testing
Functional tests exercise the business logic of the application at the UI layer. These test cases are automated using our internal framework, built on top of WebDriver, which automates all the user actions interfacing with the browser. These tests are fully automated by QE, leaving manual effort only for exploratory testing and edge cases. UI automation tests come very close to how customers actually use the application and ensure that the functionality works properly. However, it is important that we do not exhaustively test the application through the UI, because such tests tend to be fragile and expensive to maintain. They are chosen sparingly and paired appropriately with API tests for the best quality results.
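A sketch of the page-object pattern commonly used in WebDriver frameworks: the page, its locators, and the offline `FakeDriver` are illustrative assumptions, not SnapLogic's framework. With real Selenium, the driver would be something like `webdriver.Chrome()` and lookups done via `find_element(By.ID, ...)`.

```python
# Page-object sketch for a WebDriver-style functional test. FakeDriver stands
# in for a Selenium driver so the sketch runs offline; locators and URLs are
# illustrative.

class LoginPage:
    """Encapsulates UI actions so tests read like user steps."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element("id", "username").send_keys(user)
        self.driver.find_element("id", "password").send_keys(password)
        self.driver.find_element("id", "login-submit").click()
        return self.driver.current_url

class FakeDriver:
    """Mimics the small slice of the WebDriver API the page object uses."""
    def __init__(self):
        self.typed = []
        self.current_url = "https://example.test/login"

    def find_element(self, by, locator):
        driver = self

        class Element:
            def send_keys(self, text):
                driver.typed.append((locator, text))

            def click(self):
                driver.current_url = "https://example.test/dashboard"

        return Element()

landing = LoginPage(FakeDriver()).log_in("qa-user", "secret")
```

Centralizing locators in page objects is one of the standard ways to keep UI tests from becoming as fragile as the paragraph above warns.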
Exploratory testing
As the name suggests, exploratory testing is a type of testing where you explore the application to learn and investigate the business requirements. We perform these tests early in the test cycle, prior to creating the test cases, or later in the cycle to capture edge cases. These are manual, resource-intensive tests and are run just a few times in a release, typically at the start or sometimes towards the end of the cycle. Exploratory testing is performed by QE, or sometimes by people outside the QE organization, and it catches complex defects that cannot be caught through automated scripts.
Performance testing
The goal of performance testing is to identify any performance degradation within the application. To test this, we stress certain back-end components, monitor the application under load, and compare the results with previous baselines. Any increase in response time is flagged as a performance defect. Performance tests are fully automated and managed by QE and are run once the features are functionally stable and can sustain a certain amount of load.
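The baseline comparison described above can be sketched as follows; the endpoints, latency numbers, and 10% tolerance are illustrative assumptions, not SnapLogic baselines.

```python
# Sketch of a baseline comparison: median latencies from a load run are
# checked against a stored baseline, and any endpoint that regressed beyond
# a tolerance is flagged as a performance defect. All numbers are made up.

from statistics import median

def flag_regressions(baseline_ms, current_ms, tolerance=0.10):
    """Return endpoints whose median latency grew more than `tolerance`."""
    regressions = {}
    for endpoint, samples in current_ms.items():
        base = baseline_ms[endpoint]
        now = median(samples)
        if now > base * (1 + tolerance):
            regressions[endpoint] = (base, now)
    return regressions

baseline = {"/api/pipelines": 120.0, "/api/runs": 80.0}
current = {"/api/pipelines": [118, 121, 125], "/api/runs": [95, 99, 102]}
flagged = flag_regressions(baseline, current)  # only /api/runs regressed
```

Using a median rather than a mean keeps the comparison from being skewed by one-off latency spikes during the load run.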
End-to-end customer scenario testing
These are business use cases that test the system from the start to the end. Unlike unit testing and functional testing which test specific components of the application, these tests require all the dependent systems to be up and running and flow through various systems and components. At SnapLogic, QE is responsible for creating these test cases by simulating customer scenarios. The majority of these cases are pipelines that are automated where we use our own product to validate our tests.
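In that spirit, an end-to-end scenario can be sketched as records flowing through a sequence of stages, with the run failing if any stage rejects the data. The stage names here are illustrative placeholders, not actual Snaps or SnapLogic pipelines.

```python
# End-to-end sketch: a scenario passes only if every record flows through
# every stage and the final output matches expectations. Stages and data
# are illustrative.

def validate(record):
    """Fail the whole run on bad data, as an end-to-end check would."""
    if record["id"] <= 0:
        raise ValueError(f"bad record: {record}")
    return record

def run_pipeline(stages, records):
    """Push every record through each stage in order."""
    for _name, stage in stages:
        records = [stage(r) for r in records]
    return records

stages = [
    ("read", dict),                                    # copy the raw record
    ("enrich", lambda r: {**r, "region": "us-west"}),  # add derived fields
    ("validate", validate),                            # reject bad data
]
output = run_pipeline(stages, [{"id": 1}, {"id": 2}])
```

Unlike a unit test, the assertion here is on the final output of the whole chain, so a defect in any intermediate stage surfaces as a scenario failure.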
The criterion for certification is that all of these tests must pass. Any failed test cases corresponding to defects are documented as known defects or triaged for future releases. Once the certification criteria are met, we push the bits to a UAT (User Acceptance Testing) environment for beta testing and customer feedback. Any feedback we receive is evaluated, and any issues scoped for the release are fixed, tested, and released. Once all criteria are met, the product is shipped to production. Any issues our customers encounter in production are taken very seriously, analyzed, and resolved by the test team. The fixes are then released, and corresponding tests are incorporated into the test suite so that the issues don’t reappear in the product.
Ensuring a stable, quality, high-performing product has been and remains a top priority for us at SnapLogic. Over the years, we’ve continually assessed and fine-tuned our QE processes, embraced automation and new techniques, and importantly, ensured QE is working side-by-side with our development and product colleagues from conception through to delivery.
SnapLogic prides itself on the innovation it regularly builds into its products. New features and enhanced capabilities are delivered to customers every quarter. But nothing goes into production, and into the hands of our customers, until it meets SnapLogic’s strict quality testing standards. Our customers deserve it.