
Cloud-based solutions are all the rage these days. The cloud approach is becoming extremely popular in many business areas due to advantages such as scalability, enhanced productivity, better traffic and transaction management, and significantly lower equipment costs. Moreover, a cloud-based solution makes digital operations more streamlined and provides businesses of any size with greater flexibility.

This migration of applications to the cloud has made software testing an essential part of the business cycle. The switch to distributed, component-based applications, which underpins the touted flexibility and scalability, has also introduced additional layers of complexity, communication overhead, and potential points of failure, making the testing of cloud-based systems a vital business function.

Types of Tests

Generally speaking, the development of any software application must involve several types of testing, distributed along the lifecycle of the product. The purpose of all such testing is to ensure the product meets both functional and non-functional requirements and to deliver a high-quality end product that will delight users. Typically, the types of testing that any application should go through are the following:

Functional testing

Functional testing ensures that the product actually provides all the services and functionalities as advertised and that the business requirements are being met. The main types of functional testing are:

  • Component and Unit Testing: This kind of test is performed by developers to validate specific functionality for each unit of an application. During unit testing, each unit of code and each component is tested in isolation to make sure that it works as intended and produces the expected results (a minimal sketch follows this list).
  • Integration Testing: This verifies that the modules of an application work together correctly and exercises their combined functionality. Integration tests let commands and data flow through the system as a whole, rather than through individual components, which makes this type of testing especially relevant to UI operations, operation timing, API calls, data formats, and database access.
  • Acceptance Testing: These tests are performed by a selected group of end-users who are given access to a functional version of the application and validate whether it is good enough (accepted) or not. In other words, they indicate whether the application meets the business objectives.
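
As a concrete illustration, here is what a unit test might look like in Python with pytest. The apply_discount function is hypothetical, standing in for any isolated unit of business logic.

```python
# test_pricing.py -- a minimal unit test sketch using pytest.
# `apply_discount` is a hypothetical unit of business logic,
# used purely for illustration.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_reduces_price():
    assert apply_discount(100.0, 25) == 75.0


def test_apply_discount_rejects_invalid_percent():
    # The unit should fail loudly on nonsensical input.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```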

However, for cloud-based products, it’s essential to make sure that the product (or service) meets not only its functional requirements but also the non-functional ones. So a strong emphasis needs to be placed on non-functional testing as well.

Non-functional testing

Non-functional testing focuses on verifying cloud computing characteristics and features:

  • Security Testing: A cloud offering must guarantee that the data and resources of the system are protected from any unauthorized access, and also from threats or misuse that could take the entire system down. This can be complex and, at a minimum, involves the following:
    • Vulnerability scanning: This is done through automated software to scan a system against known vulnerability signatures.
    • Security scanning: Identifies network and system weaknesses, and provides a basis for defining solutions for reducing these risks. Both manual and automated scanning can be performed.
    • Penetration testing: This kind of testing simulates an attack from a malicious hacker. It involves the analysis of a particular system to check for potential vulnerabilities to an external hacking attempt.
    • Risk assessment: This is an assessment of the security risks that is made at a broad organizational level, involving analysis of the software itself, as well as the processes and the technologies used. Risks are classified as Low, Medium, and High. The result of this testing is a list of recommended controls and measures to reduce the risks.
    • Ethical hacking: This involves hacking into the software systems to understand their vulnerabilities. Unlike malicious hacking, which is done for personal gain, the underlying intention here is to expose security flaws.
  • Multi-tenancy Testing: Multi-tenancy refers to a cloud-based service that accommodates multiple clients or organizations. The service is typically customized for each client and provides data and configuration level security to avoid any access-related issues. A cloud-based offering should be thoroughly validated for each client whenever multiple clients are to be active at a given time.
  • Performance Testing: Performance testing checks the speed, response time, reliability, resource usage, and scalability of a cloud offering under an expected workload. A cloud-based offering should be “elastic”, allowing for the increase or decrease of on-demand resource usage, while maintaining a desired throughput level. The goal of performance testing is to eliminate performance bottlenecks in the software. There are many types of performance tests. These are some of the most common:
    • Smoke tests verify that the system can handle a minimal load without problems.
    • Load tests are primarily concerned with assessing the performance of the system in terms of concurrent users or requests per second.
    • Stress tests and spike tests assess the limits of your system and stability under extreme conditions.
    • Soak tests evaluate the reliability and performance of a system over an extended period of time.
  • Availability Testing: Availability testing provides a measure of how often a given piece of software is actually up and accessible for use. Cloud offerings must be available at all times, and it is the responsibility of the cloud service provider to ensure that there are no abrupt downtimes. This kind of testing is primarily based on observation of the system in use, together with the Quality of Service (QoS) level guaranteed by the service provider (a minimal probe sketch follows this list).
  • Disaster Recovery Testing: This measures the time it takes for a cloud application to recover from a catastrophic failure. It may encompass hard measures like rolling back databases or deployments. In the case of a failure, recovery time must be low, and verification must be done to ensure the service is back online with minimal adverse effects on the client’s business.
  • Interoperability Testing: Any cloud application must work across multiple environments and platforms, and it should be possible to execute it on many different cloud platforms. It should also be easy to move a cloud application from one infrastructure (as a service) to another.
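
To make the availability idea concrete, the following is a minimal sketch of an availability probe in Python. The health-check URL, probe count, and interval are placeholder assumptions; in practice this measurement belongs to a dedicated monitoring system rather than to an ad-hoc script.

```python
# availability_probe.py -- a minimal sketch of an availability check.
# The URL and sampling parameters below are placeholder assumptions.
import time
import urllib.request

HEALTH_URL = "https://example.com/health"  # hypothetical endpoint
CHECKS = 60           # number of probes to take
INTERVAL_SECONDS = 5  # pause between probes


def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except Exception:
        return False


successes = 0
for _ in range(CHECKS):
    if is_up(HEALTH_URL):
        successes += 1
    time.sleep(INTERVAL_SECONDS)

print(f"Availability over the sampled window: {successes / CHECKS:.1%}")
```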

Testing Throughout the Software Development Life Cycle

By now it should be obvious that not all of these tests can be carried out at the same time, by the same people, or at the same stage of the project. However, when and how we apply each of these testing techniques plays a critical role in the quality of the resulting product.

If we look at the typical Software Development Life Cycle (the process of building software while ensuring the quality and accuracy of what is being built), it defines a series of stages and procedures. Each stage leads to the next and produces results that move the development towards a completed product. The stages are typically defined as follows:

  • Planning stage (also called the feasibility stage) is the phase in which developers plan for the upcoming project. Here, the problem to be solved, the project scope, and the project objectives are defined.
  • Requirements analysis stage, which includes gathering all the specific details required for the new system, as well as determining initial prototype ideas.
  • Design and prototyping stage, where developers will outline high-level application requirements, along with more specific aspects, such as:
    • User interfaces
    • System interfaces
    • Networks and network requirements
    • Databases
  • Development stage, where developers actually write code and build the application according to the earlier design documents and outlined specifications.
  • Testing stage, where software is tested to make sure that there aren’t any bugs, and that the end-user experience will not be negatively affected at any point. During the testing stage, developers will go over their software with a fine-tooth comb, noting any bugs or defects that need to be tracked, fixed, and later retested.
  • Integration and implementation (or deployment) stage, where the system will be integrated into its environment and eventually deployed. After passing this stage, the software is theoretically ready for market and may be provided to any end-users.
  • Operations and maintenance stage, where developers are responsible for implementing any changes that the software might need after deployment, as well as handling issues reported by end-users.

But how do the different kinds of tests relate to these stages? The following table attempts to represent the existing relationship between the SDLC stages and each type of testing.

Phase                              Kinds of tests
Planning Stage                     (none)
Requirements Analysis Stage        (none)
Design and Prototyping Stage       Process testing
Software Development Stage         Unit testing, component testing
Software Testing Stage             Acceptance testing, exploratory testing, regression testing, security testing
Implementation/Integration Stage   Integration testing, smoke testing
Operations and Maintenance Stage   Performance testing, compatibility testing, recovery testing, availability testing

Looking at the stages, one could think that all tests happen in the so-called “testing stage”. This is mostly true in a typical waterfall model of software project management, where a phase only begins when the previous one has already finished. The fact that tests are left to the later stages, when development is (theoretically) finished, is usually the main cause of failure in projects following this methodology, due to the difficulty (or even impossibility) of applying proper corrections at such a late stage.

Fortunately, modern software development methodologies, especially the agile ones, attempt to fix this with two fundamental changes:

  • short feedback loops, i.e. showing working software early and repeatedly, so that any misunderstanding or deviation can be tackled soon.
  • testing from the beginning, so that any defect can be found and fixed as soon as possible.

In short, agile methodologies seek to integrate all, or as many as possible, of these activities inside the development iterations, because everything done within one iteration provides feedback for the next.

In this light, we can see that all functional testing is actually carried out by developers during the development phase. Following modern methodologies, developers leverage techniques like TDD (Test-Driven Development) for crafting unit tests, component tests, and integration tests, which also helps them achieve good internal quality. They also write regression tests to ensure that any previously fixed bug does not come back to life. Acceptance tests for each task are also included to ensure that the functionality works and is properly implemented.

All of these types of tests are not only fully covered and automated; they are fully owned by the developers and completely integrated into their regular day-to-day work. This means that the development team gets quick and frequent feedback on these aspects and is able to respond immediately to any incident related to them.

However, on the non-functional side, the situation is not so clear. Certain kinds of tests have already been adopted by development teams. Examples include aspects like multi-tenancy or data access control, for which tests are usually developed as part of the regular component tests, as sketched below.
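
For instance, a basic tenant-isolation check can be expressed as an ordinary component test. In the sketch below, Datastore is a hypothetical in-memory stand-in for the real storage layer; the names are illustrative only.

```python
# test_tenant_isolation.py -- a sketch of a multi-tenancy component test.
# `Datastore` is a hypothetical stand-in for the real storage layer.


class Datastore:
    def __init__(self):
        self._records = {}  # {tenant_id: {record_id: value}}

    def put(self, tenant_id, record_id, value):
        self._records.setdefault(tenant_id, {})[record_id] = value

    def get(self, tenant_id, record_id):
        # Lookups are scoped to the tenant; other tenants' data is invisible.
        return self._records.get(tenant_id, {}).get(record_id)


def test_tenants_cannot_read_each_others_data():
    store = Datastore()
    store.put("tenant-a", "invoice-1", {"amount": 100})
    # tenant-b must not see tenant-a's record, even with the same record id.
    assert store.get("tenant-b", "invoice-1") is None
```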

On the other hand, aspects like availability testing or disaster recovery testing, which are tests only in a loose sense, are usually not activities present in the day-to-day work of the development team. The availability of a service, being a measure of how the service behaves over time, falls under service monitoring rather than development tasks. And disaster recovery requires a complete contingency plan that rarely fits within the development team’s duties. These kinds of “tests” cannot be easily integrated into a development workflow.

Others, like security and performance testing, are in an intermediate adoption stage.

Regarding security, it is now relatively common for development teams to integrate a static code analysis tool into their Continuous Integration system, which analyzes the code against a pre-set collection of rules looking for common errors and design flaws. Such a tool also provides relevant information about possible security holes or weaknesses that appear in, or can be inferred directly from, the source code. However, this is only a small part of what a comprehensive security testing suite should be. Penetration testing, system security scanning, and auditing, along with ethical hacking, among others, should also be part of this process; yet most of these activities are still manual and cannot easily be automated as part of a development workflow.
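
As an example of the automated part, the snippet below sketches a CI step that runs Bandit, a static security analyzer for Python, and fails the build when findings are reported. The source directory and severity threshold are assumptions to adapt to each project.

```python
# ci_security_scan.py -- a minimal sketch of a CI security-scan step.
# Assumes Bandit is installed (pip install bandit) and that the project
# code lives under ./src (an assumption; adjust to your layout).
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src", "-ll"],  # -ll: report medium severity and above
    capture_output=True,
    text=True,
)
print(result.stdout)

# Bandit exits non-zero when issues are found; propagating that exit code
# makes the CI job fail on any reported finding.
sys.exit(result.returncode)
```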

Finally, regarding performance testing, it turns out that this kind of testing is all too often left for the later stages of development, treated as an afterthought, or even left to users for validation. This is a practice that, when applied to a cloud-based application, can severely damage the image of both the application and the company behind it. The main reasons for this are usually:

  • the need to have parts of the system already developed.
  • the difficulty of generating enough load on the system based on human testers alone.

Obviously, a functional system is needed to actually perform this kind of test. But then again, following modern development methodologies, a working system is available from the earliest stages. As for the second point, there are tools available that help create the number of virtual users (i.e. bots) needed to generate load on the system. These are scripted sequences of steps intended to mimic the behavior of a real user (or perhaps just to make some calls to an API). The interesting thing is that once there is a script to simulate one user, it’s just a matter of configuring the script execution to generate the load needed for the test.
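
As an illustration, here is what such a scripted virtual user might look like with Locust, a Python load-testing tool. The endpoints and task weights below are placeholders.

```python
# locustfile.py -- a sketch of a scripted virtual user using Locust
# (pip install locust). The endpoints below are placeholder assumptions.
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between tasks,
    # roughly mimicking human pacing.
    wait_time = between(1, 3)

    @task(3)  # weighted: browsing happens 3x more often than checkout
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"item_id": 42, "qty": 1})
```

Scaling from one simulated user to hundreds is then just configuration, e.g. locust --host https://staging.example.com --users 500 --spawn-rate 25 (with a hypothetical target host), which is exactly the point made above.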

SmartCLIDE’s Take

The SmartCLIDE project seeks to foster and promote modern agile methodologies. The aim is to democratize ownership of testing, as well as software Quality Assurance, across the whole development team, making everyone responsible for the quality of their own developments. With that in mind, we are building a tool that will provide developers with plenty of utilities to build high-quality software, offering the best support for each stage of development. As a cloud-based tool aimed specifically at cloud-oriented development, we have paid extra attention to some critical points. Following the analysis presented earlier, testing is supported right from the process definition stage, going through unit test generation and code recommendations, all the way up to code analysis and deployment.

Since that is quite common among IDEs nowadays, we wanted to go one step further by helping to integrate some of the activities related to the non-functional requirements of cloud offerings. So, in SmartCLIDE, developers will find the following features:

  • Security analysis integration, including reporting of metrics, weaknesses detected, and improvement points.
  • Performance testing integration, including a test generator that helps create the test suite.
  • Technical debt cost analysis, to inform about the estimated cost of fixing the detected technical debt left in the code.
  • Deployment cost estimation, to help decide whether a cloud provider is, economically speaking, a good fit for the project’s needs.
