Software developers have more cloud integration technology choices than ever before. But the dependencies among systems and applications can complicate integration testing to the point where vendors and development organizations put it off or forgo it altogether. Neither approach is practical for organizations that wish to avoid costly fixes.
"Integration testing is a regular software functionality and quality problem in the traditional sense, where you're using something and that 'something' consists of many components created by many people," said James Bach, principal consultant for Satisfice, Inc. "And it's a new challenge because there are so many new exciting components out there that represent a real testing challenge."
Although cloud-based technologies and applications present new integration opportunities, they also present "technology and business challenges that folks may not be accustomed to," said Andrew C. Oliver, president at Open Software Integrators, LLC.
"If you're tying together lots of different services -- Google Maps, Twitter, etc. -- some of these are not free. They have different charges based on the different usages. You have to do the business math ahead of time to figure out if what you're doing is viable and make sure you can get the SLAs [service-level agreements] you need," he said.
While usage and SLAs represent one set of problems, availability represents another.
"If a company assumes that the cloud will always be there; if they assume that everyone's connected all the time, when those connections go down, it can have a catastrophic effect," Satisfice's Bach said. For example, some applications, like Evernote, assume that the user is always online. "When you're offline, you can't access any of your notes. Everything's in the cloud -- and only in the cloud."
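Bach's point about offline failure modes can be made concrete with a small sketch. The class below is hypothetical (it is not from Evernote or any product mentioned in the article); it shows one common way to avoid the "only in the cloud" trap by keeping a local copy of each note so reads still succeed when the connection drops.

```python
# Hypothetical sketch: a note store that caches cloud reads locally so the
# application degrades gracefully instead of failing when it goes offline.

class CloudUnavailableError(Exception):
    """Raised by the cloud backend when the network is down."""

class NoteStore:
    def __init__(self, cloud_fetch):
        # cloud_fetch(note_id) -> str; raises CloudUnavailableError offline.
        self._cloud_fetch = cloud_fetch
        self._cache = {}  # local copy of every note we have seen

    def get_note(self, note_id):
        try:
            text = self._cloud_fetch(note_id)
            self._cache[note_id] = text  # refresh the local copy on success
            return text
        except CloudUnavailableError:
            if note_id in self._cache:
                return self._cache[note_id]  # serve stale-but-usable data
            raise  # never fetched while online: nothing we can do
```

A test for this design would deliberately simulate the outage: fetch a note while "online," swap in a backend that raises `CloudUnavailableError`, and verify the cached copy is still returned.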
Testing in the cloud with a plethora of devices and applications represents a major challenge for the testing community, according to Bach. "That is a huge challenge, and the testing world is scratching its head, because we don't know how to test a simulated cloud environment so that we can be confident that any bugs we'd see in real life we'd see in testing," he said.
Lisa Crispin, a tester for ePlan Services, Inc.'s Fast401k service and co-author of Agile Testing: A Practical Guide for Testers and Agile Teams, agrees with Bach. "With big apps like social media and things like that, it's difficult to test everything that millions of people might do," she said.
Even though integration testing might be more complicated, "there is a force that is making integration testing easier -- pushing it in the other direction," Bach said. "People have lower standards." We have become used to phone calls dropping out or to the one-second delay that occurs when we change the television channel. These types of performance issues used to be rare, but they've become more frequent and we've become accustomed to them, he said.
As a result, some software developers assume less responsibility for the performance of their product. "Unfortunately, when I bring up the risks of releasing technology that relies on different systems that have to work together and stay in sync, there are a lot of people that shrug and say, 'Oh well, some things will go wrong. People will just have to reboot or try again later,'" Bach said.
It is with that attitude that some organizations release applications into production without formal integration testing. Instead, they rely on users to do the testing. "They just put it out there and try to rake in the money faster than they have trouble. If they have to recall, no problem. They just reissue the application with updates," Bach said. "They can upgrade so fast, they don't need to test."
And that brings up another problem associated with integration testing: time. "Sometimes people think about testing after developing software, then they don't have time to do the testing," ePlan Services' Crispin said. Automated regression testing can help, but it can still take a long time to run tests in big systems. Some organizations decide that they'll release the system even while they're running the regression test, and monitor it while waiting for the results, she said.
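The automated regression tests Crispin describes can be as simple as checks that run unattended on every build, so they are never squeezed out at the end of a project. The example below is a hypothetical sketch (the function and its rules are invented for illustration, not taken from ePlan Services' software), using Python's standard `unittest` module.

```python
# Hypothetical sketch: a minimal automated regression test that can run on
# every build, rather than being deferred to a final testing phase.
import unittest

def apply_contribution(balance, amount):
    # Toy payroll-style calculation standing in for real application logic.
    if amount < 0:
        raise ValueError("contribution cannot be negative")
    return balance + amount

class ContributionRegressionTest(unittest.TestCase):
    def test_normal_contribution(self):
        self.assertEqual(apply_contribution(100.0, 50.0), 150.0)

    def test_negative_contribution_rejected(self):
        with self.assertRaises(ValueError):
            apply_contribution(100.0, -1.0)

if __name__ == "__main__":
    unittest.main()
```

Because a suite like this runs in seconds, it can gate every release automatically; the long-running end-to-end checks Crispin mentions then cover only what the fast tests cannot.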
To avoid these problems, experts recommend not waiting till the last minute. "Develop your tests parallel with the software. Do your integration testing early and your load testing early. Don't lump it all together at the end in some waterfall phase of the project. That's the most expensive time to fix major architectural problems," Open Software Integrators' Oliver said.
Crispin believes that in testing, as in all things, it's important to plan. "When starting a new project, say, 'Okay, we're implementing this feature. Let's start thinking about what integration testing we need. What other systems will be affected? What resources do we need to do that testing?' Then schedule time in that environment when you think you'll be ready for it," she said.
Finally, be cognizant of the need to test what you're building while you're building it. "The number one thing I tell people that -- generally speaking -- they haven't thought of at all is testability. Testability is very important. It's not enough to know how to test. It's not enough to try to test. It's not even enough to have tools that help you test. We must build each product with testability in mind," Satisfice's Bach said.
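One common way to build testability in, shown here as a hypothetical sketch (the class is invented for illustration, not a technique Bach names), is dependency injection: instead of reading the system clock directly, the code accepts a clock function, so a test can control "now" deterministically instead of sleeping and hoping.

```python
# Hypothetical sketch: designing for testability by injecting the clock
# rather than calling time.time() directly inside the class.
import time

class SessionTimeout:
    def __init__(self, limit_seconds, now=time.time):
        self._limit = limit_seconds
        self._now = now              # injected dependency; easy to fake in tests
        self._started = self._now()  # record when the session began

    def expired(self):
        # True once more than limit_seconds have elapsed since construction.
        return self._now() - self._started > self._limit
```

In production the default `time.time` is used; in a test, passing `now=lambda: fake_time` lets a ten-minute timeout be verified in milliseconds. The seam costs one constructor parameter, which is exactly the kind of up-front design decision Bach is arguing for.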
About the author
Crystal Bedell is a freelance technology writer specializing in IT security, cloud computing and networking. She can be reached at firstname.lastname@example.org.
This was first published in August 2012