By Sellers Smith
November 23, 2013 05:00 PM EST
Tests are an investment in the quality of any given system. There's always a cost to build, run, and maintain each test in terms of time and resources. There's also a great deal of value to be extracted from running the right test at the right time. It's important to remember that for everything you do test, there's something else you're not testing as a result.
Understanding that some tests are more important than others is vital to creating a useful and fluid test plan capable of catering to modern software development techniques. Traditional waterfall development - where everything is delivered for a specific release date in a finished package - has been succeeded by continuous feature rollouts into live systems. This necessitates a different approach from QA departments.
How Do You Build Good Tests?
You can't design or execute the right tests without understanding the intended purpose of the system. Testers need insight into the end user's expectations. Communication between the product people at the business end, the engineers working on the code, and the test department enables you to score tests in terms of their importance and work out where each test cycle should be focusing.
We can break it down into three simple queries: why, what, and how.
"Why" is a higher level overview that really ties into the business side. It's the big-picture thinking that reveals why you're building the software in the first place. What audience need is your product fulfilling? For example, we need to build an e-commerce website to sell our product to the general public.
"What" is really focused on individual features or functions of a system. Using a shopping cart analogy for an e-commerce website, you might say that users must be able to add and remove items from their shopping cart, or that they shouldn't be able to add something that's out of stock.
"How" relates to the practical application of your testing. How exactly is the software going to be tested? How is the success and failure measured?
Good tests are always going to cover our trio, but it can be a useful exercise to break things down.
If you get too caught up in the "what" and the "how," it's possible to miss the "why" completely, yet the "why" is the most important element because it dictates that some tests are more important than others. The business case for developing your software in the first place has to remain front and center throughout the project. If you begin to lose sight of what the end user needs, you could be focusing your efforts in the wrong places. Delivering value to your customers is critical. Good testers and good tests maintain an awareness of what the intended audience wants and expects.
One technique we can employ is risk-based analysis of tests. With risk-based analysis, we arrive at a numerical value for each test that gives you a sense of its importance. Each test gets a score between 1 and 9: at the top end, a 9 marks a critical test, while at the other end of the spectrum a 1 indicates a test that only needs to run sparingly. The value is determined by multiplying two factors, each rated from 1 to 3:
- Impact to the user: What are they trying to accomplish and what would the impact be if they couldn't? How critical is this action?
- Probability of failure: How likely is it that this code will fail? This is heavily influenced by how new it is and how much testing it has already undergone.
If we return to our e-commerce example, take the action of buying goods: clearly that's essential, so its impact to the user would be assigned a 3. The functionality for adding goods to the basket and checking out has been there since day one and has already been tested quite extensively; however, some new features have been added that could affect the code, so probability of failure might score a 2. Multiply the two together and you've got a 6. This figure will change over time: probability of failure goes up if this part of the system is significantly altered, and it drifts down over time if it isn't. There's also a discretionary factor that might lead you to bump that 6 up to a 7 if you feel it's merited.
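To make the arithmetic concrete, here's a minimal sketch of how such a scoring scheme could be recorded in code. Everything here - the TestRisk class, the field names, the example tests - is hypothetical, not part of any particular test-management tool.

```python
# Minimal sketch of risk-based test scoring (hypothetical example,
# not tied to any specific test-management tool).
from dataclasses import dataclass

@dataclass
class TestRisk:
    name: str
    user_impact: int          # 1 (minor) to 3 (critical)
    failure_probability: int  # 1 (stable, well-tested) to 3 (new or volatile)
    adjustment: int = 0       # discretionary bump (e.g., +1) where judgment warrants

    def score(self) -> int:
        # Base score is the product of the two factors (1..9),
        # optionally nudged by the discretionary adjustment.
        return self.user_impact * self.failure_probability + self.adjustment

tests = [
    TestRisk("checkout: buy goods", user_impact=3, failure_probability=2),
    TestRisk("cart: add out-of-stock item", user_impact=2, failure_probability=3),
]

# Run the highest-scoring (riskiest) tests first in each cycle.
for t in sorted(tests, key=TestRisk.score, reverse=True):
    print(f"{t.score()}: {t.name}")
```

Reassessing these scores each cycle is what keeps the plan fluid: as code stabilizes, its probability factor falls and the test drops down the priority list.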
Testers come up with specific scenarios of how an end user might interact with the software and what their expected outcome would be. A typical test might consist of many steps detailed in a script, but this approach can cause problems. What if a new tester comes in to run the test? Is the intent of the test clear to them? What if the implementation of the feature changes? Perhaps the steps no longer result in the expected outcome and the test fails, but that doesn't necessarily mean that the software is not working as intended.
The steps and scripts are delving into the "how," but if the "what" is distinct from the "how" then there's less chance of erroneous failure. Allow the tester some room for an exploratory approach and you're likely to get better results. If something can be tightly scripted and you expect it to be a part of your regression testing, then there's an argument for looking at automating it.
If you adopt a technique like Specification by Example or Behavior-Driven Development (BDD), you lay each test out in this format:
- Given certain preconditions
- When one or more actions happen
- Then you should get this outcome
Regardless of the specifics of the user interface, or the stops along the way between A and Z, the "Given, When, Then" format covers the essential core of the scenario and ensures that the feature does what it's intended to do, without necessarily spelling out exactly how it should do it. It can be used to generate tables of scenarios that describe the actions, variables, and outcomes to test.
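As a rough sketch of how a "Given, When, Then" scenario translates into an executable check, consider the following. The Cart class and its methods are invented for illustration and stand in for whatever system is actually under test.

```python
# Sketch of a "Given, When, Then" scenario in plain Python.
# The Cart API below is hypothetical, standing in for the real system.

class Cart:
    def __init__(self, stock):
        self.stock = stock   # item -> quantity available
        self.items = []

    def add(self, item):
        # Only items that are actually in stock can be added.
        if self.stock.get(item, 0) > 0:
            self.items.append(item)

def test_cannot_add_out_of_stock_item():
    # Given a product that is out of stock
    cart = Cart(stock={"widget": 0})
    # When the user tries to add it to their cart
    cart.add("widget")
    # Then the cart should remain empty
    assert cart.items == []
```

Note that the test says nothing about buttons, pages, or navigation; if the user interface changes, the scenario's intent still holds.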
Getting down to the nuts and bolts of how testers will create, document, and run tests, we come to the "how." Since projects are more fluid now and requirements or priorities can change on a daily basis, there needs to be some flexibility in the "how" and a willingness to continually reassess the worth of individual tests. Finding the right balance between automated tests, manually scripted tests, and exploratory testing is an ongoing challenge that's driven by the "what" and the "why."
Traditionally, manual tests have been fully documented as a sequence of steps with an expected result for each step, but this is time-consuming and difficult to maintain. It's possible to borrow from automation by plugging in higher-level actions, or keywords, that refer to a detailed script or a common business action that's understood by the tester. There's also been a move toward exploratory testing, where the intent of the test is defined but the steps are dynamic. Finally, there's a place for disposable testing where you might use a recording tool to quickly document each step in a manual test as you work through it. These tests will need to be redone if anything changes, but as it's a relatively quick process and you're actually testing while you create the test, that's not necessarily a problem.
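One way to picture the keyword approach is a small dispatch table that maps each high-level business action to a shared, reusable step, so a test reads as a sequence of keywords rather than low-level clicks. All names below are hypothetical; this is a sketch of the idea, not a real framework.

```python
# Sketch of keyword-driven testing: each keyword names a reusable
# business action; a test is just a sequence of keywords.
# All function and keyword names here are hypothetical.

def login(ctx):        ctx["logged_in"] = True
def add_to_cart(ctx):  ctx.setdefault("cart", []).append("item")
def checkout(ctx):     ctx["order_placed"] = bool(ctx.get("cart"))

KEYWORDS = {"Login": login, "AddToCart": add_to_cart, "Checkout": checkout}

def run(test_script):
    ctx = {}
    for keyword in test_script:
        KEYWORDS[keyword](ctx)  # look up and execute the shared step
    return ctx

# A tester composes tests from keywords rather than detailed steps:
result = run(["Login", "AddToCart", "Checkout"])
assert result["order_placed"]
```

If the implementation of a step changes, only the shared action needs updating; every test that uses the keyword picks up the fix.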
Each individual test should be viewed as an investment. You have to decide whether to run it, whether it needs to be maintained, or if it's time to drop it. You need to continually assess the value and the cost of each test so that you get maximum value from each test cycle. Never lose sight of the business requirements. When a test fails, ask yourself if it's a problem with the system or a problem with the test, and make sure that you always listen to the business people, the software engineers, and most importantly the customer.