These materials are compiled to help junior and senior software engineers and others.
- What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring
and improving the process, making sure that any agreed-upon standards and
procedures are followed, and ensuring that problems are found and
dealt with. It is oriented to 'prevention'.
- What is 'Software Testing'?
Testing involves operation of a system or application under controlled
conditions and evaluating the results (e.g., 'if the user
is in interface A of the application while using hardware B,
and does C, then D should happen'). The controlled conditions
should include both normal and abnormal conditions. Testing should
intentionally attempt to make things go wrong to determine if things
happen when they shouldn't or things don't happen when they should.
It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility
for QA and testing. Sometimes they're the combined responsibility of
one group or individual. Also common are project teams that
include a mix of testers and developers who work closely together,
with overall QA processes monitored by project managers. It will
depend on what best fits an organization's size and business structure.
- What are some recent major computer system failures caused by software bugs?
-
Media reports in January of 2005 detailed severe problems with
a $170 million high-profile U.S. government IT systems project. Software
testing was one of the five major problem areas according to a
report of the commission reviewing the project. Studies were under
way to determine which, if any, portions of the project could be salvaged.
-
In July 2004 newspapers reported that a new government
welfare management system in Canada costing several hundred million
dollars was unable to handle a simple benefits rate increase after
being put into live operation. Reportedly the original contract
allowed for only 6 weeks of acceptance testing and the system was
never tested for its ability to handle a rate increase.
-
Millions of bank accounts were impacted by errors due to installation
of inadequately tested software code in the transaction processing
system of a major North American bank, according to mid-2004 news
reports. Articles about the incident stated that it took two weeks
to fix all the resulting errors, that additional problems resulted
when the incident drew a large number of e-mail phishing attacks
against the bank's customers, and that the total cost of the incident
could exceed $100 million.
-
A bug in site management software utilized by companies
with a significant percentage of worldwide web traffic was
reported in May of 2004. The bug resulted in performance
problems for many of the sites simultaneously and required
disabling of the software until the bug was fixed.
-
According to news reports in April of 2004, a software bug was
determined to be a major contributor to the 2003 Northeast
blackout, the worst power system failure in North American
history. The failure involved loss of electrical power to
50 million customers, forced shutdown of 100 power plants,
and economic losses estimated at $6 billion. The bug was
reportedly in one utility company's vendor-supplied power
monitoring and management system, which was unable to correctly
handle and report on an unusual confluence of initially localized
events. The error was found and corrected after examining
millions of lines of code.
-
In early 2004, news reports revealed the intentional use
of a software bug as a counter-espionage tool. According to the
report, in the early 1980's one nation surreptitiously allowed a hostile
nation's espionage service to steal a version of sophisticated
industrial software that had intentionally-added flaws. This
eventually resulted in major industrial disruption in the country
that used the stolen flawed software.
-
A major U.S. retailer was reportedly hit with a large government fine
in October of 2003 due to web site errors that enabled customers to
view one another's online orders.
-
News stories in the fall of 2003 stated that a manufacturing company
recalled all their transportation products in order to fix a software
problem causing instability in certain circumstances. The company found
and reported the bug itself and initiated the recall procedure in which
a software upgrade fixed the problems.
-
In August of 2003 a U.S. court ruled that a lawsuit against a large
online brokerage company could proceed; the lawsuit reportedly
involved claims that the company was not fixing system problems
that sometimes resulted in failed stock trades, based on the
experiences of 4 plaintiffs during an 8-month period. A previous
lower court's ruling that "...six miscues out of more than
400 trades does not indicate negligence" was invalidated.
-
In April of 2003 it was announced that a large student loan company
in the U.S. made a software error in calculating the monthly
payments on 800,000 loans. Although borrowers were to be notified
of an increase in their required payments, the company would still
reportedly lose $8 million in interest. The error was uncovered
when borrowers began reporting inconsistencies in their bills.
-
News reports in February of 2003 revealed that the U.S. Treasury
Department mailed 50,000 Social Security checks without any beneficiary
names. A spokesperson indicated that the missing names were due
to an error in a software change. Replacement checks were
subsequently mailed out with the problem corrected, and recipients
were then able to cash their Social Security checks.
-
In March of 2002 it was reported that software bugs in Britain's
national tax system resulted in more than 100,000 erroneous tax
overcharges. The problem was partly attributed to the difficulty of
testing the integration of multiple systems.
-
A newspaper columnist reported in July 2001 that a serious flaw was
found in off-the-shelf software that had long been used in systems
for tracking certain U.S. nuclear materials. The same software had been
recently donated to another country to be used in tracking their own
nuclear materials, and it was not until scientists in that country
discovered the problem, and shared the information, that U.S.
officials became aware of the problems.
-
According to newspaper stories in mid-2001, a major systems
development contractor was fired and sued over problems with a
large retirement plan management system. According to the reports,
the client claimed that system deliveries were late, the software had
excessive defects, and it caused other systems to crash.
-
In January of 2001 newspapers reported that a major European
railroad was hit by the aftereffects of the Y2K bug. The company
found that many of their newer trains would not run due to their
inability to recognize the date '31/12/2000'; the trains were
started by altering the control system's date settings.
-
News reports in September of 2000 told of a software vendor
settling a lawsuit with a large mortgage lender; the vendor had
reportedly delivered an online mortgage processing system that
did not meet specifications, was delivered late, and didn't work.
-
In early 2000, major problems were reported with a new computer
system in a large suburban U.S. public school district with 100,000+
students; problems included 10,000 erroneous report cards and students
left stranded by failed class registration systems; the district's
CIO was fired. The school district decided to reinstate its original
25-year-old system for at least a year until the bugs were worked out
of the new system by the software vendors.
-
In October of 1999 the $125 million NASA Mars Climate
Orbiter spacecraft was believed to be lost in space due
to a simple data conversion error. It was determined that
spacecraft software used certain data in English units that should
have been in metric units. Among other tasks, the orbiter
was to serve as a communications relay for the Mars
Polar Lander mission, which failed for unknown reasons
in December 1999. Several investigating panels were
convened to determine the process failures that allowed
the error to go undetected.
-
Bugs in software supporting a large commercial high-speed data
network affected 70,000 business customers over a period of 8 days
in August of 1999. Among those affected was the electronic trading
system of the largest U.S. futures exchange, which was shut down
for most of a week as a result of the outages.
-
In April of 1999 a software bug caused the failure of a $1.2 billion
U.S. military satellite launch, the costliest unmanned accident in the
history of Cape Canaveral launches. The failure was the latest
in a string of launch failures, triggering a complete military
and industry review of U.S. space launch programs, including software
integration and testing processes. Congressional oversight hearings
were requested.
-
A small town in Illinois in the U.S. received an unusually large monthly
electric bill of $7 million in March of 1999. This was about 700
times larger than its normal bill. It turned out to be due to
bugs in new software that had been purchased by the local power
company to deal with Y2K software issues.
-
In early 1999 a major computer game company recalled all copies
of a popular new product due to software problems. The company
made a public apology for releasing a product before it was ready.
-
The computer system of a major online U.S. stock trading service
failed during trading hours several times over a period of days in
February of 1999 according to nationwide news reports. The problem
was reportedly due to bugs in a software upgrade intended to
speed online trade confirmations.
-
In April of 1998 a major U.S. data communications network
failed for 24 hours, crippling a large part of some U.S. credit
card transaction authorization systems as well as other large U.S.
bank, retail, and government data systems. The cause was
eventually traced to a software bug.
-
January 1998 news reports told of software problems at a
major U.S. telecommunications company that resulted in no charges
for long distance calls for a month for 400,000 customers. The
problem went undetected until customers called up with
questions about their bills.
-
In November of 1997 the stock of a major health industry
company dropped 60% due to reports of failures in computer
billing systems, problems with a large database conversion,
and inadequate software testing. It was reported that more than
$100,000,000 in receivables had to be written off and that
multi-million dollar fines were levied on the company by
government agencies.
-
A retail store chain filed suit in August of 1997
against a transaction processing system vendor (not a credit
card company) due to the software's inability to handle
credit cards with year 2000 expiration dates.
-
In August of 1997 one of the leading consumer credit reporting
companies reportedly shut down their new public web site after
less than two days of operation due to software problems. The new
site allowed web site visitors instant access, for a small
fee, to their personal credit reports. However, a number of
initial users ended up viewing each others' reports instead
of their own, resulting in irate customers and nationwide
publicity. The problem was attributed to "...unexpectedly
high demand from consumers and faulty software that routed
the files to the wrong computers."
-
In November of 1996, newspapers reported that software bugs caused
the 411 telephone information system of one of the U.S. RBOCs
(Regional Bell Operating Companies) to
fail for most of a day. Most of the 2000 operators had to
search through phone books instead of using their 13,000,000-listing
database. The bugs were introduced by new software modifications
and the problem software had been installed on both the production
and backup systems. A spokesman for the software vendor reportedly
stated that 'It had nothing to do with the integrity of the
software. It was human error.'
-
On June 4, 1996, the first flight of the
European Space Agency's new Ariane 5 rocket failed shortly
after launching, resulting in an estimated uninsured loss
of a half billion dollars. It was reportedly due to the lack
of exception handling for an error in a conversion of a
64-bit floating-point value to a 16-bit signed integer.
-
Software bugs caused the bank accounts of 823 customers of a major
U.S. bank to be credited with $924,844,208.32 each in May of 1996,
according to newspaper reports. The American Bankers Association
claimed it was the largest such error in banking history. A bank
spokesman said the programming errors were corrected and all
funds were recovered.
-
Software bugs in a Soviet early-warning monitoring system
nearly brought on nuclear war in 1983, according to news reports
in early 1999. The software was supposed to filter out
false missile detections caused by Soviet satellites picking up
sunlight reflections off cloud-tops, but failed to do so. Disaster was
averted when a Soviet commander, based on what he said was a '...funny
feeling in my gut', decided the apparent missile attack was a
false alarm. The filtering software code was rewritten.
- Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is
low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known
throughout the land and employed as a physician to a great lord. The
physician was asked which of his family was the most skillful healer.
He replied,
"I tend to the sick and dying with drastic and dramatic treatments,
and on occasion someone is cured and my name gets out among the
lords."
"My elder brother cures sickness when it just begins to take
root, and his skills are known among the local peasants and
neighbors."
"My eldest brother is able to sense the spirit of sickness and
eradicate it before it takes form. His name is unknown outside our
home."
- Why does software have bugs?
- miscommunication or no communication - as to specifics of
what an application should or shouldn't do (the application's
requirements).
- software complexity - the complexity of current software
applications can be difficult to comprehend for anyone without
experience in modern-day software development. Multi-tiered
applications, client-server and distributed applications, data
communications, enormous relational databases, and
sheer size of applications have all contributed to the
exponential growth in software/system complexity.
- programming errors - programmers, like anyone else, can
make mistakes.
- changing requirements (whether documented or undocumented) -
the end-user may not understand the effects of changes, or may understand
and request them anyway - redesign, rescheduling of engineers,
effects on other projects, work already completed that may
have to be redone or thrown out, hardware requirements that
may be affected, etc. If there are many minor changes or any
major changes, known and unknown dependencies among parts of the
project are likely to interact and cause problems, and the
complexity of coordinating changes may result in errors.
Enthusiasm of engineering staff may be affected. In some
fast-changing business environments, continuously modified
requirements may be a fact of life. In this case, management
must understand the resulting risks, and QA and test
engineers must adapt and plan for continuous extensive
testing to keep the inevitable bugs from running out of
control.
- time pressures - scheduling of software projects is difficult
at best, often requiring a lot of guesswork. When deadlines
loom and the crunch comes, mistakes will be made.
- egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up
making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I
take a close look at it'
'we can't figure out what that old spaghetti code
did in the first place'
If there are too many unrealistic 'no problem's', the
result is bugs.
- poorly documented code - it's tough to maintain and modify code
that is badly written or poorly documented; the result is bugs. In
many organizations management provides no incentive for programmers
to document their code or write clear, understandable, maintainable code.
In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand
it ('if it was hard to write, it should be hard to read').
- software development tools - visual tools, class libraries, compilers,
scripting tools, etc. often introduce their own bugs or are poorly
documented, resulting in added bugs.
- How can new Software QA processes be introduced in an existing organization?
- A lot depends on the size of the organization and the risks involved.
For large organizations with high-risk (in terms of lives or property)
projects, serious management buy-in is required and a formalized
QA process is necessary.
- Where the risk is lower, management and organizational buy-in
and QA implementation may be a slower, step-at-a-time
process. QA processes should be balanced with productivity
so as to keep bureaucracy from getting out of hand.
- For small groups or projects, a more ad-hoc process may be
appropriate, depending on the type of customers and projects. A
lot will depend on team leads or managers, feedback to developers,
and ensuring adequate communications among customers, managers,
developers, and testers.
- The most value for effort will often be in (a) requirements
management processes, with a goal of clear, complete, testable
requirement specifications embodied in requirements or design
documentation, or, in 'agile'-type environments, extensive continuous
coordination with end-users, (b) design inspections and code
inspections, and (c) post-mortems/retrospectives.
- What is verification? validation?
Verification typically involves reviews and meetings to evaluate
documents, plans, code, requirements, and specifications. This
can be done with checklists, issues lists, walkthroughs, and
inspection meetings. Validation typically involves actual
testing and takes place after verifications are completed.
In short, verification asks 'are we building the product right?'
while validation asks 'are we building the right product?'.
The term 'IV & V' refers to Independent Verification and
Validation.
- What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or
informational purposes. Little or no preparation is usually
required.
- What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically
with 3-8 people including a moderator, reader, and a recorder to
take notes. The subject of the inspection is typically a document
such as a requirements spec or a test plan, and the purpose is to
find problems and see what's missing, not to fix anything. Attendees
should prepare for this type of meeting by reading through the document;
most problems will be found during this preparation. The result of the
inspection meeting should be a written report. Thorough preparation for
inspections is difficult, painstaking work, but it is one of the most
cost-effective methods of ensuring quality. Employees who are most skilled
at inspections are like the 'eldest brother' in the parable in
'Why is it often hard for management to get serious about quality assurance?'
Their skill may have low visibility but they are extremely valuable
to any software development organization, since bug prevention is
far more cost-effective than bug detection.
- What kinds of testing should be considered?
- Black box testing - not based on any knowledge of internal design
or code. Tests are based on requirements and functionality.
- White box testing - based on knowledge of the internal logic
of an application's code. Tests are based on coverage of code
statements, branches, paths, conditions.
- unit testing - the most 'micro' scale of testing; to test
particular functions or code modules. Typically done by the
programmer and not by testers, as it requires detailed knowledge
of the internal program design and code. Not always easily done
unless the application has a well-designed architecture with tight
code; may require developing test driver modules or test harnesses
(a minimal unit-test sketch appears after this list).
- incremental integration testing - continuous testing of an
application as new functionality is added; requires that various
aspects of an application's functionality be independent enough
to work separately before all parts of the program are completed,
or that test drivers be developed as needed; done by programmers
or by testers.
- integration testing - testing of combined parts of an application
to determine if they function together correctly. The 'parts'
can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially
relevant to client/server and distributed systems.
- functional testing - black-box type testing geared to functional
requirements of an application; this type of testing should be done by
testers. This doesn't mean that the programmers shouldn't check that
their code works before releasing it (which of course applies to any
stage of testing).
- system testing - black-box type testing that is based on overall
requirements specifications; covers all combined parts of a system.
- end-to-end testing - similar to system testing; the 'macro' end of
the test scale; involves testing of a complete application environment
in a situation that mimics real-world use, such as interacting with
a database, using network communications, or interacting
with other hardware, applications, or systems if appropriate.
- sanity testing or smoke testing - typically an initial testing effort
to determine if a new software version is performing well enough to
accept it for a major testing effort. For example, if the new software
is crashing systems every 5 minutes, bogging down systems to a crawl,
or corrupting databases, the software may not be in a 'sane' enough
condition to warrant further testing in its current state.
- regression testing - re-testing after fixes or modifications of the
software or its environment. It can be difficult to determine how
much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially
useful for this type of testing.
- acceptance testing - final testing based on specifications of
the end-user or customer, or based on use by end-users/customers
over some limited period of time.
- load testing - testing an application under heavy loads, such as
testing of a web site under a range of loads to determine
at what point the system's response time degrades or fails.
- stress testing - term often used interchangeably with 'load'
and 'performance' testing. Also used to describe such tests as
system functional testing while under unusually heavy loads,
heavy repetition of certain actions or inputs, input of
large numerical values, large complex queries to a database system, etc.
- performance testing - term often used interchangeably with
'stress' and 'load' testing. Ideally 'performance' testing
(and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
- usability testing - testing for 'user-friendliness'. Clearly this is
subjective, and will depend on the targeted end-user or customer. User
interviews, surveys, video recording of user sessions, and other
techniques can be used. Programmers and testers are usually not
appropriate as usability testers.
- install/uninstall testing - testing of full, partial, or
upgrade install/uninstall processes.
- recovery testing - testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems.
- failover testing - typically used interchangeably with 'recovery testing'.
- security testing - testing how well the system protects against
unauthorized internal or external access, willful damage, etc; may
require sophisticated testing techniques.
- compatibility testing - testing how well software performs in
a particular hardware/software/operating system/network/etc.
environment.
- exploratory testing - often taken to mean a creative, informal
software test that is not based on formal test plans or test cases;
testers may be learning the software as they test it.
- ad-hoc testing - similar to exploratory testing, but often
taken to mean that the testers have significant understanding
of the software before testing it.
- context-driven testing - testing driven by an understanding of
the environment, culture, and intended use of software. For example,
the testing approach for life-critical medical equipment software would
be completely different than that for a low-cost computer game.
- user acceptance testing - determining if software is satisfactory
to an end-user or customer.
- comparison testing - comparing software weaknesses and strengths
to competing products.
- alpha testing - testing of an application when development is
nearing completion; minor design changes may still be made as a
result of such testing. Typically done by end-users or others, not
by programmers or testers.
- beta testing - testing when development and testing are
essentially completed and final bugs and problems need to be
found before final release. Typically done by end-users or
others, not by programmers or testers.
- mutation testing - a method for determining if a set of test
data or test cases is useful, by deliberately introducing
various code changes ('bugs') and retesting with the
original test data/cases to determine if the 'bugs' are
detected. Proper implementation requires large
computational resources.
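To make the 'unit testing' idea above concrete, here is a minimal
sketch in C++ (the function under test and its behavior are
hypothetical, and a real project would more typically use a unit
testing framework than bare asserts):

    #include <cassert>
    #include <string>

    // Hypothetical function under test: accept a password only if it
    // is non-empty and matches the previously-assigned password.
    bool checkPassword(const std::string& entered,
                       const std::string& assigned)
    {
        return !entered.empty() && entered == assigned;
    }

    int main()
    {
        // Normal condition: the correct password is accepted.
        assert(checkPassword("s3cret", "s3cret"));

        // Abnormal conditions: wrong or empty passwords are rejected.
        assert(!checkPassword("guess", "s3cret"));
        assert(!checkPassword("", "s3cret"));

        return 0;  // reaching here means all assertions passed
    }

Each assert is one 'micro'-scale test of a single function, runnable
by the programmer without any of the rest of the application.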
- What are 5 common problems in the software development process?
- poor requirements - if requirements are unclear, incomplete,
too general, or not testable, there will be problems.
- unrealistic schedule - if too much work is crammed in too little
time, problems are inevitable.
- inadequate testing - no one will know whether or not the program is
any good until the customer complains or systems crash.
- featuritis - requests to pile on new features after development
is underway; extremely common.
- miscommunication - if developers don't know what's needed or customers
have erroneous expectations, problems are guaranteed.
- What are 5 common solutions to software development problems?
- solid requirements - clear, complete, detailed, cohesive, attainable,
testable requirements that are agreed to by all players. Use prototypes
to help nail down requirements. In 'agile'-type environments,
continuous coordination with customers/end-users is necessary.
- realistic schedules - allow adequate time for planning, design,
testing, bug fixing, re-testing, changes, and documentation; personnel
should be able to complete the project without burning out.
- adequate testing - start testing early on, re-test after fixes or
changes, plan for adequate time for testing and bug-fixing.
'Early' testing ideally includes unit testing by developers
and built-in testing and diagnostic capabilities.
- stick to initial requirements as much as possible - be prepared to
defend against excessive changes and additions once development has
begun, and be prepared to explain consequences. If changes are
necessary, they should be adequately reflected in related schedule
changes. If possible, work closely with customers/end-users to
manage expectations. This will provide them a higher comfort
level with their requirements decisions and minimize excessive
changes later on.
- communication - require walkthroughs and inspections when
appropriate; make extensive use of group communication tools -
e-mail, groupware, networked bug-tracking tools and change
management tools, intranet capabilities, etc.; ensure that
information/documentation is available and up-to-date - preferably
electronic, not paper; promote teamwork and cooperation; use
prototypes if possible to clarify customers' expectations.
- What is software 'quality'?
Quality software is reasonably bug-free, delivered on time
and within budget, meets requirements and/or expectations,
and is maintainable.
However, quality is obviously a subjective term. It
will depend on who the 'customer' is and their overall
influence in the scheme of things. A wide-angle view of
the 'customers' of a software development project might include
end-users, customer acceptance testers, customer contract
officers, customer management, the development organization's
management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc.
Each type of 'customer' will have their own slant on 'quality' -
the accounting department might define quality in terms of profits
while an end-user might define quality as user-friendly and
bug-free.
- What is 'good code'?
'Good code' is code that works, is bug-free, and is readable and
maintainable. Some organizations have coding 'standards' that
all developers are supposed to adhere to, but everyone has different ideas
about what's best, or what is too many or too few rules. There are
also various theories and metrics, such as McCabe Complexity metrics.
It should be kept in mind that excessive use of standards and rules
can stifle productivity and creativity. 'Peer reviews', 'buddy checks',
code analysis tools, etc. can be used to check for problems and
enforce standards.
For C and C++ coding, here are some typical ideas to consider
in setting rules/standards (a brief illustrative sketch follows
the list); these may or may not apply to a particular situation:
- minimize or eliminate use of global variables.
- use descriptive function and method names - use both upper
and lower case, avoid abbreviations, use as many characters
as necessary to be adequately descriptive (use of more than
20 characters is not out of line); be consistent in naming conventions.
- use descriptive variable names - use both upper and lower case,
avoid abbreviations, use as many characters as necessary to be
adequately descriptive (use of more than 20 characters is not
out of line); be consistent in naming conventions.
- function and method sizes should be minimized; less than
100 lines of code is good, less than 50 lines is preferable.
- function descriptions should be clearly spelled out in comments
preceding a function's code.
- organize code for readability.
- use whitespace generously - vertically and horizontally
- each line of code should contain 70 characters max.
- one code statement per line.
- coding style should be consistent throughout a program (e.g., use of
brackets, indentations, naming conventions, etc.)
- in adding comments, err on the side of too many rather than
too few comments; a common rule of thumb is that there should
be at least as many lines of comments (including header blocks)
as lines of code.
- no matter how small, an application should include documentation
of the overall program function and flow (even a few paragraphs
is better than nothing); or if possible a separate flow chart and
detailed program documentation.
- make extensive use of error handling procedures and status and error
logging.
- for C++, to minimize complexity and increase maintainability, avoid
too many levels of inheritance in class hierarchies (relative to
the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator
overloading (note that the Java programming language eliminates
multiple inheritance and operator overloading.)
- for C++, keep class methods small, less than 50 lines of code
per method is preferable.
- for C++, make liberal use of exception handlers.
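The short fragment below illustrates several of the above ideas
(descriptive names, a comment block preceding the function, explicit
error handling and logging, small function size). It is a hypothetical
sketch of style only, not a recommended design:

    #include <cstdio>
    #include <string>

    // ReadConfigurationValue:
    //   Looks up the named configuration setting and copies its value
    //   into settingValue. Returns true on success, or false (after
    //   logging an error) if the name is empty or unknown.
    //   (The lookup itself is omitted; this is a style sketch.)
    bool ReadConfigurationValue(const std::string& settingName,
                                std::string& settingValue)
    {
        if (settingName.empty())
        {
            std::fprintf(stderr,
                         "ReadConfigurationValue: empty setting name\n");
            return false;
        }

        settingValue = "example";  // real lookup logic would go here
        return true;
    }

    int main()
    {
        std::string value;
        if (ReadConfigurationValue("timeout", value))
        {
            std::printf("timeout = %s\n", value.c_str());
        }
        return 0;
    }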
- What is 'good design'?
'Design' could refer to many things, but often refers to
'functional design' or 'internal design'. Good internal
design is indicated by software code whose overall
structure is clear, understandable, easily modifiable, and
maintainable; is robust with sufficient error-handling and
status logging capability; and works correctly when implemented.
Good functional design is indicated by an application whose
functionality can be traced back to customer and end-user
requirements.
For programs that have a user interface, it's often a
good idea to assume that the end user will have little computer
knowledge and may not read a user manual or even the on-line
help; some common rules-of-thumb include:
- the program should act in a way that least surprises the user
- it should always be evident to the user what can be done next
and how to exit
- the program shouldn't let the users do something stupid without
warning them.
- What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
- SEI = 'Software Engineering Institute' at Carnegie-Mellon University;
initiated by the U.S. Defense Department to help improve software
development processes.
- CMM = 'Capability Maturity Model', now called the CMMI ('Capability
Maturity Model Integration'), developed by the SEI. It's a model
of 5 levels of process 'maturity' that determine effectiveness
in delivering quality software. It is geared to large organizations
such as large U.S. Defense Department contractors. However, many of
the QA processes involved are appropriate to any organization, and
if reasonably applied can be helpful. Organizations can receive
CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic
efforts required by individuals to successfully
complete projects. Few if any processes in place;
successes may not be repeatable.
Level 2 - software project tracking, requirements management,
realistic planning, and configuration management
processes are in place; successful practices can
be repeated.
Level 3 - standard software development and maintenance processes
are integrated throughout an organization; a Software
Engineering Process Group is in place to oversee
software processes, and training programs are used to
ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes,
and products. Project performance is predictable,
and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The
impact of new processes and technologies can be
predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations
were assessed. Of those, 27% were rated at Level 1, 39% at 2,
23% at 3, 6% at 4, and 5% at 5. (For ratings during the period
1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were
U.S. federal contractors or agencies. For those rated at
Level 1, the most problematical key process area was in
Software Quality Assurance.
- ISO = 'International Organization for Standardization' -
The ISO 9001:2000 standard (which replaces the previous standard
of 1994) concerns quality systems that are assessed
by outside auditors, and it applies to many kinds of production
and manufacturing organizations, not just software. It covers
documentation, design, development, production, testing, installation,
servicing, and other processes. The full set of standards consists of:
(a) Q9001-2000 - Quality Management Systems: Requirements;
(b) Q9000-2000 - Quality Management Systems: Fundamentals and
Vocabulary;
(c) Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements.
To be ISO 9001 certified, a third-party auditor assesses
an organization, and certification is typically good
for about 3 years, after which a complete reassessment
is required. Note that ISO certification does not necessarily
indicate quality products - it indicates only that documented
processes are followed.
Also see http://www.iso.ch/ for the latest information. In the
U.S. the standards can be purchased via the ASQ web site at
http://e-standards.asq.org/
- IEEE = 'Institute of Electrical and Electronics Engineers' - among
other things, creates standards such as 'IEEE Standard for Software
Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard
for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard
for Software Quality Assurance Plans' (IEEE/ANSI Standard 730),
and others.
- ANSI = 'American National Standards Institute', the primary industrial
standards body in the U.S.; publishes some software-related standards
in conjunction with the IEEE and ASQ (American Society for Quality).
- Other software development/IT management process assessment
methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT,
Bootstrap, ITIL, MOF, and CobiT.
- What is the 'software life cycle'?
The life cycle begins when an application is first conceived
and ends when it is no longer in use. It includes aspects such as
initial concept, requirements analysis, functional design,
internal design, documentation planning, test planning, coding,
document preparation, integration, testing, maintenance,
updates, retesting, phase-out, and other aspects.
- Will automated testing tools make testing easier?
- Possibly. For small projects, the time needed to learn
and implement them may not be worth it. For larger or
ongoing long-term projects, they can be valuable.
- A common type of automated tool is the 'record/playback' type.
For example, a tester could click through all combinations
of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and the results
logged by a tool. The 'recording' is typically in the form of
text based on a scripting language that is interpretable by the
testing tool. If new buttons are added, or some underlying code in
the application is changed, etc. the application might then be
retested by just 'playing back' the 'recorded' actions, and
comparing the logging results to check effects of the changes.
The problem with such tools is that if there are continual
changes to the system being tested, the 'recordings' may have to
be changed so much that it becomes very time-consuming to
continuously update the scripts. Additionally, interpretation and
analysis of results (screens, data, logs, etc.) can be a difficult
task. Note that there are record/playback tools for text-based
interfaces also, and for all types of platforms.
- Another common type of approach for automation of functional testing
is 'data-driven' or 'keyword-driven' automated testing, in which the
test drivers are separated from the data and/or actions utilized in testing
(an 'action' would be something like 'enter a value in a text box'). Test
drivers can be in the form of automated test tools or custom-written
testing software. The data and actions can be more easily maintained - such
as via a spreadsheet - since they are separate from the test drivers.
The test drivers 'read' the data/action information to perform specified
tests. This approach can enable more efficient control, development,
documentation, and maintenance of automated tests/test cases (a rough
sketch appears after this list).
- Other automated tools can include:
code analyzers - monitor code complexity, adherence to
standards, etc.
coverage analyzers - these tools check which parts of the
code have been exercised by a test, and may
be oriented to code statement coverage,
condition coverage, path coverage, etc.
memory analyzers - such as bounds-checkers and leak detectors.
load/performance test tools - for testing client/server
and web applications under various load
levels.
web test tools - to check that links are valid, HTML code
usage is correct, client-side and
server-side programs work, and a web
site's interactions are secure.
other tools - for test case management, documentation
management, bug reporting, and configuration
management.
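As a rough sketch of the 'data-driven' approach described above (all
names here are hypothetical, and in practice the rows would be read
from a spreadsheet or data file, with the 'action' driving the
application through a test tool or API):

    #include <cctype>
    #include <cstdio>
    #include <string>
    #include <vector>

    // One row of externally-maintained test data: an input, the
    // action to perform with it, and the expected result.
    struct TestRow {
        std::string input;
        std::string action;
        std::string expected;
    };

    // Hypothetical stand-in for driving the application under test.
    std::string runAction(const std::string& action,
                          const std::string& input)
    {
        if (action == "uppercase") {
            std::string out = input;
            for (char& c : out)
                c = static_cast<char>(
                        std::toupper(static_cast<unsigned char>(c)));
            return out;
        }
        return "<unknown action>";
    }

    int main()
    {
        // In a real data-driven setup these rows would be maintained
        // in a spreadsheet or file, separate from this test driver.
        std::vector<TestRow> rows = {
            {"abc", "uppercase", "ABC"},
            {"",    "uppercase", ""},
        };

        int failures = 0;
        for (const TestRow& row : rows) {
            std::string actual = runAction(row.action, row.input);
            if (actual != row.expected) {
                std::printf("FAIL: %s('%s') expected '%s', got '%s'\n",
                            row.action.c_str(), row.input.c_str(),
                            row.expected.c_str(), actual.c_str());
                ++failures;
            }
        }
        std::printf("%d failure(s)\n", failures);
        return failures == 0 ? 0 : 1;
    }

Adding or changing a test then means editing the data rows, not the
driver code.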
- What makes a good Software Test engineer?
A good test engineer has a 'test to break' attitude,
an ability to take the point of view of the customer, a strong
desire for quality, and an attention to detail. Tact and diplomacy
are useful in maintaining a cooperative relationship with developers,
and an ability to communicate with both technical (developers) and
non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides
a deeper understanding of the software development process,
gives the tester an appreciation for the developers' point
of view, and reduces the learning curve in automated test
tool programming. Judgement skills are needed to assess high-risk
areas of an application on which to focus testing efforts
when time is limited.
-
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA
engineer. Additionally, they must be able to understand
the entire software development process and how it can fit
into the business approach and goals of the organization.
Communication skills and the ability to understand various sides
of issues are important. In organizations in the early stages of
implementing QA processes, patience and diplomacy are
especially needed. An ability to find problems as well as
to see 'what's missing' is important for inspections
and reviews.
-
What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined) manager should:
- be familiar with the software development process
- be able to maintain enthusiasm of their team and promote a positive
atmosphere, despite what is a somewhat 'negative' process (e.g.,
looking for or preventing problems)
- be able to promote teamwork to increase productivity
- be able to promote cooperation between software, test, and QA engineers
- have the diplomatic skills needed to promote improvements in
QA processes
- have the ability to withstand pressures and say 'no' to other
managers when quality is insufficient or QA processes are not
being adhered to
- have people judgement skills for hiring and keeping skilled personnel
- be able to communicate with technical and non-technical people,
engineers, managers, and customers.
- be able to run meetings and keep them focused
- What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not
necessarily paper, may be embedded in code comments, etc.)
QA practices should be documented such that they are repeatable.
Specifications, designs, business rules, inspection
reports, configurations, code changes, test plans,
test cases, bug reports, user manuals, etc. should all
be documented in some form. There should ideally be a system for
easily finding and obtaining information and determining
what documentation will have a particular piece of information.
Change management for documentation should be used if
possible.
- What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems,
or failure, in a large, complex software project is to have
poorly documented requirements specifications. Requirements
are the details describing an application's
externally-perceived functionality and properties.
Requirements should be clear, complete, reasonably
detailed, cohesive, attainable, and testable.
A non-testable requirement would be, for
example, 'user-friendly' (too subjective). A testable
requirement would be something like 'the user must
enter their previously-assigned password to access the
application'. Determining and organizing requirements details
in a useful and efficient way can be a difficult
effort; different methods are available
depending on the particular project. Many
books are available that describe various
approaches to this task.
Care should be taken to involve ALL of a project's significant
'customers' in the requirements process. 'Customers' could be
in-house personnel or out, and could include end-users,
customer acceptance testers, customer contract officers,
customer management, future software maintenance engineers,
salespeople, etc. Anyone who could later derail the project
if their expectations aren't met should be included if
possible.
Organizations vary considerably in their handling of
requirements specifications. Ideally, the requirements
are spelled out in a document with statements such as
'The product shall.....'. 'Design' specifications should not
be confused with 'requirements'; design specifications
should be traceable back to the requirements.
In some organizations requirements may end up in
high level project plans, functional specification
documents, in design documents, or in other documents
at various levels of detail. No matter what they are
called, some type of documentation with detailed requirements
will be needed by testers in order to properly plan and
execute tests. Without such documentation, there will
be no clear-cut way to determine if a software
application is performing correctly.
'Agile' approaches such as XP require close interaction and
cooperation between programmers and customers/end-users
to iteratively develop requirements. The programmer uses 'test first'
development to first create automated unit testing code, which
essentially embodies the requirements. A minimal sketch of the
'test first' idea follows.
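In this hypothetical sketch (the requirement, names, and bare-assert
style are illustrative; XP teams would normally use a unit testing
framework), the tests are written, and fail, before the code beneath
them exists:

    #include <cassert>

    // Requirement embodied by the tests: the application shall add 8%
    // sales tax to a price, working in whole cents and rounding to
    // the nearest cent. (The rate and function name are hypothetical.)
    long priceWithTaxCents(long priceCents);

    int main()
    {
        // Written first; these fail until the function below is
        // implemented correctly.
        assert(priceWithTaxCents(10000) == 10800);  // $100.00 -> $108.00
        assert(priceWithTaxCents(0) == 0);
        assert(priceWithTaxCents(199) == 215);      // $1.99 -> $2.15
        return 0;
    }

    // Implemented afterward, just enough to make the tests pass.
    long priceWithTaxCents(long priceCents)
    {
        // Integer rounding: add half the divisor before dividing.
        return (priceCents * 108 + 50) / 100;
    }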
-
What steps are needed to develop and run software tests?
The following are some of the steps to consider:
- Obtain requirements, functional design, and internal design
specifications and other necessary documents
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities,
reporting requirements, required standards and processes
(such as release processes, change processes, etc.)
- Determine project context, relative to the existing quality
culture of the organization and business, and how it might impact
testing scope, approaches, and methods.
- Identify application's higher-risk aspects, set priorities,
and determine scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional,
system, load, usability tests, etc.
- Determine test environment requirements (hardware, software,
communications, etc.)
- Determine testware requirements (record/playback tools, coverage
analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor
requirements
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses,
error classes (a brief sketch follows this list)
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user
manuals/reference documents/configuration guides/installation
guides, set up test tracking processes, set up logging and
archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment,
and testware through life cycle
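As a brief sketch of the equivalence class and boundary value step
above, consider a hypothetical input field that accepts an integer
age from 18 to 65 inclusive; the valid/invalid classes and the values
on and around each boundary might be enumerated like this:

    #include <cstdio>

    // Hypothetical validation rule under test: ages 18..65 are valid.
    bool isValidAge(int age)
    {
        return age >= 18 && age <= 65;
    }

    int main()
    {
        // Equivalence classes: below range, within range, above range.
        // Boundary values: just outside, on, and just inside each edge.
        struct { int input; bool expected; } cases[] = {
            {17, false},  // lower boundary - 1 (invalid class)
            {18, true},   // lower boundary
            {19, true},   // lower boundary + 1
            {40, true},   // representative of the valid class
            {64, true},   // upper boundary - 1
            {65, true},   // upper boundary
            {66, false},  // upper boundary + 1 (invalid class)
        };
        for (const auto& c : cases) {
            bool actual = isValidAge(c.input);
            std::printf("%s: isValidAge(%d), expected %s\n",
                        actual == c.expected ? "PASS" : "FAIL",
                        c.input, c.expected ? "valid" : "invalid");
        }
        return 0;
    }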
-
What's a 'test plan'?
A software project test plan is a document that describes
the objectives, scope, approach, and focus of a software
testing effort. The process of preparing a test plan
is a useful way to think through the efforts needed to
validate the acceptability of a software product. The
completed document will help people outside the test
group understand the 'why' and 'how' of product validation.
It should be thorough enough to be useful but not so
thorough that no one outside the test group will read it.
The following are some of the items that might be
included in a test plan, depending on the particular project:
- Title
- Identification of software including version/release numbers
- Revision history of document including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design
documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and
personnel/contact-info/responsibilities
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type,
feature, functionality, process, system, module, etc.
as applicable
- Outline of data input equivalence classes, boundary value
analysis, error classes
- Test environment - hardware, operating systems,
other required software, data configurations, interfaces
to other systems
- Test environment validity analysis - differences between the
test and production systems and their impact on test validity.
- Test environment setup and configuration issues
- Software migration processes
- Software CM processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities,
and tools such as screen capture software, that will be used
to help describe and report bugs
- Discussion of any specialized software or hardware tools
that will be used by testers to help track the cause or
source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location
- Outside test organizations to be utilized and their
purpose, responsibilities, deliverables, contact persons,
and coordination issues
- Relevant proprietary, classified, security, and licensing issues.
- Open issues
- Appendix - glossary, acronyms, etc.
- What's a 'test case'?
- A test case is a document that describes an input, action,
or event and an expected response, to determine if a
feature of an application is working correctly. A test case
should contain particulars such as test case identifier,
test case name, objective, test conditions/setup, input data
requirements, steps, and expected results (a minimal sketch follows below).
- Note that the process of developing test cases can help find
problems in the requirements or design of an application,
since it requires completely thinking through the operation
of the application. For this reason, it's useful to prepare
test cases early in the development cycle if possible.
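A minimal sketch of the particulars listed above, expressed as a data
structure (the field set is illustrative only, not a standard):

    #include <string>
    #include <vector>

    // Illustrative test case record (not a standard format).
    struct TestCase {
        std::string id;              // test case identifier
        std::string name;
        std::string objective;
        std::string setup;           // test conditions / setup
        std::string inputData;       // input data requirements
        std::vector<std::string> steps;
        std::string expectedResult;
    };

    int main()
    {
        TestCase tc{
            "TC-001",
            "Valid login",
            "Verify access with a previously-assigned password",
            "User account 'demo' exists with password 'pass123'",
            "Username and password strings",
            {"Open the login screen",
             "Enter the username and password",
             "Press OK"},
            "The application's main screen is displayed"
        };
        return tc.steps.empty() ? 1 : 0;  // trivial use of the record
    }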
-
What should be done after a bug is found?
The bug needs to be communicated and assigned to
developers that can fix it. After the problem is resolved,
fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes
didn't create problems elsewhere. If a problem-tracking system
is in place, it should encapsulate these processes. A variety
of commercial problem-tracking/management software tools
are available. The following
are items to consider in the tracking process:
- Complete information such that developers can understand the
bug, get an idea of its severity, and reproduce it if necessary.
- Bug identifier (number, ID, etc.)
- Current bug status (e.g., 'Released for Retest', 'New', etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where
the bug occurred
- Environment specifics, system, platform, relevant hardware specifics
- Test case name/number/identifier
- One-line bug description
- Full bug description
- Description of steps needed to reproduce the bug
if not covered by a test case or if the developer doesn't
have easy access to the test case/test script/test tool
- Names and/or descriptions of file/data/messages/etc. used in test
- File excerpts/error messages/log file excerpts/screen shots/test
tool logs that would be helpful in finding the cause of the problem
- Severity estimate (a 5-level range such as 1-5 or
'critical'-to-'low' is common)
- Was the bug reproducible?
- Tester name
- Test date
- Bug reporting date
- Name of developer/group/organization the problem is assigned to
- Description of problem cause
- Description of fix
- Code section/file/module/class/method that was fixed
- Date of fix
- Application version that contains the fix
- Tester responsible for retest
- Retest date
- Retest results
- Regression testing requirements
- Tester responsible for regression tests
- Regression testing results
A reporting or tracking process should enable notification
of appropriate personnel at various stages. For instance,
testers need to know when retesting is needed, developers
need to know when bugs are found and how to get the needed
information, and reporting/summary capabilities are needed
for managers (a small sketch of this idea follows).
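As a small sketch of the notification idea just described (the
statuses and routing are hypothetical; real problem-tracking tools
define their own workflows):

    #include <cstdio>

    // Hypothetical bug life-cycle statuses.
    enum class BugStatus { New, Assigned, Fixed, ReleasedForRetest,
                           Reopened, Closed };

    // Who a tracking system might notify when a bug enters a status.
    const char* whoToNotify(BugStatus status)
    {
        switch (status) {
            case BugStatus::New:               return "triage / managers";
            case BugStatus::Assigned:          return "assigned developer";
            case BugStatus::Fixed:             return "tester responsible for retest";
            case BugStatus::ReleasedForRetest: return "tester responsible for retest";
            case BugStatus::Reopened:          return "assigned developer";
            case BugStatus::Closed:            return "reporting/summary for managers";
        }
        return "";
    }

    int main()
    {
        std::printf("On 'Fixed', notify: %s\n",
                    whoToNotify(BugStatus::Fixed));
        return 0;
    }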
- What is 'configuration management'?
Configuration management covers the processes used to control,
coordinate, and track: code, requirements, documentation,
problems, change requests, designs, tools/compilers/libraries/patches,
changes made to them, and who makes the changes.
-
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through
the process of reporting whatever bugs or blocking-type problems
initially show up, with the focus being on critical bugs. Since
this type of problem can severely affect schedules,
and indicates deeper problems in the software development
process (such as insufficient unit testing or insufficient
integration testing, poor design, improper build or release
procedures, etc.), managers should be notified and provided
with some documentation as evidence of the problem.
-
How can it be known when to stop testing?
This can be difficult to determine. Many modern software
applications are so complex, and run in such an interdependent
environment, that complete testing can never be done. Common
factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
-
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an
application, every possible combination of events, every
dependency, or everything that could go wrong, risk analysis
is appropriate to most software development projects. This requires
judgement skills, common sense, and experience. (If warranted,
formal methods are also available.) Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development
cycle?
- Which parts of the code are most complex, and thus most subject
to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large
maintenance expenses?
- Which parts of the requirements and design are unclear or
poorly thought out?
- What do the developers think are the highest-risk aspects of
the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer
service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to
time-required ratio?
-
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of
the project. However, if extensive testing is still not justified,
risk analysis is again needed and the same considerations as
described previously in 'What if there isn't enough time for thorough testing?'
apply. The tester might then do ad hoc testing, or write
up a limited test plan based on the risk analysis.
-
What can be done if requirements are changing continuously?
A common problem and a major headache.
- Work with the project's stakeholders early on to understand
how requirements might change so that alternate test plans and
strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows
for some adaptability so that later changes do not require
redoing the application from scratch.
- If the code is well-commented and well-documented this makes
changes easier for the developers.
- Use rapid prototyping whenever possible to help customers
feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra
time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an
application, while using the original requirements for the
'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements
into the project, while moving more difficult new requirements
into future versions of the application.
- Be sure that customers and management understand
the scheduling impacts, inherent risks, and costs of
significant requirements changes. Then let management or
the customers (not the developers or testers) decide
if the changes are warranted - after all, that's their job.
- Balance the effort put into setting up automated testing
with the expected effort required to re-do them to deal
with changes.
- Try to design some flexibility into automated test
scripts; a data-driven sketch appears after this list.
- Focus initial automated testing on application aspects that
are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes
to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done;
the best bet might be to minimize the detail in the test cases,
or set up only higher-level generic-type test plans)
- Focus less on detailed test plans and test cases and more on
ad hoc testing (with an understanding of the added risk that
this entails).
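One way to build the flexibility mentioned above into automated
tests is to keep expected values in a data table rather than
hard-coding them throughout scripts, so that a requirements change
touches one table instead of many tests. A minimal sketch using
Python's built-in unittest module; the discount() function and its
rates are hypothetical:

    import unittest

    def discount(order_total: float) -> float:
        # Hypothetical business rule that is likely to change
        # as requirements change.
        return 0.10 if order_total >= 100 else 0.0

    # Expected results live in one data table; if the requirement
    # changes, only this table needs updating, not the test logic.
    CASES = [
        (50.0, 0.0),
        (100.0, 0.10),
        (250.0, 0.10),
    ]

    class DiscountTest(unittest.TestCase):
        def test_discount_table(self):
            for total, expected in CASES:
                with self.subTest(total=total):
                    self.assertEqual(discount(total), expected)

    if __name__ == "__main__":
        unittest.main()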
-
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application
has significant unexpected or hidden functionality, and it would
indicate deeper problems in the software development process.
If the functionality isn't necessary to the purpose of the
application, it should be removed, as it may have unknown impacts
or dependencies that were not taken into account by the designer
or the customer. If not removed, design information will be
needed to determine added testing needs or regression testing needs.
Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only
affects areas such as minor improvements in the user interface, for
example, it may not be a significant risk.
-
How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using
consensus to reach agreement on processes, and adjusting and
experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will
lessen the need for problem detection, panics and burn-out
will decrease, and there will be improved focus and less
wasted effort. At the same time, attempts should be made to
keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and
reporting, minimize time required in meetings, and promote
training as part of the QA process. However, no one - especially
talented technical types - likes rules or bureaucracy, and
in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development
will be needed, but less time will be required for late-night
bug-fixing and calming of irate customers.
-
What if an organization is growing so fast that fixed QA
processes are impossible?
This is a common problem in the software industry, especially
in new technology areas. There is no easy solution in this
situation, other than:
- Hire good people
- Management should 'ruthlessly prioritize' quality issues
and maintain focus on the customer
- Everyone in the organization should be clear on what
'quality' means to the customer
-
How does a client/server environment affect testing?
Client/server applications can be quite complex due to
the multiple dependencies among clients, data communications,
hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the
focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining
client/server application limitations and capabilities.
There are commercial tools to assist with such testing.
-
How can World Wide Web sites be tested?
Web sites are essentially client/server applications -
with web servers and 'browser' clients.
Consideration should be given to the interactions between
html pages, TCP/IP communications, Internet connections,
firewalls, applications that run in web pages (such
as applets, javascript, plug-in applications), and
applications that run on the server side (such as cgi
scripts, database interfaces, logging applications,
dynamic page generators, asp, etc.). Additionally, there are
a wide variety of servers and browsers, various
versions of each, small but sometimes significant
differences between them, variations in connection
speeds, rapidly changing technologies, and multiple
standards and protocols. The end result is that
testing for web sites can become a major ongoing effort.
Other considerations might include:
- What are the expected loads on the server (e.g., number of
hits per unit time), and what kind of performance is
required under such loads (such as web server response time,
database query response times)? What kinds of tools will
be needed for performance testing (such as web load testing tools,
other tools already in house that can be adapted, web robot
downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using?
What kind of connection speeds will they be using? Are they intra-
organization (thus with likely high connection speeds and similar
browsers) or Internet-wide (thus with a wide variety of connection
speeds and browser types)?
- What kind of performance is expected on the client side (e.g.,
how fast should pages appear, how fast should animations, applets, etc.
load and run)?
- Will down time for server and content maintenance/upgrades be
allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will
be required and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be?
And how does that affect backup system or redundant connection
requirements and testing?
- What processes will be required to manage updates to the web site's
content, and what are the requirements for maintaining, tracking,
and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What
variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance
and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and
updated? How often? (A minimal link-checking sketch follows this list.)
- Can testing be done on the production system, or will a
separate test system be required? How are browser caching,
variations in browser option settings, dial-up connection
variabilities, and real-world internet 'traffic congestion'
problems to be accounted for in testing?
- How extensive or customized are the server logging and
reporting requirements; are they considered an integral part of
the system and do they require testing?
- How are cgi programs, applets, javascripts, ActiveX components,
etc. to be maintained, tracked, controlled, and tested?
- Pages should be 3-5 screens max unless content is tightly
focused on a single topic. If larger, provide internal links
within the page.
- The page layouts and design elements should be consistent throughout
a site, so that it's clear to the user that they're still within
a site.
- Pages should be as browser-independent as possible, or pages should be
provided or generated based on the browser-type.
- All pages should have links external to the page; there should be
no dead-end pages.
- The page owner, revision date, and a link to a contact person or
organization should be included on each page.
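As a concrete example of the link-validation question above, a crude
link checker can be scripted with only the Python standard library;
the URLs below are placeholders, and a real checker would also crawl
the site's pages and parse their links:

    import urllib.error
    import urllib.request

    # Placeholder URLs; a real checker would extract these from the
    # site's pages and run on a regular schedule.
    urls = [
        "https://example.com/",
        "https://example.com/no-such-page",
    ]

    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(f"{resp.status}  {url}")
        except urllib.error.HTTPError as e:
            print(f"{e.code}  {url}  (broken link?)")
        except urllib.error.URLError as e:
            print(f"ERR  {url}  ({e.reason})")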
-
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier
to trace from code to internal design to functional design
to requirements. While there will be little effect on black
box testing (where an understanding of the internal design
of the application is unnecessary), white-box testing
can be oriented to the application's objects. If the
application was well-designed this can simplify test design.
-
What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach
for small teams on risk-prone projects with unstable requirements.
It was created by Kent Beck who described the approach in
his book 'Extreme Programming Explained'.
Testing ('extreme testing') is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code
first - before writing the application code. Test code is under
source control along with the rest of the code. Customers are expected
to be an integral part of the project team and to help develop
scenarios for acceptance/black box testing. Acceptance tests
are preferably automated, and are modified and rerun for each of
the frequent development iterations. QA and test personnel are also
required to be an integral part of the project team. Detailed
requirements documentation is not used, and frequent re-scheduling,
re-estimating, and re-prioritizing are expected. More information
on XP and other 'agile' software development approaches
(Scrum, Crystal, etc.) is widely available on the web.
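To make 'write the test first' concrete, here is a minimal sketch of
the XP rhythm using Python's unittest module (the add() function and
its test are invented for illustration): the test is written first
and fails until just enough application code is written to pass it.

    import unittest

    # Step 1 (test first): written before the code it tests exists;
    # running it at that point fails, which is expected in XP.
    class AddTest(unittest.TestCase):
        def test_add(self):
            self.assertEqual(add(2, 3), 5)
            self.assertEqual(add(-1, 1), 0)

    # Step 2: write just enough application code to make the test pass.
    def add(a, b):
        return a + b

    if __name__ == "__main__":
        unittest.main()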
-
Why do you recommend that we test during the design phase?
Because testing during the design phase can prevent defects later on. We recommend verifying three things:
- Verify the design is good, efficient, compact, testable and maintainable.
- Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, starting state of each module and how to guarantee the state of each module).
- Verify the design provides for enough memory and I/O devices, and a fast enough runtime, for the final product.
-
What is the ratio of developers and testers?
This ratio is not a fixed one, but depends on what phase of the software development life cycle the project is in. When a product is in the design phase, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the product is in the testing phase, just before alpha testing begins, this ratio tends to be 1:1, or even 1:2, in favor of testers.
-
What is the general testing process?
The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.
-
What is software quality assurance?
Software Quality Assurance, when Rob Davis does it,
is oriented to *prevention*. It involves the entire software
development process. Prevention is monitoring and improving the process, making
sure any agreed-upon standards and procedures are followed and ensuring
problems are found and dealt with.
Software Testing, when performed by Rob Davis, is
oriented to *detection*. Testing involves the operation of a system or
application under controlled conditions and evaluating the results.
Organizations vary considerably in how they assign responsibility for QA and
testing. Sometimes they're the combined responsibility of one group or
individual. Also common are project teams, which include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization's size and business structure.
Rob Davis can provide testing and/or software QA. This document
details some aspects of how he provides software testing/QA services.
-
What is software quality?
Software quality varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability. See the ISO 9126 quality standard for more information on this subject.
-
Process and procedures - why follow them?
Detailed and well-written processes and procedures ensure that the correct steps are being executed to facilitate the successful completion of a task. They also ensure that a process is repeatable.
Once Rob Davis has learned and reviewed the customer's business processes and
procedures, he will follow them. He will also recommend improvements and/or
additions.
-
What is the role of documentation in QA?
Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system
for easily finding and obtaining documents, and for determining which document will have a particular piece of information. Use documentation change management, if possible.
-
What is supposed to be in a document?
All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want.
Lastly, with standards and templates, information will not be accidentally
omitted from a document.
Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.
-
What is documentation change management?
Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts. Rob Davis can easily adapt to your software tool and process needs.
-
What are the different levels of testing?
Rob Davis has expertise in testing at all of the testing levels listed below. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.
-
What is black box testing?
Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.
-
What is white box testing?
White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.
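A small illustration of the distinction, using a hypothetical
shipping_fee() function: the black box checks are derived purely
from a stated requirement, while the white box checks are chosen by
looking at the code to cover both branches and the boundary between
them.

    def shipping_fee(weight_kg: float) -> float:
        # Hypothetical code under test, with two branches.
        if weight_kg <= 1.0:
            return 5.0
        return 5.0 + 2.0 * (weight_kg - 1.0)

    # Black box: derived from the requirement "light parcels cost
    # 5.00; heavier parcels add 2.00 per extra kg" -- no knowledge
    # of the code.
    assert shipping_fee(0.5) == 5.0
    assert shipping_fee(3.0) == 9.0

    # White box: chosen from the code itself to cover both branches,
    # including the boundary at exactly 1.0 kg.
    assert shipping_fee(1.0) == 5.0
    assert shipping_fee(1.01) > 5.0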
-
What testing approaches can you tell me about?
Each of the following represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.
-
What is closed box testing?
Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior, examining neither the code itself nor the "inner workings" of the software.
-
What is open box testing?
Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
-
What is clear box testing?
Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
-
What is unit testing?
Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
-
How do you perform integration testing?
First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
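A minimal sketch of an integration test exercising the interface
between two hypothetical components (a repository and a report
generator); unit tests would cover each component alone, while this
test checks that they work together:

    import unittest

    # Two hypothetical components whose interface we want to exercise.
    class OrderRepository:
        def __init__(self):
            self._orders = []

        def add(self, amount: float):
            self._orders.append(amount)

        def all(self):
            return list(self._orders)

    def sales_report(repo: OrderRepository) -> str:
        orders = repo.all()
        return f"orders={len(orders)} total={sum(orders):.2f}"

    class IntegrationTest(unittest.TestCase):
        def test_report_reads_repository(self):
            repo = OrderRepository()
            repo.add(10.0)
            repo.add(2.5)
            # Exercises the interface between the two components.
            self.assertEqual(sales_report(repo), "orders=2 total=12.50")

    if __name__ == "__main__":
        unittest.main()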
-
What is parallel/audit testing?
Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
-
How do you execute tests?
Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects.
Checkpoint meetings are held throughout the execution phase. Checkpoint meetings are held daily, if required, to address and discuss testing issues, status and activities.
-
What is functional testing?
Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.
-
What is usability testing?
Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.
-
What is user friendly software?
A computer program is user friendly when it is designed with ease of use as one of its primary objectives.
-
What is a user friendly document?
A document is user friendly when it is designed with ease of use as one of its primary objectives.
-
What is incremental integration testing?
Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately, before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.
-
What is incremental testing?
Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.
-
What is system testing?
System testing is black box testing, performed by the test team; at the start of system testing the complete system is configured in a controlled environment.
The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a "simulated real life" test environment, and tests all functions of the system that are required in real life.
Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.
System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.
-
What is end-to-end testing?
Similar to system testing, the *macro* end of the test scale is testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.
-
What is regression testing?
The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
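A sketch of the baseline idea: expected outputs captured from a
previous, known-good release are re-checked against the current
build. The function and the baseline values here are hypothetical:

    # Hypothetical baseline: inputs and expected outputs captured
    # from a previous, known-good release.
    BASELINE = {1: 2, 5: 10, 12: 24}

    def feature_under_test(x: int) -> int:
        return x * 2  # current build's behavior

    discrepancies = {
        x: (expected, feature_under_test(x))
        for x, expected in BASELINE.items()
        if feature_under_test(x) != expected
    }

    if discrepancies:
        print("Possible regressions:", discrepancies)
    else:
        print("Baseline matches; no regressions detected.")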
-
What is sanity testing?
Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to
demonstrate connectivity to the database, application servers, printers, etc.
-
What should be done after a bug is found?
When a bug is found, it needs to be communicated and assigned to developers who can fix it.
After the problem is resolved, fixes should be re-tested.
Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn't create other problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these determinations.
A variety of commercial, problem-tracking/management software tools are available.
These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
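As an illustration of the determinations such an entry should
capture, a minimal tracking record might look like the sketch below;
the field names are assumptions, since real trackers define their
own schemas:

    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        # Field names are illustrative, not from any particular tool.
        summary: str
        severity: str                  # e.g. "critical", "major", "minor"
        steps_to_reproduce: list = field(default_factory=list)
        environment: str = ""          # hardware/OS/build where seen
        assigned_to: str = ""
        status: str = "open"           # open -> fixed -> retested -> closed

    bug = BugReport(
        summary="Rate increase not applied to existing accounts",
        severity="critical",
        steps_to_reproduce=["Open account", "Apply rate increase",
                            "Check new balance"],
        environment="build 2.3.1, staging server",
    )
    print(bug.status, "-", bug.summary)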
-
What is installation testing?
Installation testing is testing of full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items (performed by the application's system administration staff), the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.
Sources:
DEVFYI - Developer Resource - FYI
TechGuider