| `ASSERT_THROW(`_statement_, _exception\_type_`);` | `EXPECT_THROW(`_statement_, _exception\_type_`);` | _statement_ throws an exception of the given type |
| `ASSERT_ANY_THROW(`_statement_`);` | `EXPECT_ANY_THROW(`_statement_`);` | _statement_ throws an exception of any type |
In the above, _predn_ is an _n_-ary predicate function or functor, where
_val1_, _val2_, ..., and _valn_ are its arguments. The assertion succeeds
if the predicate returns `true` when applied to the given arguments, and fails
otherwise. When the assertion fails, it prints the value of each argument. In
either case, the arguments are evaluated exactly once.
Here's an example. Given
```
// Returns true iff m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }
const int a = 3;
const int b = 4;
const int c = 10;
```
the assertion `EXPECT_PRED2(MutuallyPrime, a, b);` will succeed, while the
assertion `EXPECT_PRED2(MutuallyPrime, b, c);` will fail with the message
<pre>
!MutuallyPrime(b, c) is false, where<br>
b is 4<br>
c is 10<br>
</pre>
**Notes:**
1. If you see a compiler error "no matching function to call" when using `ASSERT_PRED*` or `EXPECT_PRED*`, please see [this FAQ](FAQ.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert_predn-how-do-i-fix-it) for how to resolve it.
1. Currently we only provide predicate assertions of arity <= 5. If you need a higher-arity assertion, let us know.
| `ASSERT_NEAR(`_val1, val2, abs\_error_`);` | `EXPECT_NEAR(`_val1, val2, abs\_error_`);` | the difference between _val1_ and _val2_ doesn't exceed the given absolute error |
_Availability_: Linux, Windows, Mac.
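For instance, a minimal sketch (the `ComputeArea()` helper is hypothetical):
```
// Passes if ComputeArea() is within 0.5 of the expected value 78.5.
EXPECT_NEAR(78.5, ComputeArea(), 0.5);
```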
### Floating-Point Predicate-Format Functions ###
Some floating-point operations are useful, but not that often used. In order
to avoid an explosion of new macros, we provide them as predicate-format
functions that can be used in predicate assertion macros (e.g.
`EXPECT_PRED_FORMAT2`).
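For example, a minimal sketch using the predicate-format functions `::testing::FloatLE` and `::testing::DoubleLE`, which verify that the first argument is less than, or almost equal to, the second:
```
// Verifies that val1 is less than, or almost equal to, val2.
EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
```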
| `ASSERT_DEATH(`_statement, regex_`);` | `EXPECT_DEATH(`_statement, regex_`);` | _statement_ crashes with the given error |
| `ASSERT_DEATH_IF_SUPPORTED(`_statement, regex_`);` | `EXPECT_DEATH_IF_SUPPORTED(`_statement, regex_`);` | if death tests are supported, verifies that _statement_ crashes with the given error; otherwise verifies nothing |
| `ASSERT_EXIT(`_statement, predicate, regex_`);` | `EXPECT_EXIT(`_statement, predicate, regex_`);` |_statement_ exits with the given error and its exit code matches _predicate_ |
where _statement_ is a statement that is expected to cause the process to
die, _predicate_ is a function or function object that evaluates an integer
exit status, and _regex_ is a regular expression that the stderr output of
_statement_ is expected to match. Note that _statement_ can be _any valid
statement_ (including _compound statement_) and doesn't have to be an
expression.
As usual, the `ASSERT` variants abort the current test function, while the
`EXPECT` variants do not.
**Note:** We use the word "crash" here to mean that the process
terminates with a _non-zero_ exit status code. There are two
possibilities: either the process has called `exit()` or `_exit()`
with a non-zero value, or it may be killed by a signal.
This means that if _statement_ terminates the process with a 0 exit
code, it is _not_ considered a crash by `EXPECT_DEATH`. Use
`EXPECT_EXIT` instead if this is the case, or if you want to restrict
the exit code more precisely.
A predicate here must accept an `int` and return a `bool`. The death test
succeeds only if the predicate returns `true`. Google Test defines a few
predicates that handle the most common cases:
```
::testing::ExitedWithCode(exit_code)
```
This expression is `true` if the program exited normally with the given exit
code.
```
::testing::KilledBySignal(signal_number) // Not available on Windows.
```
This expression is `true` if the program was killed by the given signal.
The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
that verifies the process' exit code is non-zero.
Note that a death test only cares about three things:
1. does _statement_ abort or exit the process?
1. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status satisfy _predicate_? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) is the exit status non-zero? And
1. does the stderr output match _regex_?
In particular, if _statement_ generates an `ASSERT_*` or `EXPECT_*` failure, it will **not** cause the death test to fail, as Google Test assertions don't abort the process.
To write a death test, simply use one of the above macros inside your test
function. For example,
```
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({ int n = 5; Foo(&n); }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillMyself) {
  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```
verifies that:

* calling `Foo(5)` causes the process to die with the given error message,
* calling `NormalExit()` causes the process to print `"Success"` to stderr and exit with exit code 0, and
* calling `KillMyself()` kills the process with signal `SIGKILL`.
The test function body may contain other assertions and statements as well, if
necessary.
_Important:_ We strongly recommend that you follow the convention of naming your
test case (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The `Death Tests And Threads` section below
explains why.
If a test fixture class is shared by normal tests and death tests, you
can use typedef to introduce an alias for the fixture class and avoid
duplicating its code:
```
class FooTest : public ::testing::Test { ... };
typedef FooTest FooDeathTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```
_Availability:_ Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac (the latter three are supported since v1.3.0). `(ASSERT|EXPECT)_DEATH_IF_SUPPORTED` are new in v1.4.0.
## Regular Expression Syntax ##
On POSIX systems (e.g. Linux, Cygwin, and Mac), Google Test uses the
POSIX extended regular expression syntax in death tests. To learn about this syntax, you may want to read this [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).
On Windows, Google Test uses its own simple regular expression
implementation. It lacks many features you can find in POSIX extended
regular expressions. For example, we don't support union (`"x|y"`),
grouping (`"(xy)"`), brackets (`"[xy]"`), and repetition count
(`"x{5,7}"`), among others. Below is what we do support (Letter `A` denotes a
literal character, period (`.`), or a single `\\` escape sequence; `x`
and `y` denote regular expressions.):
| `c` | matches any literal character `c` |
|:----|:----------------------------------|
| `\\d` | matches any decimal digit |
| `\\D` | matches any character that's not a decimal digit |
| `\\f` | matches `\f` |
| `\\n` | matches `\n` |
| `\\r` | matches `\r` |
| `\\s` | matches any ASCII whitespace, including `\n` |
| `\\S` | matches any character that's not a whitespace |
| `\\t` | matches `\t` |
| `\\v` | matches `\v` |
| `\\w` | matches any letter, `_`, or decimal digit |
| `\\W` | matches any character that `\\w` doesn't match |
| `\\c` | matches any literal character `c`, which must be a punctuation character |
| `\\.` | matches the `.` character |
| `.` | matches any single character except `\n` |
| `A?` | matches 0 or 1 occurrences of `A` |
| `A*` | matches 0 or many occurrences of `A` |
| `A+` | matches 1 or many occurrences of `A` |
| `^` | matches the beginning of a string (not that of each line) |
| `$` | matches the end of a string (not that of each line) |
| `xy` | matches `x` followed by `y` |
To help you determine which capability is available on your system,
Google Test defines macro `GTEST_USES_POSIX_RE=1` when it uses POSIX
extended regular expressions, or `GTEST_USES_SIMPLE_RE=1` when it uses
the simple version. If you want your death tests to work in both
cases, you can either `#if` on these macros or use the more limited
syntax only.
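For example, a minimal sketch (the `Crash()` function is hypothetical; the first branch uses alternation, which only the POSIX syntax supports):
```
#if GTEST_USES_POSIX_RE
  EXPECT_DEATH(Crash(), "Invalid (foo|bar) argument");
#else
  // The simple syntax has no alternation, so fall back to . and *.
  EXPECT_DEATH(Crash(), "Invalid .* argument");
#endif
```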
## How It Works ##
Under the hood, `ASSERT_EXIT()` spawns a new process and executes the
death test statement in that process. The details of how precisely
that happens depend on the platform and the variable
`::testing::GTEST_FLAG(death_test_style)` (which is initialized from the
command-line flag `--gtest_death_test_style`).
* On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the child, after which:
* If the variable's value is `"fast"`, the death test statement is immediately executed.
* If the variable's value is `"threadsafe"`, the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.
* On Windows, the child is spawned using the `CreateProcess()` API, and re-executes the binary to cause just the single death test under consideration to be run - much like the `threadsafe` mode on POSIX.
Other values for the variable are illegal and will cause the death test to
fail. Currently, the flag's default value is `"fast"`. However, we reserve the
right to change it in the future. Therefore, your tests should not depend on
this.
In either case, the parent process waits for the child process to complete, and checks that
1. the child's exit status satisfies the predicate, and
1. the child's stderr matches the regular expression.
If the death test statement runs to completion without dying, the child
process will nonetheless terminate, and the assertion fails.
## Death Tests And Threads ##
The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before main is ever reached. Once threads have been created,
it may be difficult or impossible to clean them up.
Google Test has three features intended to raise awareness of threading issues.
1. A warning is emitted if multiple threads are running when a death test is encountered.
1. Test cases with a name ending in "DeathTest" are run before all other tests.
1. It uses `clone()` instead of `fork()` to spawn the child process on Linux (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely to cause the child to hang when the parent process has multiple threads.
It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.
## Death Test Styles ##
The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.
We suggest using the faster, default "fast" style unless your test has specific
problems with it.
You can choose a particular style of death tests by setting the flag
programmatically, as shown in the sketch below.
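A minimal sketch using the `::testing::GTEST_FLAG(death_test_style)` variable mentioned above (Google Test saves and restores its flags around each test, so you can also override the style inside an individual test to affect only that test):
```
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Use the thread-safe style for all death tests in this binary.
  ::testing::GTEST_FLAG(death_test_style) = "threadsafe";
  return RUN_ALL_TESTS();
}
```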
## Caveats ##

The _statement_ argument of `ASSERT_EXIT()` can be any valid C++ statement.
If it leaves the current function via a `return` statement or by throwing an exception,
the death test is considered to have failed. Some Google Test macros may return
from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid them in _statement_.
Since _statement_ runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc) it causes will _not_ be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check as the parent process will never see the
memory reclaimed. To solve this problem, you can
1. try not to free memory in a death test;
1. free the memory again in the parent process; or
1. do not use the heap checker in your program.
Due to an implementation detail, you cannot place multiple death test
assertions on the same line; otherwise, compilation will fail with an unobvious
error message.
Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.
# Using Assertions in Sub-routines #
## Adding Traces to Assertions ##
If a test sub-routine is called from several places, when an assertion
inside it fails, it can be hard to tell which invocation of the
sub-routine the failure is from. You can alleviate this problem using
extra logging or custom failure messages, but that usually clutters up
your tests. A better solution is to use the `SCOPED_TRACE` macro:
| `SCOPED_TRACE(`_message_`);` |
|:-----------------------------|
where _message_ can be anything streamable to `std::ostream`. This
macro will cause the current file name, line number, and the given
message to be added to every failure message. The effect will be
undone when control leaves the current lexical scope.
For example,
```
10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```
could result in messages like these:
```
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
Actual: 2
Trace:
path/to/foo_test.cc:17: A
path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
Actual: 3
```
Without the trace, it would've been difficult to know which invocation
of `Sub1()` the two failures come from respectively. (You could add an
extra message to each assertion in `Sub1()` to indicate the value of
`n`, but that's tedious.)
Some tips on using `SCOPED_TRACE`:
1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the beginning of a sub-routine, instead of at each call site.
1. When calling sub-routines inside a loop, make the loop iterator part of the message in `SCOPED_TRACE` so that you can tell which iteration the failure came from (see the sketch after this list).
1. Sometimes the line number of the trace point is enough for identifying the particular invocation of a sub-routine. In this case, you don't have to choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
1. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer scope. In this case, all active trace points will be included in the failure messages, in the reverse order in which they are encountered.
1. The trace dump is clickable in Emacs' compilation buffer - hit return on a line number and you'll be taken to that line in the source file!
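A minimal sketch of tip 2, assuming a hypothetical `Sub1()` helper and a `test_inputs` vector:
```
// Inside a TEST body; test_inputs is a hypothetical std::vector<int>.
for (size_t i = 0; i < test_inputs.size(); ++i) {
  // Names this iteration; ::testing::Message is streamable to std::ostream.
  SCOPED_TRACE(::testing::Message() << "iteration " << i);
  Sub1(test_inputs[i]);  // Any failure inside Sub1() now reports the iteration.
}
```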
_Availability:_ Linux, Windows, Mac.
## Propagating Fatal Failures ##
A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:
```
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);
  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();
  // The intended behavior is for the fatal failure
  // in Subroutine() to abort the entire test.
  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3; // Segfault!
}
```
Since we don't use exceptions, it is technically impossible to
implement the intended behavior here. To alleviate this, Google Test
provides two solutions. You could use either the
`(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions or the
`HasFatalFailure()` function. They are described in the following two
subsections.
### Asserting on Subroutines ###
As shown above, if your test calls a subroutine that has an `ASSERT_*`
failure in it, the test will continue after the subroutine
returns. This may not be what you want.
Often people want fatal failures to propagate like exceptions. For that,
Google Test offers the following macros:
| `ASSERT_NO_FATAL_FAILURE(`_statement_`);` | `EXPECT_NO_FATAL_FAILURE(`_statement_`);` | _statement_ doesn't generate any new fatal failures in the current thread. |
Only failures in the thread that executes the assertion are checked to
determine the result of this type of assertion. If _statement_
creates new threads, failures in these threads are ignored.
Examples:
```
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```
_Availability:_ Linux, Windows, Mac. Assertions from multiple threads
are currently not supported.
### Checking for Failures in the Current Test ###
`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This
allows functions to catch fatal failures in a sub-routine and return
early.
```
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```
The typical usage, which basically simulates the behavior of a thrown
exception, is:
```
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure())
    return;

  // The following won't be executed.
  ...
}
```
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:
```
if (::testing::Test::HasFatalFailure())
return;
```
Similarly, `HasNonfatalFailure()` returns `true` if the current test
has at least one non-fatal failure, and `HasFailure()` returns `true`
if the current test has at least one failure of either kind.
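For example, a minimal sketch (the `DoSomething()` and `DoSomethingElse()` helpers are hypothetical):
```
TEST(FooTest, ChecksForAnyFailure) {
  DoSomething();             // May add fatal or non-fatal failures.
  if (HasFailure()) return;  // True if any assertion above failed.
  EXPECT_TRUE(DoSomethingElse());  // Runs only if everything above passed.
}
```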
_Availability:_ Linux, Windows, Mac. `HasNonfatalFailure()` and
`HasFailure()` are available since version 1.4.0.
# Logging Additional Information #
In your test code, you can call `RecordProperty("key", value)` to log
additional information, where `value` can be either a string or an `int`. The _last_ value recorded for a key will be emitted to the XML output if you specify one.
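For example, a minimal sketch (the `WidgetUsageTest` fixture and the `Compute*Usage()` helpers are hypothetical):
```
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  // Each key/value pair becomes an attribute of this test's <testcase> element.
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```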
* `RecordProperty()` is a static member of the `Test` class. Therefore it needs to be prefixed with `::testing::Test::` if used outside of the `TEST` body and the test fixture class.
* `key` must be a valid XML attribute name, and cannot conflict with the ones already used by Google Test (`name`, `status`, `time`, `classname`, `type_param`, and `value_param`).
* Calling `RecordProperty()` outside of the lifespan of a test is allowed. If it's called outside of a test but between a test case's `SetUpTestCase()` and `TearDownTestCase()` methods, it will be attributed to the XML element for the test case. If it's called outside of all test cases (e.g. in a test environment), it will be attributed to the top-level XML element.
_Availability_: Linux, Windows, Mac.
# Sharing Resources Between Tests in the Same Test Case #
Google Test creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.
If the tests don't change the resource, there's no harm in them sharing a
single resource copy. So, in addition to per-test set-up/tear-down, Google Test
also supports per-test-case set-up/tear-down. To use it:
1. In your test fixture class (say `FooTest` ), define as `static` some member variables to hold the shared resources.
1. In the same test fixture class, define a `static void SetUpTestCase()` function (remember not to spell it as **`SetupTestCase`** with a small `u`!) to set up the shared resources and a `static void TearDownTestCase()` function to tear them down.
That's it! Google Test automatically calls `SetUpTestCase()` before running the
_first test_ in the `FooTest` test case (i.e. before creating the first
`FooTest` object), and calls `TearDownTestCase()` after running the _last test_
in it (i.e. after deleting the last `FooTest` object). In between, the tests
can use the shared resources.
Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the
state of any shared resource, or, if they do modify the state, they must
restore the state to its original value before passing control to the next
test.
Here's an example of per-test-case set-up and tear-down:
```
class FooTest : public ::testing::Test {
 protected:
  // Per-test-case set-up.
  // Called before the first test in this test case.
  // Can be omitted if not needed.
  static void SetUpTestCase() {
    shared_resource_ = new ...;
  }

  // Per-test-case tear-down.
  // Called after the last test in this test case.
  // Can be omitted if not needed.
  static void TearDownTestCase() {
    delete shared_resource_;
    shared_resource_ = NULL;
  }

  // You can define per-test set-up and tear-down logic as usual.
  virtual void SetUp() { ... }
  virtual void TearDown() { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = NULL;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```
_Availability:_ Linux, Windows, Mac.
# Global Set-Up and Tear-Down #
Just as you can do set-up and tear-down at the test level and the test case
level, you can also do it at the test program level. Here's how.
First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set up and tear down:
```
class Environment {
 public:
  virtual ~Environment() {}

  // Override this to define how to set up the environment.
  virtual void SetUp() {}

  // Override this to define how to tear down the environment.
  virtual void TearDown() {}
};
```
Then, you register an instance of your environment class with Google Test by
calling the `::testing::AddGlobalTestEnvironment()` function:
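Its declaration looks roughly like this (Google Test takes ownership of the registered object, so don't delete it yourself):
```
Environment* AddGlobalTestEnvironment(Environment* env);
```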
We strongly recommend that you write your own `main()` and call
`AddGlobalTestEnvironment()` there, rather than relying on the initialization
of global variables to register environments: that makes the code harder to
read and may cause problems when you register multiple environments from
different translation units and the environments have dependencies among them
(remember that the compiler doesn't guarantee the order in which global
variables from different translation units are initialized).
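A minimal sketch of such a `main()`, assuming a hypothetical `FooEnvironment` subclass of the `Environment` class shown above:
```
#include "gtest/gtest.h"

class FooEnvironment : public ::testing::Environment {
 public:
  virtual void SetUp() { /* acquire shared state here */ }
  virtual void TearDown() { /* release it here */ }
};

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Google Test takes ownership of the environment object.
  ::testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```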
_Availability:_ Linux, Windows, Mac.
# Value Parameterized Tests #
_Value-parameterized tests_ allow you to test your code with different
parameters without writing multiple copies of the same test.
Suppose you write a test for your code and then realize that your code is affected by the presence of a Boolean command line flag.
```
TEST(MyCodeTest, TestFoo) {
  // Code that tests foo().
}
```
Usually people factor their test code into a function with a Boolean parameter in such situations. The function sets the flag, then executes the testing code.
```
void TestFooHelper(bool flag_value) {
  flag = flag_value;
  // Code that tests foo().
}

TEST(MyCodeTest, TestFoo) {
  TestFooHelper(false);
  TestFooHelper(true);
}
```
But this setup has serious drawbacks. First, when a test assertion fails, it becomes unclear which value of the parameter caused it to fail. You can stream a clarifying message into your `EXPECT`/`ASSERT` statements, but you'll have to do it for all of them. Second, you have to add one such helper function per test. What if you have ten tests? Twenty? A hundred?
Value-parameterized tests will let you write your test only once and then easily instantiate and run it with an arbitrary number of parameter values.
Here are some other situations when value-parameterized tests come in handy:
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it!
## How to Write Value-Parameterized Tests ##
To write value-parameterized tests, first you should define a fixture
class. It must be derived from both `::testing::Test` and
`::testing::WithParamInterface<T>` (the latter is a pure interface),
where `T` is the type of your parameter values. For convenience, you
can just derive the fixture class from `::testing::TestWithParam<T>`,
which itself is derived from both `::testing::Test` and
`::testing::WithParamInterface<T>`. `T` can be any copyable type. If
it's a raw pointer, you are responsible for managing the lifetime of
the pointed-to values.
```
class FooTest : public ::testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public ::testing::Test {
  ...
};

class BarTest : public BaseTest,
                public ::testing::WithParamInterface<const char*> {
  ...
};
```
Then, use the `TEST_P` macro to define as many test patterns using
this fixture as you want. The `_P` suffix is for "parameterized" or
"pattern", whichever you prefer to think.
```
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```
Finally, you can use `INSTANTIATE_TEST_CASE_P` to instantiate the test
case with any set of parameters you want. Google Test defines a number of
functions for generating test parameters. They return what we call
(surprise!) _parameter generators_. Here is a summary of them,
which are all in the `testing` namespace:
| `Values(v1, v2, ..., vN)` | Yields the values `{v1, v2, ..., vN}`. |
| `Range(begin, end[, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
| `ValuesIn(container)` and `ValuesIn(begin, end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`. `container`, `begin`, and `end` can be expressions whose values are determined at run time. |
| `Bool()` | Yields sequence `{false, true}`. |
| `Combine(g1, g2, ..., gN)` | Yields all combinations (the Cartesian product for the math savvy) of the values generated by the `N` generators. This is only available if your system provides the `<tr1/tuple>` header. If you are sure your system does, and Google Test disagrees, you can override it by defining `GTEST_HAS_TR1_TUPLE=1`. See comments in [include/gtest/internal/gtest-port.h](../include/gtest/internal/gtest-port.h) for more information. |
For more details, see the comments at the definitions of these functions in the [source code](../include/gtest/gtest-param-test.h).
The following statement will instantiate tests from the `FooTest` test case,
each with the parameter values `"meeny"`, `"miny"`, and `"moe"`:
```
INSTANTIATE_TEST_CASE_P(InstantiationName,
                        FooTest,
                        ::testing::Values("meeny", "miny", "moe"));
```
To distinguish different instances of the pattern (yes, you can
instantiate it more than once), the first argument to
`INSTANTIATE_TEST_CASE_P` is a prefix that will be added to the actual
test case name. Remember to pick unique prefixes for different
instantiations. The tests from the instantiation above will have these
names:
* `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
* `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
* `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
* `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
* `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
* `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`
You can use these names in [--gtest\_filter](#running-a-subset-of-the-tests).
You can also instantiate all tests from `FooTest` again under a different
prefix, with a different set of parameter values.
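For example, a minimal sketch that draws the values from a C-style array using `ValuesIn` (the names `Pets` and `pets` are illustrative):
```
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_CASE_P(Pets, FooTest, ::testing::ValuesIn(pets));
```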
Sometimes you may want to define value-parameterized tests in a library and let other people instantiate them later; such tests are called _abstract tests_. To define abstract tests, you should organize your code like this:
1. Put the definition of the parameterized test fixture class (e.g. `FooTest`) in a header file, say `foo_param_test.h`. Think of this as _declaring_ your abstract tests.
1. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes `foo_param_test.h`. Think of this as _implementing_ your abstract tests.
Once they are defined, you can instantiate them by including
`foo_param_test.h`, invoking `INSTANTIATE_TEST_CASE_P()`, and linking
with `foo_param_test.cc`. You can instantiate the same abstract test
case multiple times, possibly in different source files.
# Typed Tests #
Suppose you have multiple implementations of the same interface and
want to make sure that all of them satisfy some common requirements.
Or, you may have defined several types that are supposed to conform to
the same "concept" and you want to verify it. In both cases, you want
the same test logic repeated for different types.
While you can write one `TEST` or `TEST_F` for each type you want to
test (and you may even factor the test logic into a function template
that you invoke from the `TEST`), it's tedious and doesn't scale:
if you want _m_ tests over _n_ types, you'll end up writing _m\*n_
`TEST`s.
_Typed tests_ allow you to repeat the same test logic over a list of
types. You only need to write the test logic once, although you must
know the type list when writing typed tests. Here's how you do it:
First, define a fixture class template. It should be parameterized
by a type. Remember to derive it from `::testing::Test`:
```
template <typename T>
class FooTest : public ::testing::Test {
 public:
  ...
  typedef std::list<T> List;
  static T shared_;
  T value_;
};
```
Next, associate a list of types with the test case, which will be repeated for
each type in the list.
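A minimal sketch of the next steps, using the `FooTest` template above (the names `MyTypes` and `DoesBlah` are illustrative):
```
typedef ::testing::Types<char, int, unsigned int> MyTypes;
TYPED_TEST_CASE(FooTest, MyTypes);

TYPED_TEST(FooTest, DoesBlah) {
  // Refer to the type parameter as TypeParam, and access members of the
  // fixture template via this->.
  TypeParam n = 0;
  this->value_ = n;
  EXPECT_EQ(0, this->value_);
}
```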
## Repeating the Tests ##
The `--gtest_repeat` flag lets you repeat all (or selected) tests in a program many times, which is handy for hunting flaky tests. For example:

| `$ foo_test --gtest_repeat=-1` | A negative count means repeating forever. |
| `$ foo_test --gtest_repeat=1000 --gtest_break_on_failure` | Repeat foo\_test 1000 times, stopping at the first failure. This is especially useful when running under a debugger: when the test fails, it will drop into the debugger and you can then inspect variables and stacks. |
| `$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar` | Repeat the tests whose name matches the filter 1000 times. |
If your test program contains global set-up/tear-down code registered
using `AddGlobalTestEnvironment()`, it will be repeated in each
iteration as well, as the flakiness may be in it. You can also specify
the repeat count by setting the `GTEST_REPEAT` environment variable.
_Availability:_ Linux, Windows, Mac.
## Shuffling the Tests ##
You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random
order. This helps to reveal bad dependencies between tests.
By default, Google Test uses a random seed calculated from the current
time. Therefore you'll get a different order every time. The console
output includes the random seed value, such that you can reproduce an
order-related test failure later. To specify the random seed
explicitly, use the `--gtest_random_seed=SEED` flag (or set the
`GTEST_RANDOM_SEED` environment variable), where `SEED` is an integer
between 0 and 99999. The seed value 0 is special: it tells Google Test
to do the default behavior of calculating the seed from the current
time.
If you combine this with `--gtest_repeat=N`, Google Test will pick a
different random seed and re-shuffle the tests in each iteration.
_Availability:_ Linux, Windows, Mac; since v1.4.0.
## Controlling Test Output ##
This section teaches how to tweak the way test results are reported.
### Colored Terminal Output ###
Google Test can use colors in its terminal output to make it easier to spot
the separation between tests, and whether tests passed.
You can set the `GTEST_COLOR` environment variable or set the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let Google Test decide. When the value is `auto`, Google
Test will use colors if and only if the output goes to a terminal and (on
non-Windows platforms) the `TERM` environment variable is set to `xterm` or
`xterm-color`.
_Availability:_ Linux, Windows, Mac.
### Suppressing the Elapsed Time ###
By default, Google Test prints the time it takes to run each test. To
suppress that, run the test program with the `--gtest_print_time=0`
command line flag. Setting the `GTEST_PRINT_TIME` environment
variable to `0` has the same effect.
_Availability:_ Linux, Windows, Mac. (In Google Test 1.3.0 and lower,
the default behavior is that the elapsed time is **not** printed.)
### Generating an XML Report ###
Google Test can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can
help you identify slow tests.
To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:_path_to_output_file_"`, which will
create the file at the given location. You can also just use the string
`"xml"`, in which case the output can be found in the `test_detail.xml` file in
the current directory.
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), Google Test will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), Google Test will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.
The report format is based on the
`junitreport` Ant task and can be parsed by popular continuous build
systems like [Hudson](https://hudson.dev.java.net/). Since that format
was originally intended for Java, a little interpretation is required
to make it apply to Google Test tests, as shown here:
```
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```
* The root `<testsuites>` element corresponds to the entire test program.
* `<testsuite>` elements correspond to Google Test test cases.
* `<testcase>` elements correspond to Google Test test functions.
* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how many test functions the Google Test program or test case contains, while the `failures` attribute tells how many of them failed.
* The `time` attribute expresses the duration of the test, test case, or entire test program in milliseconds.
* Each `<failure>` element corresponds to a single failed Google Test assertion.
* Some JUnit concepts don't apply to Google Test, yet we have to conform to the DTD. Therefore you'll see some dummy elements and attributes in the report. You can safely ignore these parts.
_Availability:_ Linux, Windows, Mac.
## Controlling How Failures Are Reported ##
### Turning Assertion Failures into Break-Points ###
When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. Google Test's _break-on-failure_ mode supports this behavior.
To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0` . Alternatively, you can use the `--gtest_break_on_failure`
command line flag.
_Availability:_ Linux, Windows, Mac.
### Disabling Catching Test-Thrown Exceptions ###
Google Test can be used either with or without exceptions enabled. If
a test throws a C++ exception or (on Windows) a structured exception
(SEH), by default Google Test catches it, reports it as a test
failure, and continues with the next test method. This maximizes the
coverage of a test run. Also, on Windows an uncaught exception will
cause a pop-up window, so catching the exceptions allows you to run
the tests automatically.
When debugging the test failures, however, you may instead want the
exceptions to be handled by the debugger, such that you can examine
the call stack when an exception is thrown. To achieve that, set the
`GTEST_CATCH_EXCEPTIONS` environment variable to `0`, or use the
`--gtest_catch_exceptions=0` flag when running the tests.
_Availability:_ Linux, Windows, Mac.
### Letting Another Testing Framework Drive ###
If you work on a project that has already been using another testing
framework and is not ready to completely switch to Google Test yet,
you can get much of Google Test's benefit by using its assertions in
your existing tests. Just change your `main()` function to look
like:
```
#include "gtest/gtest.h"
int main(int argc, char** argv) {
::testing::GTEST_FLAG(throw_on_failure) = true;
// Important: Google Test must be initialized.
::testing::InitGoogleTest(&argc, argv);
... whatever your existing testing framework requires ...
}
```
With that, you can use Google Test assertions in addition to the
native assertions your testing framework provides, for example:
```
void TestFooDoesBar() {
  Foo foo;
  EXPECT_LE(foo.Bar(1), 100);     // A Google Test assertion.
  CPPUNIT_ASSERT(foo.IsEmpty());  // A native assertion.
}
```
If a Google Test assertion fails, it will print an error message and
throw an exception, which will be treated as a failure by your host
testing framework. If you compile your code with exceptions disabled,
a failed Google Test assertion will instead exit your program with a
non-zero code, which will also signal a test failure to your test
runner.
If you don't write `::testing::GTEST_FLAG(throw_on_failure) = true;` in
your `main()`, you can alternatively enable this feature by specifying
the `--gtest_throw_on_failure` flag on the command-line or setting the
`GTEST_THROW_ON_FAILURE` environment variable to a non-zero value.
Death tests are _not_ supported when another test framework is used to organize tests.
_Availability:_ Linux, Windows, Mac; since v1.3.0.
## Distributing Test Functions to Multiple Machines ##
If you have more than one machine you can use to run a test program,
you might want to run the test functions in parallel and get the
result faster. We call this technique _sharding_, where each machine
is called a _shard_.
Google Test is compatible with test sharding. To take advantage of
this feature, your test runner (not part of Google Test) needs to do
the following:
1. Allocate a number of machines (shards) to run the tests.
1. On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the total number of shards. It must be the same for all shards.
1. On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index of the shard. Different shards must be assigned different indices, which must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
1. Run the same test program on all shards. When Google Test sees the above two environment variables, it will select a subset of the test functions to run. Across all shards, each test function in the program will be run exactly once.
1. Wait for all shards to finish, then collect and report the results.
Your project may have tests that were written without Google Test and
thus don't understand this protocol. In order for your test runner to
figure out which test programs support sharding, it can set the environment
variable `GTEST_SHARD_STATUS_FILE` to a non-existent file path. If a
test program supports sharding, it will create this file to
acknowledge the fact (the actual contents of the file are not
important at this time; although we may stick some useful information
in it in the future.); otherwise it will not create it.
Here's an example to make it clear. Suppose you have a test program
`foo_test` that contains the following 5 test functions:
```
TEST(A, V)
TEST(A, W)
TEST(B, X)
TEST(B, Y)
TEST(B, Z)
```
and you have 3 machines at your disposal. To run the test functions in
parallel, you would set `GTEST_TOTAL_SHARDS` to 3 on all machines, and
set `GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively.
Then you would run the same `foo_test` on each machine.
Google Test reserves the right to change how the work is distributed
across the shards, but here's one possible scenario:
* Machine #0 runs `A.V` and `B.X`.
* Machine #1 runs `A.W` and `B.Y`.
* Machine #2 runs `B.Z`.
_Availability:_ Linux, Windows, Mac; since version 1.3.0.
# Fusing Google Test Source Files #
Google Test's implementation consists of ~30 files (excluding its own
tests). Sometimes you may want them to be packaged up in two files (a
`.h` and a `.cc`) instead, such that you can easily copy them to a new
machine and start hacking there. For this we provide an experimental
Python script `fuse_gtest_files.py` in the `scripts/` directory (since release 1.3.0).
Assuming you have Python 2.4 or above installed on your machine, just
go to that directory and run
```
python fuse_gtest_files.py OUTPUT_DIR
```
and you should see an `OUTPUT_DIR` directory being created with files
`gtest/gtest.h` and `gtest/gtest-all.cc` in it. These files contain
everything you need to use Google Test. Just copy them to anywhere
you want and you are ready to write tests. You can use the
[scripts/test/Makefile](../scripts/test/Makefile)
file as an example on how to compile your tests against them.
# Where to Go from Here #
Congratulations! You've now learned more advanced Google Test tools and are
ready to tackle more complex testing tasks. If you want to dive even deeper, you
can read the [Frequently-Asked Questions](FAQ.md).