# httperf

httperf is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance.

The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload, support for the HTTP/1.1 and SSL protocols, and its extensibility to new workload generators and performance measurements.
## Building httperf

This release of httperf uses the standard GNU configuration mechanism. The following steps can be used to build it. In this example, SRCDIR refers to the httperf source directory; the last step may have to be executed as "root".

    $ mkdir build
    $ cd build
    $ SRCDIR/configure
    $ make
    $ make install

NOTE: If building source code exported from the CVS repository rather than the official gzipped source tar file, the following commands must be executed before the preceding ones:

    $ cd SRCDIR/
    $ autoreconf -i
Since httperf 0.9.1, the idleconn program is no longer built by default. Using the configure option --enable-idleconn will instruct the build system to compile the tool.

To build httperf with debug support turned on, invoke configure with option "--enable-debug".

By default, the httperf binary is installed in /usr/local/bin/httperf and the man-page is installed in /usr/local/man/man1/httperf. You can change these defaults by passing appropriate options to the "configure" script. See "configure --help" for details.
This release of httperf has preliminary SSL support. To enable it, you need to have OpenSSL (http://www.openssl.org/) already installed on your system. The configure script assumes that the OpenSSL header files and libraries can be found in standard locations (e.g., /usr/include and /usr/lib). If the files are in a different place, you need to tell the configure script where to find them. This can be done by setting the environment variables CPPFLAGS and LDFLAGS before invoking "configure". For example, if the SSL header files are installed in /usr/local/ssl/include and the SSL libraries are installed in /usr/local/ssl/lib, then the environment variables should be set like this:

    CPPFLAGS="-I/usr/local/ssl/include"
    LDFLAGS="-L/usr/local/ssl/lib"

With these settings in place, "configure" can be invoked as usual and SSL should now be found. If SSL has been detected, the following three checks should be answered with "yes":

    checking for main in -lcrypto... yes
    checking for SSL_version in -lssl... yes
     :
    checking for openssl/ssl.h... yes

Note: you may have to delete "config.cache" to ensure that "configure" re-evaluates those checks after changing the settings of the environment variables.
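Putting this together, a sketch of such a configure invocation (the OpenSSL paths are only illustrative) could look like:

    $ CPPFLAGS="-I/usr/local/ssl/include" LDFLAGS="-L/usr/local/ssl/lib" SRCDIR/configure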
WARNING: httperf uses a deterministic seed for the random number generator used by SSL. Thus, the SSL encrypted data is likely to be easy to crack. In other words, do not assume that SSL data transferred when using httperf is (well) encrypted!
This release of httperf has been tested under the following operating systems:

- HP-UX 11i (64-bit PA-RISC and IA-64)
- Red Hat Enterprise Linux AS (AMD64 and IA-64)
- SUSE Linux 10.1 (i386)
- openSUSE 10.2 (i386)
- OpenBSD 4.0 (i386)
- FreeBSD 6.0 (AMD64)
- Solaris 8 (UltraSparc 64-bit)

It should be straightforward to build httperf on other platforms; please report any build problems to the mailing list along with the platform specifications.
## Mailing list

A mailing list has been set up to encourage discussions among the httperf user community. This list is managed by majordomo. To subscribe to the list, send a mail containing the body:

    subscribe httperf

to majordomo@linux.hpl.hp.com. To post an article to the list, send it directly to httperf@linux.hpl.hp.com.
## Running httperf

IMPORTANT: It is crucial to run just one copy of httperf per client machine. httperf sucks up all available CPU time on a machine. It is therefore important not to run any other (CPU-intensive) tasks on a client machine while httperf is running. httperf is a CPU hog to ensure that it can generate the desired workload with good accuracy, so do not try to change this without fully understanding what the issues are.
### Examples

The simplest way to invoke httperf is with a command line of the form:

> httperf --server wailua --port 6800

This command results in httperf attempting to make one request for URL http://wailua:6800/. After the reply is received, performance statistics will be printed and the client exits (the statistics are explained below).

A list of all available options can be obtained by specifying the --help option (all option names can be abbreviated as long as they remain unambiguous).
A more realistic test case might be to issue 100 HTTP requests at a rate of 10 requests per second. This can be achieved by additionally specifying the --num-conns and --rate options. When specifying the --rate option, it's generally a good idea to also specify a timeout value using the --timeout option. In the example below, a timeout of one second is specified (the ramification of this option will be explained later):

> httperf --server wailua --port 6800 --num-conns 100 --rate 10 --timeout 1

The performance statistics printed by httperf at the end of the test might look like this:

    Total: connections 100 requests 100 replies 100 test-duration 9.905 s
    Connection rate: 10.1 conn/s (99.1 ms/conn, <=1 concurrent connections)
    Connection time [ms]: min 4.6 avg 5.6 max 19.9 median 4.5 stddev 2.0
    Connection time [ms]: connect 1.4
    Connection length [replies/conn]: 1.000
    Request rate: 10.1 req/s (99.1 ms/req)
    Request size [B]: 57.0
    Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
    Reply time [ms]: response 4.1 transfer 0.0
    Reply size [B]: header 219.0 content 204.0 footer 0.0 (total 423.0)
    Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
    CPU time [s]: user 2.71 system 7.08 (user 27.4% system 71.5% total 98.8%)
    Net I/O: 4.7 KB/s (0.0*10^6 bps)
    Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
    Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
There are six groups of statistics: overall results ("Total"), connection related results ("Connection"), results relating to the issuing of HTTP requests ("Request"), results relating to the replies received from the server ("Reply"), miscellaneous results relating to the CPU time and network bandwidth used, and, finally, a summary of errors encountered ("Errors"). Let's discuss each in turn:
  123. ## "Total" Results
  124. The "Total" line summarizes how many TCP connections were initiated by
  125. the client, how many requests it sent, how many replies it received,
  126. and what the total test duration was. The line below shows that 100
  127. connections were initiated, 100 requests were performed and 100
  128. replies were received. It also shows that total test-duration was
  129. 9.905 seconds meaning that the average request rate was almost exactly
  130. 10 request per second.
  131. Total: connections 100 requests 100 replies 100 test-duration 9.905 s
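(100 requests divided by the 9.905 s test duration works out to about 10.1 requests per second, which matches the "Connection rate" and "Request rate" lines discussed below.)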
  132. ## "Connection" Results
  133. These results convey information related to the TCP connections that
  134. are used to communicate with the web server.
  135. Specifically, the line below show that new connections were initiated
  136. at a rate of 10.1 connections per second. This rate corresponds to a
  137. period of 99.1 milliseconds per connection. Finally, the last number
  138. shows that at most one connection was open to the server at any given
  139. time.
  140. Connection rate: 10.1 conn/s (99.1 ms/conn, <=1 concurrent connections)
The next line in the output gives lifetime statistics for successful connections. The lifetime of a connection is the time between when a TCP connection was initiated and when the connection was closed. A connection is considered successful if it had at least one request that resulted in a reply from the server. The line shown below indicates that the minimum ("min") connection lifetime was 4.6 milliseconds, the average ("avg") lifetime was 5.6 milliseconds, the maximum ("max") was 19.9 milliseconds, the median ("median") lifetime was 4.5 milliseconds, and that the standard deviation of the lifetimes was 2.0 milliseconds.

    Connection time [ms]: min 4.6 avg 5.6 max 19.9 median 4.5 stddev 2.0
To compute the median time, httperf collects a histogram of connection lifetimes. The granularity of this histogram is currently 1 millisecond and the maximum connection lifetime that can be accommodated by the histogram is 100 seconds (these numbers can be changed by editing macros BIN_WIDTH and MAX_LIFETIME in stat/basic.c). This implies that the granularity of the median time is 1 millisecond and that at least 50% of the lifetime samples must have a lifetime of less than 100 seconds.
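(With the default 1 millisecond bin width and 100 second maximum, this histogram spans 100,000 bins; the median is then, presumably, read off as the bin at which the cumulative sample count first reaches half of the total, which is why its resolution is limited to the bin width.)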
The next statistic in this section is the average time it took to establish a TCP connection to the server (all successful TCP connection establishments are counted, even connections that may have failed eventually). The line below shows that, on average, it took 1.4 milliseconds to establish a connection.

    Connection time [ms]: connect 1.4
The final line in this section gives the average number of replies that were received per connection. With regular HTTP/1.0, this value is at most 1.0 (when there are no failures), but with HTTP Keep-Alives or HTTP/1.1 persistent connections, this value can be arbitrarily high, indicating that the same connection was used to receive multiple responses.

    Connection length [replies/conn]: 1.000
  173. ## "Request" Results
  174. The first line in the "Request"-related results give the rate at which
  175. HTTP requests were issued and the period-length that the rate
  176. corresponds to. In the example below, the request rate was 10.1
  177. requests per second, which corresponds to 99.1 milliseconds per
  178. request.
  179. Request rate: 10.1 req/s (99.1 ms/req)
As long as no persistent connections are employed, the "Request" results are typically very similar or identical to the "Connection" results. However, when persistent connections are used, several requests can be issued on a single connection, in which case the results would be different.
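For example, a run along the following lines is a sketch of how one might exercise persistent connections, assuming the server supports them; it uses httperf's --num-calls option (not discussed above) to issue several calls per connection:

> httperf --server wailua --port 6800 --num-conns 100 --num-calls 10 --rate 10 --timeout 1

With ten calls per connection, the "Connection length" would approach 10 replies per connection and the request rate would be roughly ten times the connection rate (assuming the server keeps the connections open).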
The next line gives the average size of the HTTP request in bytes. In the line shown below, the average request size was 57 bytes.

    Request size [B]: 57.0
  188. ## "Reply" Results
  189. For simple measurements, the section with the "Reply" results is
  190. probably the most interesting one. The first line gives statistics on
  191. the reply rate:
  192. Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
The line above indicates that the minimum ("min"), average ("avg"), and maximum ("max") reply rate was ten replies per second. Given these numbers, the standard deviation is, of course, zero. The last number shows that only one reply rate sample was acquired. The present version of httperf collects one rate sample about once every five seconds. To obtain a meaningful standard deviation, it is recommended to run each test long enough so that at least thirty samples are obtained; this would correspond to a test duration of at least 150 seconds, or two and a half minutes.
The next line gives information on how long it took for the server to respond and how long it took to receive the reply. The line below shows that it took 4.1 milliseconds between sending the first byte of the request and receiving the first byte of the reply. The time to "transfer", or read, the reply was too short to be measured, so it shows up as zero (as we'll see below, the entire reply fit into a single TCP segment and that's why the transfer time was measured as zero).

    Reply time [ms]: response 4.1 transfer 0.0
Next follow some statistics on the size of the reply; all numbers are reported in bytes. Specifically, the average length of reply headers, the average length of the content, and the average length of reply footers are given (HTTP/1.1 uses footers to realize the "chunked" transfer encoding). For convenience, the average total number of bytes in the replies is also given. In the example below, the average header length ("header") was 219 bytes, the average content length ("content") was 204 bytes, and there were no footers ("footer"), yielding a total reply length of 423 bytes on average.

    Reply size [B]: header 219.0 content 204.0 footer 0.0 (total 423.0)
The final piece in this section is a histogram of the status codes received in the replies. The example below shows that all 100 replies were "successful" replies as they contained a status code of 200 (presumably):

    Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
## Miscellaneous Results

This section starts with a summary of the CPU time the client consumed. The line below shows that 2.71 seconds were spent executing in user mode ("user"), 7.08 seconds were spent executing in system mode ("system") and that this corresponds to 27.4% user mode execution and 71.5% system execution. The total utilization was almost exactly 100%, which is expected given that httperf is a CPU hog:

    CPU time [s]: user 2.71 system 7.08 (user 27.4% system 71.5% total 98.8%)
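(2.71 s of user time plus 7.08 s of system time is 9.79 s of CPU time over the 9.905 s test duration, i.e., about 98.8% total utilization.)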
Note that any time the total CPU utilization is significantly less than 100%, some other processes must have been running on the client machine while httperf was executing. This makes it likely that the results are "polluted" and the test should be rerun.
The next line gives the average network throughput in kilobytes per second (where a kilobyte is 1024 bytes) and in megabits per second (where a megabit is 10^6 bits). The line below shows an average network bandwidth of about 4.7 kilobytes per second. The megabit per second number is zero due to rounding errors.

    Net I/O: 4.7 KB/s (0.0*10^6 bps)
The network bandwidth is computed from the number of bytes sent and received on TCP connections. This means that it accounts for the network payload only (i.e., it doesn't account for protocol headers) and does not take into account retransmissions that may occur at the TCP level.
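As a rough cross-check against the numbers above: 100 requests of 57 bytes plus 100 replies of 423 bytes is 48,000 bytes transferred in 9.905 seconds, i.e., about 4,846 bytes per second, or roughly 4.7 KB/s.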
  249. ## "Errors"
  250. The final section contains statistics on the errors that occurred
  251. during the test. The "total" figure shows the total number of errors
  252. that occurred. The two lines below show that in our example run there
  253. were no errors:
  254. Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
  255. Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
  256. The meaning of each error is described below:
total: The sum of all following error counts.

client-timo: Each time a request is made to the server, a watchdog timer is started. If no (partial) response is received by the time the watchdog timer expires, httperf times out that request and increments this error counter. This is the most common error when driving a server into overload.

socket-timo: The number of times a TCP connection failed with a socket-level timeout (ETIMEDOUT).

connrefused: The number of times a TCP connection attempt failed with a "connection refused by server" error (ECONNREFUSED).

connreset: The number of times a TCP connection failed due to a reset (close) by the server.

fd-unavail: The number of times the httperf client was out of file descriptors. Whenever this count is bigger than zero, the test results are meaningless because the client was overloaded (see the discussion on setting --timeout below).

addrunavail: The number of times the client was out of TCP port numbers (EADDRNOTAVAIL). This error should never occur. If it does, the results should be discarded.

ftab-full: The number of times the system's file descriptor table was full. Again, this error should never occur. If it does, the results should be discarded.

other: The number of times other errors occurred. Whenever this occurs, it is necessary to track down the actual error reason. This can be done by compiling httperf with debug support and specifying option --debug 1.
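For example, a sketch of how one might track down such errors is to rebuild with debug support and rerun the same test with debugging output enabled (options as described earlier in this document):

    $ SRCDIR/configure --enable-debug
    $ make
    $ make install

> httperf --server wailua --port 6800 --num-conns 100 --rate 10 --timeout 1 --debug 1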
## Selecting appropriate timeout values

Since the client machine has only a limited set of resources available, it cannot sustain arbitrarily high HTTP request rates. One limit is that there are only roughly 60,000 TCP port numbers that can be in use at any given time. Since, on HP-UX, it takes one minute for a TCP connection to be fully closed (leave the TIME_WAIT state), the maximum rate a client can sustain is about 1,000 requests per second.
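(Roughly 60,000 usable port numbers divided by the 60 seconds each connection spends in TIME_WAIT gives about 1,000 new connections per second.)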
The actual sustainable rate is typically lower than this because before running out of TCP ports, a client is likely to run out of file descriptors (one file descriptor is required per open TCP connection). By default, HP-UX 10.20 allows 1024 file descriptors per process.
Without a watchdog timer, httperf could potentially quickly use up all available file descriptors, at which point it could not induce any new load on the server (this would primarily happen when the server is overloaded). To avoid this problem, httperf requires that the web server respond within the time specified by option --timeout. If it does not respond within that time, the client considers the connection to be "dead" and closes it (and increases the "client-timo" error count). The only exception to this rule is that after sending a request, httperf allows the server to take some additional time before it starts responding (to accommodate HTTP requests that take a long time to complete on the server). This additional time is called the "server think time" and can be specified by option --think-timeout. By default, this additional think time is zero, so by default the server has to be able to respond within the time allowed by the --timeout option.
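As an illustrative sketch, a run that gives the server an extra two seconds of think time on top of the one-second watchdog timeout (the specific values are arbitrary) might look like:

> httperf --server wailua --port 6800 --num-conns 100 --rate 10 --timeout 1 --think-timeout 2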
In practice, we found that with a --timeout value of 1 second, an HP 9000/735 machine running HP-UX 10.20 can sustain a rate of about 700 connections per second before it starts to run out of file descriptors (the exact rate depends, of course, on a number of factors). To achieve web server loads bigger than that, it is necessary to employ several independent machines, each running one copy of httperf. A timeout of one second effectively means that "slow" connections will typically time out before TCP even gets a chance to retransmit (the initial retransmission timeout is on the order of 3 seconds). This is usually OK, except that one should keep in mind that it has the effect of truncating the connection lifetime distribution.