HTTP Working Group               J. Gettys, Digital Equipment Corporation
INTERNET-DRAFT            A. Freier, Netscape Communications Corporation
Expires September 26, 1997                                 March 26, 1997


                       HTTP Connection Management

                   draft-ietf-http-connection-00.txt

Status of This Memo

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).

Distribution of this document is unlimited. Please send comments to
the HTTP working group at "http-wg@cuckoo.hpl.hp.com". Discussions
of the working group are archived at
"http://www.ics.uci.edu/pub/ietf/http/". General discussions about
HTTP and the applications which use HTTP should take place on the
"www-talk@w3.org" mailing list.

1. Abstract

The HTTP/1.1 specification (RFC 2068) is silent about various details
of TCP connection management when using persistent connections. This
document examines some of the implementation issues discussed during
HTTP/1.1's design and introduces a few new requirements on HTTP/1.1
implementations, learned from implementation experience and not fully
understood when RFC 2068 was issued. This is an initial draft for
working group comment, and we expect further drafts.
2. Table of Contents

1. Abstract
2. Table of Contents
3. Key Words
4. Connection Management for Large Scale HTTP Systems
5. Resource Usage (Who is going to pay?)
6. Go to the Head of the Line
7. The Race is On
8. Closing Half of the Connection
9. Capture Effect
10. Security Considerations
11. Requirements on HTTP/1.1 Implementations
12. References
13. Acknowledgements
14. Authors' Addresses

3. Key Words

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC xxxx. [Bradner]

4. Connection Management for Large Scale HTTP Systems

Recent development of popular protocols (such as HTTP, LDAP, ...)
has demonstrated that the standards and engineering communities have
not yet come to grips with the concept of connection management. For
instance, HTTP/1.0 [HTTP/1.0] uses a TCP connection to carry exactly
one request/response pair. That model is beautifully simple, but its
behavior is far from optimal.

This document focuses on HTTP/1.1 implementations, but the
conclusions drawn here may be applicable to other protocols as well.

The HTTP/1.1 Proposed Standard [HTTP/1.1] specification is silent on
when, or even if, the connection should be closed (implementation
experience was desired before the specification was frozen on this
topic). So HTTP has moved from a model that closed the connection
after every request/response to one that might never close. Neither
of these two extremes deals with "connection management" in any
workable sense.

5. Resource Usage (Who is going to pay?)

The Internet is all about scale: scale of users, scale of servers,
scale over time, scale of traffic. For many of these attributes,
clients must cooperate with servers.

Clients of a network service are unlikely to communicate with more
than a few servers (a few tens). Considering the power of today's
desktop machines, maintaining that many idle connections does not
appear overly burdensome, particularly when you consider that the
client is the active party and does not really have to pay attention
to a connection unless it is expecting some response.

Servers will find connections to be critical resources and will be
forced to implement some algorithm to shed existing connections to
make room for new ones. Since this is an area not treated by the
protocol, one might expect a variety of "interesting" efforts.

Maintaining an idle connection is almost entirely a local issue.
However, if that local issue is too burdensome, it can easily become
a network issue. A server, being passive, must always have a read
pending on any open connection. Some implementations of the multi-
wait mechanisms tend to bog down as the number of connections climbs
into the hundreds, though operating system implementations can scale
this into the thousands, tens of thousands, or even beyond. Even
where the operating system can theoretically support such use, it is
much less clear that server implementations can scale to so many
simultaneous clients. Implementations might be forced to use fairly
bizarre mechanisms, which could lead to server instability, and then
perhaps service outages, which are indeed network issues. And
despite any heroic efforts, it will all be to no avail: the number
of clients that could hold open a connection will undoubtedly
overwhelm even the most robust of servers over time.

When this happens, the server will of necessity be forced to close
connections. The algorithm most often considered is LRU (least
recently used). The success of LRU algorithms in other areas of
computer engineering is based on locality of reference; that is, if
LRU is better than random choice here, it is because the "typical"
client's behavior is predictable from its recent history. Clients
that have made requests recently are probably more likely to make
them again than clients which have been idle for a while. While we
cannot point to rigorous proof of this principle, we believe it does
hold for Web service, and client reference patterns are certainly a
very powerful "clue".

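To make this concrete, here is a minimal sketch of LRU shedding in
C; the fixed-size table and the shed_lru() helper are illustrative
inventions of ours, not from any specification.

   #include <stddef.h>
   #include <time.h>
   #include <unistd.h>

   #define MAX_CONN 1000

   struct conn {
       int    fd;          /* -1 if this slot is free */
       time_t last_used;   /* updated on every request */
   };

   static struct conn conns[MAX_CONN];

   /* Close the least recently used connection to make room for a
    * new one; a linear scan is fine for a sketch. */
   static void shed_lru(void)
   {
       size_t i, victim = 0;
       time_t oldest = 0;
       int    found = 0;

       for (i = 0; i < MAX_CONN; i++) {
           if (conns[i].fd >= 0 &&
               (!found || conns[i].last_used < oldest)) {
               oldest = conns[i].last_used;
               victim = i;
               found  = 1;
           }
       }
       if (found) {
           close(conns[victim].fd);
           conns[victim].fd = -1;
       }
   }

A production server would more likely keep connections on a list
ordered by last use, so the victim is found in constant time rather
than by a linear scan.
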
The client has more information that could be used to drive the
process. For instance, it does not seem too much to expect that a
connection be held throughout the loading of a page and all its
embedded links. The client could further gauge the user's continuing
interest in the page by detecting such events as mouse movement,
scrolling, etc., as indicators that there is still some interest in
pursuing the page's content, and therefore some chance of accessing
subsequent links. But if the user has followed a number of links in
succession away to a different server, it is likely that the first
connection will not be used again soon. Whether this is
significantly better than LRU is an open question, but it is clear
that connections unlikely to be used should be closed, to free the
server resources involved. Server resources are much more scarce
than client resources, and clients should be frugal if the Web is to
have good scaling properties.

Authoritative knowledge that it is appropriate to close a connection
can only come from the user. Unfortunately, that source is not to be
trusted. First, most users don't know what a connection is, and
having them indicate that it is okay to close one is meaningless.
Second, a user who does know what a connection is, is probably
inherently greedy: such a user would never surrender the attention
that a connection to a server implies. Research [Mogul2] does show
that, for the HTTP traffic studied, most of the benefits of
persistent connections are gained if connections are held open for
approximately one minute after last use; this captures most "click
ahead" behavior of a user's web browsing.

For many important services, server resources are critical
resources; there are many more clients than services. For example,
the AltaVista search service handles (as of this writing) tens of
millions of searches per day, for millions of different clients.
While it is one of the two or three most popular services on the
Internet today, it is clearly small relative to future services
built with Internet technology and HTTP. From this perspective, it
is clear that clients need to cooperate with servers to enable
servers to continue to scale.

System resources at a server include:

 * Server resources: open files, file system buffers, processes,
   memory for applications, memory for socket buffers for
   connections currently in use (16-64 Kbytes each), and data base
   locks. In BSD-derived TCP implementations, socket buffers are
   only needed on active connections. This usually works because
   it is seldom the case that there is data queued to or from more
   than a small fraction of the open connections.

 * PCBs (protocol control blocks): only ~100-140 bytes each, but
   even after a connection is closed, this data structure cannot
   be freed for a significant amount of time, of order minutes.
   More severe, however, is that many inferior TCP implementations
   have used linear or quadratic algorithms to find a PCB when
   needed as the number of PCBs grows.

These are organized from most expensive to least.

Clients should read data from their TCP implementations
aggressively, for several reasons (see the sketch after this list):

 * TCP implementations will delay acknowledgements if socket
   buffers are not emptied. This lowers TCP performance and
   increases elapsed time for the end user [Frystyk et al.], while
   continuing to consume the server's resources.

 * Servers must be able to free the resources held on behalf of
   the client as quickly as possible, so that the server can reuse
   these resources on behalf of others. These are often the
   largest and scarcest server system resources (processes, open
   files, file system buffers, data base locks, etc.).

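As an illustration, here is a minimal sketch of such an aggressive
read loop against a POSIX sockets API; drain_socket() and the
process_data() callback are hypothetical names of ours, not part of
HTTP or RFC 2068.

   #include <errno.h>
   #include <fcntl.h>
   #include <unistd.h>

   extern void process_data(const char *buf, ssize_t n);  /* stub */

   /* Drain everything the TCP stack has buffered so it can ACK
    * promptly. Returns 1 when drained, 0 when the peer closed,
    * -1 on error. */
   static int drain_socket(int fd)
   {
       char buf[16384];
       ssize_t n;

       fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
       for (;;) {
           n = read(fd, buf, sizeof buf);
           if (n > 0)
               process_data(buf, n);  /* empty socket buffer fast */
           else if (n == 0)
               return 0;              /* peer closed its half */
           else if (errno == EAGAIN || errno == EWOULDBLOCK)
               return 1;              /* drained; do other work */
           else if (errno != EINTR)
               return -1;             /* real error */
       }
   }
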
When HTTP requests complete (and a connection is idle), an open
connection still consumes resources, some of which are not under the
server's control:

 * socket buffers (16-64 KB in the operating system, and often
   similar amounts in the server process itself)

 * protocol control blocks (~0.15 KB per PCB; an open question is
   whether other data structures are associated with PCBs)

If, for example, an HTTP server had to maintain these resources
indefinitely, the memory alone for a million clients (and there are
already HTTP services larger than this scale in existence today)
using a single connection each would be tens of gigabytes: a million
connections times 16-64 KB of socket buffer each is 16-64 GB, before
counting PCBs. One of the reasons the Web has succeeded is that
servers can, and do, delete connections and require clients to
reestablish them.

If connections are destroyed too aggressively (HTTP/1.0 is the
classic limiting case), other problems ensue:

 * The congestion state of the network is forgotten [Jacobson].
   Current TCP implementations maintain congestion information on
   a per-connection basis, and when the connection is closed, this
   information is lost. The consequences are well known: general
   Internet congestion and poor user performance.

 * Round trip delays and packets are spent re-establishing
   connections. Since most objects in the Web are very small, of
   order half the packets in the network have been due to just the
   TCP open and close operations.

 * Slow start lowers the initial throughput of each new TCP
   connection.

 * PCBs become a performance bottleneck in some TCP
   implementations (and cannot be reused for a XXX timeout after
   the connection has been terminated). The absolute number of
   PCBs in the TIME_WAIT state can be much larger than the number
   in the ESTABLISHED state. Closing connections too quickly can
   actually consume more memory than closing them slowly, because
   all PCBs consume memory and idle socket buffers do not.

From these two extreme examples, it is obvious that connection
management becomes a central issue for both clients and servers.

Clearly, the benefits of persistent connections will be lost if
clients open many connections simultaneously. RFC 2068 therefore
specifies that no more than 2 connections from a client to a server
should be open at any one time, or 2N connections for proxies (where
N is the number of clients the proxy is serving). Frystyk et al.
have shown that HTTP/1.1 over a single TCP connection, once combined
with compression of the HTML documents, can reach roughly twice the
performance of HTTP/1.0 using four to six connections, even over a
LAN [Frystyk].

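The following sketch shows one way a client might honor the two-
connection limit; the pool structure and pick_connection() helper
are hypothetical names, not from RFC 2068.

   #define MAX_PER_SERVER 2   /* RFC 2068 limit for a direct client */

   struct server_pool {
       int idle_fds[MAX_PER_SERVER];   /* open but currently unused */
       int n_idle;
       int n_open;                     /* total connections open */
   };

   /* Return an fd to reuse, or -1; in the latter case *may_open
    * says whether the caller may connect() or must queue. */
   static int pick_connection(struct server_pool *s, int *may_open)
   {
       *may_open = 0;
       if (s->n_idle > 0)
           return s->idle_fds[--s->n_idle];   /* best case: reuse */
       if (s->n_open < MAX_PER_SERVER) {
           s->n_open++;
           *may_open = 1;                     /* caller may connect() */
       }
       return -1;                             /* at limit: queue */
   }
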
6. Go to the Head of the Line

The HTTP/1.1 specification requires that proxies use no more than 2N
connections, where N is the number of client connections being
served. Mogul has shown that persistent connections are a "good
thing" [Mogul], and Frystyk et al. show data that significant
savings (a factor of 2-8) in the number of packets transmitted
result from using persistent connections.

If fewer connections are better, why then does HTTP/1.1 permit
proxies to establish more than the absolute minimum of connections?
In the interests of brevity, the HTTP/1.1 specification is silent on
the motivations for some of its requirements. At the time HTTP/1.1
was specified, we realized that if a proxy server attempted to
aggregate requests from multiple client connections onto a single
TCP connection, the proxy would become vulnerable to the "head of
line" blocking problem. If Client A, for example, asks for 10
megabytes of data (or asks for a dynamically generated document of
unlimited length), and a proxy combines that request with requests
from another Client B, Client B would never get its request
processed. This would be a very "bad thing", and so the HTTP/1.1
specification allows proxies to scale up their connection use in
proportion to incoming connections. This also results in proxy
servers getting a roughly fair allocation of bandwidth from the
Internet, proportional to the number of clients.

Since the original HTTP/1.1 design discussions, we have realized
that a second, closely related denial of service issue arises if
proxies attempt to use the same TCP connection for multiple clients.
An attacker could note that a particular URL of a server that they
wished to attack was either very large, very slow (script based), or
never returned data. By making requests for that URL, the attacker
could easily block other clients from using that server entirely,
due to head of line blocking. So again, simultaneously multiplexing
requests from different clients would be very bad, and therefore
implementations MUST NOT attempt such multiplexing.

In other words, head-of-line blocking couples the fates of what
should be independent interactions, which allows for both denial-of-
service attacks and accidental synchronization.

Here is another example of head-of-line blocking: imagine clients A
and B are connected to proxy P1, which is connected to firewall
proxy P2, which is connected to the Internet. If P1 only has one
connection to P2, and A attempts to connect (via P1 and P2) to a
dead server on the Internet, all of B's operations are blocked until
the connection attempt from P2 to the dead server times out. This is
not a good situation.

Note that serial reuse of a TCP connection does not raise these
concerns: a proxy might first establish a connection to an origin
server for Client A, possibly leave the connection open after Client
A finishes and closes its connection, and then use the same
connection for Client B, and so on. As in normal clients, such a
proxy should close idle connections.

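A sketch of what serial reuse might look like inside a proxy; the
upstream structure and the checkout()/checkin() helpers are our own
illustrative names. The point is that a pooled connection has
exactly one owner at a time, so requests from different clients can
never interleave.

   #include <string.h>
   #include <unistd.h>

   /* One cached upstream connection per origin, serially reused. */
   struct upstream {
       char host[256];
       int  fd;       /* -1 if no cached connection */
       int  in_use;   /* owned by exactly one client at a time */
   };

   /* Borrow the cached connection if it is free; never share it. */
   static int checkout(struct upstream *u, const char *host)
   {
       if (u->fd >= 0 && !u->in_use && strcmp(u->host, host) == 0) {
           u->in_use = 1;
           return u->fd;
       }
       return -1;   /* caller must open its own connection */
   }

   /* Return the connection once the client's exchange is done. */
   static void checkin(struct upstream *u, int keep)
   {
       u->in_use = 0;
       if (!keep) {            /* e.g., idle too long, or error */
           close(u->fd);
           u->fd = -1;
       }
   }
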
Future HTTP evolution also dictates that simultaneous multiplexing
of clients over a connection should be prohibited. A number of
schemes for compactly encoding HTTP rely on associating client state
with a connection, which HTTP/1.x does not currently do. If proxies
do such multiplexing, then such designs will be much harder to
implement.

7. The Race is On

Deleting a connection without authoritative knowledge that it will
not soon be reused is a fundamental race that is part of any timeout
mechanism. How the decision is made determines the penalties
imposed.

It is intuitively (and most certainly empirically) less expensive
for the active partner (the client) to close a connection than for
the server. This is due mostly to the natural flow of events. For
instance, a server closing a connection cannot know that the client
might at that very moment be sending a request. The new request and
the close message can pass in the night simply because the server
and the client are separated by a network. That type of failure is a
network issue. The code of both the client and the server must be
able to deal with such failures, but it does not have to deal with
them efficiently. A client closing a connection, on the other hand,
is at least assured that any such race conditions are mostly local
issues. The flow will be natural, assuming one treats closing as a
natural event. To paraphrase Butler Lampson's 1983 paper on system
design, "The events that happen normally must be efficient. The
exceptional need only make progress." [Lampson]

Having the client close the connection decreases the probability
that the client will have to do automatic recovery of a pipeline
after a premature close on the server side. From a client
implementation point of view this is advantageous, as automatic
recovery of a pipeline is significantly more complicated than
closing an idle connection. In HTTP, however, servers are free to
close connections at any time, so this observation does not help
HTTP, though it may simplify other protocols. It will, however,
reduce the number of TCP resets observed, make the exceptional case
exceptional, and avoid a TCP window full of requests being
transmitted under some circumstances.

On the one hand, it is a specific fact about TCP that if the client
closes the connection, the server does not have to keep the
TIME_WAIT entry lying around. This is goodness.

On the other hand, if the server has the resources to keep the
connection open, then the client shouldn't close it unless there is
little chance that the client will use the server again soon, since
closing and then reopening adds computational overhead to the
server. So allowing the server to take the lead in closing
connections does have some benefits.

A further observation is that the congestion state of the network
varies with time, so the benefit of the congestion state maintained
by TCP diminishes the longer a connection is idle.

This discussion also suggests that a client should close idle
connections before the server does. Currently the HTTP standard
provides no way for a server to give such a "hint" to the client,
and there should be a mechanism. This memo solicits other opinions
on this topic.

8. Closing Half of the Connection

In simple request/response protocols (e.g. HTTP/1.0), a server can
go ahead and close both the receive and transmit sides of its
connection simultaneously whenever it needs to. A pipelined or
streaming protocol (e.g. HTTP/1.1) connection is more complex
[Frystyk et al.], and an implementation which closes both halves at
once can create major problems.

The scenario is as follows: an HTTP/1.1 client talking to an
HTTP/1.1 server starts pipelining a batch of requests, for example
15, on an open TCP connection. The server decides that it will not
serve more than 5 requests per connection and closes the TCP
connection in both directions after it has successfully served the
first five requests. The remaining 10 requests that have already
been sent by the client will arrive, along with client-generated TCP
ACK packets, on a closed port on the server. This "extra" data
causes the server's TCP to issue a reset, which makes the client's
TCP stack pass the last ACK'ed packet to the client application and
discard all other packets. This means that HTTP responses that are
either being received or have already been received successfully but
haven't been ACK'ed will be dropped by the client's TCP. The client
then has no means of finding out which HTTP messages were
successful, or even why the server closed the connection. The server
may have generated a "Connection: Close" header in the 5th response,
but that header may have been lost due to the TCP reset. Servers
must therefore close each half of the connection independently.

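Here is a minimal sketch of such an independent close on a POSIX
sockets API (sometimes called a "lingering close"); the 30-second
bound is an arbitrary illustrative value, not a recommendation.

   #include <poll.h>
   #include <sys/socket.h>
   #include <unistd.h>

   /* Close only our transmit half, then drain what the client is
    * still sending, so pipelined requests in flight do not provoke
    * a TCP reset that destroys undelivered response data. */
   static void lingering_close(int fd)
   {
       char buf[4096];
       struct pollfd pfd = { fd, POLLIN, 0 };

       shutdown(fd, SHUT_WR);          /* half-close: no more writes */

       /* Read and discard until the client closes its half, waiting
        * at most 30 s (illustrative) for each burst of data. */
       while (poll(&pfd, 1, 30000) > 0) {
           if (read(fd, buf, sizeof buf) <= 0)
               break;                  /* EOF or error: client done */
       }
       close(fd);
   }
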
9. Capture Effect

One of the beauties of the simple single connection for each
request/response pair is that it does not favor an existing client
over another. In general, this natural rotation made for a fairer
offering of the overall service, albeit a bit heavy-handed. Our
expectation is that when protocols with persistent connections are
heavily deployed, that aspect of fairness will not exist. Without
some moderately complex history, it might be that only the first
1000 clients will ever be able to access a server (provided that the
server can handle 1000 connections).

There needs to be some policy indicating when it is appropriate to
close connections. Such a policy should favor having the client be
the party to initiate the closure, but must provide some manner in
which the server can protect itself from misbehaving clients.
Servers can control greedy clients in HTTP/1.1 by using the 503
(Service Unavailable) response code in concert with the Retry-After
response-header field, or by not reading further requests from that
client, at the cost of temporarily occupying the connection. As long
as the server can afford to keep the connection open, it can delay a
"greedy client" by simply closing the TCP receive window. As soon as
it drops the connection, it has no way to distinguish this client
from any other. Either of these techniques may in fact be preferable
to closing the client's connection: the client might just
immediately reopen the connection, and you are unlikely to know
whether it is the same greedy client.

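A sketch of the 503 technique follows; the Retry-After value of 120
seconds is an arbitrary example, and send_503() is our own name.

   #include <unistd.h>

   /* Push back on a greedy client without closing the connection. */
   static void send_503(int fd)
   {
       static const char resp[] =
           "HTTP/1.1 503 Service Unavailable\r\n"
           "Retry-After: 120\r\n"
           "Content-Length: 0\r\n"
           "\r\n";

       write(fd, resp, sizeof resp - 1);
       /* Deliberately keep the connection open: dropping it would
        * let the client reconnect at once and look like anyone
        * else. */
   }
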
Implementation complexity will need to be balanced against
scheduling overhead. A number of possible server scheduling
algorithms exist, with different costs and benefits. The
implementation experience of one of us (jg) with the X Window System
[Gettys et al.] may be of use to those implementing Web server
schedulers.

 * Strict round robin scheduling: an operating system select or
   poll operation is executed for each request processed, and each
   request is handled in turn (across connections). Since select
   is executed frequently, new connections get a good chance of
   service sooner rather than later. Some algorithm must be chosen
   to avoid the capture effect if the server is loaded. This is
   most fair, and approximates current behavior. The disadvantage,
   however, is a (relatively expensive) system call per request,
   which will likely become too expensive as Web servers become
   carefully optimized after HTTP/1.1 is fully implemented.

 * Modified round robin scheduling: an operating system select or
   poll operation is executed. Any new connections are
   established, and for each connection showing data available,
   all available requests are read into buffers for later
   execution. Then all requests are processed, round robin between
   buffers (a sketch of this scheme appears after this list). Some
   algorithm must be chosen to avoid the capture effect if the
   server is loaded. This eliminates the system call per
   operation. It is quite efficient, and still apportions server
   capacity reasonably fairly.

 * Some servers are likely to be multithreaded, possibly with a
   thread per connection. These servers will have to have some
   mechanism to share state so that no client can forever capture
   a connection on a busy server.

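A sketch of the modified round robin scheme, assuming a POSIX poll()
loop; the connection table and the handle_one_request() parser,
which is expected to consume one complete request from the buffer
and return nonzero if it did, are hypothetical stand-ins.

   #include <poll.h>
   #include <stddef.h>
   #include <sys/types.h>
   #include <unistd.h>

   #define MAX_CONN 256
   #define BUF_SIZE 8192

   struct conn {
       int    fd;
       char   buf[BUF_SIZE];  /* raw, possibly pipelined requests */
       size_t len;
   };

   extern int handle_one_request(struct conn *c);   /* hypothetical */

   static void serve_cycle(struct conn *conns, int n) /* n <= MAX_CONN */
   {
       struct pollfd pfds[MAX_CONN];
       int i, progress;

       for (i = 0; i < n; i++) {
           pfds[i].fd = conns[i].fd;
           pfds[i].events = POLLIN;
       }

       /* One system call per cycle, not one per request. */
       if (poll(pfds, (nfds_t)n, -1) <= 0)
           return;

       /* Phase 1: read whatever is available into per-connection
        * buffers. */
       for (i = 0; i < n; i++) {
           if (pfds[i].revents & POLLIN) {
               ssize_t r = read(conns[i].fd,
                                conns[i].buf + conns[i].len,
                                BUF_SIZE - conns[i].len);
               if (r > 0)
                   conns[i].len += (size_t)r;
           }
       }

       /* Phase 2: one request per connection per pass, round robin,
        * so no single connection can capture the server. */
       do {
           progress = 0;
           for (i = 0; i < n; i++)
               if (handle_one_request(&conns[i]))
                   progress = 1;
       } while (progress);
   }
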
A final note: indefinite round robin scheduling may not in fact be
the most desirable algorithm, due to the timesharing fallacy. If a
connection makes progress more slowly than it could, not only will
the client (the end user) observe poorer performance, but the
connection (and the considerable system overhead each one
represents) will be open longer, and more connections and server
resources will be required as a result.

At some point, large, loaded servers will have to choose a
connection to close; research [Padmanabhan and Mogul] shows that LRU
may be as good as more complex algorithms for choosing which to
close.

Further experimentation with HTTP/1.1 servers will be required to
understand the most useful scheduling and connection management
algorithms.

10. Security Considerations

Most HTTP-related security considerations are discussed in RFC 2068.
This document identifies a further security concern: proxy
implementations that simultaneously multiplex requests from multiple
clients over a TCP connection are vulnerable to a form of denial of
service attack, due to the head of line blocking problems discussed
above.

The capture effect discussed above also presents opportunities for
denial of service attacks.

11. Requirements on HTTP/1.1 Implementations

Here are some simple observations and requirements drawn from the
above discussion.

 * clients and proxies SHOULD close idle connections. Most of the
   benefits of an open connection diminish the longer the
   connection is idle: the congestion state of the network is a
   dynamic and changing phenomenon [Paxson]. The client, better
   than a server, knows when it is likely not to revisit a site.
   By monitoring user activity, a client can make reasonable
   guesses as to when a connection needs closing. Research
   [Mogul2] has shown that most of the benefits of a persistent
   connection are likely to occur within approximately 60 seconds;
   further research in this area is needed. On the client side,
   define a connection as "idle" if it meets at least one of these
   two criteria (a sketch of such an idle test appears after this
   list):

    * no user-interface input events during the last 60 seconds
      (this parameter value shouldn't be defined too precisely)

    * the user has explicitly selected a URL from a different
      server. Don't switch just because inlined images are from
      somewhere else! Even in this case, dally for some seconds
      (e.g., 10) in case the user hits the "back" button.

   On the server side, use a timeout that is adapted based on
   resource constraints: a short timeout during overload, a long
   timeout during underload. Memory, not CPU cycles, is likely to
   be the controlling resource in a well-implemented system.

 * servers SHOULD implement some mechanism to avoid the capture
   effect.

 * proxies MUST use independent TCP connections to origin or
   further proxy servers for different client connections, both to
   avoid head of line blocking between clients and to avoid the
   denial of service attacks to which implementations that
   multiplex multiple clients over the same connection would be
   open.

 * proxies MAY serially reuse connections for multiple clients.

 * servers MUST properly close the incoming and outgoing halves of
   TCP connections independently.

 * clients SHOULD close connections before servers when possible.
   Currently, HTTP has no "standard" way to indicate idle time
   behavior to clients, though we note that the Apache HTTP/1.1
   implementation [Apache] advertises this information using the
   Keep-Alive header if Keep-Alive is requested. We note, however,
   that Keep-Alive is NOT currently part of the HTTP standard, and
   that the working group may need to consider providing this
   "hint" to clients in a future version of the standard, by this
   or other means not specified in this initial draft.

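A sketch of a client-side idle test implementing the two criteria
above; the structure, field names, and constants are illustrative,
and the timestamps are assumed to be maintained by the client's
(hypothetical) user-interface event loop.

   #include <time.h>

   #define UI_IDLE_SECS 60   /* no input events for ~a minute */
   #define DALLY_SECS   10   /* grace period for the "back" button */

   struct http_conn {
       time_t last_ui_event;   /* last mouse/keyboard/scroll event */
       time_t left_server_at;  /* 0, or when user followed a link
                                * away to a different server */
   };

   static int connection_is_idle(const struct http_conn *c)
   {
       time_t now = time(NULL);

       if (now - c->last_ui_event > UI_IDLE_SECS)
           return 1;   /* user has lost interest in the page */
       if (c->left_server_at != 0 &&
           now - c->left_server_at > DALLY_SECS)
           return 1;   /* user moved to another server and stayed */
       return 0;
   }
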
12. References

[Apache]
   The Apache Authors, The Apache Web Server is distributed by The
   Apache Group.

[Bradner]
   S. Bradner, "Key words for use in RFCs to Indicate Requirement
   Levels", RFC XXXX.

[Frystyk]
   Henrik Frystyk Nielsen, "The Effect of HTML Compression on a
   LAN", W3C. URL:
   http://www.w3.org/pub/WWW/Protocols/HTTP/Performance/Compression/LAN.html

[Frystyk et al.]
   Henrik Frystyk Nielsen, Jim Gettys, Anselm Baird-Smith, Eric
   Prud'hommeaux, Håkon Wium Lie, Chris Lilley, W3C, "Network
   Performance Effects of HTTP/1.1, CSS1, and PNG". W3C Note,
   February 1997. See URL:
   http://www.w3.org/pub/WWW/Protocols/HTTP/Performance/ for this
   and other HTTP/1.1 performance information.

[Gettys et al.]
   Gettys, J., P.L. Karlton, and S. McGregor, "The X Window System,
   Version 11." Software Practice and Experience, Volume 20, Issue
   No. S2, 1990. ISSN 0038-0644.

[HTTP/1.0]
   T. Berners-Lee, R. Fielding, H. Frystyk, "Informational RFC 1945
   - Hypertext Transfer Protocol -- HTTP/1.0", MIT/LCS, UC Irvine,
   May 1996.

[HTTP/1.1]
   R. Fielding, J. Gettys, J.C. Mogul, H. Frystyk, T. Berners-Lee,
   "RFC 2068 - Hypertext Transfer Protocol -- HTTP/1.1", UC Irvine,
   Digital Equipment Corporation, MIT.

[Jacobson]
   Van Jacobson, "Congestion Avoidance and Control". In Proc.
   SIGCOMM '88 Symposium on Communications Architectures and
   Protocols, pages 314-329, Stanford, CA, August 1988.

[Lampson]
   B. Lampson, "Hints for Computer System Design", 9th ACM SOSP,
   Oct. 1983, pp. 33-48.

[Mogul]
   Jeffrey C. Mogul, "The Case for Persistent-Connection HTTP". In
   Proc. SIGCOMM '95 Symposium on Communications Architectures and
   Protocols, pages 299-313, Cambridge, MA, August 1995.

[Mogul2]
   Jeffrey C. Mogul, "The Case for Persistent-Connection HTTP".
   Research Report 95/4, Digital Equipment Corporation Western
   Research Laboratory, May 1995. URL:
   http://www.research.digital.com/wrl/techreports/abstracts/95.4.html

[Padmanabhan and Mogul]
   Venkata N. Padmanabhan and Jeffrey C. Mogul, "Improving HTTP
   Latency". In Proc. 2nd International WWW Conference '94: Mosaic
   and the Web, pages 995-1005, Chicago, IL, October 1994. URL:
   http://www.ncsa.uiuc.edu/SDG/IT94/Proceedings/DDay/mogul/HTTPLatency.html

[Padmanabhan & Mogul]
   V.N. Padmanabhan and J. Mogul, "Improving HTTP Latency", Computer
   Networks and ISDN Systems, v.28, pp. 25-35, Dec. 1995. Slightly
   revised version of the paper in Proc. 2nd International WWW
   Conference '94: Mosaic and the Web, Oct. 1994.

[Paxson]
   Vern Paxson, "End-to-End Routing Behavior in the Internet", ACM
   SIGCOMM '96, August 1996, Stanford, CA.

13. Acknowledgements

Our thanks to Henrik Frystyk Nielsen for comments on the first draft
of this document.

14. Authors' Addresses

Jim Gettys
W3 Consortium
MIT Laboratory for Computer Science
545 Technology Square
Cambridge, MA 02139, USA
Fax: +1 (617) 258 8682
Email: jg@w3.org

Alan Freier
Netscape Communications Corporation
501 East Middlefield Rd.
Mountain View, CA 94043
Email: freier@netscape.com