# Life of a URLRequest
This document is intended as an overview of the core layers of the network
stack, their basic responsibilities, how they fit together, and where some of
the pain points are, without going into too much detail. Though it touches a
bit on the renderer process and the content/loader stack, the focus is on net/
itself.

It's particularly targeted at people new to the Chrome network stack, but
should also be useful for team members who may be experts at some parts of the
stack, but are largely unfamiliar with other components. It starts by walking
through how a basic request issued by Blink works its way through the network
stack, and then moves on to discuss how various components plug in.


# Anatomy of the Network Stack

The main top-level network stack object is the URLRequestContext. The context
has non-owning pointers to everything needed to create and issue a URLRequest.
The context must outlive all requests that use it. Creating a context is a
rather complicated process, and it's recommended that most embedders use
URLRequestContextBuilder to do this. Chrome itself has several request
contexts that the network stack team owns:

* The proxy URLRequestContext, owned by the IOThread and used to get PAC
scripts while avoiding re-entrancy.
* The system URLRequestContext, also owned by the IOThread, used for requests
that aren't associated with a profile.
* Each profile, including incognito profiles, has a number of URLRequestContexts
that are created as needed:
    * The main URLRequestContext is mostly created in ProfileIOData, though it
    has a couple of components that are passed in from content's
    StoragePartition code. Several other components are shared with the system
    URLRequestContext, like the HostResolver.
    * Each non-incognito profile also has a media request context, which uses
    a different on-disk cache than the main request context. This prevents a
    single huge media file from evicting everything else in the cache.
    * On desktop platforms, each profile has a request context for extensions.
    * Each profile has two contexts for each isolated app (one for media, one
    for everything else).

The HttpNetworkSession is another major network stack object. It has pointers
to the network stack objects that more directly deal with sockets, and their
dependencies. Its main objects are the HttpStreamFactory, the socket pools,
and the SPDY/QUIC session pools.

This document does not discuss either of these objects much, but at layers
above the HttpStreamFactory, objects generally get their dependencies from the
URLRequestContext, while the HttpStreamFactory and the layers below it get
theirs from the HttpNetworkSession.


# How many "Delegates"?

The network stack informs the embedder of important events for a request using
two main interfaces: the URLRequest::Delegate interface and the NetworkDelegate
interface.

The URLRequest::Delegate interface consists of a small set of callbacks needed
to let the embedder drive a request forward. URLRequest::Delegates generally
own the URLRequest.

The NetworkDelegate is generally a single object shared by all requests, and
includes callbacks corresponding to most of the URLRequest::Delegate's
callbacks, as well as an assortment of other methods. The NetworkDelegate is
optional; the URLRequest::Delegate is not.
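
As a rough sketch of the shape of these two interfaces (the class and method
names below are simplified stand-ins invented for illustration, not the real
net/ declarations):

```cpp
#include <cassert>
#include <string>

// Per-request events go to a URLRequest::Delegate-style object, which
// typically owns the request; a single NetworkDelegate-style object
// observes every request in the context.
class PerRequestDelegate {
 public:
  virtual ~PerRequestDelegate() = default;
  // Called once headers are available; the delegate then drives body reads.
  virtual void OnResponseStarted(int net_error) = 0;
  // Called when an asynchronous body read completes.
  virtual void OnReadCompleted(int bytes_read) = 0;
};

class SharedNetworkDelegate {
 public:
  virtual ~SharedNetworkDelegate() = default;
  // Observes (and may veto) every request in the context.
  virtual bool OnBeforeRequest(const std::string& url) { return true; }
  virtual void OnCompleted(const std::string& url, int net_error) {}
};

// Hypothetical concrete delegate, used for illustration only.
class CountingNetworkDelegate : public SharedNetworkDelegate {
 public:
  int seen = 0;
  bool OnBeforeRequest(const std::string& url) override {
    ++seen;
    // Example policy: only allow http(s) URLs through.
    return url.rfind("http", 0) == 0;
  }
};
```

The split mirrors the text: the optional shared delegate sees everything, while
each request's own delegate drives that one request.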


# Life of a "Simple" URLRequest

Consider a simple request issued by the renderer process. Suppose it's an HTTP
request with no request body (i.e. it's not an upload), the response is
uncompressed, and there's no matching entry in the cache.


## Overview

### Request Starts

* The ResourceDispatcher creates an IPCResourceLoaderBridge.
* The IPCResourceLoaderBridge asks the ResourceDispatcher to start the request.
* The ResourceDispatcher sends an IPC to the ResourceDispatcherHost in the
browser process.
* The ResourceDispatcherHost uses the URLRequestContext to create the
URLRequest.
* The ResourceDispatcherHost creates a ResourceLoader and a ResourceHandler
chain to manage the request.
* The ResourceLoader starts the request.

### Request is Issued

* The URLRequest asks the URLRequestJobFactory to create a URLRequestJob (here,
a URLRequestHttpJob).
* The URLRequestHttpJob asks the HttpCache to create an HttpTransaction (always
an HttpCache::Transaction).
* The HttpCache::Transaction sees there's no cache entry for the request, and
creates an HttpNetworkTransaction.
* The HttpNetworkTransaction calls into the HttpStreamFactory to request an
HttpStream.
* The HttpStreamFactory creates an HttpStreamFactoryImpl::Job.
* The HttpStreamFactoryImpl::Job calls into the TransportClientSocketPool to
populate a ClientSocketHandle.
* The TransportClientSocketPool has no idle sockets, so it creates a
TransportConnectJob and starts it.
* The TransportConnectJob creates a StreamSocket and establishes a connection.
* The TransportClientSocketPool puts the StreamSocket in the
ClientSocketHandle, and calls into the HttpStreamFactoryImpl::Job.
* The HttpStreamFactoryImpl::Job creates an HttpBasicStream, which takes
ownership of the ClientSocketHandle.
* It returns the HttpBasicStream to the HttpNetworkTransaction.
* The HttpNetworkTransaction gives the request headers to the HttpBasicStream,
and tells it to start the request.
* The HttpBasicStream sends the request, and waits for the response.
* The HttpBasicStream sends the response headers back to the
HttpNetworkTransaction.
* The headers are sent up to the URLRequest, then to the ResourceLoader,
through the ResourceHandler stack.
* They're then sent by the AsyncResourceHandler to the ResourceDispatcher.

### Response Body is Read

* The AsyncResourceHandler allocates a 512KB ring buffer of shared memory for
reading the body of the request.
* The AsyncResourceHandler tells the ResourceLoader to read the response body
into the buffer, 32KB at a time.
* The AsyncResourceHandler informs the ResourceDispatcher of each read.
* The ResourceDispatcher tells the AsyncResourceHandler when it's done with
the data from each read, so the AsyncResourceHandler knows when parts of the
buffer can be reused.

### URLRequest is Destroyed

* When complete, the ResourceDispatcherHost deletes the ResourceLoader, which
deletes the URLRequest.
* During destruction, the HttpNetworkTransaction determines if the socket is
reusable, and if so, tells the HttpBasicStream to return it to the socket pool.

## Details

Each child process has at most one ResourceDispatcher, which is responsible for
all URL request-related communication with the browser process. When something
in the renderer needs to issue a resource request, it calls into the
ResourceDispatcher, which returns an IPCResourceLoaderBridge to the caller.
The caller uses the bridge to start a request. When started, the
ResourceDispatcher assigns the request a per-renderer ID, and then sends the
ID, along with all the information needed to issue the request, to the
ResourceDispatcherHost in the browser process.

The ResourceDispatcherHost (RDH), along with most of the network stack, lives
on the browser process's IO thread. The browser process has only one RDH,
which is responsible for handling all network requests initiated by the
ResourceDispatchers in all child processes, not just renderer processes.
Browser-initiated requests don't go through the RDH, with some exceptions.

When the RDH sees the request, it calls into a URLRequestContext to create the
URLRequest. The URLRequestContext has pointers to all the network stack
objects needed to issue the request over the network, such as the cache, cookie
store, and host resolver. The RDH then creates a chain of ResourceHandlers,
each of which can monitor/modify/delay/cancel the URLRequest and the
information it returns. The only one of these I'll talk about here is the
AsyncResourceHandler, which is the last ResourceHandler in the chain. The RDH
then creates a ResourceLoader (which is the URLRequest::Delegate), passes
ownership of the URLRequest and the ResourceHandler chain to it, and then
starts the ResourceLoader.

The ResourceLoader checks that none of the ResourceHandlers want to cancel,
modify, or delay the request, and then finally starts the URLRequest. The
URLRequest then calls into the URLRequestJobFactory to create a URLRequestJob
and starts it. In the case of an HTTP or HTTPS request, this will be a
URLRequestHttpJob.

The URLRequestHttpJob calls into the HttpCache to create an
HttpCache::Transaction. If there's no matching entry in the cache, the
HttpCache::Transaction will just call into the HttpNetworkLayer to create an
HttpNetworkTransaction, and transparently wrap it.

The HttpNetworkTransaction calls into the HttpStreamFactory to request an
HttpStream to the server. The HttpStreamFactoryImpl::Job creates a
ClientSocketHandle to hold a socket, once connected, and passes it into the
ClientSocketPoolManager. The ClientSocketPoolManager assembles the
TransportSocketParams needed to establish the connection and creates a group
name ("host:port") used to identify sockets that can be used interchangeably.

The ClientSocketPoolManager directs the request to the
TransportClientSocketPool, since there's no proxy and it will be using
HTTP/1.x. The pool sends it on to the
ClientSocketPoolBase<TransportSocketParams> it wraps, which sends it on to its
ClientSocketPoolBaseHelper, which actually manages the socket pool. If there
isn't already an idle connection, and there are available socket slots, the
ClientSocketPoolBaseHelper will create a new TransportConnectJob using the
aforementioned params object. The Job will do the actual DNS lookup by calling
into the HostResolverImpl, if needed, and then finally establish a connection.

Once the socket is connected, ownership of the socket is passed to the
ClientSocketHandle. The HttpStreamFactoryImpl::Job is informed that the
connection attempt succeeded, and it then creates an HttpBasicStream, which
takes ownership of the ClientSocketHandle. It then passes ownership of the
HttpBasicStream back to the HttpNetworkTransaction. The Transaction passes
the request headers to the HttpBasicStream, which uses an HttpStreamParser to
(finally) format the request headers and send them to the server. The
HttpStreamParser waits to receive the response, parses the HTTP/1.x
response headers, and then passes them up through both Transaction classes
to the URLRequestHttpJob, which passes them on to the URLRequest and then to
the ResourceLoader.

The ResourceLoader passes them through the chain of ResourceHandlers, and then
they make their way to the AsyncResourceHandler. The AsyncResourceHandler uses
the renderer process ID ("child ID") to figure out which process the request
was associated with, and then sends the headers along with the request ID to
that process's ResourceDispatcher. The ResourceDispatcher uses the ID to
figure out which IPCResourceLoaderBridge the headers should be sent to, and
sends them on to whatever created the IPCResourceLoaderBridge in the first
place.

Without waiting to hear back from the ResourceDispatcher, the ResourceLoader
tells its ResourceHandler chain to allocate memory to receive the response
body. The AsyncResourceHandler creates a 512KB ring buffer of shared memory,
and then passes the first 32KB of it to the ResourceLoader for the first read.
The ResourceLoader then passes a 32KB body read request down through the
URLRequest all the way down to the HttpStreamParser. Once some data is read,
possibly less than 32KB, the number of bytes read makes its way back to the
AsyncResourceHandler, which passes the shared memory buffer and the offset and
amount of data read to the renderer process.

The AsyncResourceHandler relies on ACKs from the renderer to prevent it from
overwriting data that the renderer has yet to consume. This process repeats
until the response body is completely read. When the URLRequest informs the
ResourceLoader it's complete, the ResourceLoader tells the ResourceHandlers,
and the AsyncResourceHandler tells the ResourceDispatcher the request is
complete. The RDH then deletes the ResourceLoader, which deletes the
URLRequest.
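
The buffer-reuse cycle above can be modeled with a toy class (the class and
its methods are invented for illustration; only the 512KB buffer size and 32KB
read size come from the text):

```cpp
#include <cassert>
#include <cstddef>

// Toy model of the shared-memory ring buffer: 32KB read windows are handed
// out in order, and a window's space can only be reused after the renderer
// ACKs that it has consumed the data.
class RingBufferSketch {
 public:
  static constexpr size_t kBufferSize = 512 * 1024;
  static constexpr size_t kReadSize = 32 * 1024;

  // Returns true and sets |offset| for the next 32KB read, or returns false
  // if every window is still outstanding (no ACKs received yet).
  bool AcquireWindow(size_t* offset) {
    if (in_use_ + kReadSize > kBufferSize) return false;
    *offset = next_offset_;
    next_offset_ = (next_offset_ + kReadSize) % kBufferSize;
    in_use_ += kReadSize;
    return true;
  }

  // The renderer signals it has consumed one 32KB window.
  void Ack() { in_use_ -= kReadSize; }

 private:
  size_t next_offset_ = 0;
  size_t in_use_ = 0;
};
```

With 512KB of buffer and 32KB reads there are 16 windows; the 17th read can
only start once the renderer ACKs one of the earlier ones.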

When the HttpNetworkTransaction is being torn down, it figures out if the
socket is reusable. If not, it tells the HttpBasicStream to close the socket.
Either way, the socket is then returned to the socket pool through the
ClientSocketHandle, either for reuse or just so the socket pool knows it has
another free socket slot.


# Additional Topics

## HTTP Cache

The HttpCache::Transaction sits between the URLRequestHttpJob and the
HttpNetworkTransaction, and implements the HttpTransaction interface, just like
the HttpNetworkTransaction. The HttpCache::Transaction checks if a request can
be served out of the cache. If a request needs to be revalidated, it handles
sending a conditional revalidation request over the network. It may also break
a range request into multiple cached and non-cached contiguous chunks, and may
issue multiple network requests for a single range URLRequest.

One important detail is that the cache has a read/write lock for each URL. The
lock technically allows multiple reads at once, but since an
HttpCache::Transaction always grabs the lock for both reading and writing
before downgrading it to a read-only lock, all requests for the same URL are
effectively serialized. Blink merges requests for the same URL in many cases,
which mitigates this problem to some extent.
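
The serializing effect of the per-URL lock can be modeled with a minimal
sketch (the class and its queue-per-key scheme are hypothetical, not the real
cache implementation):

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>

// Minimal model: each cache key has a FIFO queue of transactions, and since
// every transaction initially takes the lock in write mode, only the queue's
// front transaction runs; the rest wait, so same-URL requests are serial.
class CacheLockSketch {
 public:
  // Returns true if transaction |id| now holds the lock for |key|,
  // false if it has been queued behind an earlier transaction.
  bool Acquire(const std::string& key, int id) {
    auto& queue = queues_[key];
    queue.push_back(id);
    return queue.front() == id;
  }

  // Releases the lock; returns the next holder's id, or -1 if none waits.
  int Release(const std::string& key) {
    auto& queue = queues_[key];
    queue.pop_front();
    return queue.empty() ? -1 : queue.front();
  }

 private:
  std::map<std::string, std::deque<int>> queues_;
};
```

Requests for different URLs proceed independently; only same-URL requests
queue up behind one another.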

The HttpCache::Transaction uses one of three disk_cache::Backends to actually
store the cache's index and files: the in-memory backend, the blockfile cache
backend, and the simple cache backend. The first is used in incognito mode.
The latter two are both stored on disk, and are used on different platforms.

## Cancellation

A request can be cancelled by the renderer process (Blink) or by any of the
ResourceHandlers through the ResourceLoader. When the cancellation message
reaches the URLRequest, it passes the fact that it's been cancelled back to the
ResourceLoader, which then sends the message down the ResourceHandler chain.

When an HttpNetworkTransaction for a cancelled request is being torn down, it
figures out if the socket the HttpStream owns can potentially be reused, based
on the protocol (HTTP / SPDY / QUIC) and any received headers. If the socket
potentially can be reused, an HttpResponseBodyDrainer is created to try to
read any remaining body bytes of the HttpStream, if any, before returning the
socket to the SocketPool. If this takes too long, or there's an error, the
socket is closed instead. Since this all happens at the layer below the cache,
any drained bytes are not written to the cache, and as far as the cache layer
is concerned, it only has a partial response.
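
The reuse-or-close decision can be sketched as follows. Note this is a
simplification under stated assumptions: the drain cap below is invented for
the sketch (the real drainer also uses a time limit), and the function names
are hypothetical:

```cpp
#include <cassert>
#include <cstddef>

// Assumed cap for the sketch: draining more than this isn't worth it.
constexpr size_t kDrainBodyLimit = 256 * 1024;

enum class Protocol { kHttp1, kSpdy, kQuic };
enum class SocketFate { kReuse, kClose };

// Hypothetical helper mirroring the decision described above.
SocketFate DecideSocketFate(Protocol protocol, size_t remaining_body_bytes) {
  // Multiplexed protocols: cancelling one stream doesn't cost the session.
  if (protocol != Protocol::kHttp1)
    return SocketFate::kReuse;
  // HTTP/1.x: the socket is only reusable if the rest of the body is cheap
  // enough to drain; otherwise closing is cheaper than reading it all.
  return remaining_body_bytes <= kDrainBodyLimit ? SocketFate::kReuse
                                                 : SocketFate::kClose;
}
```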

## Redirects

The URLRequestHttpJob checks if the headers indicate a redirect when it
receives them from the next layer down (typically the HttpCache::Transaction).
If they indicate a redirect, it tells the cache the response is complete,
ignoring the body, so the cache only has the headers. The cache then treats it
as a complete entry, even if the headers indicated there will be a body.

The URLRequestHttpJob then checks with the URLRequest whether the redirect
should be followed. First it checks the scheme. Then it informs the
ResourceLoader about the redirect, to give it a chance to cancel the request.
The information makes its way down through the AsyncResourceHandler to the
ResourceDispatcher and on into Blink, which checks if the redirect should be
followed.

The ResourceDispatcher then asynchronously sends a message back to either
follow the redirect or cancel the request. In either case, the old
HttpTransaction is destroyed, and the HttpNetworkTransaction attempts to drain
the socket for reuse, just as in the cancellation case. If the redirect is
followed, the URLRequest calls into the URLRequestJobFactory to create a new
URLRequestJob, and then starts it.

## Filters (gzip, SDCH, etc)

When the URLRequestHttpJob receives headers, it sends a list of all
Content-Encoding values to Filter::Factory, which creates a (possibly empty)
chain of filters. As body bytes are received, they're passed through the
filters at the URLRequestJob layer and the decoded bytes are passed back to
the embedder.

Since this is done above the cache layer, the cache stores the responses prior
to decompression. As a result, if files aren't compressed over the wire, they
aren't compressed in the cache, either. This behavior can also create problems
when responses are SDCH compressed, as a dictionary may be evicted from the
cache independently of the response that was compressed with it.
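
The chaining itself can be illustrated with stand-in decoders (the real
filters are gzip/SDCH implementations; the names and marker scheme below are
invented). Content-Encoding lists encodings in the order the server applied
them, so decoding walks the chain in reverse:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// A stand-in "filter": transforms encoded bytes into decoded bytes.
using Decoder = std::function<std::string(const std::string&)>;

// Applies the decoders in reverse header order: the encoding the server
// applied last must be undone first.
std::string RunFilterChain(const std::vector<Decoder>& chain,
                           std::string data) {
  for (auto it = chain.rbegin(); it != chain.rend(); ++it)
    data = (*it)(data);
  return data;
}

// Sample decoder (hypothetical): strips the two-character marker prefix a
// matching fake "encoder" would have added.
inline std::string StripMarker(const std::string& s) { return s.substr(2); }
```

If the server applied fake encoding "x" and then "y", the wire bytes look like
"y:x:body", and the chain strips "y:" before "x:".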

TODO(mmenke): Discuss filter creation.

## Socket Pools

The ClientSocketPoolManager is responsible for assembling the parameters needed
to connect a socket, and then sending the request to the right socket pool.
Each socket request sent to a socket pool comes with a socket params object, a
ClientSocketHandle, and a "group name". The params object contains all the
information a ConnectJob needs to create a connection of a given type, and
different types of socket pools take different params types. The
ClientSocketHandle takes temporary ownership of the socket, once it's
connected, and returns it to the socket pool when done. All connections with
the same group name in the same pool can be used to service the same connection
requests, so the group name consists of the host, port, protocol, and whether
"privacy mode" is enabled for requests using the socket.
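
A group name can be sketched as below. The exact format is illustrative only
(the function and the "ssl/"/"pm/" prefixes are invented); the point is that
only sockets that are truly interchangeable end up in the same group:

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: builds a group name so that sockets differing in
// host, port, protocol, or privacy mode never share a group.
std::string GroupName(const std::string& scheme, const std::string& host,
                      int port, bool privacy_mode) {
  std::string name = host + ":" + std::to_string(port);
  if (scheme == "https")
    name = "ssl/" + name;  // SSL sockets must not mix with plain HTTP ones.
  if (privacy_mode)
    name = "pm/" + name;  // Privacy-mode sockets carry no credentials.
  return name;
}
```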

All socket pool classes derive from ClientSocketPoolBase<SocketParamType>.
The ClientSocketPoolBase handles managing sockets - which requests to create
sockets for, which requests get connected sockets first, which sockets belong
to which groups, connection limits per group, keeping track of and closing
idle sockets, and so on. Each ClientSocketPoolBase subclass has its own
ConnectJob type, which establishes a connection using the socket params,
before the pool hands out the connected socket.

## Socket Pool Layering

Some socket pools are layered on top of other socket pools. This is done when
a "socket" in a higher layer needs to establish a connection in a lower layer
pool and then take ownership of it as part of its connection process. See
later sections for examples. There are a couple of additional complexities
here.

From the perspective of the lower layer pool, all of its sockets that a higher
layer pool owns are actively in use, even when the higher layer pool considers
them idle. As a result, when a lower layer pool is at its connection limit and
needs to make a new connection, it will ask any higher layer pools to close an
idle connection if they have one, so it can make a new connection.

Sockets in the higher layer pool must have their own distinct group name in the
lower layer pool as well. This is needed so that the lower layer pool won't,
for example, group SSL and HTTP connections to the same port together.
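
The idle-reclaim interaction can be modeled with a toy sketch (both classes
and their methods are invented; real pools track far more state):

```cpp
#include <cassert>
#include <vector>

// Toy higher-layer pool: sockets it considers idle still count as "in use"
// from the lower layer's perspective.
class HigherLayerPoolSketch {
 public:
  int idle = 0;
  // Closes one idle socket on the lower pool's behalf, if there is one.
  bool CloseOneIdleSocket() {
    if (idle == 0) return false;
    --idle;
    return true;
  }
};

// Toy lower-layer pool with a hard connection limit.
class LowerLayerPoolSketch {
 public:
  explicit LowerLayerPoolSketch(int limit) : limit_(limit) {}
  void AddHigherLayerPool(HigherLayerPoolSketch* pool) {
    higher_.push_back(pool);
  }
  // Tries to claim a connection slot; when full, asks higher-layer pools to
  // give up an idle socket so the slot can be reused.
  bool TryConnect() {
    if (in_use_ < limit_) {
      ++in_use_;
      return true;
    }
    for (HigherLayerPoolSketch* pool : higher_) {
      if (pool->CloseOneIdleSocket())
        return true;  // Freed slot is immediately reused.
    }
    return false;
  }

 private:
  int limit_;
  int in_use_ = 0;
  std::vector<HigherLayerPoolSketch*> higher_;
};
```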

## SSL

When an SSL connection is needed, the ClientSocketPoolManager assembles the
parameters needed both to connect the TCP socket and to establish an SSL
connection. It then passes them to the SSLClientSocketPool, which creates
an SSLConnectJob using them. The SSLConnectJob's first step is to call into
the TransportSocketPool to establish a TCP connection.

Once a connection is established by the lower layer pool, the SSLConnectJob
starts SSL negotiation. Once that's done, the SSL socket is passed back to
the HttpStreamFactoryImpl::Job that initiated the request, and things proceed
just as with HTTP. When complete, the socket is returned to the
SSLClientSocketPool.

## Proxy Discovery

The first step the HttpStreamFactoryImpl::Job performs, just before calling
into the ClientSocketPoolManager to create a socket, is to check with the
ProxyService to see if a proxy is needed for the URL it's been given. The
ClientSocketPoolManager then uses this information to find the correct proxy
socket pool to send the request to.

TODO(mmenke): Discuss proxy configurations, WPAD, tracing proxy resolver.

## Proxy Socket Pools

Each SOCKS or HTTP proxy has its own completely independent set of socket
pools. They have their own exclusive TransportSocketPool, their own
protocol-specific pool above it, and their own SSLSocketPool above that.
HTTPS proxies also have a second SSLSocketPool between the
HttpProxyClientSocketPool and the TransportSocketPool, since they can talk SSL
to both the proxy and the destination server, layered on top of each other.

## SPDY

Once an SSL connection is established, the HttpStreamFactoryImpl::Job checks if
SPDY was negotiated over the socket. If so, it creates a SpdySession using the
socket, and a SpdyHttpStream. The SpdyHttpStream will be passed to the
HttpNetworkTransaction, which drives the stream as usual.

The SpdySession will be shared with other Jobs connecting to the same server,
and future Jobs will find the SpdySession before they try to create a
connection. HttpServerProperties also tracks which servers supported SPDY when
we last talked to them. We only try to establish a single connection to
servers we think speak SPDY when multiple HttpStreamFactoryImpl::Jobs are
trying to connect to them, to avoid wasting resources.

## QUIC

HttpServerProperties also tracks which servers have advertised QUIC support in
the past. If a server has advertised QUIC support, a second
HttpStreamFactoryImpl::Job will be created for QUIC, and raced against the one
for the HTTP/HTTPS connection. Whichever connects first will be used.
Existing QUIC sessions will be reused if available.

TODO(mmenke): Discuss SPDY/QUIC proxies?

## Uploads

Upload data is passed to a URLRequest using the UploadDataStream class. Since
the over-the-wire format of uploads is determined by the HttpStream type, the
upload body is read from the stream and prepared to be sent over the wire by
the HttpStream classes (HttpBasicStream, SpdyHttpStream, QuicHttpStream).
UploadDataStreams have to be replayable, since redirects and retries may need
to re-upload data.

UploadDataStreams either have a length known in advance, or are "chunked".
The main implementation for the non-chunked case is ElementsUploadDataStream,
which consists of one or more UploadElementReaders, each of which contains a
fixed-size chunk of data, either in memory or in a file.

ChunkedUploadDataStream is the main implementation for the chunked case.
Chunked uploads are only used by Chrome internally, since many servers don't
support them, and the length is always known in advance for web-initiated
uploads. Chrome adds data bit by bit, and the HttpStream implementation
sends data as long as more data is needed and it has more data to send.
Because of the replayability requirement mentioned above, the entire content
of these chunked requests must be buffered in memory.
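
The buffering that makes a chunked stream replayable can be sketched with a
toy class (the class and method names are invented, not the real
ChunkedUploadDataStream API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy replayable chunked stream: every appended chunk stays buffered so the
// whole body can be resent after a redirect or retry.
class ChunkedUploadSketch {
 public:
  void AppendData(const std::string& data, bool is_last) {
    chunks_.push_back(data);
    done_ = done_ || is_last;
  }

  // Returns the next unsent chunk, or "" when caught up with the producer.
  std::string Read() {
    if (next_ >= chunks_.size()) return "";
    return chunks_[next_++];
  }

  bool AllSent() const { return done_ && next_ == chunks_.size(); }

  // Rewind for a retry; the buffered chunks make replay possible.
  void Reset() { next_ = 0; }

 private:
  std::vector<std::string> chunks_;
  size_t next_ = 0;
  bool done_ = false;
};
```

This is also why the entire body must stay in memory: `Reset()` only works if
nothing appended has been discarded.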

One weirdness is that reads from UploadDataStreams currently aren't allowed to
fail. If a read from a file fails, the contents of the file are replaced by
zeroes. Apparently this matched Firefox's behavior at the time of
implementation.

## Cookies

Cookies are added to a request by the URLRequestHttpJob, and saved at that
layer as well, once the response headers have been received. The CookieStore
(the implementation of which is called "CookieMonster") handles storage of
cookies, and can be used either as an in-memory store or with an on-disk
store, backed by a SQLitePersistentCookieStore.

The CookieStore is currently reference counted, and outlives the rest of the
network stack, which has led to some lifetime issues.

## Prioritization

URLRequests are assigned a priority on creation. It only comes into play in a
few places:

* DNS lookups are initiated based on the highest priority request for a lookup.
* Socket pools hand out and create sockets based on priority. However, idle
sockets will be assigned to low priority requests in preference to creating
new sockets for higher priority requests.
* SPDY and QUIC both support sending priorities over the wire.

At the socket pool layer, sockets are only assigned to socket requests once the
socket is connected and SSL is negotiated, if needed. This is done so that if
a higher priority request for a group reaches the socket pool before a
connection is established, the first usable connection goes to the highest
priority socket request.
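
That late-binding behavior can be modeled with a toy pending-request queue
(the class is hypothetical; a real pool also tracks groups and limits):

```cpp
#include <assert.h>

#include <queue>
#include <string>
#include <utility>

// Toy model: requests wait in a priority queue, and a freshly connected
// socket goes to the highest-priority waiter - even one that arrived after
// the connect job was started on behalf of a lower-priority request.
class PendingRequestsSketch {
 public:
  void Add(int priority, const std::string& name) {
    queue_.push({priority, name});
  }

  // Called when a connect job finishes: the highest-priority waiter wins.
  std::string AssignConnectedSocket() {
    std::string winner = queue_.top().second;
    queue_.pop();
    return winner;
  }

 private:
  // Max-heap on priority (pairs compare by first element, the priority).
  std::priority_queue<std::pair<int, std::string>> queue_;
};
```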

## ResourceScheduler

In addition to net's use of priorities, requests issued by other processes go
through the ResourceScheduler. The ResourceScheduler restricts the number of
low priority URLRequests for a given page that can be started at once, based
on the presence of higher priority requests. The idea is to reduce bandwidth
contention, and to reduce the chance of low priority resources, like images,
delaying high priority HTML, CSS, and blocking scripts, so the page is
displayable and interactive sooner, even if it's missing images and the like.
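
A minimal sketch of that throttling policy, assuming a deliberately simplified
rule (hold back all low-priority requests while any high-priority request is
in flight - Chrome's real policy is more nuanced, and the class is invented):

```cpp
#include <cassert>

// Toy scheduler: low-priority requests (e.g. images) are held back while
// high-priority requests (HTML, CSS, blocking scripts) are outstanding.
class SchedulerSketch {
 public:
  void OnHighPriorityStarted() { ++high_in_flight_; }
  void OnHighPriorityDone() { --high_in_flight_; }

  // A low-priority request may only start once no high-priority request is
  // in flight, reducing bandwidth contention for the critical resources.
  bool CanStartLowPriority() const { return high_in_flight_ == 0; }

 private:
  int high_in_flight_ = 0;
};
```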

## Non-HTTP Schemes

The URLRequestJobFactory has a ProtocolHandler for each supported scheme.
Non-HTTP URLRequests have their own ProtocolHandlers. Some are implemented in
net/ (like FTP, file, and data, though Blink handles some data URLs
internally), and others are implemented in content/ or chrome/ (like blob,
chrome, and chrome-extension).