19. Still
• Expensive to make a client-server connection (TCP
congestion control, a.k.a. TCP slow start)
• One connection at a time, responses strictly sequential
(keep-alive doesn't help with this)
• snippet from the RFC:
"server MUST send its responses to those
requests in the same order that the requests
were received"
• even with pipelining
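The in-order rule quoted above is exactly what makes pipelining block: even when two requests go out back-to-back, the second response always queues behind the first. A minimal sketch against a throwaway local server (the server and paths are illustrative, not from the talk):

```python
# Sketch: HTTP/1.1 pipelining over one socket. Both GETs are written
# before any response is read, but the server must answer them in the
# order received (head-of-line blocking).
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive

    def do_GET(self):
        body = self.path.encode()  # echo the path so we can check ordering
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
# Pipeline: both requests go out before any response comes back.
sock.sendall(b"GET /first HTTP/1.1\r\nHost: x\r\n\r\n"
             b"GET /second HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")

raw = b""
while chunk := sock.recv(4096):
    raw += chunk
sock.close()
server.shutdown()

# /first is always answered before /second, matching request order.
print(raw.index(b"/first") < raw.index(b"/second"))  # True
```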
22. HTTP - current state - tricks
• prefetch DNS
• work around limitations by enhancing policy within the browser
(increase the number of connections per domain)
o raise the HTTP limit on simultaneous connections per host
in successive browser versions (2, 4, 6 ...)
• connection warming (open the connection before the request
is sent, to take the handshake out of the way)
• server push - comet, channels, and websockets are long-lived
GETs (a lot of overhead)
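The connection-warming trick above can be sketched in a few lines: open the socket early, so the later request pays no handshake latency. The local stand-in server exists only to keep the example self-contained; the names are mine, not a browser's internals.

```python
import socket
import threading

# Throwaway local server standing in for the origin (assumption:
# only here so the sketch runs end-to-end).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()

def serve():
    conn, _ = srv.accept()
    conn.recv(1024)  # read the request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()

threading.Thread(target=serve, daemon=True).start()

# Warm the connection early, e.g. while the page is still being parsed:
warmed = socket.create_connection(srv.getsockname())

# ... later, the real request pays no connection-setup cost:
warmed.sendall(b"GET / HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")
status = warmed.recv(1024).split(b"\r\n")[0]
warmed.close()
print(status)  # b'HTTP/1.1 200 OK'
```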
23. Solutions (kinda)
• pipelining (doesn't really work)
o all browsers turn it off by default (except Opera)
o hard to debug
o the RFC still mandates in-order responses
• connection sharding (to get more parallel connections out of
the browser)
o img1.groupon.com, img2.groupon.com
• inlining CSS and JS / CSS sprites
• embedding images as data URLs
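The data-URL trick above, sketched in Python: the image bytes travel inside the HTML, so no extra request is made. The 1x1 GIF below is a commonly used minimal image, included purely as sample bytes.

```python
import base64

# A 1x1 GIF (a standard minimal image, used here only as example data).
gif = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
       b"!\xf9\x04\x00\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
       b"\x00\x00\x02\x02D\x01\x00;")

# Embed it directly in the markup instead of referencing a URL:
data_url = "data:image/gif;base64," + base64.b64encode(gif).decode("ascii")
img_tag = f'<img src="{data_url}">'

print(data_url[:30])  # data:image/gif;base64,R0lGODlh
```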
29. SPDY features
• each request is assigned a stream ID
• mandatory compression: header compression is mandatory
o most headers do not change, so they are predefined
and mapped in the browser - only 3 bytes are sent
• connection multiplexing
o fewer connections
o fewer packets
• uses SSL - finally a secure internet (remember Firesheep)
• responses arrive in whichever order makes sense for the server
• server push (server-initiated streams get even IDs, client
requests get odd IDs)
• prioritized requests (stylesheets, viewable images ...)
• non-idempotent requests supported
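The odd/even stream-ID split mentioned above can be illustrated with a tiny counter; the class and method names are mine, not from any SPDY library.

```python
# SPDY stream-ID assignment: client-initiated streams use odd IDs,
# server-initiated (push) streams use even IDs, each side counting up.
class StreamIds:
    def __init__(self):
        self.next_client = 1   # client streams: 1, 3, 5, ...
        self.next_server = 2   # server push:    2, 4, 6, ...

    def new_client_stream(self):
        sid, self.next_client = self.next_client, self.next_client + 2
        return sid

    def new_server_stream(self):
        sid, self.next_server = self.next_server, self.next_server + 2
        return sid

ids = StreamIds()
client = [ids.new_client_stream() for _ in range(3)]
pushed = [ids.new_server_stream() for _ in range(2)]
print(client, pushed)  # [1, 3, 5] [2, 4]
```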
30. SPDY features cont.
• domain desharding by default
o one connection per domain, so the combined cwnd starts
6 times smaller than with HTTP's sharded connections
o no extra DNS lookups
• reduces the total number of packets
• reduces the number of round trips
• on a closed connection SPDY sends a full report (GOAWAY
frame) listing which requests have been processed
• increased bandwidth utilization, but the speed of light did not
change with SPDY (page load time tracks RTT, not
bandwidth)
• OVERALL: send fewer bytes using fewer connections
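Back-of-the-envelope numbers for the cwnd point above. The initial window of 10 segments (RFC 6928) and a 1460-byte MSS are my assumptions, not figures from the talk:

```python
# Six sharded HTTP connections each get their own initial congestion
# window, so combined they start 6x larger than SPDY's one connection.
INITIAL_CWND_SEGMENTS = 10  # RFC 6928 value (assumption)
MSS = 1460                  # bytes per segment (assumption)

def first_rtt_bytes(connections):
    """Bytes the server can send in the first round trip."""
    return connections * INITIAL_CWND_SEGMENTS * MSS

http_sharded = first_rtt_bytes(6)   # six parallel HTTP connections
spdy_single = first_rtt_bytes(1)    # one multiplexed SPDY connection
print(http_sharded // spdy_single)  # 6
```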
32. Why SSL? part 2
• Meanwhile, the WebSockets team measured packet loss for a
non-HTTP protocol over the following ports:
o port 80: 65% success rate ("transparent" proxies - they
do mess with our data)
o port 6198: 86% success rate
o port 443: 95% success rate (encrypted data goes through
firewalls)
• over 443, data loss is pretty low
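The port experiment above boils down to a reachability probe per port. A minimal sketch of such a probe; the local sockets only make the example self-contained, whereas the real measurement ran against remote hosts:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

# A listening socket stands in for a reachable port:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
open_port = srv.getsockname()[1]

# A port we bound and released stands in for an unreachable one:
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()

print(can_connect("127.0.0.1", open_port))    # True
print(can_connect("127.0.0.1", closed_port))  # False
```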
33. Why SSL? part 3
• SSL is sloooow
• Really? CPUs are much faster now
• TLS increases the payload size by up to 2-3%, but SPDY
compresses headers
• real-world data from Google:
o spdy-no-ssl vs http: at 1-2% packet loss, a 41-47% speed
increase
o spdy vs http: ~40% improvement
o spdy vs https: 15.3% improvement
41. SPDY criticism, more
• Next Protocol Negotiation
o http://tools.ietf.org/html/draft-agl-tls-nextprotoneg-00.html
o extension to SSL - OK
o part of the SSL handshake, so no additional RTs - OK
o but SSL itself initially requires more RTs
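NPN was later superseded by ALPN, but the mechanics are the same: the application protocol is agreed during the TLS handshake itself, so negotiation costs no extra round trips. Python's ssl module exposes the ALPN variant; the protocol tokens below are the ones a SPDY-era client would have offered:

```python
import ssl

ctx = ssl.create_default_context()
# Offer SPDY first and fall back to plain HTTP/1.1; the server picks
# one during the handshake, so no additional round trips are spent.
ctx.set_alpn_protocols(["spdy/3", "http/1.1"])

# After wrap_socket(...).do_handshake(), the chosen protocol would be
# read back with SSLSocket.selected_alpn_protocol().
print(isinstance(ctx, ssl.SSLContext))  # True
```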
- A new TCP connection is more expensive than an established one.
- TCP congestion control prevents the internet from collapsing: it
starts with small windows and increases them on each round trip,
which also helps determine the bandwidth of the network.
- 4 subdomains x 6 connections per domain = 24 simultaneous
connections - 24 vs 100.
- Connection warming: what if we never end up using the
pre-warmed connection?
- There is a lot of overhead in the HTTP world.
- Silently implemented (oops, I did it again).
- The TCP layer could have been extended, but the application
layer's problems were too severe.
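The slow-start note above, in numbers: the window starts small and doubles each round trip until the data is delivered. A quick model; the initial cwnd of 4 segments is an assumption about older kernels (RFC 6928 later raised it to 10):

```python
def rtts_to_send(total_segments, initial_cwnd=4):
    """Round trips needed to deliver total_segments, doubling cwnd each RTT."""
    sent, cwnd, rtts = 0, initial_cwnd, 0
    while sent < total_segments:
        sent += cwnd   # send a full window this round trip
        cwnd *= 2      # slow start: window doubles per RTT
        rtts += 1
    return rtts

# A 64 KB response is ~45 segments of 1460 bytes:
print(rtts_to_send(45))  # 4 RTTs: 4 + 8 + 16 + 32 segments
```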