Synergy SYN304:
NetScaler TCP Performance Tuning in the AOL Network

Presenters:
Kevin Mason - kem@aol.net
Tim Wicinski - tjw@aol.net
May 11, 2012

Wednesday, May 23, 12
Agenda
  Goals
  Information on the AOL Production Network
  TCP Tuning Concepts
  TCP Tuning on the NetScaler
  AOL tcpProfiles
  Case Study: bps Improvements
  Case Study: Retransmission Reduction
  Case Study: Engadget Coverage of New iPad Announcement
  Troubleshooting and Monitoring NetScalers
  Appendixes
Note on Today's Statistics
All the AOL statistics presented today are operational numbers. They should not be construed to represent page views or other types of site popularity data.
Performance Tuning Goal
Create the best possible experience for AOL customers by continuously analyzing and engineering the fastest, most efficient network possible.
Definitions
Just to make sure we are on the same page:
  VIP - Load Balancing (LB) and Content Switching (CS) VServers
  Service - Backend connectivity to real hosts
  Outbound - Traffic outbound from the VIP or Service
  Inbound - Traffic inbound towards the VIP or Service
AOL Production Network Stats
Load Balancing
  70 HA pairs of NetScalers
    10G, multi-interface attached
  19,642 VServers
  27,214 Services (Host + Port)
  13,462 Servers, majority cloud based

GSLB/DNS
  Approx. 18K DNS domains
  Approx. 4K GSLB configs
  180M queries/day
AOL Production Network
Internal Tooling
Web-based self-service site managing all changes, using a mixture of SOAP and NITRO calls:
  Avg 250 self-service changes / week.
  Avg 25K config command changes / week.
TCP Tuning Concepts
  Have a solid, technical understanding of the client's connection method.
  Get the bits to the customer quickly and cleanly.
  Where possible, send a large initial burst of packets.
    Ramp traffic up fast.
    Recover errors quickly, avoid slow start.

Melt fiber, smoke servers, choke routers
TCP Tuning Concepts
TCP tuning is done per connection, not on the aggregate of the link.

Tuning factors include:
  Latency or Round Trip Time (RTT)
  Minimum bandwidth
  Packet loss
  Supported TCP options
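These factors combine in the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a link full at a given RTT. A minimal sketch, using as assumptions the per-flow numbers the standard client profile targets later (10 Mb/s, 75 ms):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_seconds / 8  # divide by 8: bits -> bytes

# Assumed per-flow numbers from the standard client profile: 10 Mb/s, 75 ms RTT.
print(bdp_bytes(10_000_000, 0.075))  # 93750.0 bytes -- more than an unscaled 64 KB window
```

Since this BDP already exceeds the 65,535-byte ceiling of an unscaled TCP window, latency and bandwidth together dictate which options (like window scaling) a tuned profile needs.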
TCP Connection Options
  TCP negotiates options to define how the connection operates.
  TCP options are generally symmetrical; both sides have to support an option, or the option is dropped.
  TCP option values are NOT always the same on both sides. For example, receive windows, window scaling, and MSS (Maximum Segment Size) are often different.
!!! Caveat Emptor !!!
While we are happy to share AOL experience and configurations, be careful applying these settings in your network; they might cause problems.

When working with any parameters, have a rollback plan!

O'Toole's Comment on Murphy's Law: Murphy is an Optimist.
TCP Tuning on the NetScaler
Two major tuning knobs on the NetScaler:
  tcpParam - Global TCP settings
  tcpProfile - Per-VIP or per-Service settings

We set the tcpParam for general-purpose values, but since it is an OS-supplied setting, it could be overwritten by an upgrade. Using custom tcpProfiles is safer since they will not be overwritten.
tcpProfile Values
  -WS            -maxPktPerMss    -nagle
  -WSVal         -pktPerRetx      -ackOnPush
  -maxBurst      -minRTO          -mss (v9.3)
  -initialCwnd   -slowStartIncr   -bufferSize (v9.3)
  -delayedAck    -SACK
  -oooQSize

If you do nothing else:
  Enable -WS (Window Scale)
  Enable -SACK (Selective Ack)
TCP Window Scale (RFC 1323)
Window Scale is a TCP option to increase the receiver's window size, compensating for long and/or fat networks (LFNs). The values used by each device in a connection are often asymmetrical.
  Enabled on the NetScaler with the -WS option.
  The value advertised by the NetScaler is set with -WSVal.
  At the least, enable -WS to take advantage of the client- and server-advertised windows.
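The arithmetic behind the option is simple: the 16-bit advertised window is left-shifted by the negotiated scale factor. A sketch (the shift values here are illustrative, not AOL settings):

```python
def effective_window(advertised_window, wscale_shift):
    """RFC 1323 window scaling: real window = 16-bit advertised window << shift."""
    return advertised_window << wscale_shift

print(effective_window(65535, 0))  # 65535   -- no scaling: the classic 64 KB ceiling
print(effective_window(65535, 5))  # 2097120 -- a shift of 5 allows ~2 MB in flight
```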
TCP SACK (RFC 2018)
Selective Acknowledgement, or SACK, is a TCP option enabling a receiver to tell the sender the ranges of non-contiguous packets received.
  Without SACK, the receiver can only tell the sender about packets sequentially received.
    This slows down recovery.
    Can force congestion control to kick in.
  Enabled with the -SACK option.
Citrix default tcpProfile
Settings appropriate for circa 1999. They have stayed this way due to Citrix's general policy of not changing default settings. Some significant changes were made to the defaults in v9.2; see CTX130962 (10/7/2011).
  No Window Scaling or SACK.
    Causes choppy data flow.
    Slow to recover from packet loss.
    Can trigger slow-start congestion control.
  Very slow congestion window ramp-up.
  Small -initialCwnd & -maxBurst.
    Search for "Initial Congestion Window" for more details.
Additional Citrix-supplied tcpProfiles
Citrix supplies several predefined profiles. Depending on code level, Citrix supplies at least 7 additional tcpProfiles for you to try, and they may work well for you. Since they are supplied with the O/S, they are subject to change in the future.
We studied these tcpProfiles to develop a starting point for our testing.
AOL Custom tcpProfiles
AOL Custom tcpProfiles
VIP Profiles
  aol_vip_std_tcpprofile
  aol_vip_mobile-client_tcpprofile
  aol_vip_server-2-vserver_tcpprofile
  aol_vip_dialup-client_tcpprofile

Service Profiles
  aol_svc_std_tcpprofile

Complete configs included in Appendix
Standard Client to VIP - aol_vip_std_tcpprofile
Push data as fast as possible after the connection is established by increasing the packet burst and ramp rate, and improve error recovery.
Assumptions:
  Content generally flows outbound to the client.
  The max bps per flow is 10 Mb/s.
  RTT is 75 ms.
Changes from default:
  Enable -WS (Window Scaling)
  Enable -SACK
  Increase -maxBurst
  Increase -initialCwnd
  Increase -slowStartIncr
  Increase -pktPerRetx
  Reduce -delayedAck
Mobile Client to VIP - aol_vip_mobile-client_tcpprofile
Based on aol_vip_std_tcpprofile & aol_vip_dialup-client_tcpprofile, with changes to improve the mobile (3G, 4G) client experience by increasing tolerance for RTT shifts as the device moves. This reduces retransmissions that can flood a device.
Assumptions:
  RTT can shift from 300 ms to 900 ms.
  bps is limited to < 3 Mb/s.
Changes from aol_vip_std_tcpprofile:
  Increase -minRTO
  Reduce -delayedAck
  Reduce -maxBurst
  Reduce -slowStartIncr
Results:
  Avg 30% reduction in retransmissions to mobile clients.
Server to VIP - aol_vip_server-2-vserver_tcpprofile
Based on aol_vip_std_tcpprofile, for VIPs handling internal server flows where the data is inbound to the vserver.
Assumptions:
  The source host is in a data center or other attached space.
  The maximum bps per flow is 600 Mb/s inbound to a VIP or vserver.
  The max propagation delay (RTT) is 20 ms (0.02 sec).
Changes from aol_vip_std_tcpprofile:
  Increase -WSVal
  Reduce -minRTO
Dial Client to VIP - aol_vip_dialup-client_tcpprofile
Based on aol_vip_std_tcpprofile, with changes to improve the dial client experience by reducing retransmissions and preventing terminal server flooding.
Assumptions:
  Max bps per flow is 50 kb/s.
  Avg RTT is 500 ms (0.5 sec).
Changes from aol_vip_std_tcpprofile:
  Increase -minRTO
  Reduce -delayedAck
  Reduce -maxBurst
  Reduce -slowStartIncr
Results:
  Total page render time reduced from ~35 s to ~16 s.
  SSL handshake times reduced from 4+ s to ~1 s.
Server to Service - aol_svc_std_tcpprofile
Used on Services for traffic between the real host & NetScaler. Even though the propagation delay is low, BDP is still a factor due to per-connection bps.
Assumptions:
  Max bps per flow is 650 Mb/s (81.25 MB/s).
  Avg RTT is 10 ms (0.010 sec).
  Majority of flows are HTTP 1.1 or long-lived TCP connections.
Changes from default:
  Enable -SACK
  Enable -WS
  Increase -WSVal
  Reduce -delayedAck
  Reduce -minRTO
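The BDP claim above can be checked directly from this slide's assumptions: at 650 Mb/s and 10 ms, far more than an unscaled 64 KB window must be in flight, so window scaling matters even inside the data center. A sketch:

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_seconds / 8

# Numbers taken from this slide: 650 Mb/s per flow, 10 ms average RTT.
bdp = bdp_bytes(650_000_000, 0.010)
print(bdp)           # 812500.0 bytes
print(bdp > 65535)   # True: an unscaled 64 KB window cannot keep this pipe full
```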
Case Study: Bits per Second (bps) Improvements
Case Study: bps Improvements
Problem:
Need to improve the customer experience and increase efficiency.
  Analysis had shown more than 99% of incoming SYNs support Window Scaling and/or SACK, yet the NetScaler was not utilizing these options.
  Analysis also showed a large number of sessions advertising zero windows.
  Sessions were being dropped into slow start due to packet loss.
Case Study: bps Improvements
What did we change?
  -WS Enabled
  -SACK Enabled
  -WSVal 0
  -maxBurst 10
  -initialCwnd 10
  -delayedAck 100
  -pktPerRetx 2
  -slowStartIncr 4

This became the basis for the aol_vip_std_tcpprofile.
Case Study: bps Improvements
Case Study: bps Improvements
Results:
  Immediate 30% jump in outbound bps towards clients.
  Retransmission rate did not increase.
  The increase in bps was sustained over the following weeks.
  Allowed higher server utilization and higher NetScaler efficiency (more bang for the $$).
Case Study: Retransmission Reduction
Case Study: Retransmission Reduction
Problem:
  While developing the mobile and dial tcpProfiles, we realized there were a large number of fast retransmissions on the VServers.
  At times, retransmissions approached 15% of outbound traffic.
  This retransmission rate was sustained around the clock, in 3 data centers, eliminating network congestion as a cause.
Case Study: Retransmission Reduction
Testing:
To reduce variables, we first tested on one pair of shared-service NetScalers.

We first tried reducing -delayedAck to 100 ms:
  No change was seen in the traffic over 24 hours.

Next we tried increasing -minRTO:
  Instant drops in the number of fast retransmissions, 1st-7th retransmissions, and RetransmissionGiveup.
Case Study: Retransmission Reduction
Case Study: Retransmission Reduction
Change details:
  Increasing -minRTO from 100 ms to 400 ms:
    Retransmitted packets dropped from 14.2k/sec to 2.5k/sec.
    bps dropped from 102 Mb/sec to 30 kb/sec on one link.
  Increasing -minRTO from 100 ms to 300 ms had the greatest effect on reducing retransmissions.
    When applied across AOL, total outbound bps dropped by 10%.
  However, increasing -minRTO to 500 ms caused outbound bps to drop.
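Why raising -minRTO helps: the sender's retransmission timeout is computed from smoothed RTT and RTT variance, then floored at the configured minimum, so a higher floor keeps RTT spikes from firing spurious retransmits. A rough sketch in the style of RFC 6298 (the sample RTT figures are illustrative, not measured AOL values):

```python
def rto_ms(srtt_ms, rttvar_ms, min_rto_ms):
    """RFC 6298-style retransmission timeout, floored at the configured minimum."""
    return max(min_rto_ms, srtt_ms + 4 * rttvar_ms)

# Illustrative path: smoothed RTT 80 ms, variance 10 ms.
print(rto_ms(80, 10, 100))  # 120 -- a 100 ms floor lets an RTT spike trigger retransmits
print(rto_ms(80, 10, 400))  # 400 -- a 400 ms floor waits out the spike
```

The trade-off seen above follows directly: too low a floor retransmits data the client will still acknowledge, while too high a floor (500 ms) delays genuine loss recovery and costs throughput.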
Case Study: Engadget Coverage of New iPad Announcement
Case Study: Engadget Coverage of New iPad Announcement
On March 7, 2012 Apple held a keynote event to announce the new iPad.
Traditionally, these events crush sites due to user traffic.
The iPhone 4S event in September 2011 overloaded Engadget, and we decided that it would not happen again.
We used all the performance information presented here, plus other lessons learned, to build a plant that could handle the projected traffic.
Case Study: Engadget Coverage of New iPad Announcement
Goals:
  Engadget would not fail during future Apple announcements.
  Customers would be able to access Engadget Live Blogs for the entire event.
  No event-driven changes would be needed to the plant.
Case Study: Engadget Coverage of New iPad Announcement
Major Concerns:
  Keep the NetScaler CPU < 50%.
  Distribute the large initial connections-per-second load.
  Prepare for competing sites to fail and those users to shift to Engadget.
  Initially overbuild to guarantee service, and measure the true capacity requirements.
Case Study: Engadget Coverage of New iPad Announcement
We succeeded!

Gigaom.com, 3/8/2012:
"Engadget's Tim Stevens was one of the few tech media editors who only had to worry about covering the event itself, as opposed to managing the failure of his tools."

http://gigaom.com/apple/live-from-sf-sorta-why-apple-events-break-publishers/
Case Study: Engadget Coverage of New iPad Announcement
Case Study: Engadget Coverage of New iPad Announcement
Event Details:
  The event ran 120 minutes, including pre- and post-blogs.
  3-second updates were served non-stop to the customers.
  A total of 1.14 billion HTTP requests were served during the event.
  Sustained > 200K HTTP requests/sec for 30 minutes.
  Peaked > 220K HTTP requests/sec over 10 min.
  No changes were needed to any device during the event.
Case Study: Engadget Coverage of New iPad Announcement
How we did it:
  Compression bypass set for 20%.
  Integrated Caching.
  Flash Cache (1 sec).
  14 non-dedicated NetScaler HA pairs to minimize connection rate, located in 5 data centers in the US and EU.
  AOL custom tcpProfiles.
  600 cloud-based, well-tuned, virtual CentOS servers.

This was done while maintaining normal AOL production services on the shared infrastructure!
Troubleshooting & Monitoring
Troubleshooting: General
Signs your NetScaler is getting overloaded:
  Watch for xoff frames on the switch port; this indicates the NS is having trouble handling packets.
  Watch for buffer overruns; these may indicate the need for additional interfaces to expand the buffer pool.

nstrace.sh issues:
  Not always fully capturing packets.
  Relative frame timestamps may have a negative value due to the capture mechanism.
  Additional threads were added in 10.x, which should help.
Monitoring: SNMP OIDs
In addition to the usual OIDs, we have found these very useful to warn of potential problems:
  ifTotXoffSent - .1.3.6.1.4.1.5951.4.1.1.54.1.43
  ifnicTxStalls - .1.3.6.1.4.1.5951.4.1.1.54.1.45
  ifErrRxNoBuffs - .1.3.6.1.4.1.5951.4.1.1.54.1.30
  ifErrTxNoNSB - .1.3.6.1.4.1.5951.4.1.1.54.1.31
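These OIDs are monotonically increasing counters, so the useful signal is the rate between two polls, with counter wrap-around handled. A minimal sketch (the sample values and 60 s polling interval are illustrative):

```python
def counter_rate(prev, curr, interval_s, modulus=2**32):
    """Per-second rate from two samples of a wrapping SNMP Counter32."""
    return ((curr - prev) % modulus) / interval_s

# Two hypothetical polls of ifTotXoffSent, taken 60 s apart.
print(counter_rate(1_000, 1_600, 60))  # 10.0 xoff frames/sec -- worth alerting on
```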
Future Work
What our team is working on over the next 12 months:
  IPFIX and AppFlow monitoring.
  Adding NetScaler interfaces to increase the buffer pool.
  Custom tcpProfiles for specific applications like ad serving.
  Moving to a route-based, active/active architecture.
  Starting to roll out Jumbo Frames & WebSockets.
  Investigating OpenFlow/SDN.
  Replacing current tools with a completely new tool set based on Trigger (to be open-sourced).
AOL is Hiring!
As you have seen, we are NOT just your parent's dialup anymore.
http://corp.aol.com/careers
We value your feedback!
Take a survey of this session now in the mobile app:
  • Click 'Sessions' button
  • Click on today's tab
  • Find this session
  • Click 'Surveys'

#CitrixSynergy
Appendix: AOL custom tcpProfile configs

add ns tcpProfile aol_vip_std_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle DISABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 400 -slowStartIncr 4 -bufferSize 8190

add ns tcpProfile aol_vip_dialup-client_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle ENABLED -ackOnPush ENABLED -maxBurst 4 -initialCwnd 4 -delayedAck 50 -oooQSize 100 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 500 -slowStartIncr 2 -bufferSize 8190

add ns tcpProfile aol_vip_mobile-client_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle ENABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 6 -delayedAck 50 -oooQSize 100 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 1000 -slowStartIncr 4 -bufferSize 8190

add ns tcpProfile aol_vip_server-2-vserver_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 5 -nagle ENABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 4 -bufferSize 8190

add ns tcpProfile aol_svc_std_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 3 -nagle DISABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 4 -bufferSize 8190
Appendix: AOL custom tcpParam Config

set ns tcpParam -WS Enabled -WSVal 2 -SACK Enabled -maxBurst 6 -initialCwnd 4 -delayedAck 100 -downStateRST DISABLED -nagle DISABLED -limitedPersist ENABLED -oooQSize 64 -ackOnPush ENABLED -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 2
Appendix: NSCLI Show Commands

tcpParam NSCLI Commands
Show all current values:
         •sh ns tcpparam -format old -level verbose
         •sh ns tcpparam -level verbose



tcpProfile NSCLI Commands
Show all current values:
         •sh ns tcpprofile -format old -level verbose
         •sh ns tcpprofile <profile_name> -level verbose




Appendix: tcpParam NSCLI Commands

Set AOL custom tcpParam:
              set ns tcpParam -WS Enabled -WSVal 2 -SACK Enabled -maxBurst 6 -initialCwnd 6
              -delayedAck 100 -downStateRST DISABLED -nagle DISABLED -limitedPersist ENABLED
              -oooQSize 64 -ackOnPush ENABLED -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100
              -slowStartIncr 2

Set a specific value:
              set ns tcpParam -WS enabled




     Changing tcpParam modifies the same values in the default tcpProfile.




Appendix: tcpProfile NSCLI Commands

Creating new profiles:
                add ns tcpProfile aol_vip_std_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle DISABLED
                -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0
                -pktPerRetx 2 -minRTO 400 -slowStartIncr 4 -bufferSize 8190 -mss 0


Changing a value:
                set ns tcpProfile aol_vip_std_tcpprofile -WSVal 3


Applying Profiles:
                 set    cs vserver <vservername:port> -tcpprofile <profilename>
                 set    cs vserver www_aol_com:80 -tcpprofile aol_vip_std_tcpprofile
                 set    lb vserver <vservername:port> -tcpprofile <profilename>
                 set    service <servicename:port> -tcpprofile <profilename>


Resetting to Default:
                 unset cs vserver <vservername:port> -tcpprofile
                 unset lb vserver <vservername:port> -tcpprofile
                 unset service <servicename:port> -tcpprofile
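
Applying a profile across many vservers is easy to script. The helper below is a hypothetical sketch that just generates the NSCLI lines shown above for a batch of vservers (the vserver names are illustrative, not from the AOL config):

```python
# Hypothetical helper: emit the "set cs vserver ... -tcpprofile ..."
# NSCLI lines shown above for a list of vservers. Names are examples only.
def apply_profile_cmds(vservers, profile):
    return [f"set cs vserver {v} -tcpprofile {profile}" for v in vservers]

cmds = apply_profile_cmds(["www_aol_com:80", "mail_aol_com:80"],
                          "aol_vip_std_tcpprofile")
for c in cmds:
    print(c)
```

The generated lines can then be pasted into an NSCLI session or fed to a batch file.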




Explanation of tcpProfile variables (v9.2 & 9.3)

-name: The name for a TCP profile. A TCP profile name can be from 1 to 127 characters and must begin with a letter, a number, or the underscore symbol (_). Other characters allowed after the first character are the hyphen, period, pound sign, space, at sign, and equals sign.

-WS (default: Disabled): Enable or disable window scaling. If Disabled, window scaling is disabled for both sides of the conversation.

-WSVal (default: 4, range 0-8): The factor used to calculate the new window size.

-maxBurst (default: 6, range 1-255): The maximum number of TCP segments allowed in a burst. The higher this value, the more frames can be sent at one time.

-initialCwnd (default: 4, range 2-44): The initial upper limit on the number of TCP packets that can be outstanding on the TCP link to the server. As of 9.2.50.1, the maximum was raised from 6 to 44.

-delayedAck (default: 100, range 10-300): The timeout for TCP delayed ACK, in milliseconds.

-oooQSize (default: 64, range 0-65535): The maximum size of the out-of-order packet queue; a value of 0 means infinite. The name is a misnomer: this buffer holds sent frames awaiting ACKs and received frames that are not sequential, meaning some packets are missing per SACK.

-maxPktPerMss (default: 0, range 0-512): The maximum number of TCP packets allowed per maximum segment size (MSS). A value of 0 means that no maximum is set.

-pktPerRetx (default: 1, range 1-512): The maximum number of packets to retransmit on receiving a "partial ACK", that is, an ACK indicating that not all outstanding frames were acknowledged.

-minRTO (default: 100, range 10-64000): The minimum retransmission timeout (RTO), in milliseconds. The NetScaler supports New Reno and conforms to RFC 2001 and RFC 5827 (?). Since the NetScaler does not use TCP timestamping, these values do not correspond to actual propagation delays.

-slowStartIncr (default: 2, range 1-100): Multiplier determining the rate at which slow start increases the size of the TCP transmission window after each ACK of a successful transmission.

-SACK (default: Disabled): Enable or disable selective acknowledgement (SACK). There is no reason this should be off.

-nagle (default: Disabled): Enable or disable the Nagle algorithm on TCP connections. When enabled, it reduces the number of small segments by combining them into one packet. Its primary use is on slow or congested links such as mobile or dial-up.

-ackOnPush (default: Enabled): Send an immediate acknowledgement (ACK) on receipt of TCP packets with the PUSH bit set.

-mss (v9.3; default: 0, range 0-1460): Maximum segment size. The default value of 0 uses the global setting from "set ns tcpParam"; the maximum value is 1460.

-bufferSize (v9.3; default: 8190, max 4194304): The value that you set is the minimum value advertised by the NetScaler appliance, and this buffer size is reserved when a client initiates a connection associated with an endpoint-application function, such as compression or SSL. The managed application can request a larger buffer, but if it requests a smaller one, the request is not honored and the specified buffer size is used. If the TCP buffer size is set both at the global level and at the entity level (virtual server or service), the entity-level value takes precedence. If the buffer size specified for a service differs from the one specified for the virtual server it is bound to, the NetScaler uses the virtual server's value for the client-side connection and the service's value for the server-side connection; for optimum results, keep the two values the same. The buffer size you specify is used only when the connection is associated with endpoint-application functions such as SSL and compression. Note: a high TCP buffer value can limit the number of connections that can be made to the NetScaler appliance.
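
The -WS/-WSVal interaction is just the RFC 1323 shift: the advertised 16-bit window field is multiplied by 2^WSVal. A quick sketch of the resulting window ceilings:

```python
# RFC 1323 window scaling: the advertised 16-bit window is left-shifted
# by the negotiated scale factor, so WSVal bounds the usable window.
def effective_window(advertised, wsval):
    """Largest receive window (bytes) representable with this scale factor."""
    return advertised << wsval

# Ceilings for a few WSVal settings, using the 16-bit maximum (65535):
for wsval in (0, 2, 8):
    print(f"WSVal {wsval}: up to {effective_window(65535, wsval)} bytes")
```

Note that the AOL client-facing profiles above use -WSVal 0, which caps the advertised window at 65535 bytes.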

Artificial Intelligence: Facts and Myths
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 

NetScaler TCP Performance Tuning

  • 1. Synergy SYN304: NetScaler TCP Performance Tuning in the AOL Network
    Presenters: Kevin Mason - kem@aol.net, Tim Wicinski - tjw@aol.net
    May 11, 2012
    Wednesday, May 23, 12
  • 2. Agenda
    Goals
    Information on the AOL Production Network
    TCP Tuning Concepts
    TCP Tuning on the NetScaler
    AOL tcpProfiles
    Case Study: bps Improvements
    Case Study: Retransmission Reduction
    Case Study: Engadget Coverage of New iPad Announcement
    Troubleshooting and Monitoring NetScalers
    Appendixes
  • 3. Note on Today's Statistics
    All the AOL statistics presented today are operational numbers. They should not be construed to represent page views or other types of site popularity data.
  • 4. Performance Tuning Goal
    Create the best possible experience for AOL Customers by continuously analyzing and engineering the fastest, most efficient network possible.
  • 5. Definitions
    Just to make sure we are on the same page:
    VIP - Load Balancing (LB) and Content Switching (CS) VServers
    Service - Backend connectivity to Real Hosts
    Outbound - Traffic outbound from the VIP or Service
    Inbound - Traffic inbound towards the VIP or Service
  • 6. AOL Production Network Stats
    Load Balancing:
    70 HA pairs of NetScalers
    10G, multi-interface attached
    19,642 VServers
    27,214 Services (Host + Port)
    13,462 Servers, majority cloud based
    GSLB/DNS:
    Approx 18K DNS domains
    Approx 4K GSLB configs
    180M queries/day
  • 7. AOL Production Network
    Internal Tooling:
    Web based self-service site managing all changes, using a mixture of SOAP and NITRO calls.
    Avg 250 Self Service changes / week.
    Avg 25K config command changes / week.
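Config changes at that volume are typically scripted against NITRO, Citrix's REST API for the NetScaler. As a hedged sketch (the host name, credentials, and profile name below are hypothetical, and NITRO payload details vary by firmware version), a tcpProfile update request might be assembled like this:

```python
import json
import urllib.request

def build_nitro_update(ns_host, user, password, profile_name, changes):
    """Build (but do not send) a NITRO PUT request updating a tcpProfile.

    NITRO exposes config objects under /nitro/v1/config/<type>; a tcpProfile
    update is a PUT with a JSON body naming the profile and the changed fields.
    """
    url = f"https://{ns_host}/nitro/v1/config/nstcpprofile"
    body = json.dumps({"nstcpprofile": {"name": profile_name, **changes}})
    return urllib.request.Request(
        url,
        data=body.encode(),
        method="PUT",
        headers={
            "Content-Type": "application/json",
            # NITRO accepts credentials in these request headers.
            "X-NITRO-USER": user,
            "X-NITRO-PASS": password,
        },
    )

# Example: raise -minRTO on a hypothetical mobile profile.
req = build_nitro_update("ns01.example.net", "nsroot", "secret",
                         "aol_vip_mobile-client_tcpprofile", {"minrto": 1000})
print(req.get_full_url())
```

Sending it would just be `urllib.request.urlopen(req)`; building the request separately keeps the change auditable before it goes to the appliance.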
  • 8. TCP Tuning Concepts
    Have a solid, technical understanding of the client's connection method.
    Get the bits to the customer quickly and cleanly.
    Where possible, send a large initial burst of packets.
    Ramp traffic up fast.
    Recover errors quickly; avoid slow start.
    Melt fiber, smoke servers, choke routers.
  • 9. TCP Tuning Concepts
    TCP tuning is done per connection, not on the aggregate of the link.
    Tuning factors include:
    Latency or Round Trip Time (RTT)
    Minimum Bandwidth
    Packet Loss
    Supported TCP Options
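Bandwidth and RTT combine in the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a connection's pipe full, and therefore the receive window a tuned connection needs. A quick sketch, using the per-flow assumptions that appear later in the deck:

```python
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# Standard client profile assumptions: 10 mb/s per flow, 75 ms RTT.
print(bdp_bytes(10_000_000, 0.075))   # 93750 bytes -- already past the 64 KB unscaled window
# Server-to-vserver assumptions: 600 mb/s per flow, 20 ms RTT.
print(bdp_bytes(600_000_000, 0.020))  # 1500000 bytes -- needs aggressive window scaling
```

When the BDP exceeds the 65,535-byte unscaled TCP window, throughput is capped by the window rather than the link, which is why Window Scaling comes first in the tuning that follows.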
  • 10. TCP Connection Options
    TCP negotiates options to define how the connection operates.
    TCP options are generally symmetrical; both sides have to support an option, or the option is dropped.
    TCP option values are NOT always the same on both sides. For example, Receive Windows, Window Scaling, and MSS (Max Segment Size) are often different.
  • 11. !!! Caveat Emptor !!!
    While we are happy to share AOL experience and configurations, be careful applying these settings in your network; they might cause problems.
    When working with any parameters, have a rollback plan!
    O'Toole's Comment on Murphy's Law: Murphy is an Optimist.
  • 12. TCP Tuning on the NetScaler
    Two major tuning knobs on the NetScaler:
    tcpParam - Global TCP settings
    tcpProfile - Per VIP or Service settings
    We set the tcpParam for general purpose values, but since it is an OS-supplied setting, it could be overwritten by an upgrade. Using custom tcpProfiles is safer since they will not be overwritten.
  • 13. tcpProfile Values
    -WS, -WSVal, -maxBurst, -initialCwnd, -delayedAck, -oooQSize, -maxPktPerMss, -pktPerRetx, -minRTO, -slowStartIncr, -SACK, -nagle, -ackOnPush, -mss (v9.3), -bufferSize (v9.3)
    If you do nothing else:
    Enable -WS (Window Scale)
    Enable -SACK (Selective Ack)
  • 14. TCP Window Scale (RFC 1323)
    Window Scale is a TCP option to increase the receiver's window size, compensating for long and/or fat networks (LFN). The values used by each device in a connection are often asymmetrical.
    Enabled on the NetScaler with the -WS option.
    The value advertised by the NetScaler is set with -WSVal.
    At the least, enable -WS to take advantage of the client and server advertised windows.
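Mechanically, the scale factor is a left shift applied to the 16-bit window field, letting the effective window exceed the unscaled 65,535-byte ceiling. A minimal sketch of the arithmetic:

```python
MAX_UNSCALED_WINDOW = 65_535  # limit of the 16-bit TCP window field

def effective_window(window_field: int, wscale: int) -> int:
    """RFC 1323: the real window is the advertised field shifted left."""
    return window_field << wscale

def min_wscale(target_window: int) -> int:
    """Smallest scale factor whose maximum window covers target_window bytes."""
    scale = 0
    while effective_window(MAX_UNSCALED_WINDOW, scale) < target_window:
        scale += 1
    return scale

print(effective_window(65_535, 5))  # 2097120 bytes
print(min_wscale(1_500_000))        # 5 -- covers a 600 mb/s x 20 ms path
```

That last result is consistent with the -WSVal 5 used in the server-to-vserver profile later in the deck, where per-flow BDP reaches 1.5 MB.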
  • 15. TCP SACK (RFC 2018)
    Selective Acknowledgement, or SACK, is a TCP option enabling a receiver to tell the sender the ranges of non-contiguous packets received.
    Without SACK, the receiver can only tell the sender about packets sequentially received. This slows down recovery and can force congestion control to kick in.
    Enabled with the -SACK option.
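A receiver's SACK blocks describe the non-contiguous ranges it holds beyond the cumulative ACK, so the sender retransmits only the actual holes. A toy model of what the receiver reports (segment numbers stand in for byte ranges):

```python
def sack_report(received: set[int]) -> tuple[int, list[tuple[int, int]]]:
    """Return (cumulative ACK point, SACK blocks) for received segments.

    The cumulative ACK covers the longest run starting at 1; each SACK block
    is a contiguous run of segments received beyond that point.
    """
    ack = 0
    while ack + 1 in received:
        ack += 1
    blocks, start, prev = [], None, None
    for seg in sorted(s for s in received if s > ack):
        if start is None:
            start = prev = seg
        elif seg == prev + 1:
            prev = seg
        else:
            blocks.append((start, prev))
            start = prev = seg
    if start is not None:
        blocks.append((start, prev))
    return ack, blocks

# Segments 4 and 7 were lost in flight:
print(sack_report({1, 2, 3, 5, 6, 8}))  # (3, [(5, 6), (8, 8)])
```

Without SACK, this receiver could only report ACK 3, and the sender might resend 5, 6, and 8 as well as the two lost segments.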
  • 16. Citrix default tcpProfile
    Settings appropriate for circa 1999. They have stayed this way due to Citrix's general policy of not changing default settings. Some significant changes were made to the defaults in v9.2; see CTX130962 (10/7/2011).
    No Window Scaling or SACK:
    Causes choppy data flow.
    Slow to recover from packet loss.
    Can trigger slow start congestion control.
    Very slow congestion window ramp up.
    Small -initialCwnd & -maxBurst.
    Search for "Initial Congestion Window" for more details.
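The cost of a small initial congestion window is easy to see with the textbook slow-start model, where cwnd doubles each RTT until the response is delivered. (The NetScaler's -slowStartIncr alters the actual ramp, so treat this as an illustration rather than its exact behavior.)

```python
def rtts_to_send(segments: int, initial_cwnd: int) -> int:
    """Round trips needed to deliver `segments` under doubling slow start."""
    sent, cwnd, rtts = 0, initial_cwnd, 0
    while sent < segments:
        sent += cwnd   # one window's worth delivered this round trip
        cwnd *= 2      # classic slow-start doubling
        rtts += 1
    return rtts

# A 100-segment (~146 KB at a 1460-byte MSS) response:
print(rtts_to_send(100, 4))   # 5 RTTs with the old default-sized window
print(rtts_to_send(100, 10))  # 4 RTTs with -initialCwnd 10
```

At a 75 ms RTT, shaving one round trip off every fresh connection is 75 ms of page-load time saved before any other tuning applies.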
  • 17. Additional Citrix supplied tcpProfiles
    Citrix supplies several predefined profiles.
    Depending on code level, Citrix supplies at least 7 additional tcpProfiles to try, and they may work well for you. Since they are supplied with the O/S, they are subject to change in the future.
    We studied these tcpProfiles to develop a starting point for our testing.
  • 18. AOL Custom tcpProfiles
  • 19. AOL Custom tcpProfiles
    VIP Profiles:
    aol_vip_std_tcpprofile
    aol_vip_mobile-client_tcpprofile
    aol_vip_server-2-vserver_tcpprofile
    aol_vip_dialup-client_tcpprofile
    Service Profiles:
    aol_svc_std_tcpprofile
    Complete configs included in Appendix.
  • 20. Standard Client to VIP - aol_vip_std_tcpprofile
    Push data as fast as possible after the connection is established by increasing the packet burst and ramp rate, and improve error recovery.
    Assumptions:
    Content is generally outbound to the client.
    The max bps per flow is 10 mb/s.
    RTT is 75 ms.
    Changes from default:
    Enable -WS (Window Scaling); Enable -SACK; Increase -maxBurst; Increase -initialCwnd; Increase -slowStartIncr; Increase -pktPerRetx; Reduce -delayedAck.
  • 21. Mobile client to VIP - aol_vip_mobile-client_tcpprofile
    Based on aol_vip_std_tcpprofile & aol_vip_dialup-client_tcpprofile, with changes to improve the mobile (3G, 4G) client experience by increasing tolerance for RTT shifts as the device moves. This reduces retransmissions that can flood a device.
    Assumptions:
    RTT can shift from 300ms to 900ms.
    bps is limited to < 3mb/s.
    Changes from aol_vip_std_tcpprofile:
    Increase -minRTO; Reduce -maxBurst; Reduce -delayedAck; Reduce -slowStartIncr.
    Results:
    Avg 30% reduction in retransmissions to mobile clients.
  • 22. Server to VIP - aol_vip_server-2-vserver_tcpprofile
    Based on aol_vip_std_tcpprofile, for VIPs handling internal server flows where the data is inbound to the vserver.
    Assumptions:
    The source host is in a data center or other attached space.
    The maximum bps per flow is 600 mb/s inbound to a VIP or vserver.
    The max propagation delay (RTT) is 20 ms (0.02 sec).
    Changes from aol_vip_std_tcpprofile:
    Increase -WSVal; Reduce -minRTO.
  • 23. Dial Client to VIP - aol_vip_dialup-client_tcpprofile
    Based on aol_vip_std_tcpprofile, with changes to improve the dial client experience by reducing retransmissions and preventing terminal server flooding.
    Assumptions:
    Max bps per flow is 50 kb/s.
    Avg RTT is 500 ms (0.5 sec).
    Changes from aol_vip_std_tcpprofile:
    Increase -minRTO; Reduce -maxBurst; Reduce -delayedAck; Reduce -slowStartIncr.
    Results:
    Total page render time reduced from ~35s to ~16s.
    SSL handshake times reduced from 4+s to ~1s.
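The dial assumptions justify the small windows: at these rates the pipe holds only a couple of full-size packets, so larger bursts just queue at the terminal server and get retransmitted. A quick check of that reasoning (BDP divided by a standard 1460-byte MSS):

```python
def bdp_packets(bandwidth_bps: float, rtt_seconds: float, mss: int = 1460) -> float:
    """Full-size packets the pipe can hold: BDP in bytes over the MSS."""
    bytes_in_flight = bandwidth_bps / 8 * rtt_seconds
    return bytes_in_flight / mss

# Dial assumptions: 50 kb/s per flow, 500 ms RTT.
print(round(bdp_packets(50_000, 0.5), 1))  # 2.1 packets in flight fills the link
```

With only ~2 packets of headroom, the profile's reduced -maxBurst and -initialCwnd already exceed the pipe, and the longer -minRTO stops spurious retransmissions from compounding the queue.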
  • 24. Server to Service - aol_svc_std_tcpprofile
    Used on Services for traffic between the real host & NetScaler. Even though the propagation delay is low, BDP is still a factor due to per-connection bps.
    Assumptions:
    Max bps per flow is 650 mb/s (81.25 MB/s).
    Avg RTT is 10 ms (0.010 sec).
    Majority of flows are HTTP 1.1 or long-lived TCP connections.
    Changes from default:
    Enable -WS; Increase -WSVal; Enable -SACK; Reduce -delayedAck; Reduce -minRTO.
  • 25. Case Study: Bits per Second (bps) Improvements
  • 26. Case Study: bps Improvements
    Problem:
    Need to improve the customer experience and increase efficiency.
    Analysis had shown more than 99% of incoming SYNs support Window Scaling and/or SACK, yet the NetScaler was not utilizing these options.
    Analysis also showed a large number of sessions advertising Zero Windows.
    Sessions were being dropped into slow start due to packet loss.
  • 27. Case Study: bps Improvements
    What did we change?
    -WS ENABLED; -SACK ENABLED; -WSVal 0; -maxBurst 10; -initialCwnd 10; -delayedAck 100; -pktPerRetx 2; -slowStartIncr 4
    This became the basis for the aol_vip_std_tcpprofile.
  • 28. Case Study: bps Improvements
  • 29. Case Study: bps Improvements
    Results:
    Immediate 30% jump in outbound bps towards the client.
    Retransmission rate did not increase.
    The increase in bps was sustained over the following weeks.
    Allowed higher server utilization and higher NetScaler efficiency (more bang for $$).
  • 30. Case Study: Retransmission Reduction
  • 31. Case Study: Retransmission Reduction
    Problem:
    While developing the Mobile and Dial tcpProfiles, we realized there were a large number of fast retransmissions on the VServers.
    At times, retransmissions approached 15% of outbound traffic.
    This retransmission rate was sustained around the clock, in 3 data centers, eliminating network congestion as a cause.
  • 32. Case Study: Retransmission Reduction
    Testing:
    To reduce variables, we first tested on one pair of shared service NetScalers.
    We tried reducing -delayedAck to 100ms: no change was seen in the traffic over 24 hours.
    Next we tried increasing -minRTO: instant drops in the number of fast retransmissions, 1st-7th retransmissions, and RetransmissionGiveup.
  • 33. Case Study: Retransmission Reduction
  • 34. Case Study: Retransmission Reduction
    Change Details:
    Increasing -minRTO from 100ms to 400ms dropped retransmitted packets from 14.2k/sec to 2.5k/sec, and bps dropped from 102mb/sec to 30kb/sec on one link.
    Increasing -minRTO from 100ms to 300ms had the greatest effect on reducing retransmissions.
    When applied across AOL, total outbound bps dropped by 10%. However, increasing -minRTO to 500ms caused outbound bps to drop further.
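Expressing those counter deltas as a percentage makes the before/after comparison concrete (numbers taken from the slide above):

```python
def reduction_pct(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100

# Retransmitted packets: 14.2k/sec down to 2.5k/sec after raising -minRTO.
print(round(reduction_pct(14_200, 2_500)))  # 82 -- i.e. ~82% fewer retransmits
```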
  • 35. Case Study: Engadget Coverage of New iPad Announcement
  • 36. Case Study: Engadget Coverage of New iPad Announcement
    On March 7, 2012, Apple held a Keynote event to announce the new iPad. Traditionally, these events crush sites due to user traffic. The iPhone 4S event in September 2011 overloaded Engadget, and we decided that it would not happen again.
    We used all the performance information presented here, plus other lessons learned, to build a plant that could handle the projected traffic.
  • 37. Case Study: Engadget Coverage of New iPad Announcement
    Goals:
    Engadget would not fail during future Apple announcements.
    Customers would be able to access Engadget Live Blogs for the entire event.
    No event-driven changes would be needed to the plant.
  • 38. Case Study: Engadget Coverage of New iPad Announcement
    Major Concerns:
    Keep the NetScaler CPU < 50%.
    Distribute the large initial connections per second.
    Prepare for competing sites to fail and those users shifting to Engadget.
    Initially overbuild to guarantee service, and measure the true capacity requirements.
  • 39. Case Study: Engadget Coverage of New iPad Announcement
    We succeeded!
    Gigaom.com, 3/8/2012: "Engadget's Tim Stevens was one of the few tech media editors who only had to worry about covering the event itself, as opposed to managing the failure of his tools."
    http://gigaom.com/apple/live-from-sf-sorta-why-apple-events-break-publishers/
  • 40. Case Study: Engadget Coverage of New iPad Announcement
  • 41. Case Study: Engadget Coverage of New iPad Announcement
    Event Details:
    The event ran 120 minutes, including pre and post blogs.
    3-second updates were served non-stop to the customers.
    A total of 1.14 billion HTTP requests were served during the event.
    Sustained > 200K HTTP requests/sec for 30 minutes.
    Peak > 220K HTTP requests/sec over 10 min.
    No changes were needed to any device during the event.
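Those figures are internally consistent; averaging the total over the event window is a quick sanity check (no new data, just the slide's numbers):

```python
total_requests = 1.14e9      # HTTP requests served during the event
event_seconds = 120 * 60     # 120-minute event window

avg_rps = total_requests / event_seconds
print(round(avg_rps))  # 158333 -- average req/s across the whole event
```

An average near 158K req/s fits comfortably under a sustained 200K/s plateau with a >220K/s peak, since traffic ramps up and tails off around the keynote itself.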
  • 42. Case Study: Engadget Coverage of New iPad Announcement
    How we did it:
    Compression Bypass set for 20%.
    Integrated Caching.
    Flash Cache (1 sec).
    14 non-dedicated NetScaler HA pairs to minimize connection rate, located in 5 data centers in the US and EU.
    AOL custom tcpProfiles.
    600 cloud based, well tuned, virtual CentOS servers.
    This was done while maintaining normal AOL production services on the shared infrastructure!
  • 43. Troubleshooting & Monitoring
  • 44. Troubleshooting: General
    Signs your NetScaler is getting overloaded:
    Watch for xoff frames on the switch port; this indicates the NS is having trouble handling packets.
    Watch for buffer overruns; these may indicate the need for additional interfaces to expand the buffer pool.
    nstrace.sh issues:
    Not always fully capturing packets.
    Relative frame timestamps may have a negative value due to the capture mechanism.
    Additional threads were added in 10.x, which should help.
  • 45. Monitoring: SNMP OIDs
    In addition to the usual OIDs, we have found these very useful to warn of potential problems:
    ifTotXoffSent - .1.3.6.1.4.1.5951.4.1.1.54.1.43
    ifnicTxStalls - .1.3.6.1.4.1.5951.4.1.1.54.1.45
    ifErrRxNoBuffs - .1.3.6.1.4.1.5951.4.1.1.54.1.30
    ifErrTxNoNSB - .1.3.6.1.4.1.5951.4.1.1.54.1.31
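These sit under Citrix's enterprise tree (.1.3.6.1.4.1.5951). A hedged polling sketch that just assembles a net-snmp `snmpget` command line for them (the host and community string are placeholders; swap in your SNMP library of choice for real collection):

```python
# NetScaler NIC counters named on the slide, keyed by OID.
NS_NIC_OIDS = {
    "ifTotXoffSent":  ".1.3.6.1.4.1.5951.4.1.1.54.1.43",
    "ifnicTxStalls":  ".1.3.6.1.4.1.5951.4.1.1.54.1.45",
    "ifErrRxNoBuffs": ".1.3.6.1.4.1.5951.4.1.1.54.1.30",
    "ifErrTxNoNSB":   ".1.3.6.1.4.1.5951.4.1.1.54.1.31",
}

def snmpget_command(host: str, community: str, names: list[str]) -> list[str]:
    """Assemble an snmpget invocation for the chosen NetScaler counters."""
    return ["snmpget", "-v2c", "-c", community, host] + [
        NS_NIC_OIDS[n] for n in names
    ]

cmd = snmpget_command("ns01.example.net", "public",
                      ["ifTotXoffSent", "ifErrRxNoBuffs"])
print(" ".join(cmd))
```

Since these are counters, alert on the delta between polls, not the absolute value; a steadily incrementing ifTotXoffSent is the overload signal from the previous slide.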
  • 46. Future work
    What our team is working on over the next 12 months:
    IPFIX and AppFlow monitoring.
    Add NetScaler interfaces to increase the buffer pool.
    Custom tcpProfiles for specific applications like ad serving.
    Move to a route based, active/active architecture.
    Start to roll out Jumbo Frames & WebSockets.
    Investigating OpenFlow/SDN.
    Replace current tools with a completely new tool set based on Trigger (to be opensourced).
  • 47. AOL is Hiring!
    As you have seen, we are NOT just your parent's dialup anymore.
    http://corp.aol.com/careers
  • 48. We value your feedback! Take a survey of this session now in the mobile app:
    Click 'Sessions' button
    Click on today's tab
    Find this session
    Click 'Surveys'
    #CitrixSynergy
  • 49. Appendix: AOL custom tcpProfile configs
    add ns tcpProfile aol_vip_std_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle DISABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 400 -slowStartIncr 4 -bufferSize 8190
    add ns tcpProfile aol_vip_dialup-client_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle ENABLED -ackOnPush ENABLED -maxBurst 4 -initialCwnd 4 -delayedAck 50 -oooQSize 100 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 500 -slowStartIncr 2 -bufferSize 8190
    add ns tcpProfile aol_vip_mobile-client_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle ENABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 6 -delayedAck 50 -oooQSize 100 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 1000 -slowStartIncr 4 -bufferSize 8190
    add ns tcpProfile aol_vip_server-2-vserver_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 5 -nagle ENABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 4 -bufferSize 8190
    add ns tcpProfile aol_svc_std_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 3 -nagle DISABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 4 -bufferSize 8190
  • 50. Appendix: AOL custom tcpParam Config
    set ns tcpParam -WS ENABLED -WSVal 2 -SACK ENABLED -maxBurst 6 -initialCwnd 4 -delayedAck 100 -downStateRST DISABLED -nagle DISABLED -limitedPersist ENABLED -oooQSize 64 -ackOnPush ENABLED -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 2
  • 51. Appendix: NSCLI Show Commands
    tcpParam NSCLI commands (show all current values):
    sh ns tcpparam -format old -level verbose
    sh ns tcpparam -level verbose
    tcpProfile NSCLI commands (show all current values):
    sh ns tcpprofile -format old -level verbose
    sh ns tcpprofile <profile_name> -level verbose
  • 52. Appendix: tcpParam NSCLI Commands
    Set AOL custom tcpParam:
    set ns tcpParam -WS ENABLED -WSVal 2 -SACK ENABLED -maxBurst 6 -initialCwnd 6 -delayedAck 100 -downStateRST DISABLED -nagle DISABLED -limitedPersist ENABLED -oooQSize 64 -ackOnPush ENABLED -maxPktPerMss 0 -pktPerRetx 2 -minRTO 100 -slowStartIncr 2
    Set a specific value:
    set ns tcpParam -WS ENABLED
    Note: changing tcpParam modifies the same values in the default tcpProfile.
  • 53. Appendix: tcpProfile NSCLI Commands
    Creating new profiles:
    add ns tcpProfile aol_vip_std_tcpprofile -WS ENABLED -SACK ENABLED -WSVal 0 -nagle DISABLED -ackOnPush ENABLED -maxBurst 10 -initialCwnd 10 -delayedAck 100 -oooQSize 64 -maxPktPerMss 0 -pktPerRetx 2 -minRTO 400 -slowStartIncr 4 -bufferSize 8190 -mss 0
    Changing a value:
    set ns tcpProfile aol_vip_std_tcpprofile -WSVal 3
    Applying profiles:
    set cs vserver <vservername:port> -tcpprofile <profilename>
    set cs vserver www_aol_com:80 -tcpprofile aol_vip_std_tcpprofile
    set lb vserver <vservername:port> -tcpprofile <profilename>
    set service <servicename:port> -tcpprofile <profilename>
    Resetting to default:
    unset cs vserver <vservername:port> -tcpprofile
    unset lb vserver <vservername:port> -tcpprofile
    unset service <servicename:port> -tcpprofile
  • 54. Explanation of tcpProfile variables (v9.2 & 9.3)
    -name — The name for a TCP profile. A TCP profile name can be from 1 to 127 characters and must begin with a letter, a number, or the underscore symbol (_). Other characters allowed after the first character in a name are the hyphen, period, pound sign, space, at sign, and equals.
    -WS (default Disabled) — Enable or disable window scaling. If Disabled, Window Scaling is disabled for both sides of the conversation.
    -WSVal (default 4, min 0, max 8) — The factor used to calculate the new window size.
    -maxBurst (default 6, min 1, max 255) — The maximum number of TCP segments allowed in a burst. The higher this value, the more frames can be sent at one time.
    -initialCwnd (default 4, min 2, max 44) — The initial maximum upper limit on the number of TCP packets that can be outstanding on the TCP link to the server. As of 9.2.50.1, the maximum was raised from 6 to 44.
    -delayedAck (default 100, min 10, max 300) — The time-out for TCP delayed ACK, in milliseconds.
    -oooQSize (default 64, min 0, max 65535) — The maximum size of the out-of-order packet queue; a value of 0 means infinite. The name is a misnomer: this buffer contains sent frames that are awaiting ACKs, or received frames that are not sequential, meaning some packets are missing per SACK.
    -maxPktPerMss (default 0, min 0, max 512) — The maximum number of TCP packets allowed per maximum segment size (MSS). A value of 0 means that no maximum is set.
    -pktPerRetx (default 1, min 1, max 512) — The maximum limit on the number of packets that should be retransmitted on receiving a "partial ACK". Partial ACKs are ACKs indicating not all outstanding frames were ACKed.
    -minRTO (default 100, min 10, max 64000) — The minimum Retransmission Time Out (RTO) value, in milliseconds. The NetScaler supports New Reno and conforms to RFC 2001 and RFC 5827 (?). Since the NetScaler does not use TCP Timestamping, these values do not correspond to actual propagation delays.
    -slowStartIncr (default 2, min 1, max 100) — Multiplier that determines the rate at which slow start increases the size of the TCP transmission window after each ACK of successful transmission.
    -SACK (default Disabled) — Enable or disable selective acknowledgement (SACK). There is NO reason this should be off.
    -nagle (default Disabled) — Enable or disable the Nagle algorithm on TCP connections. When enabled, it reduces the number of small segments by combining them into one packet. Primary use is on slow or congested links such as mobile or dial.
    -ackOnPush (default Enabled) — Send immediate acknowledgement (ACK) on receipt of TCP packets with the PUSH bit set.
    -mss (v9.3; default 0, min 0, max 1460) — Maximum segment size; the default value 0 uses the global setting in "set tcpparam".
    -bufferSize (v9.3; default 8190, max 4194304) — The value that you set is the minimum value advertised by the NetScaler appliance, and this buffer size is reserved when a client initiates a connection that is associated with an endpoint-application function, such as compression or SSL. The managed application can request a larger buffer, but if it requests a smaller buffer, the request is not honored, and the specified buffer size is used. If the TCP buffer size is set both at the global level and at the entity level (virtual server or service level), the buffer specified at the entity level takes precedence. If the buffer size that you specify for a service is not the same as the buffer size that you specify for the virtual server to which the service is bound, the NetScaler appliance uses the buffer size specified for the virtual server for the client-side connection and the buffer size specified for the service for the server-side connection. For optimum results, make sure that a virtual server and the services bound to it have the same value. The buffer size that you specify is used only when the connection is associated with endpoint-application functions, such as SSL and compression. Note: a high TCP buffer value could limit the number of connections that can be made to the NetScaler appliance.