2010 CS

Introduction to Algorithms, Sections 5.1-5.2
(アルゴリズムイントロダクション 5.1-5.2章)

tniky1

Copyright© 2010 tniky1  All rights reserved.   Page 1
2010 CS

Copyright© 2010 tniky1  All rights reserved.   Page 2
2010 CS

Contents: CLRS Chapter 5, Probabilistic Analysis and Randomized Algorithms

  •  5.1  The hiring problem
  •  5.2  Indicator random variables
  •  5.3  Randomized algorithms
  •  5.4  Probabilistic analysis and further uses of indicator random variables

p87-113

Copyright© 2010 tniky1  All rights reserved.   Page 3
2010 CS

5.1  The hiring problem   (^^)

  •  Setting
      –  You need a new office assistant, and an employment agency sends you
         one candidate per day to interview.

  •  Rules and costs
      –  The agency presents n candidates, one at a time.
      –  After each interview you decide immediately: if the candidate is
         better than the assistant currently employed, you fire the current
         assistant and hire the new one (so the first candidate is always hired).
      –  Interviewing a candidate has a small cost; hiring one has a much
         larger cost.
      –  You always end up employing the best candidate seen so far.
      –  Question: what does this strategy cost in total, and in particular
         how many times do we perform the expensive hire operation?

p87

Copyright© 2010 tniky1  All rights reserved.   Page 4
2010 CS

HIRE-ASSISTANT

  Candidates are interviewed one at a time:   1  2  3  ...  i  ...  n
  best always points to the best candidate interviewed so far.

  HIRE-ASSISTANT(n)
  1.  best ← 0                                  => candidate 0 is a least-qualified dummy
  2.  for i ← 1 to n                            => interview cost Ci, executed n times
  3.      do interview candidate i
  4.         if candidate i is better than candidate best    => hire cost Ch, executed m times
  5.            then best ← i
  6.                 hire candidate i

p88        Total cost: O(nCi + mCh), where m is the number of candidates hired
           (an executable sketch follows below)

Copyright© 2010 tniky1  All rights reserved.   Page 5
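A minimal executable sketch of the procedure above (not from the slides): the quality scores and the cost constants ci, ch are made-up illustration values, and "hiring" is modelled simply as counting how often best is replaced.

    import random

    def hire_assistant(quality, ci=1, ch=10):
        """Interview candidates in the given order; hire whenever the current
        candidate is strictly better than the best seen so far.
        Returns (hires, total_cost) with total_cost = n*ci + m*ch."""
        best = float("-inf")            # candidate 0: a least-qualified dummy
        hires = 0
        for q in quality:               # interview each of the n candidates (cost ci each)
            if q > best:                # better than the current assistant?
                best = q
                hires += 1              # fire the old assistant, hire this one (cost ch)
        total_cost = len(quality) * ci + hires * ch
        return hires, total_cost

    if __name__ == "__main__":
        candidates = list(range(10))
        random.shuffle(candidates)      # candidates arrive in a random order
        print(hire_assistant(candidates))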
2010 CS

Worst case and the probabilistic model

  •  Worst case: the candidates arrive in increasing order of quality, so we
     hire all n of them.

  •  To say anything about the average cost we need an assumption on the input
     distribution.
      –  Any two candidates can be compared, so the arrival order corresponds
         to a permutation of the ranks 1..n.
      –  Assumption: the candidates arrive in a uniformly random order.

  •  Under this assumption, each of the n! orderings of the candidates is
     equally likely.  (A sketch of generating such an order follows below.)

p88-89

Copyright© 2010 tniky1  All rights reserved.   Page 6
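The slides assume a uniformly random arrival order; one standard way to produce such an order, if we are allowed to permute the input ourselves, is the Fisher-Yates shuffle. This is an illustrative sketch, not part of the original deck.

    import random

    def randomize_in_place(a):
        """Fisher-Yates shuffle: after the loop, each of the n! orderings of a
        is equally likely (assuming a perfect source of randomness)."""
        n = len(a)
        for i in range(n):
            j = random.randint(i, n - 1)   # pick uniformly from positions i..n-1
            a[i], a[j] = a[j], a[i]
        return a

    print(randomize_in_place(list(range(1, 11))))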
2010 CS

Indicator random variables

  We want the expected number of hires m (the number of times line 6 executes).

  •  Approach
      –  Turn "how many times does the event happen?" into a sum of indicator
         random variables and take expectations.

  •  Definition (sample space S, event A; example: a coin flip with A = "heads"):
         X_A = I{A} = 1 if A occurs, 0 if A does not occur.

  •  (For the hiring problem)
      –  Associate an indicator random variable Xi with each candidate i:
          •  Xi = 1 if candidate i is hired and 0 if candidate i is not hired;
          •  that is, Xi = I{candidate i is hired}.

p90

Copyright© 2010 tniky1  All rights reserved.   Page 7
2010 CS

Warm-up: expected number of heads in n coin flips

  •  Flip a fair coin n times: what is the expected number of heads?
      –  (Intuition) the answer should be n/2  (each flip is heads with probability 1/2).
          •  Computing it directly from the definition means summing over all
             2^n outcomes weighted by their probabilities, which works but is tedious.
          •  Each individual flip comes up heads with probability 1/2.
          •  Can we turn that per-flip fact into the answer?

  •  E[X_A] = Pr{A}   (expectation of an indicator random variable, Lemma 5.1)
      –  For an event A, the expectation E of the indicator X_A equals the
         probability Pr that A occurs.
      –  So "the probability that A occurs" can be handled as an expectation,
         which we will then be able to sum over many events.
      –  Here the relevant events are {flip i comes up heads} and its
         complement {flip i comes up tails}.

p91        E[Xi] = Pr{flip i comes up heads} = ½     (a numerical check follows below)

Copyright© 2010 tniky1  All rights reserved.   Page 8
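A quick numerical check of the statement above (a simulation sketch, not from the slides; the trial count is arbitrary): the sample mean of the indicator should approach Pr{heads} = 1/2.

    import random

    def indicator_heads():
        """X = I{the flip comes up heads}: 1 with probability 1/2, else 0."""
        return 1 if random.random() < 0.5 else 0

    trials = 100_000
    mean = sum(indicator_heads() for _ in range(trials)) / trials
    print(f"empirical E[X] = {mean:.3f}  (Lemma 5.1 predicts Pr{{heads}} = 0.5)")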
2010 CS

Linearity of expectation: E[X] = n/2

  •  Write X, the number of heads in n flips, as a sum of indicator random variables:
      –  Xi = I{flip i comes up heads}    (each Xi is 0 or 1)
      –  E[Xi] = Pr{flip i comes up heads} = ½
      –  X = X1 + X2 + ... + Xn

  •  We wish to compute the expected number of heads, so we take the expectation
     of both sides and use linearity of expectation (equation (C.20)),
     E[X + Y] = E[X] + E[Y]:

         E[X] = E[ Σ_{i=1}^{n} Xi ]
              = Σ_{i=1}^{n} E[Xi]        (linearity of expectation)
              = Σ_{i=1}^{n} 1/2          (Lemma 5.1)
              = n/2 .

  •  The left side is the expectation of a sum of n random variables, and by
     Lemma 5.1 each E[Xi] is easy to compute.  Linearity of expectation holds
     even when the random variables are dependent, which is what makes
     indicator random variables such a powerful analytical technique.
     Compared to the direct method of equation (C.36), they greatly simplify
     the calculation.  (A short simulation follows below.)

p91-92

Copyright© 2010 tniky1  All rights reserved.   Page 9
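The same calculation done empirically (illustrative sketch only; n and the trial count are arbitrary choices, not from the slides): write X as a sum of per-flip indicators and check that the average over many trials is close to n/2.

    import random

    def heads_in_n_flips(n):
        """X = X1 + ... + Xn, where Xi = I{flip i comes up heads}."""
        return sum(1 for _ in range(n) if random.random() < 0.5)

    n, trials = 20, 50_000
    avg = sum(heads_in_n_flips(n) for _ in range(trials)) / trials
    print(f"empirical E[X] = {avg:.2f},  n/2 = {n / 2}")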
2010 CS

Lemma 5.1:  E[X_A] = Pr{A}   (expectation of an indicator random variable)

  •  For one flip of a fair coin,  E[X_H] = 1·(1/2) + 0·(1/2) = 1/2,  so the
     expected number of heads obtained by one flip is 1/2.  As the following
     lemma shows, the expected value of an indicator random variable associated
     with an event A is equal to the probability that A occurs.

  •  Lemma 5.1
        Given a sample space S and an event A in the sample space S, let
        X_A = I{A}.  Then E[X_A] = Pr{A}.

  •  Proof
      –  By the definition of an indicator random variable and the definition
         of expected value, we have

             E[X_A] = E[I{A}]
                    = 1 · Pr{A} + 0 · Pr{Ā}
                    = Pr{A} ,

         where Ā denotes S − A, the complement of A.

p91

Copyright© 2010 tniky1  All rights reserved.   Page 10
2010 CS

Expected number of hires:  ln n + O(1)

  •  Apply the same technique to the hiring problem:
      –  Xi = I{candidate i is hired}
      –  E[Xi] = Pr{candidate i is hired} = 1/i
         (candidate i is hired exactly when i is the best of the first i
          candidates; under the random-order assumption this has probability 1/i)
      –  X = X1 + X2 + ... + Xn  =  number of hires

  •  By linearity of expectation:

         E[X] = E[ Σ_{i=1}^{n} Xi ]           (by equation (5.3))
              = Σ_{i=1}^{n} E[Xi]             (by linearity of expectation)
              = Σ_{i=1}^{n} 1/i               (by equation (5.4))          (5.5)
              = ln n + O(1)                   (by equation (A.7))          (5.6)

  •  The harmonic sum is pinned down by the integral approximation (A.12, p324):

         Σ_{k=1}^{n} 1/k  ≥  ∫_1^{n+1} dx/x  =  ln(n + 1)      (lower bound)
         Σ_{k=2}^{n} 1/k  ≤  ∫_1^{n}   dx/x  =  ln n ,         (upper bound)

     which yields  Σ_{k=1}^{n} 1/k  ≤  ln n + 1.

  •  Even though we interview n people, we only actually hire approximately
     ln n of them, on average.  We summarize this result in the following lemma.

  •  Lemma 5.2
        Assuming that the candidates are presented in a random order, algorithm
        HIRE-ASSISTANT has a total hiring cost of O(Ch ln n).
      –  Proof: the bound follows immediately from the definition of the hiring
         cost and equation (5.6).
      –  The expected hiring cost O(Ch ln n) is a significant improvement over
         the worst-case hiring cost of O(n·Ch).  (A simulation follows below.)

p92-93

Copyright© 2010 tniky1  All rights reserved.   Page 11
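A simulation sketch (not from the slides; n and the trial count are arbitrary) comparing the average number of hires over random arrival orders with the harmonic number H_n = Σ 1/i and with ln n.

    import math
    import random

    def hires(order):
        """Number of times HIRE-ASSISTANT hires, for candidates in the given order."""
        best, m = float("-inf"), 0
        for q in order:
            if q > best:
                best, m = q, m + 1
        return m

    n, trials = 100, 20_000
    total = 0
    for _ in range(trials):
        order = list(range(n))
        random.shuffle(order)              # the random-order assumption
        total += hires(order)

    harmonic = sum(1 / i for i in range(1, n + 1))
    print(f"average hires = {total / trials:.3f}")
    print(f"H_n           = {harmonic:.3f}")
    print(f"ln n          = {math.log(n):.3f}   (ln n < H_n <= ln n + 1)")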
2010 CS

Cost of HIRE-ASSISTANT under the random-order assumption

  HIRE-ASSISTANT(n)
  1.  best ← 0                                  => candidate 0 is a least-qualified dummy
  2.  for i ← 1 to n                            => interview cost Ci, executed n times
  3.      do interview candidate i
  4.         if candidate i is better than candidate best    => hire cost Ch, executed m times
  5.            then best ← i
  6.                 hire candidate i

  •  Total cost O(nCi + mCh).  The interview part nCi is fixed; the variable
     part is the hiring cost O(mCh)  (Ci << Ch),  where m is the number of
     candidates actually hired.

  •  With candidates in random order, E[m] = Σ_{k=1}^{n} 1/k, and the integral
     approximation (A.12) gives a tight estimate of this harmonic number:

         Σ_{k=1}^{n} 1/k  ≥  ∫_1^{n+1} dx/x  =  ln(n + 1)      (lower bound)
         Σ_{k=2}^{n} 1/k  ≤  ∫_1^{n}   dx/x  =  ln n ,         (upper bound)

     which yields the bound  Σ_{k=1}^{n} 1/k  ≤  ln n + 1,  so the expected
     hiring cost is O(Ch ln n).   (^^)   (A concrete cost comparison follows below.)

p93

Copyright© 2010 tniky1  All rights reserved.   Page 12
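To make the comparison concrete (illustration only; ci = 1, ch = 100, n = 1000 are assumed values, not from the slides), the expected hiring cost Ch·H_n is far below the worst-case hiring cost n·Ch.

    import math

    n, ci, ch = 1000, 1, 100             # assumed illustration values
    harmonic = sum(1 / i for i in range(1, n + 1))

    worst_hiring    = n * ch             # candidates arrive in increasing order of quality
    expected_hiring = ch * harmonic      # E[m] = H_n = ln n + O(1) under random order
    print(f"interview cost (fixed) : {n * ci}")
    print(f"worst-case hiring cost : {worst_hiring}")
    print(f"expected hiring cost   : {expected_hiring:.0f}  (about ch * ln n = {ch * math.log(n):.0f})")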
2010 CS

Copyright© 2010 tniky1  All rights reserved.   Page 13
2010 CS

Copyright© 2010 tniky1  All rights reserved.   Page 14
2010 CS

  •  Hire-Assistant

Copyright© 2010 tniky1  All rights reserved.   Page 15

アルゴリズムイントロダクション 5.1-5.2章

  • 1. 2010 CS tniky1 Copyright©  2010  tniky1    All  rights  reserved.   Page  1
  • 2. 2010 CS ( )   Copyright©  2010  tniky1    All  rights  reserved.   Page  2
  • 3. 2010 CS •  5.1     •  •  5.2     •  •  5.3     •    •  5.4     •    p87-­‐113 Copyright©  2010  tniky1    All  rights  reserved.   Page  3
  • 4. 2010 CS (^^)   •    –    •    –  n –  1 –  ,   –  –  –  ( )   p87 Copyright©  2010  tniky1    All  rights  reserved.   Page  4
  • 5. 2010 CS HIRE-ASSISTANT(n): interviewing a candidate costs Ci (paid n times), hiring costs Ch (paid m times, where m is the number of candidates actually hired).
  HIRE-ASSISTANT(n)
  1. best ← 0                        ▷ candidate 0 is a least-qualified dummy
  2. for i ← 1 to n
  3.     do interview candidate i    ▷ cost Ci, paid n times
  4.        if candidate i is better than candidate best
  5.           then best ← i
  6.                hire candidate i ▷ cost Ch, paid m times
  Total cost: O(n·Ci + m·Ch). (A runnable Python sketch of this procedure appears after the slide list.) p88. Copyright© 2010 tniky1 All rights reserved. Page 5
  • 6. 2010 CS Worst case: if the candidates happen to arrive in increasing order of quality, every one of the n candidates is hired, for a hiring cost of O(n·Ch). For the average-case analysis we assume the candidates arrive in a uniformly random order, i.e. each of the n! permutations of the ranks 1..n is equally likely. pp. 88-89. Copyright© 2010 tniky1 All rights reserved. Page 6
  • 7. 2010 CS To analyze m (the number of hires) we use indicator random variables. Given a sample space S and an event A, the indicator random variable of A is XA = I{A}, which equals 1 if A occurs and 0 if it does not. Running example: flip a fair coin and let Xi = I{the i-th flip comes up heads}, so Xi = 1 if flip i is heads and Xi = 0 if it is tails. p90. Copyright© 2010 tniky1 All rights reserved. Page 7
  • 8. 2010 CS Flip a fair coin n times; intuitively the expected number of heads is n/2, since each flip comes up heads with probability 1/2. How do we justify this formally? Key fact: E[XA] = Pr{A}, i.e. the expected value of the indicator random variable of an event A equals the probability that A occurs (Lemma 5.1, below). For the coin example, E[Xi] = Pr{the i-th flip is heads} = 1/2. (A short simulation of this appears after the slide list.) p91. Copyright© 2010 tniky1 All rights reserved. Page 8
  • 9. 2010 CS Let X be the number of heads in the n flips and let Xi = I{the i-th flip is heads}, so that X = X1 + X2 + ... + Xn. Taking expectations of both sides and using linearity of expectation (equation (C.20)), which holds even when the Xi are dependent:
  E[X] = E[ Σ_{i=1}^{n} Xi ] = Σ_{i=1}^{n} E[Xi] = Σ_{i=1}^{n} 1/2 = n/2.
  Linearity of expectation (E[X+Y] = E[X] + E[Y]) is what makes indicator random variables such a powerful analytical technique: compared with the direct calculation of equation (C.36), they greatly simplify the computation, and they are used throughout the book. pp. 91-92. Copyright© 2010 tniky1 All rights reserved. Page 9
  • 10. 2010 CS Lemma 5.1: given a sample space S and an event A in S, let XA = I{A}. Then E[XA] = Pr{A}. Proof: by the definition of an indicator random variable and the definition of expected value,
  E[XA] = E[I{A}] = 1 · Pr{A} + 0 · Pr{Ā} = Pr{A},
  where Ā denotes S - A, the complement of A. For a single flip of a fair coin this gives E[XA] = 1 · (1/2) + 0 · (1/2) = 1/2, the expected number of heads obtained by one flip. p91. Copyright© 2010 tniky1 All rights reserved. Page 10
  • 11. 2010 CS Back to the hiring problem: let X be the number of hires and Xi = I{candidate i is hired}. Candidate i is hired exactly when candidate i is the best of the first i candidates, which under a random order happens with probability 1/i, so E[Xi] = Pr{candidate i is hired} = 1/i. By linearity of expectation,
  E[X] = Σ_{i=1}^{n} E[Xi] = Σ_{i=1}^{n} 1/i = ln n + O(1)   (by equation (A.7)).
  The harmonic sum is pinned down by approximation by integrals (equation (A.12), p324): for a monotonically decreasing f, ∫_{m}^{n+1} f(x) dx ≤ Σ_{k=m}^{n} f(k) ≤ ∫_{m-1}^{n} f(x) dx. With f(x) = 1/x this gives the lower bound Σ_{k=1}^{n} 1/k ≥ ∫_{1}^{n+1} dx/x = ln(n+1), and the upper bound Σ_{k=2}^{n} 1/k ≤ ∫_{1}^{n} dx/x = ln n, i.e. Σ_{k=1}^{n} 1/k ≤ ln n + 1. So even though we interview n people, on average we hire only about ln n of them. Lemma 5.2: assuming the candidates are presented in random order, algorithm HIRE-ASSISTANT has an average-case total hiring cost of O(Ch·ln n), a significant improvement over the worst-case cost of O(n·Ch). (A numerical check of the integral bounds appears after the slide list.) pp. 92-93, p324. Copyright© 2010 tniky1 All rights reserved. Page 11
  • 12. 2010 CS Putting it together (^^):
  HIRE-ASSISTANT(n)
  1. best ← 0
  2. for i ← 1 to n
  3.     do interview candidate i    ▷ cost Ci, paid n times
  4.        if candidate i is better than candidate best
  5.           then best ← i
  6.                hire candidate i ▷ cost Ch, paid m times
  Total cost O(n·Ci + m·Ch). Since hiring is far more expensive than interviewing (Ci << Ch), the hiring cost dominates, and with candidates in random order the expected number of hires is Σ_{k=1}^{n} 1/k, which the integral bounds ln(n+1) ≤ Σ_{k=1}^{n} 1/k ≤ ln n + 1 place at ln n + O(1). The expected hiring cost is therefore O(Ch·ln n). (A small simulation of the average number of hires appears after the slide list.) p93. Copyright© 2010 tniky1 All rights reserved. Page 12
  • 13. 2010 CS Copyright© 2010 tniky1 All rights reserved. Page 13
  • 14. 2010 CS Copyright© 2010 tniky1 All rights reserved. Page 14
  • 15. 2010 CS • Hire-Assistant Copyright© 2010 tniky1 All rights reserved. Page 15
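To make slide 5 concrete, here is a minimal Python sketch of the HIRE-ASSISTANT procedure described above. It is not from the original deck, and the parameters interview_cost and hire_cost are illustrative stand-ins for Ci and Ch.

```python
import random

def hire_assistant(scores, interview_cost=1.0, hire_cost=10.0):
    """Interview candidates in the order given; hire whenever one beats the best so far.

    Cost values are illustrative placeholders. Returns (number_of_hires, total_cost),
    mirroring the O(n*Ci + m*Ch) cost model from the slides.
    """
    best = float("-inf")        # candidate 0: a least-qualified dummy
    hires = 0
    total_cost = 0.0
    for score in scores:        # lines 2-3: interview each of the n candidates
        total_cost += interview_cost
        if score > best:        # line 4: strictly better than the current best
            best = score        # lines 5-6: promote and hire this candidate
            hires += 1
            total_cost += hire_cost
    return hires, total_cost

# Example: 100 candidates arriving in uniformly random order.
scores = list(range(100))
random.shuffle(scores)
print(hire_assistant(scores))
```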
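For slides 7-9, a small simulation sketch (again not part of the slides) that treats each coin flip as an indicator random variable Xi and checks that the average of X = X1 + ... + Xn is close to n/2; the trial count is an arbitrary choice.

```python
import random

def indicator(event_occurred):
    """I{A}: 1 if the event A occurred, 0 otherwise."""
    return 1 if event_occurred else 0

def average_heads(n, trials=20_000):
    """Estimate E[X], where X is the sum of the indicators X_i = I{flip i is heads}."""
    total = 0
    for _ in range(trials):
        total += sum(indicator(random.random() < 0.5) for _ in range(n))
    return total / trials

print(average_heads(10))   # should be close to 10 / 2 = 5.0
```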
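For slide 11, a quick numerical check (not in the original) that the harmonic sum Σ 1/k really sits between the integral bounds ln(n+1) and ln(n) + 1.

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n, the expected number of hires for n candidates."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10_000):
    h = harmonic(n)
    lower, upper = math.log(n + 1), math.log(n) + 1
    print(f"n={n:6d}  ln(n+1)={lower:.4f} <= H_n={h:.4f} <= ln(n)+1={upper:.4f}")
```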
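Finally, for slides 11-12, a simulation sketch of the hiring process itself: with candidates arriving in uniformly random order, the average number of hires over many trials should track H_n ≈ ln n. The choice of n and the trial count are arbitrary.

```python
import math
import random
import statistics

def hires_in_random_order(n):
    """Count how often the 'best so far' changes when n candidates arrive in random order."""
    scores = list(range(n))
    random.shuffle(scores)
    best, hires = -1, 0
    for s in scores:
        if s > best:
            best, hires = s, hires + 1
    return hires

n, trials = 200, 5000
avg = statistics.mean(hires_in_random_order(n) for _ in range(trials))
H_n = sum(1.0 / k for k in range(1, n + 1))
print(f"average hires = {avg:.3f},  H_n = {H_n:.3f},  ln n = {math.log(n):.3f}")
```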