P R A G
Pattern Recognition and Applications Group
University of Cagliari, Italy
Department of Electrical and Electronic Engineering

Evade Hard Multiple Classifier Systems
Battista Biggio, Giorgio Fumera, Fabio Roli

ECAI / SUEMA 2008, Patras, Greece, July 21st - 25th

SUEMA 2008
About me
• Pattern Recognition and Applications Group
   http://prag.diee.unica.it
    – DIEE, University of Cagliari, Italy.




• Contact
     – Battista Biggio, Ph.D. student
       battista.biggio@diee.unica.it

21-07-2008              Evade Hard MCSs      SUEMA 2008   2
Pattern Recognition and Applications Group (P R A G)
• Research interests
     – Methodological issues
             • Multiple classifier systems
             • Classification reliability
     – Main applications
             •   Intrusion detection in computer networks
             •   Multimedia document categorization, Spam filtering
             •   Biometric authentication (fingerprint, face)
             •   Content-based image retrieval




21-07-2008                    Evade Hard MCSs          SUEMA 2008     3
Why are we working on this topic?
• MCSs are widely used in security applications,
  but…
     – Lack of theoretical motivations


• Only a few theoretical works exist on machine learning
  for adversarial classification

• Goal of this (ongoing) work
     – To give some theoretical background to the use of
       MCSs in security applications




21-07-2008            Evade Hard MCSs        SUEMA 2008    4
Outline
• Introducing the problem
     – Adversarial Classification


• A study on MCSs for adversarial classification
     – MCS hardening strategy: adding classifiers trained on
       different features
     – A case study in spam filtering: SpamAssassin




21-07-2008             Evade Hard MCSs        SUEMA 2008       5
Adversarial Classification
                   Dalvi et al., Adversarial Classification, 10th ACM SIGKDD Int. Conf. 2004


• Adversarial classification
     – An intelligent adaptive adversary modifies patterns to
       defeat the classifier.
             • e.g., spam filtering, intrusion detection systems (IDSs).


• Goals
      – How to design adversary-aware classifiers?
      – How to improve classifier hardness of evasion?




21-07-2008                     Evade Hard MCSs                    SUEMA 2008             6
Definitions
                                                                           Dalvi et al., 2004
     • Two class problem:
          – Positive/malicious patterns (+)
          – Negative/innocent patterns (-)
     [Figure: three panels over a two-dimensional feature space (X1, X2) —
      the instance space, a linear classifier separating + from -, and the
      adversarial cost function. A code sketch of these three ingredients
      follows this slide.]

     Instance space: X = {X_1, ..., X_N}, where each X_i is a feature;
     instances x ∈ X (e.g., e-mails).
     Classifier: C : X → {+, −}, with c ∈ C, the concept class
     (e.g., a linear classifier).
     Adversarial cost function: W : X × X → ℝ
     (e.g., more legible spam is better).
     21-07-2008                    Evade Hard MCSs                   SUEMA 2008           7
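To make the notation on this slide concrete, here is a minimal Python sketch of the three ingredients (instance space, classifier C, adversarial cost W). The vocabulary, weights, bias and cost definition are invented for illustration and are not taken from the paper.

    # Toy illustration of the Dalvi et al. setting (all numbers are made up).
    VOCAB = ["buy", "viagra", "meeting", "report"]        # each X_i is a feature

    def to_features(words):
        """Map an e-mail (list of words) to an instance x in X = {0,1}^N."""
        return [1 if w in words else 0 for w in VOCAB]

    # Classifier C : X -> {+, -}, here a simple linear discriminant.
    WEIGHTS, BIAS = [1.0, 2.5, -1.0, -1.0], -1.5

    def C(x):
        score = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
        return "+" if score > 0 else "-"

    # Adversarial cost W : X x X -> R: here, the number of features the
    # adversary flips (a crude proxy for "less legible spam costs more").
    def W(x, x_mod):
        return sum(abs(a - b) for a, b in zip(x, x_mod))

    spam = to_features(["buy", "viagra"])
    print(C(spam), W(spam, to_features(["buy"])))         # '+' and cost 1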
Adversarial cost function
•   Cost is related to
     –   Adversary's effort
             •   e.g., using a different server for sending spam
     –   Attack effectiveness
             •   more legible spam is better!


    Example
•   Original spam message: BUY VIAGRA!
     –   Easily detected by the classifier
•   Slightly modified spam message: BU-Y V1@GR4!
     –   It can evade the classifier and remain effective
•   No longer legible spam (ineffective message): B--Y V…!
     –   It can evade several systems, but who will still buy viagra?
    (A toy cost function along these lines is sketched after this slide.)


21-07-2008                      Evade Hard MCSs                SUEMA 2008   8
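A hedged sketch of such a cost function in Python. The edit counts, legibility values and weights below are hypothetical, chosen only to reproduce the ranking of the three messages on this slide; the paper does not define the cost this way.

    # Hypothetical adversarial cost = editing effort + loss of effectiveness.
    CANDIDATES = {
        "BUY VIAGRA!":  {"edits": 0, "legibility": 1.0},   # original message
        "BU-Y V1@GR4!": {"edits": 5, "legibility": 0.8},   # slight obfuscation
        "B--Y V...!":   {"edits": 8, "legibility": 0.1},   # barely readable
    }

    def adversarial_cost(props, effort_w=1.0, effectiveness_w=10.0):
        # Higher cost for more edits and for less legible (less effective) spam.
        return effort_w * props["edits"] + effectiveness_w * (1.0 - props["legibility"])

    for msg, props in CANDIDATES.items():
        print(msg, round(adversarial_cost(props), 1))      # 0.0, 7.0, 17.0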
A framework for
              adversarial classification
                                                                    Dalvi et al., 2004
• Problem formulation
     – Two-player game: Classifier vs. Adversary
             • Utility and cost functions for each player
             • The Classifier chooses a decision function C(x) at each ply
             • The Adversary chooses a modification function A(x) to evade the classifier
     (A toy rendition of this game is sketched in code after this slide.)


• Assumptions in Dalvi et al., 2004
     – Perfect information
             • The Adversary knows the classifier's discriminant function C(x)
             • The Classifier knows the adversary's strategy A(x) for modifying patterns
     – Actions
             • The Adversary can only modify malicious patterns at the operation
               phase (the training process is untainted)




21-07-2008                     Evade Hard MCSs               SUEMA 2008            9
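A toy, one-dimensional rendition of this game (not the formulation of Dalvi et al., just an illustration with invented numbers): the Classifier picks a score threshold, the Adversary shifts malicious samples just below it whenever the shift fits a fixed budget, and the Classifier then reacts.

    # Toy two-player game on a 1-D score (all values are made up).
    NEG = [0.5, 1.0, 1.5, 2.0]     # innocent samples
    POS = [3.0, 3.5, 4.0, 4.5]     # malicious samples
    MAX_COST = 1.0                 # adversary's budget per sample

    def best_threshold(neg, pos):
        # Classifier's move: the threshold t (label '+' if x >= t) with fewest errors.
        candidates = sorted(neg + pos)
        return min(candidates,
                   key=lambda t: sum(x >= t for x in neg) + sum(x < t for x in pos))

    def adversary_move(pos, t):
        # Adversary's move: drop just below t whenever the shift fits the budget.
        target = t - 1e-6
        return [target if 0 <= x - target <= MAX_COST else x for x in pos]

    t0 = best_threshold(NEG, POS)          # adversary-unaware classifier
    attacked = adversary_move(POS, t0)     # minimum-cost camouflages
    t1 = best_threshold(NEG, attacked)     # adversary-aware counter-move
    print(t0, attacked, t1)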
In a nutshell
               Lowd & Meek, Adversarial Learning, 11th ACM SIGKDD Int. Conf. 2005




       [Figure: two feature-space panels — the Adversary moves positive points
        across the decision boundary at minimum cost; the Classifier then moves
        the boundary to minimise the expected risk.]

       Adversary's task: choose minimum-cost modifications to evade the classifier.
       Classifier's task: choose a new decision function to minimise the expected risk.




21-07-2008               Evade Hard MCSs                 SUEMA 2008          10
Adversary’s strategy
        [Figure: feature space (x1, x2), split into C(x) = − and C(x) = + regions.
         The original spam "BUY VIAGRA!" (x) lies in the + region. The minimum-cost
         camouflage "BUY VI@GRA!" (x') just crosses into the − region; too-high-cost
         camouflages such as "B--Y V…!" (x'', x''') lie deeper inside it.
         A brute-force sketch of this minimum-cost search follows this slide.]
        21-07-2008          Evade Hard MCSs           SUEMA 2008     11
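A brute-force sketch of the Adversary's strategy on this slide. The keyword-based filter, the candidate camouflages and their costs are all invented; the point is only the selection rule: among the modifications that evade, take the cheapest, and give up if even that one exceeds the maximum acceptable cost.

    MAX_COST = 4.0               # beyond this, the spam is no longer worth sending
    THRESHOLD = 5.0

    def filter_score(msg):
        # Stand-in classifier: crude keyword scoring; label '+' if score >= THRESHOLD.
        return 5.0 * ("VIAGRA" in msg) + 2.0 * ("BUY" in msg)

    CANDIDATES = {               # camouflage -> cost of producing it (invented)
        "BUY VIAGRA!": 0.0,
        "BUY VI@GRA!": 2.0,
        "B--Y V...!":  6.0,
    }

    def minimum_cost_camouflage(candidates):
        evading = {m: c for m, c in candidates.items() if filter_score(m) < THRESHOLD}
        if not evading:
            return None                                      # nothing evades
        best = min(evading, key=evading.get)
        return best if evading[best] <= MAX_COST else None   # too costly: give up

    print(minimum_cost_camouflage(CANDIDATES))               # 'BUY VI@GRA!'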
Classifier’s strategy
   • The Classifier knows A(x) [perfect information]
        – Adversary-aware classifier
          Dalvi et al. showed that an adversary-aware classifier can
          perform significantly better




           [Figure: the adversary-aware classifier shifts its decision boundary so
            that the minimum-cost camouflage x' is now detected; some other modified
            patterns (+') may still evade.]
   21-07-2008             Evade Hard MCSs         SUEMA 2008    12
Goals of this work
• Analysis of a widely used strategy for hardening
  MCSs
     – Using different sets of heterogeneous and redundant
       features [Giacinto et al. (2003), Perdisci et al. (2006)]


• Only heuristic and qualitative motivations have
  been given

• Using the described framework, we give a more
  formal explanation of the effectiveness of
  this strategy

21-07-2008             Evade Hard MCSs           SUEMA 2008        13
An example of the
                   considered strategy
   • Biometric verification system

                        [Figure: multimodal biometric verification — matchers for
                         fingerprint, face, …, voice, applied to a claimed identity,
                         are combined by a decision rule that outputs genuine or
                         impostor.]


   21-07-2008          Evade Hard MCSs        SUEMA 2008     14
Another example of the
              considered strategy
• Spam filtering

               [Figure: SpamAssassin-like spam filter — modules (Header Analysis,
                Black/White List, URL Filter, Signature Filter, …, Content Analysis)
                each produce a score; the scores are summed (Σ) and the assigned
                class is legitimate or spam.]
                                          http://spamassassin.apache.org
21-07-2008              Evade Hard MCSs              SUEMA 2008         15
Applying the framework
             to the spam filtering case
• Cost for the Adversary (see the code sketch after this slide)

  [Figure: "BUY VIAGRA!" goes through the filter modules — Header Analysis
   (s1 = 0.2), Black/White List (s2 = 0), Signature Filter (s3 = 0),
   Text Classifier (s4 = 2.5), …, Keyword Filters (sN = 3). The scores are
   summed: s = 5.7 ≥ 5, so the message is labelled spam (true positive).
   Rewriting it as "BUY VI@GR4!" evades the keyword filter (sN drops to 0):
   s = 2.7 < 5 and the message is labelled legitimate (false negative).]

Working assumption: changing "VIAGRA" to "VI@GR4" costs 3!


21-07-2008               Evade Hard MCSs          SUEMA 2008         16
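A minimal sketch of the score-sum filter on this slide. The module names and score values are taken from the figure; the keyword test used to trigger the score drop is a toy stand-in, not an actual SpamAssassin rule.

    THRESHOLD = 5.0

    def module_scores(msg):
        return {
            "header_analysis":  0.2,
            "black_white_list": 0.0,
            "signature_filter": 0.0,
            "text_classifier":  2.5,
            "keyword_filter":   3.0 if "VIAGRA" in msg else 0.0,
        }

    def is_spam(msg):
        return sum(module_scores(msg).values()) >= THRESHOLD

    print(is_spam("BUY VIAGRA!"))   # True:  s = 5.7 >= 5
    print(is_spam("BUY VI@GR4!"))   # False: s = 2.7 < 5 (evasion cost 3, the score drop)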
Applying the framework
                 to the spam filtering case
     AFM Continues to Climb. Big News On Horizon | UP 50 % This Week
     Aerofoam Metals Inc.
     Symbol : AFML
     Price : $ 0.10 UP AGAIN
     Status : Strong Buy
     → The text is embedded into an image!

     [Figure: the image spam goes through the filter modules — Header Analysis
      (s1 = 3.2), Black/White List (s2 = 0), Signature Filter (s3 = 0),
      Text Classifier (sN = 2.5, evasion costs 2.5), …, Image Analysis
      (sN+1 = 3, evasion costs 3.0). Embedding the text into an image drops the
      text classifier's score to 0: without image analysis the total falls from
      s = 5.7 to s = 3.2 < 5 and the filter is evaded; with the image analysis
      module added, s = 6.2 ≥ 5 and the message is still labelled spam.]

Now both text and image classifiers must be evaded to evade the filter!
     (A sketch of the resulting minimum evasion cost follows this slide.)
    21-07-2008                    Evade Hard MCSs          SUEMA 2008         17
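With evasion costs measured at the score level (evading a module costs the score it would have contributed), the adversary's cheapest strategy is the least-cost set of modules whose removal pushes the sum below the threshold. A brute-force sketch, using the scores on this slide; the module names are abbreviated and the "evadable" sets are illustrative assumptions.

    from itertools import combinations

    THRESHOLD = 5.0

    def min_evasion_cost(scores, evadable):
        """Cheapest subset of evadable modules whose removal drops the sum below the threshold."""
        best = None
        for r in range(len(evadable) + 1):
            for subset in combinations(evadable, r):
                remaining = sum(v for k, v in scores.items() if k not in subset)
                cost = sum(scores[k] for k in subset)
                if remaining < THRESHOLD and (best is None or cost < best):
                    best = cost
        return best

    # Text-only filter: embedding the text in an image evades at cost 2.5.
    print(min_evasion_cost({"header": 3.2, "text": 2.5}, ["text"]))
    # With the image-analysis module: both must be evaded, cost 2.5 + 3.0 = 5.5.
    print(min_evasion_cost({"header": 3.2, "text": 2.5, "image": 3.0},
                           ["text", "image"]))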
Forcing the adversary to surrender
• Hardening the system by adding modules can
  make the evasion too costly for the adversary
     – In the end, the optimal adversary strategy becomes
       not fighting!


“The ultimate warrior is one who wins the war by forcing the
 enemy to surrender without fighting any battles”

                              The Art of War, Sun Tzu, 500 BC




21-07-2008            Evade Hard MCSs          SUEMA 2008       18
Experimental Setup
• SpamAssassin
     – 619 tests
     – includes a text classifier (naive Bayes)


• Data set: TREC 2007 spam track
     – 75,419 e-mails (25,220 ham, 50,199 spam)
     – We used the first 10K e-mails (taken in chronological
       order) to train the SpamAssassin naive Bayes
       classifier.




21-07-2008             Evade Hard MCSs            SUEMA 2008   19
Experimental Setup
• Adversary
     – Cost simulated at the score level (a code sketch follows this slide)
             • Manhattan distance between the original and the modified test scores
     – Maximum cost fixed
             • Rationale: higher-cost modifications would make the spam
               message no longer effective/legible
• Classifier
     – We did not take into account the computational cost
       of adding tests
• Performance measure
     – Expected utility



21-07-2008                  Evade Hard MCSs           SUEMA 2008         20
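A hedged sketch of the simulated score-level adversary described above (the exact procedure used in the experiments may differ): the cost of an attack is the Manhattan distance between the original and modified test-score vectors, and the adversary spends at most a fixed budget to lower the summed score as much as possible.

    def attack_scores(scores, max_cost):
        """Greedily lower the largest test scores within a Manhattan-distance budget."""
        budget, modified = max_cost, list(scores)
        for i in sorted(range(len(scores)), key=lambda i: -scores[i]):
            drop = min(modified[i], budget)     # lowering a score by d costs d
            modified[i] -= drop
            budget -= drop
        return modified

    original = [0.2, 0.0, 0.0, 2.5, 3.0]        # per-test scores of a spam e-mail
    attacked = attack_scores(original, max_cost=5)
    print(round(sum(original), 1), round(sum(attacked), 1))   # 5.7 -> 0.7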
Experimental Results
                maximum cost = 1




21-07-2008        Evade Hard MCSs   SUEMA 2008   21
Experimental Results
                maximum cost = 5




21-07-2008        Evade Hard MCSs   SUEMA 2008   22
Will spammers give up?
• Spammer economics
     – Goal: beat enough of the filters temporarily to get some
       mail through and generate a quick profit
     – As filter accuracy increases, spammers simply send
       larger quantities of spam so that the same amount of
       mail still gets through
             • the cost of sending spam is negligible with respect to the
               achievable profit!


• Is it feasible to push the accuracy of spam filters
  up to the point where only ineffective spam
  messages can pass through the filters?
     – Otherwise spammers won’t give up!
21-07-2008                   Evade Hard MCSs            SUEMA 2008      23
Future work
  • Theory of Adversarial Classification
       – Extend the model to more realistic situations


  • Investigating other defence strategies
       – We are expanding the framework to model
         information hiding strategies [Barreno et al. (2006)]
                • Possible implementation: randomising the placement of
                  the decision boundary (a toy sketch follows this slide)

“Keep the adversary guessing. If your strategy is a mystery, it
 cannot be counteracted. This gives you a significant advantage”

                                                The Art of War, Sun Tzu, 500 BC

  21-07-2008                  Evade Hard MCSs              SUEMA 2008     24
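One possible, hypothetical reading of "randomising the placement of the decision boundary", sketched below: the spam threshold is drawn from a small interval at classification time, so the adversary cannot aim its minimum-cost camouflage exactly at the boundary. The interval width and threshold are invented values.

    import random

    def randomized_is_spam(score, base_threshold=5.0, spread=0.5):
        # Draw the threshold uniformly around its nominal value at decision time.
        threshold = random.uniform(base_threshold - spread, base_threshold + spread)
        return score >= threshold

    # A camouflage tuned to score just below 5 now evades only part of the time.
    print(sum(randomized_is_spam(4.9) for _ in range(1000)) / 1000)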
Thank you!
• Contacts
  – roli@diee.unica.it
  – fumera@diee.unica.it
  – battista.biggio@diee.unica.it




                                      P R A G

21-07-2008    Evade Hard MCSs   SUEMA 2008   25

Editor's notes

  1. Make clear what W(x, x') is, i.e., that it is the cost of adding words, etc., and that it is a sort of similarity measure between patterns, so it equals 0 if and only if x = x'.
  2. Introduce biometrics, then draw the parallel with spam and IDSs. In many security systems, hardness of evasion can be improved by combining several experts trained on redundant and heterogeneous features. MCSs provide a very natural architecture to achieve this. Our goal is to provide a more formal explanation of this phenomenon, using the framework previously described.
  3. Specify how we simulated the game: the Adversary's optimal strategy, the Classifier adding modules.