This presentation is for a talk given at a half-day workshop on Human Factors in Cyber Security, organised at the University of Surrey on 16th March 2016. Below is the abstract advertised for the event.
Observer-resistant password systems (ORPSs, also known as human authentication against observers or leakage-resilient password systems) have been studied since the early 1990s in both cryptography and computer security contexts, but to date an ORPS that is both secure and usable remains an open problem for the research community. The concept of ORPS covers a large family of attacks against password-based human authentication systems, such as shoulder surfing, hidden cameras, man-in-the-middle attacks, keyloggers and malware. A key assumption of ORPSs is that human users must respond to authentication challenges without using any computational devices. In other words, the threat model behind ORPSs assumes that nothing other than the human user's brain is trusted. The main security requirement is to avoid disclosure of the secret shared between the human user and the verifier (i.e., the password) even after a practically large number of authentication sessions have been observed by an untrusted party.
According to Yan et al.'s NDSS 2012 paper, which reviews over two decades of research on this topic, no existing system meets both the security and usability requirements, although many meet one of them well. In this talk, the speaker will introduce his research on ORPSs since the early 2000s, highlighting a number of key findings such as a human-behaviour-based timing attack reported at SOUPS 2011 and theoretical work reported at NDSS 2013 and in IEEE TIFS 2015. He will contextualise parts of his talk using a particular ORPS design called Foxtail, one of the ORPSs whose implementations have shown a relatively good balance between security and usability. Known rules for designing ORPSs and future research directions will also be discussed.
1. Observer-Resistant Password Systems:
How hard to make them both usable and
secure?
Dr Shujun Li
Deputy Director, Surrey Centre for Cyber Security (SCCS)
Senior Lecturer, Department of Computer Science
http://www.hooklee.com/
@hooklee75
7. 7 / 41
- Hidden cameras are more powerful than you
thought!
- Two ACM CCS 2014 papers give the evidence.
Passwords and observers
Not Necessarily Hidden Cameras
8. 8 / 41
- “All the attacks work with the keyboard at the 5th
floor and the antenna in the basement, 20 meters
away!” [Vuagnoux and Pasini @ USENIX
Security’09]
Passwords and observers
TEMPEST =
Electromagnetic Emanations
9. 9 / 41
- Observers can listen as well!
- Acoustic attacks on passwords [Zhu et al. @ ACM
CCS 2014]
Passwords and observers
10. 10 / 41
- Too sophisticated? There are simpler ones:
keyloggers and screenscrapers!
Passwords and observers
11. 11 / 41
- When keyloggers and screenscrapers are running
as software.
- We have more malware: computer viruses,
Trojan horses, worms, rootkits, etc.
- Untrusted computers
Passwords and observers
12. 12 / 41
- Not only untrusted computers, but also untrusted
phones and …
Passwords and observers
13. 13 / 41
- Not only untrusted computers, but also untrusted
phones and untrusted terminals (e.g. card
skimmers attached to POS terminals!).
Passwords and observers
14. 14 / 41
- It is not necessarily malware, but just a (bad) man-in-the-middle (MitM)!
- A special case of MitM: man-in-the-browser (MitB)
- Could be pre-installed malware, e.g. a web browser extension.
- Could be dynamically loaded code, e.g. JavaScript in a malicious website you are visiting.
Passwords and observers
15. 15 / 41
- Phishers can be considered (bad) men-in-the-middle!
- Not just phishing emails!
Passwords and observers
16. 16 / 41
Passwords and many observers!
- Shoulder-surfers
- Hidden cameras
- Keyloggers and other password recording
devices
- Password stealing software tools
- Attacks based on electromagnetic / optical /
acoustic emanations
- Phishers
- Malware
- Man-in-the-middle/browser/computer/phone
- Public terminals (@ cafés, airports, hotels, …)
- …
17. 17 / 41
Existing “solutions” against observers?
- “What you know”
- Static passwords: not secure at all
- “What you have”
- One-time passwords (OTP) generators,
cards + card readers, security tokens, …
- Problems: not always secure, prone to
theft and loss, higher implementation
costs, less usable / portable, …
- “Who you are”
- Problems: not always secure, you can’t
change your secret (easily), privacy
concerns, higher implementation costs, …
- Multi-factor authentication?
18. 18 / 41
Name(s) of the game: ORPS and …
- Observer-resistant/Observation-resistant password
system (ORPS) (Li et al. 2015 – yes this is me and my
collaborators)
- Leakage-resilient password system (LRPS) (Yan et al.
NDSS 2012)
- Virtual passwords [Lei et al. ICC 2008 + CompComm 2008]
- Cognitive authentication (Weinshall IEEE S&P 2006)
- Secure Human-Computer Interface/Identification (SecHCI)
(Li & Shum 2002-2005)
- Human-computer cryptography (Matsumoto CCS ’96)
- Human authentication/identification (protocol / system /
scheme) (many researchers 1991-2015)
- …
20. 20 / 41
Two basic requirements
1. The password should remain secret after a
number of (ideally infinite or practically large)
authentication sessions are observed by an
untrusted party (= observer).
2. Any computation in the authentication process
must be conducted by the human user alone. =
The process should be human-executable. = Any
computing devices beyond the human user’s
brain are untrusted.
Here, the word “password” is a loose term
referring to a secret shared between a human
user (client) and a computer verifier (server).
21. 21 / 41
Manuel Blum’s words
- HUMANOIDs is a protocol that allows a naked
human inside a glass house to authenticate
securely to a non-trusted terminal. “Naked” means
that the human carries nothing: no smart cards, no
laptops, no pencil or paper. “Glass house” means
that anybody can see what the human is doing,
including everything that the human is typing.
- Special case: PhoneOIDs = HUMANOIDs over
phone
22. 22 / 41
Passive observers vs. Active observers
- Passive observers = Observers who only observe
all authentication sessions passively (without
manipulating any communications).
23. 23 / 41
Passive observers vs. Active observers
- Active observers = Observers who also try to
manipulate the communications (e.g. to choose
part of the authentication sessions).
This is harder!
25. 25 / 41
Interactive challenge-response protocol
- A secret S shared between prover/human (H) and
verifier/computer (C)
- Authentication is a challenge-response protocol
- C → H: t challenges C1(S), …, Ct(S)
- H → C: t responses R1=f1(C1(S),S), …, Rt=ft(Ct(S),S)
- C: Accept H if all t responses are correct; otherwise
reject H.
- NB: For some designs, fewer than t correct responses
(but at least some threshold t′ < t) may still be acceptable.
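The challenge-response template above can be sketched as a toy counting-based scheme (an invented illustration, not any published ORPS): the secret S is a set of K indices out of N, each challenge is a random half-size subset of the indices, and the response f_i(C_i, S) is the parity of |C_i ∩ S|, something a human can compute by counting in their head.

```python
import random

# Toy counting-based instance of the challenge-response protocol (illustrative
# only, not any published ORPS design).
N, K, T = 30, 5, 10  # alphabet size, secret size, rounds per session
rng = random.Random(2016)

def respond(challenge, secret):
    # R_i = f_i(C_i(S), S): count the secret elements present, answer the parity.
    return len(challenge & secret) % 2

def authenticate(user_secret, true_secret):
    """Verifier accepts only if all T responses match its own recomputation."""
    for _ in range(T):
        challenge = frozenset(rng.sample(range(N), N // 2))  # C_i is public
        if respond(challenge, user_secret) != respond(challenge, true_secret):
            return False
    return True

secret = frozenset(range(K))
print(authenticate(secret, secret))  # legitimate user is always accepted: True
```

An impostor with a wrong secret matches each round's parity with probability about 1/2, so T rounds reject them with probability about 1 - 2^-T.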
26. 26 / 41
Modelling observers
- The aim: Given n observed / chosen successful authentication sessions (= nt challenge-response pairs), recover the secret S with computational complexity smaller than brute force over S.
- R1^(1)=f1(C1^(1)(S),S), …, Rt^(1)=ft(Ct^(1)(S),S)
- …
- R1^(n)=f1(C1^(n)(S),S), …, Rt^(n)=ft(Ct^(n)(S),S)
- S = ? with complexity << #(S)
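The observer's task above can be simulated directly. The sketch below uses a toy parity scheme invented for illustration (response = parity of the overlap between challenge and secret), with tiny parameters so full enumeration is feasible; a real ORPS must make this pruning cost close to brute force over all #(S) candidates.

```python
import itertools
import random

# The observer records (challenge, response) pairs from successful logins,
# then keeps every candidate secret consistent with all observations.
N, K = 12, 3  # tiny on purpose: only C(12,3) = 220 candidate secrets
rng = random.Random(42)
secret = frozenset(rng.sample(range(N), K))

def respond(challenge, s):
    return len(challenge & s) % 2

observations = []
for _ in range(40):  # 40 passively observed challenge-response pairs
    c = frozenset(rng.sample(range(N), N // 2))
    observations.append((c, respond(c, secret)))

# Intersection-style attack: each observation rules out inconsistent candidates.
candidates = [frozenset(s) for s in itertools.combinations(range(N), K)]
for c, r in observations:
    candidates = [s for s in candidates if respond(c, s) == r]

print(len(candidates), secret in candidates)
```

Because this toy response is linear in the secret, a few dozen observations typically shrink the 220 candidates to a handful; defeating exactly this kind of pruning is what real designs must achieve.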
27. 27 / 41
- Security requires fi(Ci(S),S) to be complicated enough
that observers cannot compute S from a number of
(Ci(S), Ri) pairs.
- Usability requires fi(Ci(S),S) to be simple enough for
humans to understand and execute.
- Observers are computationally bounded adversaries, but
they have access to computers as auxiliary computing
resources.
- Human users have only their brains as the computing
resources.
- The only advantage human users have is knowledge of S.
- We need a human-executable one-way function.
An asymmetric war
28. 28 / 41
- Random guess (base line “attack”)
- Statistical attacks (frequency analysis)
- Algebraic attacks
- Intersection attacks
- Divide and conquer attacks
- SAT solver based attacks
- Meet-in-the-middle attacks
- Side channel attacks
- Human behavior related attacks
- “Smarter” brute force attacks
- Partially-known-password attacks
- …
A large number of attacking strategies!
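One of the strategies listed above, statistical attacks via frequency analysis, can be made concrete on a hypothetical threshold scheme invented for illustration (response 1 iff at least 2 secret items occur in the challenge). Secret items co-occur with response 1 more often than other items, so merely ranking items by that co-occurrence frequency exposes the secret.

```python
import random

# Frequency analysis against a toy threshold scheme (invented for illustration).
N, K, SESSIONS = 20, 4, 2000
rng = random.Random(3)
secret = frozenset(rng.sample(range(N), K))

def respond(challenge, s):
    return 1 if len(challenge & s) >= 2 else 0

hits = [0] * N  # how often item i appears in a challenge answered with 1
for _ in range(SESSIONS):
    c = frozenset(rng.sample(range(N), N // 2))
    if respond(c, secret) == 1:
        for i in c:
            hits[i] += 1

# Rank items by co-occurrence count; the secret items float to the top.
ranking = sorted(range(N), key=lambda i: hits[i], reverse=True)
print(sorted(ranking[:K]), "vs actual", sorted(secret))
```

No algebra is needed at all here, which is why counting-based designs must keep the response statistically independent of each individual item.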
29. 29 / 41
- 7 example ORPS schemes compared [Yan et al.
NDSS 2012] (a smaller usability score is better)
Bad news: Security vs. Usability
ORPS Scheme       | Usability Score | Security Level
HB protocol (LPN) | 33,874          | No major attacks
APW protocol      | 18,787          | No major attacks
CAS high          | 8,594           | Best known attack: O(10) observed authentication sessions
CAS low           | 7,818           | Best known attack: O(10) observed authentication sessions
Foxtail           | 3,513           | Best known attack: O(100) observed authentication sessions
CHC               | 1,575           | Best known attack: O(10) observed authentication sessions
PAS               | 924             | Best known attack: O(10) observed authentication sessions
30. 30 / 41
- Some general principles have been identified.
- Some general design strategies have been
proposed.
- A number of generic attacks are known.
- Many ORPS schemes have been proposed.
- Clues have been found pointing to the theoretical
impossibility of a sufficiently secure and usable
ORPS.
- Active observers are harder to handle.
Not very bad news?
32. 32 / 41
Hopper-Blum protocols (AsiaCrypt’2001)
- A general strategy: designing ORPSs
based on known NP-hard problems.
- HB Protocol 1: Based on Learning Parity with
Noise (LPN) problem
- HB Protocol 2: Based on Sum of k Mins
problem
- Security vs. Usability
- Security: the protocols have withstood attacks since 2001.
- Usability: poor; 166 seconds per login for an
implementation of Protocol 1.
- Found applications in light-weight
cryptography (humans → RFID chips)!
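HB Protocol 1 can be sketched in a few lines (toy parameters, simplified from the paper): the secret is a binary vector x, each challenge c is a random binary vector, and the human answers the inner product <c, x> mod 2 but deliberately lies with probability eta. The noise is what blocks Gaussian elimination on the observed linear equations (the LPN problem), and it is also why the verifier must tolerate some wrong answers.

```python
import random

# Minimal sketch of HB Protocol 1 (Learning Parity with Noise); parameters
# here are toy values, not the paper's recommended ones.
N, ROUNDS, ETA = 32, 200, 0.125
rng = random.Random(7)
x = [rng.randrange(2) for _ in range(N)]  # shared secret bit vector

def parity(c, s):
    return sum(ci * si for ci, si in zip(c, s)) % 2

def hb_response(c, s):
    noise = 1 if rng.random() < ETA else 0  # intentional error, probability eta
    return parity(c, s) ^ noise

wrong = 0
for _ in range(ROUNDS):
    c = [rng.randrange(2) for _ in range(N)]
    wrong += hb_response(c, x) != parity(c, x)

# Accept if the error rate is near eta (threshold halfway between eta and 1/2).
accept = wrong / ROUNDS < (ETA + 0.5) / 2
print(accept, wrong)
```

A random guesser is wrong about half the time, so with enough rounds the threshold between eta and 1/2 separates the legitimate user from an impostor.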
33. 33 / 41
Twins and Foxtail (my work 2004-2005)
- Two general architectures: Twins and Foxtail
34. 34 / 41
A Foxtail protocol (my work 2004-2005)
- The highlighted one in the table you saw before.
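The core of the Foxtail response, as I understand it from the published descriptions (the parameters and challenge format below are simplified guesses, not the exact protocol), is a counting answer passed through the "Foxtail function": reduce the count of secret icons in the challenge mod 4, then map {0,1,2,3} to {0,0,1,1}. The 2-to-1 mapping hides one bit of the underlying count from the observer in every round.

```python
import random

# Simplified sketch of the Foxtail response function (assumption-laden
# illustration, not the exact published protocol).
N, K = 40, 6
rng = random.Random(1)
secret = frozenset(rng.sample(range(N), K))

def foxtail_response(challenge, s):
    # Maps counts 0,1,2,3 (mod 4) to responses 0,0,1,1.
    return (len(challenge & s) % 4) // 2

challenge = frozenset(rng.sample(range(N), N // 2))
print(foxtail_response(challenge, secret))  # either 0 or 1
```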
35. 35 / 41
Undercover (ACM CHI 2008)
- A general strategy: Hiding part of challenges via a
trusted channel
36. 36 / 41
- Singapore Management University researchers
published a comprehensive review of ORPSs at NDSS 2012.
- My work (with collaborators from Australia) led to the
discovery of two classes of statistical attacks, which
shed light on the theoretical impossibility of absolute
security.
- Two new human-behaviour-based timing attacks
were reported in 2015.
- My work (with collaborators from Australia) in 2015
looked at more attacks on counting-based ORPS
schemes.
More recent development
38. 38 / 41
- Existing systems are not far from ideal ones.
- Many possible ideas have not been attempted,
especially graph-based schemes.
- Many NP-hard problems have not been attempted.
- In some scenarios (e.g. military and highly sensitive
financial systems), the usability requirement can be
relaxed in favour of security.
- Making use of untrusted computing devices (e.g.
mobile phones) may relax requirements on usability.
- Automated solution exploration may help!
Why bother? Should we simply give up?
39. 39 / 41
- A Singapore-UK joint project
COMMANDO-HUMANS will help.
- Automating security and usability
evaluation based on both system
and human modelling.
- An open event will be organised on
12th April from 12:30pm.
- Introduction to COMMANDO-HUMANS.
- Short talks by researchers from
Singapore Management University,
CSIRO (Australia), Southampton and
UCL.
Human computational models can help!
41. 41 / 41
- ORPS is a general concept covering a
large set of threats.
- A real solution to ORPS problems can open
up a new world of password-based user
authentication.
- It is hard to balance security and usability.
- Nobody has managed to create one.
- Will you be the one?
Take home messages
We are looking at more advanced versions of the optical attacks.
http://climateviewer.com/2014/01/18/nsa-tempest-attack-can-remotely-view-computer-cellphone-screen-using-radio-waves/
TEMPEST is a National Security Agency codename referring to technical investigations for compromising emanations from electrically operated processing equipment; these investigations are conducted in support of emission security (EMSEC). TEMPEST contains standards for shielding and separating wires and electronic equipment that are used by the U.S. and other nations to defend against interception of compromising emanations by foreign intelligence.
Hidden cameras:
Diksha Shukla, Rajesh Kumar, Abdul Serwadda and Vir V. Phoha, Beware, Your Hands Reveal Your Secrets! In Proc. ACM CCS 2014
Qinggang Yue, Zhen Ling, Xinwen Fu, Benyuan Liu, Kui Ren and Wei Zhao, Blind Recognition of Touched Keys on Mobile Devices, In Proc. ACM CCS 2014
Electromagnetic Emanations:
Martin Vuagnoux and Sylvain Pasini, Compromising Electromagnetic Emanations of Wired and Wireless Keyboards, in Proc. USENIX Security Symposium 2009
Optical Emanations:
Michael Backes, Tongbo Chen, Markus Dürmuth, Hendrik P. A. Lensch, and Martin Welk, Tempest in a Teapot: Compromising Reflections Revisited, in Proc. 2009 IEEE Symposium on Security and Privacy
Michael Backes, Markus Dürmuth, and Dominique Unruh, Compromising Reflections -or- How to Read LCD Monitors around the Corner, in Proc. 2008 IEEE Symposium on Security and Privacy
Davide Balzarotti, Marco Cova and Giovanni Vigna, ClearShot: Eavesdropping on Keyboard Input from Video, in Proc. 2008 IEEE Symposium on Security and Privacy
Markus G. Kuhn, Electromagnetic Eavesdropping Risks of Flat-Panel Displays, in Proc. PET 2004, LNCS 3424, 2005
Abe Davis, Michael Rubinstein, Neal Wadhwa, Gautham J. Mysore, Frédo Durand and William T. Freeman, The Visual Microphone: Passive Recovery of Sound from Video, in ACM Transactions on Graphics, vol. 33, no. 4 (Proc. ACM SIGGRAPH 2014), 2014
Acoustic Emanations:
Tong Zhu, Qiang Ma, Shanfeng Zhang and Yunhao Liu, Context-free Attacks Using Keyboard Acoustic Emanations, in Proc. ACM CCS 2014
Michael Backes, Markus Dürmuth, Sebastian Gerling, Manfred Pinkal and Caroline Sporleder, Acoustic Side-Channel Attacks on Printers, in Proc. USENIX Security Symposium 2010
Yigael Berger, Avishai Wool and Arie Yeredor, Dictionary Attacks Using Keyboard Acoustic Emanations, in Proc. ACM CCS 2006
Li Zhuang, Feng Zhou and J. D. Tygar, Keyboard Acoustic Emanations Revisited, in Proc. ACM CCS 2005
Dmitri Asonov and Rakesh Agrawal, Keyboard Acoustic Emanations, in Proc. 2004 IEEE Symposium on Security and Privacy
Password stealing software tools:
Kehuan Zhang and XiaoFeng Wang, Peeping Tom in the Neighborhood: Keystroke Eavesdropping on Multi-User Systems, Proc. USENIX Security Symposium 2009
If an auxiliary device is used, often the problem is simply moved to the device itself (it also needs protecting against observers).
“Correct” may mean “not match” (in case of intentional errors).
The functions fi may have some other parameters such as a random number.
So basically, fi needs to be a human-executable one-way function.
If you are interested in more details, talk with me.