3. Who the hell am I?
Dave Tibbs
@LowlySysadm1n
l Systems Administrator at Brightpearl Inc
l Started at Brightpearl UK in October 2010
l Back then, only about 20 people in the company – I was the only Systems Administrator/General IT Dogsbody
l ~7 years' experience as a sysadmin working with various flavours of Linux
4. Security – everyone knows it's important, right?
l Wrong!
l In my experience, faced with “more important” priorities (production issues, delivery deadlines), security is one of the first things to fall by the wayside
l But left unchecked, it has some of the worst damage potential.
6. My thought process...
● First – loads of instances spun up in a very short space of time?
● Very unlikely that someone could manage it that quickly through the UI – even with scripts (e.g. Greasemonkey), the AWS UI isn't exactly quick
● Got to have been done through the API
● What is API auth controlled by? AWS keypairs.
● ARGGGHHHH
● DISABLE ALL THE KEYPAIRS!
7. Speedy resolution
l Disable all the keypairs on our AWS account. Even if it breaks something in production, with those keypairs (which likely had root privileges) an attacker could do far worse:
● Spin up expensive instances for nefarious purposes (they did this bit)
● Delete RDS instances
● Delete snapshots
● Terminate EC2 instances
● A whole lot more...
l Phone AWS Support and find out which keypair had been used to spin up the instances, so we could keep that one disabled
l Keep everything else disabled until we could move everything into IAM (more about this later)
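The “disable first, ask questions later” step can be sketched with boto3 (the AWS SDK for Python). This is an illustration, not the tooling actually used at the time: the client wiring and user names are assumptions. Note it marks keys Inactive rather than deleting them, so the action is reversible once the dust settles.

```python
def disable_all_access_keys(iam, user_name):
    """Mark every access key belonging to user_name as Inactive.

    `iam` is expected to behave like boto3.client("iam"). Disabling a
    key is reversible (unlike deleting it), which is what you want when
    acting under pressure.
    """
    keys = iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]
    for key in keys:
        iam.update_access_key(
            UserName=user_name,
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )
    return [k["AccessKeyId"] for k in keys]
```

In a real incident you would loop this over `iam.list_users()` – and remember that root-account keys don't show up in per-user IAM listings, which is part of why they're so dangerous.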
8. So how the hell did it happen?
l Genuine badness in our setup:
● Our AWS infrastructure, and our usage of it, was likely set up in a hurry
● The AWS keys in use all over our infrastructure were attached to the root AWS account (all teh privilegez!)
● Once these were in use and working, there was no impetus to change them – “Fuck it, it works”
● These keys could well have been used outside the organisation – external services using AWS keys to monitor spend, test CloudWatch integration, etc.
9. So how did they get that keypair?...
l Unfortunately, because we used the same AWS key everywhere (cue more facepalming), it's hard to know. Possibilities:
● A nefarious, or even just careless, ex-employee – keypairs had never been rotated or regenerated
● The keypair was stored in an environment variable for easy access by everything (as well as in other places) – was a bug found that exposed these?
● The keypair was used with an external service that has been compromised
10. Steps for fixage
l Secure the AWS account:
● IAM, IAM, IAM!
● The more keypairs the better. Don't share keypairs between functions – this lets you restrict access per keypair/function
● MFA for privileged user accounts
● Completely remove use of the root account anywhere
l Avoid using keypairs outside your AWS infrastructure (e.g. for external reporting sites)
l ROTATE YOUR KEYPAIRS REGULARLY
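A rotation policy is easy to check mechanically. A minimal sketch, assuming key metadata shaped like the output of IAM's `list_access_keys` (the 90-day cutoff is an arbitrary example, not a figure from the talk):

```python
from datetime import datetime, timedelta, timezone

def keys_needing_rotation(key_metadata, max_age_days=90, now=None):
    """Return the AccessKeyIds older than max_age_days.

    key_metadata: a list of dicts with "AccessKeyId" and a timezone-aware
    "CreateDate", as found in list_access_keys()["AccessKeyMetadata"].
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in key_metadata if k["CreateDate"] < cutoff]
```

Run something like this on a schedule and alarm on a non-empty result, and “never rotated since 2010” can't happen silently.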
12. This is always happening to the best of us.
l There are tales of bad security practice everywhere you look:
● Adobe hack – hashed but not salted passwords – 150 million affected
● PSN hack – 77 million accounts leaked
● Random hackers in Russia seem to leak new email/password combos every week – which means there are STILL companies storing them unencrypted
● #thefappening
13. Dem blockers though...
l Remember the whole “falling by the wayside” thing? Yeeeaaaahh....
● Security is “boring” – the focus, especially in startups, is on the shiny-shiny
● People often don't have the time to implement things “properly”
● “Fuck it, it works” – bad security practice is usually a temporary test never intended to be permanent, but it always ends up that way – root password!
● Change (even change that hardens security) is seen as “risky”
14. So what to do?
l FIGHT BACK!
● Nobody recognises the value of securing systems, or of focusing on security, while you're not being hacked
● There's always a higher priority – releases, features, other bugfixes
● If you are hacked, it's suddenly TOP PRIORITY and you're strung up for not having done it sooner
l Be proactive rather than reactive
l Black ops:
● JFDI
15. Getting people to care about security is a good thing
l If people care, they understand the importance of spending time on implementing good security setups
l Get rid of the “But it works now, and X is more important” mentality. Security isn't only important when it's breached.
l More work now avoids major, major pain later
16. A final note...
MS14-066 – Vulnerability in SChannel Could Allow Remote Code Execution (2992611)
● Critical vulnerability in Microsoft SChannel – similar to Heartbleed, but it allows pushing and execution of code
● Patched yesterday
● Affects Windows Server 2003, 2008 R2 and 2012, as well as Windows Vista, 7 and 8
● This means every major TLS stack – OpenSSL, GnuTLS, NSS, Microsoft SChannel and Apple Secure Transport – has had a severe vulnerability this year.
Editor's notes
In an office in San Francisco, a couple of months ago....
Browsing AWS Console one morning, and suddenly noticed a lot of c3.8xlarge instances with no name
We'd been playing with test-kitchen on EC2 recently, so assumed the UK team had accidentally spun things up with a huge instance size
Asked UK team but they were busy with other things and didn't see my messages
Upon checking other regions, found they'd been spun up in all of them
My suspicion is that they were spun up by a bot for the purposes of password cracking or (more likely) altcoin mining.
Either way, it's baaaaad
Discovered from Amazon that the keypair used to spin up the instances was the one in use all over our infrastructure. Couldn't initially find it within IAM account management, then discovered it was actually one of the two keypairs attached to our root AWS account – and it was in use everywhere
Was on the phone to Amazon for ages before getting through, because we “only” have Gold support. Definitely worth bearing in mind if something REALLY bad was happening. Spinning up lots of instances just cost us money, which I caught quickly.
Our initial EC2 account and US setup was built a month after IAM was released and likely wasn't highly publicised. Hard to find documentation as it was back then.
However, this wasn't the failing – the failing was us never changing it.
IAM – Identity and Access Management. It's a service provided by Amazon to manage access to different AWS services for individual users and groups.
AWS try to make things as easy to use as possible – and I get why they do this – but it makes it really easy to get set up and running (and, thanks to the “Fuck it, it works” methodology, stay running) on a setup with bad security.
It's well worth sitting down and spending the time to learn IAM. It's not hard and if you've spent any time with any kind of ACLs you'll get it in no time.
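To make “restrict access per keypair/function” concrete, here is a minimal IAM policy sketch for a hypothetical monitoring user that can read CloudWatch metrics and nothing else. The action names are real CloudWatch read actions; the statement ID and overall scoping are illustrative, not a drop-in policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyCloudWatchForSpendMonitoring",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "cloudwatch:DescribeAlarms"
      ],
      "Resource": "*"
    }
  ]
}
```

If the key attached to a user like this leaks, the attacker can read some graphs – not spin up c3.8xlarge instances in every region.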
Explain hashing versus salting
Hashing – the password is run through a one-way algorithm, and authentication attempts are hashed with the same algorithm – if the result matches the stored string, the password is correct. HOWEVER, if those hashed password strings are ever leaked, identical strings = identical passwords. Lots of identical strings = a common password or dictionary word.
Salting is taking another string (ideally random per user, though sometimes a username or email is used) and mixing it into the hash, so two people with the same password won't have the same password hash stored.
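The difference can be shown in a few lines of Python. SHA-256 is used here purely for illustration – a real password store should use a deliberately slow KDF such as bcrypt, scrypt or PBKDF2:

```python
import hashlib
import os

def unsalted_hash(password):
    # One-way hash only: identical passwords produce identical stored strings.
    return hashlib.sha256(password.encode()).hexdigest()

def salted_hash(password, salt=None):
    # A random per-user salt means identical passwords hash differently;
    # the salt is stored alongside the hash so logins can still be verified.
    if salt is None:
        salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()
```

With unsalted hashes, a leaked database immediately reveals which users share a password; with salts, every hash has to be attacked individually.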
Seeing a change to harden security as “risky” is ridiculous – what could be more risky than an attacker getting in and breaking everything?