
WebHack #13 Web authentication essentials


  5. Web 1.0
  6. SHA-2 IS NOT ENOUGH
  9. USE SLOW ALGORITHMS ▸ scrypt ▸ Argon2id ALREADY TOO OLD: ▸ PBKDF2 ▸ bcrypt
  12. OVER 60% REUSE THEIR PASSWORDS
  13. bill.brown@gmail.com : TellNo1 fakeaccount@hotmail.com : verysecret zerocool@myserver.net : Z3r0d4y$ ❌ ✅ ❌
  17. MULTI-FACTOR AUTHENTICATION SMS CODES ✔ Well-recognized ✔ No special hardware required ✔ Services like Twilio ✘ Cumbersome ✘ Requires a mobile number ✘ Multiple vulnerabilities TOTP ✔ Easy to use ✔ No special hardware required ✔ Standard ✘ Requires an app ✘ Requires scanning QR codes ✘ So-so security
  18. MULTI-FACTOR AUTHENTICATION FIDO ✔ Plug and click! ✔ Passwordless authentication (UAF) ▸ Mobile, Biometric, etc… ✘ Many versions ✘ Limited browser support ✘ Theoretical support is theoretical
  19. MULTI-FACTOR AUTHENTICATION WEB AUTHENTICATION API ✔ Based on FIDO 2 ✔ All FIDO Hardware ✔ W3C Standard ✘ Browser support still lacking ✘ Complex! Library support immature Promising, but not ready
  21. SESSIONS AND TOKENS SERVER-SIDE SESSIONS ▸ Complex state ▸ Multi-page form values ▸ Shopping cart ▸ Navigation information ▸ Smells like an Anti-pattern!
  22. SESSIONS SESSIONS ARE USER HOSTILE ‣ User often cannot ‣ Share links ‣ Press back button ‣ Open tabs ‣ Annoying warnings ‣ Expire and delete user work!
  23. SESSIONS AND TOKENS SERVER-SIDE SESSIONS ▸ Complex state ▸ Multi-page form values ▸ Shopping cart ▸ Navigation information ▸ Smells like an Anti-pattern! ACCESS TOKENS ▸ Lightweight ▸ Only keep auth state ▸ User ID ▸ Expiration date ▸ Scopes / Permissions ▸ State doesn’t change after issue VS.
  24. BEARER TOKENS WHY NOT ACCESS COOKIES? ▸ Cookies are implicit ▸ Extra overhead ▸ Vulnerable to CSRF ▸ … CSRF tokens required ▸ Modern APIs can be explicit
  25. BEARER TOKENS STATEFUL ▸ Token is unique ID ▸ Contents kept in DB ▸ Revocation is easy ▸ DB access on every call ▸ Harder to scale STATELESS ▸ Signed contents ▸ Optionally encrypted ▸ How to revoke? ▸ Blacklist: Same as DB? ▸ Bloom filters? VS. MISNOMER WARNING
  26. BEARER TOKENS: OAUTH 2.0 OAUTH 2.0 ▸ Architecture not protocol ▸ Multiple flows ▸ Bearer token header ▸ Refresh token OFTEN MISUNDERSTOOD
  27. BEARER TOKENS: OAUTH 2.0 AUTHENTICATION FLOWS CLIENT CREDENTIALS ▸ For authenticating clients ▸ Simplest flow ▸ Can replace with basic auth AUTHORIZATION GRANT ▸ For server side apps ▸ Client secret required ▸ Prevents hijacking code
  28. BEARER TOKENS: OAUTH 2.0 AUTHENTICATION FLOWS IMPLICIT GRANT ▸ For SPA clients ▸ Access token can be hijacked ▸ Use URL fragment ▸ Only use HTTPS ▸ Protect against XSS ▸ Encrypt access token (not standard) PASSWORD GRANT ▸ Client has the password ▸ First-party mobile apps ▸ First-party login page PKCE ▸ Third-party mobile apps
  30. BEARER TOKENS: OAUTH 2.0 REFRESH TOKENS REFRESH TOKENS ARE NOT ▸ Eternal tokens ▸ Long-lived access token ▸ Low-privileged access token for older sessions ▸ Eternal tokens or Eternal service? You can’t have both! ▸ What’s the point? ▸ Access tokens have timestamps
  31. BEARER TOKENS: OAUTH 2.0 REFRESH TOKENS REFRESH TOKENS ARE MEANT FOR REVOCATION ▸ Issue very short access tokens ▸ Refresh them constantly ▸ Access DB to check revocation only on refresh ▸ Return new access and refresh tokens
  33. BEARER TOKENS: OAUTH 2.0 REFRESH TOKENS HOW TO KEEP SESSIONS FOREVER ▸ Balance CVR and security ▸ Set a realistic lifetime for your refresh tokens ▸ Store only blacklist of revoked refresh tokens
  35. SOLUTIONS THIRD-PARTY PROVIDERS a.k.a. Social Login ✔ Easy and Free ✔ Sign-up not required (CVR up) ✘ Privacy issues (CVR down) ✘ Give up control of userbase ✘ Still need to implement sessions Do not use as only method
  36. SOLUTIONS: IDENTITY AS A SERVICE FULL-FLEDGED SOLUTIONS ▸ IDaaS + API Gateway Combo ▸ Can be white-labeled ▸ Can handle everything: e.g. Sessions, MFA, Revocation ▸ Your apps don’t need to know anything about authentication
  38. SOLUTIONS: IDENTITY AS A SERVICE NOT SUCH A BAD IDEA ▸ Cheaper than do-it-yourself ▸ Better security ▸ Spirit of *aaS WOULDN’T WORK FOR ME ▸ Existing system ▸ Vendor lock-in ▸ Future flexibility is held prisoner
  39. SOLUTIONS: FRAMEWORKS FRAMEWORKS ▸ Heavyweight ▸ Different levels of comprehensiveness ▸ Devise + Pundit is amazing ▸ If you can live with RoR
  40. SOLUTIONS: API GATEWAYS AND PROXIES API GATEWAYS ▸ Strong Open-Source Solutions: Kong, Tyk, WSO2 ▸ Great for microservices ▸ Complicated to maintain ▸ May kill performance AUTHENTICATION PROXIES ▸ Lightweight alternative for API Gateways ▸ Can run as sidecars ▸ Need to write your own ▸ No mature third-party solutions
  41. FIN

Editor's notes

  • Hi everyone.
    I want to thank the WebHack organizers, Fonda, Bible and Greg, for inviting me to give a talk again, and I want to thank everyone for coming to listen.

    This time I’m going to talk about a subject which is quite close to my heart: authentication. For the last 8 years this has been my main focus at work, and I’ve come to realize it’s a much more complicated topic than it seems at first glance. If you want to keep your users safe, it’s not enough to just get authentication working. There are many gotchas, and I keep discovering new ones even now. This talk will try to cover some of them, as well as common solutions.
  • Let’s go back in time. You might remember the web when it was still young.
    Here’s how Hotmail looked just after it was acquired by Microsoft.
    Yeah, the design is pretty bad, but believe me - it was A LOT better than what most sites looked like back then.
  • Well, web 1.0 was naive not just in terms of design and the belief that you could pay for everything with ads. Dealing with authentication was also dead simple:
    We first just had a nice login dialog like this, asking for username and password.
    -> But we can’t ask for password on every page access, right?
  • Enter cookies:
    If the password is correct, we send back a session cookie.
    Now the server framework would build a session object from the cookie on every request. Super easy for the developer.
  • Well, it’s not so easy anymore.
  • You’ve got to protect yourself against that. Simply storing and verifying a password is not enough.
  • You have to understand the risks or use an architecture which minimizes them.
  • This is the biggest revolution here. Even if you’re not doing microservices, you’ll have to deal with multiple third party APIs and at least separate the display and business logic parts of your own apps. You can’t just rely on in-process sessions managed by your favourite framework.
    Web 1.0 is long dead, and the authentication flows we knew are dead with it.
    Don’t worry: there are new, and actually even more solid, foundations for auth in the brave new world of microservices and SPAs, but they are less widely understood. The aim of this talk is to close this gap. I’m not going to focus on any specific technology, but rather review the essentials we need to better understand modern web authentication.
  • Let’s start with passwords. They’re still here and they’re not going anywhere soon - but they’re definitely endangered.

    The first issue with passwords is that it’s not only the user, but also the service, which has to keep them safe. If your password database is stolen, you don’t want anyone to be able to read these passwords. You don’t really have to know the passwords yourself, you just need to be able to verify them.
    Trying to solve this is nothing new; people have been hashing passwords for a long time. Who here hashes passwords when storing them in the DB?
  • It’s also important to choose a strong hash algorithm. Who of you has heard of successful collision attacks against MD5 and SHA-1?
    So you might want to use SHA-2, which has withstood all attacks so far and is widely considered safe… Right?

    Well, just hashing your password with SHA-2 would still make the password relatively easy to break.
  • Now, who of you is thinking right now: you should also use a salt along with your password? That’s right, salts make every hashed password unique and prevent some speed-up techniques used by password crackers.

    But it still doesn’t solve the core issue:
    Even if you use the safest hash algorithm on earth…
    Even if you block all possible speedups…
    You can’t prevent brute-forcing if your algorithm itself is blazing fast.
  • That’s the speed of a monster with 8 Nvidia GTX 1080 GPUs used by password crackers. You can also hire equivalent hardware on an hourly basis on the public cloud and pay 1 cent for 80 billion password attempts.
    Remember: password crackers have a x1000 speed advantage over you, by using specialized hardware.
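    The economics above are easy to check yourself. This sketch uses the ballpark figure from the slide (1 cent per 80 billion attempts against a fast general-purpose hash); the function name and figure are illustrative, not a benchmark:

    ```python
    # Ballpark from the talk: ~80 billion hash attempts per cent of cloud GPU time
    # when a fast general-purpose hash (e.g. SHA-256) is used.
    ATTEMPTS_PER_CENT = 80e9

    def cost_to_exhaust(alphabet_size: int, length: int) -> float:
        """Dollars to try every password of a given shape, under the figure above."""
        keyspace = alphabet_size ** length
        return keyspace / ATTEMPTS_PER_CENT / 100  # cents -> dollars
    ```

    Under that assumption, exhausting every 8-character lowercase password costs a few cents, which is exactly why the hash itself has to be slow.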
  • How do you defend against that?
    Well, you should never use a general-purpose hash algorithm. They are not insecure! They’re designed to be safe, but they’re also designed to be fast and parallelizable, especially on special-purpose hardware.
    They were just never designed to be used for hashing passwords, and their authors would be furious at you if they learned that you’re using them to do that. It’s just a sad misunderstanding.

    Good news is that we do have specialized algorithms for hashing passwords.
    Less good news is that the two most famous ones, PBKDF2 and bcrypt, are getting old. They’re still miles better than plain SHA-256, but they would result in some percentage of your passwords being cracked if your DB is stolen, and that’s not a good ending. You should use either scrypt or Argon2id. Note the “id” at the end; this is the recommended variant of Argon2. Argon2id is much stronger than scrypt, but less mature.

    Now, even if you use the best algorithms out there and your passwords are just uncrackable, there is still an issue.
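    The slow-hashing advice above can be sketched with Python’s standard library, which exposes scrypt directly (function names and cost parameters here are illustrative; tune them to your hardware):

    ```python
    import hashlib
    import hmac
    import os

    # Common interactive-login cost parameters; raise n if your servers allow it.
    SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) for storage; the salt is random per user."""
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(candidate, digest)
    ```

    Note that the server never stores or needs the plaintext password: it stores the salt and digest and can only verify candidates, which is exactly the property described above.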
  • You see, all of this is really just about being a good and responsible citizen. There are too many companies out there who do not keep their passwords safe enough. Possibly the vast majority of them.
  • Not just small players either. It’s an outright deluge.

    And other people’s irresponsibility will come back haunting you.
  • Because the harsh reality is that most users reuse their passwords. So that password on your site? They’re probably using the same password on 20 other sites. What if one of them gets hacked?
  • Well, the crackers will get their hands on the list of breached emails and passwords on the black market for leaked passwords - or even for free.
    And they’ll feed that to their bots.
    The bots will just keep trying every password on their list on your web site until they find something that works, because this user reused their password on multiple accounts.
  • To prevent that you need to use bot protection services. These services use a variety of techniques to detect bots, and maintain their own blacklists of bot IPs and fingerprints.
  • How do we detect bots and other suspicious logins?

    The two most common ways are browser fingerprinting and Geolocation.
    Browser fingerprinting tries to uniquely classify your browser settings and behavioral patterns.
    GeoIP uses your IP data to find the locations you usually log in from.

    These methods are both imperfect, but can give you a statistical confidence score. You shouldn’t be blocking users outright based on that score, but you can use it to slow attackers down. They cause just a little bit of inconvenience to normal users, but make it much more expensive to crack users on your site.

    Remember: the attackers are only there for money. The moment they pay more for cracking (because it takes more time or CPU power), they’ll just quit.
  • So, how do we slow them down then?

    One way you all probably know is CAPTCHAs. If you ever wondered what that silly-sounding acronym means, well - it’s Completely Automated Public Turing test to tell Computers and Humans Apart. The original idea was: let’s pick a task computers are bad at but humans are good at and use it to block all bots. Like… recognizing squiggly handwriting?

    It’s far from perfect in this day and age, but it still works well at slowing bots down.
    You might have noticed that the example of a CAPTCHA I put here is reCAPTCHA. That’s been the most popular CAPTCHA service for a while now and it’s offered for free by Google. One thing you might not know is that it’s not doing just CAPTCHAs anymore. It’s also doing proof of work.

    When you click on the checkbox, it downloads a mathematical puzzle from the Google server and tries to solve it using Javascript. That’s why it takes so long to get the check mark. This is the same kind of Proof of Work used for Bitcoin mining, although the bitcoin puzzles are several orders of magnitude harder.

    For the regular user, waiting an extra second every time they type their password is a minor inconvenience, but for attackers who want to try thousands of passwords each second, this makes the attack much more expensive.
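    The proof-of-work idea can be sketched as a hash puzzle: the server hands out a random challenge, and the client must find a nonce whose hash clears a difficulty target. This is an illustration of the concept only, not reCAPTCHA’s actual puzzle:

    ```python
    import hashlib
    from itertools import count

    def solve(challenge: bytes, difficulty_bits: int) -> int:
        """Brute-force a nonce whose SHA-256 has `difficulty_bits` leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        for nonce in count():
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def check(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
        """Verification is a single hash, so it is essentially free for the server."""
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
    ```

    The asymmetry is the point: each extra difficulty bit doubles the solver’s expected work, while the server’s verification cost stays a single hash.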
  • Another way to protect your users better is to enable MFA, that is Multi-Factor Authentication. It’s based on the theory of classifying authentication mechanisms into three “factors”…

    Each factor is vulnerable alone
    - Passwords need to be memorable which makes them easy to guess
    - Physical devices can be stolen easily
    - Biometric features can be copied and scanners can be fooled
    But they are better together
    Example combinations:
    - Smartphones: Isolated private key + PIN or Biometric data
    - SMS 2FA: SIM card + Password

    The theory behind MFA says that adding an extra authenticator in the same factor (e.g. multiple passwords or two different physical tokens) will give little extra security, so you need to cover as many factors as possible.
  • Here are some of the second authentication factors you can use in addition to password (or instead of it).

    SMS codes are something that a lot of places have been using for a long while. Their advantages are that everybody knows them, so your users would have no issues understanding how to use them.
    They also don’t require any special hardware and there are many services like Twilio which make it easy to send an SMS.
    On the other hand, checking and typing in a code sent by SMS is cumbersome, and it requires a mobile number - something that people often don’t have access to (e.g. when traveling abroad). But the main killer is that SMS is not considered a safe channel anymore. There is a multitude of ways for hackers to steal SMS codes sent through the network.

    TOTP is another common option. It’s easier to use than SMS and still doesn’t require special hardware. You do need an app, but there are many apps which support TOTP and they’re all compatible. When you configure TOTP for the first time, you just take your app and scan a QR code displayed by the site (or type in a key), and from then on the app will be able to generate TOTP codes for you, which can be verified by the site.
    The main downsides are that installing an app and scanning a QR code might be a bit daunting for users. The security itself is also not great. It’s better than SMS, but it’s not as good as some other solutions.
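    TOTP is small enough to sketch from the standard library alone. This follows RFC 6238 (HMAC-SHA1 over a 30-second counter, with RFC 4226’s dynamic truncation) and reproduces the RFC’s published test vectors:

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, at: float | None = None,
             digits: int = 6, step: int = 30) -> str:
        """Compute the TOTP code for a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32)
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10**digits
        return str(code).zfill(digits)
    ```

    The server verifies by computing the same code from the shared secret, usually accepting the adjacent time steps as well to tolerate clock drift.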
  • There have been other solutions like fingerprint scanners and smart cards, but they’ve never been properly standardized for signing in to web apps. The FIDO Alliance is an organization that was set up trying to solve that (Disclaimer: Rakuten is a proud member). The idea is to create a standard which can be used for all non-password authentication mechanisms.

    FIDO envisions a future where you can use dedicated biometric scanners, biometrically-secured smartphones and NFC tokens interchangeably and apps can require an authenticator of certain type or from an approved manufacturer, or a combination of different devices.

    In reality, FIDO support in mobile devices and dedicated scanners is still quite weak, but there are many USB keys supporting FIDO and they’ve gotten quite cheap.

    And this is a pretty convenient method of authentication. It’s just plug & click.

    The downsides are browser support (only Chrome and Firefox), the heavyweight documentation, and the many different versions (UAF, U2F, FIDO 2.0), which made implementation quite complicated.
  • The solution was to standardize FIDO for the web. It’s based on FIDO 2.0, and most FIDO hardware should be compatible with it. It’s also a W3C standard, so don’t expect it to be particularly lightweight (remember these are the guys who brought you XML), but browser support and framework support WILL get better.
    Browser support is lacking, since this standard is so new. Firefox released a version that supports Webauthn one month ago, and Chrome just released one last week! Microsoft still has an incompatible version implemented in Edge, but they also created a polyfill to address that and will release native support eventually. Only Safari is late (as usual), but I guess support will eventually come there too.
    The other issue is lack of mature libraries, and implementing Webauthn yourself can be quite daunting.
  • After dealing with how to authenticate users, we need to answer another question: how to keep the authentication state without re-authenticating them on each and every request?
    This is obviously even more unrealistic nowadays, with SPAs that do hundreds of API requests a minute on the users’ behalf.
    So we have to get back again to sessions and introduce a new player: tokens.
  • When talking about sessions, we usually refer to the classic implementation of server-side sessions. These are serialized objects which contain the entire user interaction state, e.g. previously entered values in multi-page forms, items in the shopping cart, and sometimes even the navigation state (with particularly evil apps killing the browser back button and using their own custom back buttons and session state).

    All of this state is stored in-memory (which means you need sticky sessions associating a single user with a single server) or in a cache-backed database. In both cases the app becomes quite hard to scale, not to mention the pain of breaking a monolith when sessions are involved (how do you share a session object between different microservices?)

    If this smells like an anti-pattern - well, that’s because it is.
  • You must have seen error pages like this before in some apps. I hope no one here thinks it’s good UX anymore, but this used to be the creed of Java EE and ASP.NET developers everywhere.
  • There’s really no other way to say it. This type of sessions is user hostile.
    Since everything requires complex session information to work, they prevent your users from sharing links, saving links for later use, using the back button or working on different parts of your site in multiple tabs. What’s worse, they get annoying modals popping up and sometimes their session expires, deleting all their precious work!

    Now yes, some sites handle that better than others, but the reality is that you don’t need all of that.
  • You can do multi-page forms on the client side.
    You can store shopping cart items on the client side or in a non-session-oriented database (so you can share the shopping cart across multiple devices).
    And you can let the browser handle navigation state.

    This is where modern access tokens come in. Access tokens basically distill the classic heavyweight sessions down to ONLY the information related to authentication. The only state they keep is things like the user ID, expiration date and permissions.

    Access tokens are extremely lightweight, but perhaps the most important property they have in comparison to server-side sessions is that you can’t change their state after they’ve been issued. The user could call an API to issue a new access token, but you can’t “change” any property of an already issued token.

    This means that microservices can share the same access tokens without having to synchronize their state!
  • If tokens are so great, why not just keep passing the token values in cookies, like we used to do with sessions?

    Well, the main problem is that cookies are implicit. Once you set a cookie for a domain, it ALWAYS gets sent to the server, whether you want it or not.

    This means extra overhead when you need to obtain public or static resources (such as images) which don’t require extra authentication.

    This means vulnerability to Cross-Site Request Forgery - a very dangerous attack that basically requires another type of non-cookie token (called CSRF tokens) to protect against. Which begs the question: if we MUST use tokens to use cookies for sessions safely, why do we need cookies for sessions anyway?

    If you build a client-side app (SPA or not) with Modern APIs, you don’t need cookies anymore. You can just send the token explicitly in a header.
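    Sending the token explicitly is a one-liner; per RFC 6750 it travels in the Authorization header with the Bearer scheme (the URL and token below are made up):

    ```python
    import urllib.request

    def authorized_request(url: str, access_token: str) -> urllib.request.Request:
        # No cookies involved: the client attaches credentials only to the
        # requests that actually need them.
        return urllib.request.Request(
            url, headers={"Authorization": f"Bearer {access_token}"}
        )
    ```

    Because nothing is attached implicitly, static resources carry no auth overhead and a forged cross-site request carries no credentials, which is the whole argument against session cookies above.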
  • This is where bearer tokens come in. These are basically just tokens you send in the HTTP Authorization header with the Bearer type.

    But what should the bearer token string itself be? How do you generate it and how would you verify it?

    Well, people generally talk about two types of tokens: stateful and stateless. Here I have to issue a misnomer warning…
    When we talk about stateful or stateless, we don’t talk about the tokens themselves, but rather about the server - if we need to keep server-side state somewhere that contains the information for all the valid bearer tokens, then these tokens are stateful. If the server doesn’t have to keep any database of bearer tokens, we refer to these tokens as stateless.

    With stateful tokens, the server basically keeps a lookup table - in a database, a cache or a combination of both - and the token is just a unique ID pointing to a record in this table which contains all the information. It’s quite simple to implement in software, and revoking a token is very easy - you just delete its record in the table.

    Stateless tokens use cryptography to create signed (and possibly encrypted) message. These messages contain all the necessary token state and they can’t be forged by anyone but the trusted servers. This means that in order to verify the tokens, the servers just need to decrypt the message and verify the signature. You don’t need to maintain a database, and you don’t need to suffer the performance impact of a DB or cache query on every single API call. You can also easily scale world-wide.

    The main drawback for stateless tokens is that revocation is hard. You can put all of them in the DB, but that would defeat the purpose. You can maintain just a blacklist, and that would reduce your storage cost, but you would still need to go to a DB or cache on every query. You can get smarter and put a dynamically updating bloom filter in front of your blacklist, but that’s far from an easy thing to implement.

    There is a good solution to this issue, but we will come to that later.
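    A stateless token can be sketched as a signed payload. This is a minimal HMAC scheme in the spirit of JWT, not a spec-compliant implementation; the secret and claim names are illustrative (keep real keys in a secret store):

    ```python
    import base64
    import hashlib
    import hmac
    import json
    import time

    SECRET = b"server-signing-key"  # illustrative; never hard-code real keys

    def _b64(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def issue(user_id: str, ttl: int = 300) -> str:
        """Sign the claims so any server holding SECRET can verify without a DB."""
        claims = {"sub": user_id, "exp": int(time.time()) + ttl}
        payload = _b64(json.dumps(claims).encode())
        sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
        return f"{payload}.{sig}"

    def verify(token: str) -> dict | None:
        payload, _, sig = token.rpartition(".")
        expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
        if not hmac.compare_digest(sig, expected):
            return None  # forged or corrupted token
        claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
        return claims if claims["exp"] > time.time() else None  # expired -> None
    ```

    Note what is missing: there is no `revoke` function. Once signed, a token is valid until it expires, which is exactly the revocation problem described above.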
  • Most token-based APIs claim to adhere to OAuth 2.0. That’s a pretty impressive adoption rate. What we have to remember is that OAuth 2.0 is not really a full protocol, but rather an architecture and a set of design guidelines for creating token-based APIs.

    Its three pillars are the multiple authentication flows (which all end up producing an access token), the standardized Bearer token header for sending the access token, and the token refresh mechanism, which is unfortunately too often misunderstood.

    It’s important to note that OAuth 2.0 does not say anything about the structure of the access token itself.
  • One of the more confusing parts of OAuth 2.0 is the authentication flows. I admit it took me a year or so until I finally managed to get them all sorted in my head. But they can be much easier to understand if you just categorize them by the purpose they serve. Each flow is optimized to solve a different common use case and thus has different security characteristics which are meant to protect against different types of attacks while still making the target use case possible.

    The simplest flow is the client credentials flow, which is meant for authenticating API clients when no actual user is involved. In many cases you can just replace it by repeating the client credentials in basic auth on every request, but if your API has hundreds of thousands of clients (imagine Google Maps for a minute), you can’t store all of them in memory and an access token would help.

    Authorization grant is the classic flow for user authentication in a server-side app. It requires a client secret and an extra step (converting the auth code to an access token) to prevent token hijacking by malicious sites.
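    The basic-auth alternative mentioned for the client credentials flow is just a header per RFC 7617; the client ID and secret below are the RFC’s own example values:

    ```python
    import base64

    def basic_auth_header(client_id: str, client_secret: str) -> dict[str, str]:
        # RFC 7617: "Basic " + base64(client_id + ":" + client_secret)
        creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    ```

    This repeats the credentials on every request, which is fine for a handful of clients; once you have huge numbers of them, exchanging credentials for a short-lived access token scales better, as the note above points out.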
  • Implicit grant is meant for SPAs and other client-side browser apps. They can’t safely store a client secret, so in order to prevent the access token from being hijacked, you need to use a URL fragment (instead of sending it in a query string which can be intercepted by a malicious server) and protect against XSS. Another (non-standard) option is to add public-key encryption to the token.

    Password Grant is the flow you use if you have to implement first-party authentication, either in an app or on your own main login page. Unless you use traditional HTML forms, at some point you’ll have to have an API which receives a username and password and produces an authentication token based on that.

    I won’t get much into PKCE, but I’ll just mention it’s meant to prevent custom URL hijacking on mobile apps, so this is the preferred flow if you’re using OAuth 2.0 in a mobile app.
  • Let’s go back to what I hinted at twice before: Refresh tokens.
    They are the most misunderstood part of OAuth, and this is quite unfortunate, since they are our absolute savior when it comes to supporting BOTH revocation and stateless tokens at the same time.

    How do refresh tokens work? Well, the theory is simple:
    When you receive an access token from an OAuth server, you also get a matching refresh token. The refresh token can be used to generate a new access token at a later date.
  • Before we get to what refresh tokens ARE, we need to understand what they are not. Or at least not supposed to be - I’ve unfortunately seen them used for all the purposes above.
  • The access tokens themselves are stateless.

    OAuth refresh grant allows refreshing the refresh token itself!
    OAuth specifies that the client must replace the refresh token if its value was changed.
  • Why is it important to make the refresh token expire at some point (while still allowing it to be refreshed)?
    This allows you to use a blacklist for the refresh tokens and limit the blacklist size, so the blacklist TTL == refresh token TTL.

    This gives us a very easy formula for deciding expiries:
    The access token TTL is the maximum delay before token revocation is enforced.
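    The rotation scheme above can be sketched in a few lines. The in-memory dict stands in for the refresh-token table (a DB in practice), and the TTLs are illustrative:

    ```python
    import secrets
    import time

    ACCESS_TTL = 300               # 5 minutes: max delay before revocation bites
    REFRESH_TTL = 30 * 24 * 3600   # 30 days: chosen to balance CVR and security

    sessions: dict[str, dict] = {}  # refresh token -> record; a DB table in practice

    def login(user_id: str) -> str:
        token = secrets.token_urlsafe(32)
        sessions[token] = {"sub": user_id, "exp": time.time() + REFRESH_TTL}
        return token

    def refresh(old_token: str) -> str | None:
        """Consume the old refresh token; revocation is checked only here."""
        record = sessions.pop(old_token, None)
        if record is None or record["exp"] < time.time():
            return None  # revoked, already used, or expired -> force re-login
        return login(record["sub"])  # rotate: issue a brand-new refresh token
    ```

    Revocation then means deleting (or blacklisting) one row: access tokens stay stateless and short-lived, and the DB is only touched once per ACCESS_TTL per client.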
  • Now, if the client constantly needs to refresh a session, how do we make that last forever? At some point, your refresh tokens will expire and clients who didn’t refresh for a while will have to re-login. That hurts CVR, so what do you do?

    You need to balance CVR and security and choose the right maximum lifetime for your refresh tokens. There’s always a point beyond which pushing the session lifetime longer doesn’t win you any CVR. If a user hasn’t logged in to your site for 2 years, the chance they’re coming back with a valid token is slim - they’ve probably switched to a new phone and reformatted their hard drive since then.
  • Well, there are ready made solutions out there.

    First and easiest is just using a third party identity provider, a.k.a Social Login.
  • Full-fledged solutions combine IDaaS - that is, Identity as a Service - providers like Auth0 or Amazon Cognito with an API Gateway. There are also more enterprise-focused IDaaS solutions like Okta and the venerable Active Directory.
  • The general architecture is pretty simple: all the auth is handled by the IDaaS provider, which returns SAML or Open ID Connect proof-of-identity back to the API gateway - or a separate auth server - which creates access tokens. These are consumed by the API gateway. You put all of your APIs behind the gateway and make them trust the authentication headers that they’re getting from the API gateway.

    So all the authentication, including MFA, bot detection, password hashing, etc. is handled by the IDaaS provider, and all the session management is handled by the API gateway.
  • This will cost you money and bring some lock-in, but it’s not necessarily a bad idea. Unless you’re big enough, you’ll probably be paying less than what it costs to maintain teams of engineers dedicated to authentication. This is the spirit of *aaS in general - you outsource something which is not your core competency.

    There are some valid reasons to say that wouldn’t work for you however.
  • Frameworks are another solution. At their best, they can really handle everything, including authorization, revocation, user management and integration with all the third party providers and MFA solutions you could imagine. Devise + Pundit and their various plugins in the Ruby on Rails world are an amazing example for a strong ecosystem that works very well with almost zero setup cost and without giving up security.

    The problem with these frameworks is that they are all made for monoliths. They don’t scale very well, and are often pretty awful if you need to handle major traffic. Besides, not all languages have comprehensive solutions like those in the Java and Ruby world. Go and Node.js don’t, and that’s okay - since backend development in these languages never tried to appeal to people writing monoliths.
  • So what would be the right solution for microservices?

    Currently the main contender seems to be API gateways.