44. launchd
• pretty sweet (on 10.5)
• somewhat sweet on 10.4
• 10.3 still exists?!?
• check out <key>KeepAlive</key> for watchdog-related goodness, in launchd.plist(5)
The goal of this presentation is to give you an idea of how security experts think about designing security into applications. A few examples of Mac OS X technologies will be used to indicate how these principles can be applied in real applications. Finally, we’ll look at an example of a vulnerability in an app, so that we can apply the ideas we’ve learned.
Why should I want to talk about security, and why should you want to listen? The press and security researchers like talking about insecure Macs. They don’t care whether the holes are in our apps or in Apple’s; come to that, neither do our customers. If my app is less secure than the competitor’s then that’s a reason to choose the competition; just like UI fit and finish, usability or performance.
First, remember that security is not a one-size-fits-all operation. Something which works in one context may not be appropriate elsewhere.
The questions to ask are about risk: what could go wrong, how likely is it, and what would the impact be? Can I live with that? How much am I (or are my customers) willing to pay to reduce that risk?
My “Pythagoras theorem”, i.e. my fundamental rule of software security, is to think of it like real-world security. Securing an office building by locking _everyone_ out would stop burglars getting in, but it would stop the workers getting in too. Ultimately the user has to be confident that they can get their work done without untoward problems, just as good real-world security provides assurance that law-abiders can go about their business.
So if we want to identify and mitigate threats which pose a risk to our app, we need to know what a threat is. We want to know _who_ is doing something which compromises our app, _what_ they get by doing it (or, conversely, what we lose), and _how_ they get in and acquire that asset.
Could be a malicious person, could be someone accidentally exploiting a problem, such as misconfiguring their own application. That’s why I used the term “misuser” instead of “abuser”. They could be known to the customer/user or you or not. Each attacker will have different characteristics.
Example: CanSecWest held the pwn2own competition, where competitors were encouraged to compromise various computers in order to win that computer as a prize. In that arena, the attacker is motivated by personal gain, there is little to no chance of recrimination so they’re likely to take huge risks and it’s also probable that they’d be security experts. That’s quite an edge case though.
The assets in an application can be tangible data held by the app, such as a password, a user’s identity or some information of financial value. Alternatively they can be intangible; there’s no file on the Sophos webserver which actually contains the company’s reputation, but the reputation could still be damaged by a successful attack on the webserver content.
The asset at risk could also be something which the app has access to but doesn’t actually “own”, such as the network connectivity or CPU time which are often the targets of zombie networks.
So we can classify the importance of assets - and thus the value in protecting them - along at least three axes:
* how much damage would be done (put another way: how much would it cost) if this asset were to be read by someone who shouldn’t be able to?
* how much damage would be done if the asset were modified in an unexpected fashion?
* how much damage would be done if the asset disappeared, or could not be used for the legitimate use cases?
Filesystem permissions can protect the confidentiality and integrity of persistent assets - up to a point. The super-user gets to trump the permissions model.
Of course, it’s easier to change the permissions or ACLs on a file than it is to protect it against misuse - think carefully about what classes of user will be interacting with your app, and what they should be able to change or read.
Of course the filesystem permissions can be trumped by the super-user, but it’s not always the case that their superior status should mean they can read a regular user’s data. That’s where encryption comes in. Keychain is actually very easy to use for the usual case of keeping one password for an app to access a single service such as a web application or e-mail account.
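The one-password-per-service case can be sketched with the Keychain Services C API (macOS only; link with -framework Security). The service and account names are hypothetical examples; this is an outline of the call pattern, not a complete implementation.

```c
#include <string.h>
#include <Security/Security.h>   /* macOS only; -framework Security */

int main(void) {
    const char *service = "com.example.MyApp";   /* hypothetical */
    const char *account = "alice";               /* hypothetical */
    UInt32 passwordLength = 0;
    void *passwordData = NULL;

    /* Look up the stored password in the user's default keychain */
    OSStatus status = SecKeychainFindGenericPassword(
        NULL,                                    /* default search list */
        (UInt32)strlen(service), service,
        (UInt32)strlen(account), account,
        &passwordLength, &passwordData,
        NULL);                                   /* item ref not needed */

    if (status == errSecSuccess) {
        /* ... use the plain text briefly, then release it promptly */
        SecKeychainItemFreeContent(NULL, passwordData);
    } else if (status == errSecItemNotFound) {
        /* First run: ask the user for the password, then store it with
           SecKeychainAddGenericPassword() so we never keep our own copy */
    }
    return 0;
}
```

The point of the design is that the app never persists the secret itself; the keychain handles encryption, and the OS handles asking the user whether this app may read the item.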
The longer a secret is kept in memory, the easier it is for a debugging tool such as gdb or F-Script Anywhere to retrieve it. Keychain allows us to pass around references to the encrypted secret, only retrieving the plain-text at the point where it’s really needed.
So that was how we can protect the confidentiality and integrity (and to some extent, the availability) of filesystem assets. But what about the integrity of our app itself?
So it’s incredibly easy to sign apps with Xcode, but for some reason few apps actually ship signed. Why is that? I think it’s because there’s very minimal UI related to the feature in Leopard, so it’s hard to see that there’s any benefit for the Mac user on the Clapham omnibus. However, look at the iPhone where the code signature is used everywhere, and the administration features on OS X (and Server) which rely on code signatures such as the application controls and the firewall.
So, presumably, I&#x2019;m going to address availability next.
Launchd offers some very cool and flexible configuration as a service watchdog, so if there’s some service used by your app for which availability is important this should be your first port of call. Note that there were a few bugs on 10.4 and the whole thing was less flexible. 10.3 and before never existed - we have always been at war with Eurasia.
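A minimal launchd.plist sketch of the KeepAlive watchdog behaviour described in launchd.plist(5); the job label and program path are hypothetical. Setting SuccessfulExit to false tells launchd (on 10.5) to relaunch the job only when it exits uncleanly.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/libexec/mydaemon</string>
    </array>
    <key>KeepAlive</key>
    <dict>
        <!-- relaunch only if the process exits with a non-zero status -->
        <key>SuccessfulExit</key>
        <false/>
    </dict>
</dict>
</plist>
```

On 10.4 KeepAlive only accepts a plain boolean (relaunch unconditionally); the conditional dictionary form above is part of the 10.5 goodness the slide refers to.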
Look at this screenshot of iTunes, and rather than complaining about my taste in music try and think of what the various assets are. Which of the CIA attributes are important in each case? Who might have a stake in protecting them? Who might compromise them?
So once we’ve identified a threat (I didn’t explicitly discuss entry points and routes around the app - those are highly app-specific), we can see what type of damage is done should the threat succeed.
Do Authorisation Services protect us against elevation of privilege attacks? Not directly - note that the rights obtained are passed back to the calling application, which is running as the user who made the request - if the user can make the request they can do whatever was “behind” the right without bothering your app to acquire the right. This is why we must consider “factored” apps - where the auth right is used to invoke a privileged helper, which then retrieves the right from the calling app to verify that it really should perform the privileged task. In this way, the user cannot circumvent the requirement to obtain the right in order to perform the gated task.
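The factored-app pattern can be sketched with the Authorization Services C API (macOS only; link with -framework Security). The right name is a hypothetical example, and the IPC transport between app and helper is elided; the essential point is that the helper re-checks the right itself, so skipping the app's acquisition step gains the attacker nothing.

```c
#include <Security/Authorization.h>  /* macOS only; -framework Security */

/* In the GUI app: acquire the right (launching the familiar
   authorisation dialogue if needed) and serialise the ref so it
   can cross the IPC boundary to the privileged helper. */
int acquire_right(AuthorizationExternalForm *extForm) {
    AuthorizationRef auth;
    AuthorizationItem item = { "com.example.myapp.privilegedtask", 0, NULL, 0 };
    AuthorizationRights rights = { 1, &item };

    if (AuthorizationCreate(&rights, kAuthorizationEmptyEnvironment,
                            kAuthorizationFlagInteractionAllowed |
                            kAuthorizationFlagExtendRights,
                            &auth) != errAuthorizationSuccess)
        return 0;
    return AuthorizationMakeExternalForm(auth, extForm) == errAuthorizationSuccess;
}

/* In the privileged helper: rebuild the ref and verify the right
   again before doing the gated work, so the caller cannot bypass
   the authorisation requirement. */
int helper_verify(const AuthorizationExternalForm *extForm) {
    AuthorizationRef auth;
    AuthorizationItem item = { "com.example.myapp.privilegedtask", 0, NULL, 0 };
    AuthorizationRights rights = { 1, &item };

    if (AuthorizationCreateFromExternalForm(extForm, &auth)
            != errAuthorizationSuccess)
        return 0;
    return AuthorizationCopyRights(auth, &rights, kAuthorizationEmptyEnvironment,
                                   kAuthorizationFlagDefaults, NULL)
           == errAuthorizationSuccess;
}
```

The helper, not the app, is what runs privileged, and AuthorizationCopyRights in the helper is the check that actually gates the task.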