
Security Bits — 22 November 2020

Feedback & Followups

Deep Dive 1 — 3rd-Party Firewalls on Big Sur

There’s been quite a bit of reporting about the fact that macOS Big Sur has changed the APIs that 3rd-party network filtering apps like firewalls have to use. Before Big Sur they were kernel extensions, which gave them immense power over the OS; now they have to run as regular apps and interact with the OS using APIs designed to allow apps to control network traffic.

There are three good reasons for Apple to make this change, and an understandable one:

  1. Code running in the kernel is extremely highly privileged, so it’s hard for the OS to defend against bugs or malware in a kernel extension.
  2. When the kernel crashes the whole OS goes down, so buggy kernel extensions are a real pain for users.
  3. To be effective, code enforcing security must run with a higher privilege than the code it’s policing. Moving 3rd-party code out of the kernel is a fundamentally more secure architecture.
  4. 3rd-party firewalls breaking the functionality of core OS services and built-in apps generates support calls for Apple!

The big change here is that before Big Sur 3rd-party firewalls were effectively peers of the OS, so the OS couldn’t effectively impose security restrictions on them, and they could assert their will on core OS apps and services. A malicious 3rd-party firewall could do some real damage on pre-Big-Sur versions of macOS — you really needed to trust your firewall vendor!

With Big Sur Apple have demoted 3rd-party firewalls so that they are subservient to the core OS: the OS can impose rules on them, but they can’t impose their will on it.

In theory Apple could design their core OS and APIs in such a way that absolutely all network traffic was routed through 3rd-party firewall apps for filtering before being passed on to the app that’s doing the communicating. But that’s not quite what Apple chose to do — they chose to send almost all network traffic through 3rd-party firewall apps for approval, but not quite all. All traffic to 3rd-party apps of any kind gets passed through 3rd-party firewall apps, but there’s an exceptions list that instructs the OS to directly route network traffic to specific Apple apps and services, bypassing 3rd-party firewall apps.

Does this mean that these allow-listed (get out of the bad habit of saying ‘white-listed’, folks) apps have unfettered internet access? No! The Mac’s built-in firewall sits in front of the APIs that do or don’t route traffic through 3rd-party firewall apps, so it still filters all network traffic to and from the apps that are exempted from inspection by 3rd-party firewall apps. And of course, traffic restrictions applied outside the computer can’t possibly be affected by this change to the OS, so your router still gets to control all network traffic to all apps on all devices on your network!
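The traffic-routing decision described above can be sketched as a simple function (a minimal illustration; all the names, `APPLE_EXCEPTIONS`, `route_traffic`, and the app IDs, are hypothetical, not Apple’s actual APIs):

```python
# Illustrative sketch of Big Sur's network-filter routing. All names are
# hypothetical; this is NOT Apple's actual API or implementation.

# Hypothetical allow-list of Apple services exempt from 3rd-party filtering
APPLE_EXCEPTIONS = {"com.apple.appstore", "com.apple.softwareupdate"}

def route_traffic(app_id, builtin_firewall_allows, third_party_filter_allows):
    """Decide whether traffic for app_id may pass.

    The built-in firewall always gets a say; 3rd-party filters are
    consulted only for apps that are not on the exceptions list.
    """
    if not builtin_firewall_allows(app_id):
        return False  # the built-in firewall filters everything, exempt or not
    if app_id in APPLE_EXCEPTIONS:
        return True   # bypasses 3rd-party filters entirely
    return third_party_filter_allows(app_id)
```

Note how the built-in firewall is consulted for every app, while a 3rd-party filter only ever sees non-exempt apps, which is exactly the power balance described above.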

In the abstract I’m completely in favour of this change of power-balance. I want my OS to be in charge, and I want 3rd-party apps to be subservient to my OS. Apple have put very strong protections in place to protect the OS from being tampered with (the SEP, a read-only root file system, digitally signed code, and secure boot); no 3rd-party code can ever be as well secured.

The one wrinkle is that Apple’s implementation seems to be suffering some teething trouble — security researcher extraordinaire Patrick Wardle has found a way of tricking at least some of the allow-listed apps into acting as network proxies, relaying network traffic on behalf of malicious code.

Apple will need to do one or both of the following to fully realise the improved security this new design could offer:

  1. harden the APIs offered by all allow-listed apps so they can’t be tricked into relaying network traffic on behalf of other apps
  2. cut the list of allow-listed apps to the bare minimum

Read more: Apple lets some Big Sur network traffic bypass firewalls —…

Deep Dive 2 — OCSP, Apple & Privacy

There was a big controversy recently when, coincident with the launch of macOS 11 Big Sur, some users in some parts of the world started to experience problems launching apps on older versions of macOS.

The cause of this slow-down was that one of Apple’s back-end systems became slow to respond but didn’t go fully off-line. In some circumstances, when launching an app, macOS would reach out to this service; if there was no answer it would just launch the app, but if there was a slow answer the launch would stall waiting for a response. The problem was resolved relatively quickly (an hour or two), and most people just got on with their day, but the fact that an Apple service being half-down could stop apps from launching raised privacy flags for some.
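That asymmetry (a dead server is ignored, a slow server stalls the launch) can be illustrated with a tiny ‘soft-fail’ check. This is a hypothetical sketch, not Apple’s code; the function name and the 200-means-OK simplification are mine:

```python
# Hypothetical sketch of a "soft-fail" revocation check: connection
# failures allow the launch, but a slow (not dead) server blocks it.
import urllib.request
import urllib.error

def cert_check_allows_launch(check_url, timeout=5.0):
    """Return True if the app launch may proceed.

    Fails open: if the server is unreachable, the check is skipped.
    A server that accepts connections but responds slowly will block
    here for up to `timeout` seconds, which is what users experienced.
    """
    try:
        with urllib.request.urlopen(check_url, timeout=timeout) as resp:
            return resp.status == 200  # simplification: 200 means "not revoked"
    except (urllib.error.URLError, OSError):
        return True  # no answer at all: fail open, just launch the app
```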

Why is me launching an app dependent on an Apple server? What is my Mac sending to that server? What is that server sending back, and, critically, what is that server storing? All very legitimate questions worthy of investigation.

Unfortunately, what caught everyone’s attention on social media was not a carefully researched, well informed article, but a sensationalist hot-take filled with inaccuracies. The claim being made was that Macs phone home to Apple with a unique per-app ID every time you launch an app. Because there is a network connection involved, your IP address must be in the packets, so the author went on to focus on how much information can theoretically be inferred from an IP address. The implication was that Apple were tracking every app every person launched.

Careful analyses did follow, so we now know the truth: macOS periodically uses an industry-standard protocol (OCSP) to check that the developer cert that signed the app you’re launching has not been revoked. All apps from a given developer share a cert, and the replies are cached, so there’s actually a lot less information in these checks than the original article implied. In reality, storing every interaction with the certificate server could only reveal that someone at a given IP ran an app from a given developer at least once recently. Only the developer and the IP are specific. So ‘someone in your house ran a Microsoft app’ is basically what we’re talking about here, and even that is only what Apple could store, not necessarily what it does store!
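A toy sketch of why the checks leak so little: the lookup key is the developer’s cert, shared by all their apps, and answers are cached, so repeated launches send nothing at all. All names here are hypothetical, and real OCSP caching is more sophisticated:

```python
# Toy per-developer revocation cache. Every app from one developer maps
# to one cert hash, and a cached answer means no network request is made.
import time

_cache = {}  # developer_cert_hash -> (answer, expiry_timestamp)

def is_cert_revoked(developer_cert_hash, query_server, ttl=12 * 3600):
    """Return the cached revocation status, querying the server only
    when there is no fresh cached answer."""
    now = time.time()
    if developer_cert_hash in _cache:
        answer, expiry = _cache[developer_cert_hash]
        if now < expiry:
            return answer  # no request sent, so nothing for Apple to log
    answer = query_server(developer_cert_hash)
    _cache[developer_cert_hash] = (answer, now + ttl)
    return answer
```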

Bottom line — the information being sent is a lot less specific than the original post claimed, and the information being sent defines what Apple could store, not what Apple do store.

At this stage you might be wondering why macOS performs these certificate status checks at all.

An Important Security Feature, not a Privacy Violation

For many years now macOS has had support for digitally signed apps. When an app has been digitally signed you get two cryptographic promises:

  1. Authenticity — the signed app really did come from the developer listed on the cert.
  2. Integrity — the signed app has not been altered since it was signed. In other words, it hasn’t had malware injected into it.
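Both promises can be demonstrated with a toy textbook-RSA signature over a file hash. The key below is comically small and the scheme unpadded: fine for illustration, utterly insecure in practice.

```python
# Toy textbook-RSA signature illustrating the two promises of code
# signing: authenticity (only the private key can sign) and integrity
# (any change to the signed bytes breaks verification).
import hashlib

# Tiny RSA key: n = p*q, public exponent e, private exponent d
p, q = 61, 53
n = p * q    # 3233
e = 17
d = 2753     # modular inverse of e mod (p-1)*(q-1)

def sign(app_bytes):
    digest = int.from_bytes(hashlib.sha256(app_bytes).digest(), "big") % n
    return pow(digest, d, n)  # only the private-key holder can do this

def verify(app_bytes, signature):
    digest = int.from_bytes(hashlib.sha256(app_bytes).digest(), "big") % n
    return pow(signature, e, n) == digest  # anyone with the public key can check

app = b"the application binary"
sig = sign(app)
assert verify(app, sig)                   # authenticity and integrity hold
assert not verify(app + b"malware", sig)  # tampering breaks the signature
```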

On top of these cryptographic promises Apple layers policy promises. To obtain a cert you have to prove your identity to Apple, and you have to agree to, and then abide by, certain rules.

The promises Apple make are meaningless without the cryptography that underlies them, and the cryptography rests entirely on one very important assumption — the security of the certificate’s matching private key.

The ability to revoke a certificate is vital in the following two common scenarios:

  1. A developer loses control of their private key. Maybe they got hacked, or maybe they accidentally published it on GitHub (that happens a lot more than you might think).
  2. An approved developer does something malicious.

If macOS didn’t periodically check that a given certificate was still valid then attackers could steal private keys, sign malware with them, and Macs would run the malware without question until the cert expired. Or, malicious developers could be on their best behaviour when they apply for a cert, then reveal their true colours once they have the cert and abuse it to sign malware until it expires.

What actually happens is that when a developer fears they have lost control of their private key they ask Apple for a new cert, Apple revoke the old and generate them a fresh one with a new private key. And, when Apple discovers a developer being malicious they revoke their cert.

Certificate revocation has been a bit of a problem within the industry for decades, but we now have an agreed-upon standard protocol that’s actually quite good at the job — the Online Certificate Status Protocol, or OCSP. An OCSP server is a web server that accepts specially formatted HTTP requests that contain the hash of a certificate issued by the organisation running the server, and answers back with a simple yes or no to indicate whether or not the cert has been revoked. There’s no more information in these requests or responses.
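Conceptually, the whole exchange boils down to something like this toy responder (the names are mine, and real OCSP wraps this in ASN.1 encoding and signs the responses):

```python
# Toy OCSP-style responder: the request carries only a certificate hash,
# and the reply is a bare good/revoked status. Real OCSP adds ASN.1
# framing and signed responses, but no more meaningful information.
import hashlib

REVOKED_HASHES = set()  # hashes of certs the issuer has revoked

def revoke(cert_bytes):
    """The issuer revokes a cert by recording its hash."""
    REVOKED_HASHES.add(hashlib.sha1(cert_bytes).hexdigest())

def ocsp_status(cert_hash_hex):
    """The entire meaningful payload of the exchange: hash in, status out."""
    return "revoked" if cert_hash_hex in REVOKED_HASHES else "good"
```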

Because OCSP was designed to allow the validation of HTTPS certificates, it has to work over plain HTTP (you can’t rely on HTTPS to check whether HTTPS certs are valid), so the requests and responses are not encrypted; the replies are instead digitally signed. This cryptographically prevents the answers being spoofed, but it does mean the hashes are sent in plain-text.

For context — in my experience as a sysadmin, all web servers default to logging connections, but not the contents of those connections, so web server logs will include IP addresses and the URLs those IPs fetched, but not the content of the request or reply. While those logs exist by default, they are also rotated by default, so they’re not permanent. Unless you make an effort to move the logs to some kind of permanent storage system, they’re a troubleshooting tool, not a data source.
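For the curious, that default rotation looks something like this on a typical Linux web server (a representative logrotate snippet, not Apple’s actual configuration):

```conf
# Rotate web server access logs weekly, keeping a limited history
/var/log/nginx/*.log {
    weekly
    rotate 14       # keep 14 weeks of logs; older ones are deleted
    compress        # gzip rotated logs to save space
    missingok       # don't error if a log file is absent
    notifempty      # skip rotation when the log is empty
}
```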

So, while it seems very likely that Apple’s OCSP server would have logs that contain IP addresses, the existence of those logs doesn’t mean there was any kind of tracking going on.

Even if Apple had said nothing more this would be a non-issue, but Apple did respond to this kerfuffle by promising to take steps to stop the service going down again, to re-configure the OCSP servers not to log IP addresses, to delete the existing logs, and to replace standard OCSP with a whole new encrypted protocol within the next year.

My Final Conclusion — there was never any scandal here, just good security practices, and Apple have responded by moving from good security practices to great security practices.


❗ Action Alerts

Worthy Warnings

Notable News

Interesting Insights

Palate Cleansers

  • A great Twitter thread showing the stunning difference between the first ARM CPUs and Apple’s new M1 SOCs —…
  • 🎧 The EFF have launched How to Fix the Internet, a podcast focused on both explaining the problems plaguing the internet, and, more importantly, exploring realistic solutions to those problems —…
    • 🎧 🇺🇸 The first episode does a great job explaining the US’s FISA ‘court’ —…
  • 🎧 Interview with Tim Triemstra from Apple on the M1 and macOS Big Sur and Ken Case of OmniGroup on The Changelog: The Future of Mac…


When the textual description of a link is part of the link, it is the title of the page being linked to; when the text describing a link is not part of the link, it is a description written by Bart.

Emoji Meaning
🎧 A link to audio content, probably a podcast.
❗ A call to action.
🇺🇸 (or another country flag) The story is particularly relevant to people living in a specific country, or the organisation the story is about is affiliated with the government of a specific country.
📊 A link to graphical content, probably a chart, graph, or diagram.
🧯 A story that has been over-hyped in the media, or, “no need to light your hair on fire” 🙂
💵 A link to an article behind a paywall.
📌 A pinned story, i.e. one to keep an eye on that’s likely to develop into something significant in the future.
🎩 A tip of the hat to thank a member of the community for bringing the story to our attention.
