Deep Dive 1 — The Updated EU ⬌ US Data Sharing Framework
Since the European Court of Justice struck down the Privacy Shield framework in 2020, the agreement large US-based tech companies relied on to easily transfer EU citizens’ data to the US, quiet negotiations have been ongoing to replace it with an updated version that can survive the inevitable court challenges from European privacy advocacy groups.
That new deal now appears to be done, with the European Commission officially adopting an adequacy decision finding the safeguards in the updated agreement sufficient. In other words, it is now the official opinion of the Commission that the new framework will give EU citizens data protections ‘comparable’ to what they get in Europe.
The negotiated deal adds binding safeguards intended to ensure that only ‘what is necessary and proportionate’ about EU citizens will be shared with US intelligence agencies. The US has also established a Data Protection Review Court that EU citizens can access.
For context, before the GDPR there was a Safe Harbour agreement between the European Commission & the US government, which meant that, for EU data protection purposes, US law could be considered adequate protection for EU citizens’ data. That was patently not true, so privacy advocates sued, and won, striking down that practice. The first data sharing framework, Privacy Shield, was then agreed to provide a new basis for easy trans-Atlantic data sharing, and that’s what got shot down in 2020. So this newly updated framework is the third attempt at simplifying data transfers for US-based companies with European users.
In theory, companies like Meta, Microsoft, Alphabet, and Apple can sign up to the framework, and if they stick to it, their data transfers from the EU to the US will be automatically considered GDPR-compliant.
Deep Dive 2 — Apple’s on-again-off-again Rapid Security Response
Last week it became clear that Apple are still having some teething problems with their new Rapid Security Response mini-patches. Mind you, this time it’s not really their fault: the problem was websites using poorly coded regular expressions!
When a web browser sends a request to a web server it includes some standard headers in that request, including one known as the user agent string (User-Agent), which is how the browser tells the server a little bit about itself. User agent strings pack a lot of information into a rather impenetrable mess, one that looks the way it does for all kinds of historic reasons. Basically, if you want an example of technical debt, user agent strings fit the bill perfectly.
As a practical example, here’s mine as I write these show notes:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 Safari/605.1.15
Believe it or not, that mess tells you I’m running the latest Safari on a Mac. If I’m running Safari, why does it say Mozilla/5.0? Because when Safari was new, websites didn’t support it, so Apple told websites it was Mozilla-compatible so they wouldn’t give ‘unsupported browser’ errors all the time. Safari is really WebKit, which is derived from the original open source KHTML project, and the KHTML project marked its browser as being compatible with Netscape by saying it’s like Gecko, Netscape’s HTML rendering engine.
So, while the meaning is far from obvious at first glance, in theory, you can pull useful information out of user agent strings, and you can do so with regular expressions (pattern matching). The tricky bit is writing good regular expressions — make your pattern too broad and it won’t correctly tell similar browsers apart, but make it too specific and even tiny formatting changes from browser or OS updates will stop some browsers being recognised at all.
You’ll notice the full OS version number is included in Apple’s user agent string: Mac OS X 10_15_7 to represent macOS 10.15.7 for me. That’s where the problems arose. Rapid Security Responses update macOS & iOS version numbers by appending a letter in brackets after the patch version, e.g. macOS 10.15.7(a), which Safari rendered in its user agent string as Mac OS X 10_15_7(a), and that broke the overly sensitive regular expressions used by some really major internet sites, including Facebook, causing them to see Safari as an unknown browser they did not support.
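As a toy illustration (this is not the actual code any site was running), here’s how an overly rigid regular expression in Python stops matching the moment the RSR suffix appears, while a slightly more tolerant pattern keeps working:

```python
import re

# My real user agent string from above, plus a hypothetical RSR-suffixed variant
UA = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
      "AppleWebKit/605.1.15 (KHTML, like Gecko) "
      "Version/16.5 Safari/605.1.15")
UA_RSR = UA.replace("10_15_7", "10_15_7(a)")

# Brittle: assumes the OS version is followed immediately by a closing bracket
brittle = re.compile(r"Mac OS X (\d+_\d+_\d+)\)")

# Tolerant: allows extra characters between the version and the bracket
tolerant = re.compile(r"Mac OS X (\d+_\d+_\d+)[^)]*\)")

print(bool(brittle.search(UA)))       # True: the original string matches
print(bool(brittle.search(UA_RSR)))   # False: the '(a)' breaks the match
print(bool(tolerant.search(UA_RSR)))  # True: still recognised
```

Note that the tolerant pattern still extracts the same version number from both strings; being forgiving about what follows the version is what saves it.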
Apple started by retracting the original Rapid Security Response, then issued a new one that changed Safari’s user agent string logic. It should still be possible to tell from the user agent string that the user is running a version of Safari that has been altered by a Rapid Security Response, but without changing the OS version number. Apple’s workaround is to rely on the WebKit build number rather than the OS version to communicate that information, i.e. to update the bit that says AppleWebKit/605.1.15 in my current user agent string.
All in all, I think this line from TidBits sums things up quite nicely:
“While Apple’s choice of letters for Rapid Security Response updates is questionable, Meta and other companies whose websites were affected also bear responsibility for not failing gracefully when encountering unexpected user agent identifiers.”
Note that the reason Apple issued the Rapid Security Responses in the first place was to address zero-day bugs affecting Safari. Those fixes have now been included in the latest round of full OS updates from Apple, so everyone who’s patched is secure from these vulnerabilities.
- Apple issues security update for iOS 16.5.1 & macOS 13.4.1 — appleinsider.com/…
- Apple Pulls Rapid Security Responses Due to Website Loading Issues — tidbits.com/…
- Apple says fix for pulled security update will be released soon — appleinsider.com/…
- Apple updates RSR patch for iPhone, iPad, and Mac — appleinsider.com/…
Deep Dive 3 — Apple’s Stricter App Store Rules
Apple have updated the rules developers must follow to get their apps into the App Store. The change is low-level and rather geeky, so it’s not getting a huge amount of press, but IMO it’s significant enough to be worth a deeper look.
As proof that this is a significant change, it’s being rolled out on a phased basis:
- Fall 2023 — developers who upload new or updated apps that are not compliant with the new rules will get a notification warning them of that fact
- Spring 2024 — non-compliant app uploads will be automatically blocked
The Problem to be Solved — ‘Device Fingerprinting’
Apple have been working to make it ever more difficult for companies to track users across apps. The rough outline of the story so far goes like this:
- On early versions of iOS, it was possible for apps to read permanent identifiers from devices, e.g. serial numbers
- Apple removed access to permanent IDs and replaced them with a pseudo-random Identifier for Advertisers (IDFA), and added a button in the Settings app users could use to cycle the ID, hence limiting tracking
- With App Tracking Transparency, Apple updated the APIs for accessing the IDFA so users would need to provide explicit consent before the OS would hand it over to an app
It turns out users don’t like being tracked, so when you require informed consent, you can’t do nearly as much tracking! The ad industry still wants to track users, so they have been trying to find combinations of innocent-looking APIs whose results they can combine to generate effectively unique device IDs. These kinds of generated IDs are known as fingerprints.
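To see why combining harmless-looking values works, here’s a toy Python sketch of the general technique. The field names and values are entirely invented, and real fingerprinting code gathers its inputs from device APIs rather than a hard-coded dictionary:

```python
import hashlib

# Hypothetical, innocent-looking values an app might be able to read.
# Individually, each one is shared by millions of devices; combined,
# they can narrow things down to very few devices.
device_a = {
    "os_version": "16.5.1",
    "model": "iPhone14,2",
    "timezone": "Europe/Dublin",
    "language": "en-IE",
    "free_disk_gb": 37,
}
# A second device identical in every way but one
device_b = dict(device_a, free_disk_gb=112)

def fingerprint(info: dict) -> str:
    """Hash the sorted key=value pairs into a stable pseudo-ID."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(info.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

print(fingerprint(device_a))
print(fingerprint(device_b))  # one differing value gives a completely different ID
```

The same inputs always produce the same ID, so the ‘fingerprint’ survives app reinstalls and IDFA resets, which is exactly what makes the technique so attractive to trackers, and so objectionable to Apple.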
To be clear, this kind of fingerprinting is against Apple’s rules, and has been since the IDFA was introduced. Apple have also taken down apps they catch building fingerprints, but this process has been human-driven to date. Now, Apple are adding technical controls into the mix to help them fight back against this kind of inappropriate tracking.
The kinds of APIs used for fingerprinting are those that reveal genuinely useful information that’s not directly sensitive but does change from device to device. No single such API can be used for tracking, but if you combine the results from enough of them, you can get to the point where your fingerprint really does pinpoint just a single device.
Apple have reviewed the APIs used by the apps they’ve found flouting the rules, audited their other APIs with a view to identifying any more that could be abused as part of a fingerprint, and labeled them all ‘required reason’ APIs. Apps now have to contain a specially formatted file known as the app’s privacy manifest that describes how the app handles user data; it’s these files that are used to generate apps’ privacy nutrition labels in Apple’s app stores. The new policy states that any app that uses any of the flagged APIs must have a matching entry in its privacy manifest justifying the use of the API. In other words, if you can’t describe why you legitimately need to access a piece of data that could contribute to a fingerprint, you can’t access it.
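To make that concrete, here’s a sketch of the kind of entry involved, an excerpt from an app’s privacy manifest (a PrivacyInfo.xcprivacy plist file). The key names follow Apple’s documented format; treat this as an illustrative fragment rather than a complete manifest. CA92.1 is one of Apple’s defined reason codes, covering apps that read and write only their own user defaults:

```xml
<!-- Illustrative excerpt from a PrivacyInfo.xcprivacy privacy manifest -->
<key>NSPrivacyAccessedAPITypes</key>
<array>
    <dict>
        <!-- Which flagged API category the app uses -->
        <key>NSPrivacyAccessedAPIType</key>
        <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
        <!-- The approved reason(s) justifying that use -->
        <key>NSPrivacyAccessedAPITypeReasons</key>
        <array>
            <string>CA92.1</string>
        </array>
    </dict>
</array>
```

Crucially, developers can only pick from Apple’s pre-approved reason codes, so ‘because we want to fingerprint devices’ simply isn’t an option.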
The end result should be that no legitimate apps lose any functionality, but fingerprints become effectively impossible to generate, because while every app will have justification to access some sensitive APIs, none should be able to access enough of them to build fingerprints unique enough to enable cross-app tracking.
- Apple cracks down on apps that use device fingerprinting to track users — www.cultofmac.com/…
- Apple will require app devs to explain exactly why they use certain APIs — arstechnica.com
❗ Action Alerts
- Apple have released full updates (not rapid responses) for just about all their OSes (iOS 16.6, iPadOS 16.6, iOS 15.7.8, iPadOS 15.7.8, macOS Ventura 13.5, macOS Monterey 12.6.8, macOS Big Sur 11.7.9, tvOS 16.6 & watchOS 9.6) — isc.sans.edu/…
- Microsoft also released their July patches, squishing 130 bugs including 5 under active exploitation — krebsonsecurity.com/…
- Two notable AI-safety initiatives have been launched:
- The White House secured voluntary commitments from Amazon, Anthropic, Alphabet, Inflection, Meta, Microsoft & OpenAI to cooperate on AI safety — appleinsider.com/…
- Alphabet, Microsoft, OpenAI & Anthropic launched the Frontier Model Forum, an industry body intended to promote AI safety research and develop related standards and best practices — blog.google/…
- Editorial by Bart: there’s a lot of media hype about Apple not being included in either initiative, but since they have not actually released any products using new and poorly understood generative AI technologies, I don’t see why they would be included at the moment. Apple’s AI approach, at least so far, has been extremely cautious; they’ve not taken any big risks, so they’re not part of the problem IMO.
- White House sets security standard for smart home devices — appleinsider.com/… (A voluntary certification process to earn standardised labels)
- The Browser Company have released the first public version of their revolutionary web browser ARC — tidbits.com/… & arstechnica.com/…
- Editorial by Bart: it’s getting positive reviews, it does appear to be genuinely original in its approach, and the company’s privacy statement describes what seem to be good practices, but there’s a problem: this is a for-profit company who have, so far at least, been unwilling to explain their business model, making it impossible to follow the money, and hence to understand the incentives driving the company. For that reason, I suggest users avoid the browser, at least until we get some clarity on how this is supposed to be monetised! (Bart Busschots: “Yes, the ARC browser looks cool, but it’s expensi…” – Mastodon )
- A timely reminder: Realst Mac malware targets macOS Sonoma; here’s how to stay safe — 9to5mac.com/…
- Most malware still gets onto Macs by the user installing it themselves!
- Malware is just software, and malware developers are just developers, so anything regular apps can do, malware can do too. Most apps run just fine on Apple’s beta OSes, and Apple’s dev tools make it easy to fix any that don’t, so it’s completely expected that most malware will run just fine on beta OSes, and any that doesn’t can probably be easily fixed!
- From Bart:
- A fascinating old training video from Bell Labs’ Holmdel Computing Center intended to show newly hired programmers how to use the computing facilities — www.youtube.com/…
- An excellent article explaining why anyone designing websites or apps needs to think more deeply about their use of colour; it really is an accessibility thing: Designing for colorblindness — www.theverge.com/… (and yes, I’m one of the 8% of men who have deficient colour vision)
- I learned a few new tricks, you might too: The hidden Mac keyboard shortcuts you don’t know — www.cultofmac.com/…
When the textual description of a link is part of the link it is the title of the page being linked to; when the text describing a link is not part of the link it is a description written by Bart.

The icons used to flag stories mean the following:
- A link to audio content, probably a podcast.
- A call to action.
- The story is particularly relevant to people living in a specific country, or the organisation the story is about is affiliated with the government of a specific country.
- A link to graphical content, probably a chart, graph, or diagram.
- A story that has been over-hyped in the media, or ‘no need to light your hair on fire’.
- A link to an article behind a paywall.
- A pinned story, i.e. one to keep an eye on that’s likely to develop into something significant in the future.
- A tip of the hat to thank a member of the community for bringing the story to our attention.