
Security Bits — 8 August 2021

Feedback & Followups

Deep Dive — Apple’s New Child Protections

Apple announced three distinct child protection features coming in the next version of their OSes later this year. The three features are very different from one another, and the technical detail is extremely important in understanding their implications. Unfortunately, a lot of news outlets, commentators, and even public interest groups have rushed to release hot-takes, many choosing to try to out-sensationalise each other. The end result is that many articles are riddled with misunderstandings, conflation, and outright errors. This rocky foundation has then served as the starting point for extrapolations and opinions that are misleading at best, and naked click-bait or even anti-Apple propaganda at worst.

Rather than calling out specific misunderstandings, errors, and illogical analyses, I want to try to explain what Apple are actually doing so you can form your own well-informed opinions. I will share my own opinions at the end, but only as that: one person’s opinions.

Some Very Important Context

Apple are rolling out three distinct new features; each is independent of the others, and each is based on very different technology. All three share one overriding goal though — to do all the processing on-device, and to use cryptography to mathematically enforce the restrictions Apple are imposing. When Apple say they can’t see something, that’s not enforced by policy, it’s enforced with cryptography, and that’s a very important distinction.

Another very important global point is that Apple have documented their process, and the encryption protecting it, in great detail in a whole series of white papers. Nothing is being done in secret here. Finally, Apple have had their design vetted by a panel of three security experts who have each written their own report explaining why they’re confident Apple’s cryptography does what Apple say it does.

Key Cryptographic Concepts

Cryptography is vital to what Apple have developed, so let’s look at the important concepts and technologies:

  1. Image Hashes — we know that a cryptographic hash gives an un-reversible, effectively unique fingerprint for any arbitrary piece of text. If you change even one character, the hash changes, and if you have a hash, you can’t reverse it to get the original text. Apple are using a similar technique designed especially for images. It reduces an image to a collection of numbers based on the image content and then converts those numbers to a hash. The hash is designed to uniquely identify a specific image, including scaled-up and scaled-down copies of that image. Two different photos of the same thing will not generate the same hash, but a low-res GIF and a high-res JPEG copy of the same original photo will.
  2. Threshold Encryption — this is cryptography that ensures a recipient can only decrypt the encrypted content after a certain threshold has been reached. Conceptually, the sender generates two secret keys: an inner secret key, and an outer key that encrypts the first. The outer key is broken into pieces, and a piece is only shared when a given rule is met. The recipient receives many pieces of content, all encrypted with the inner key. Until they have that inner key, they can’t read anything. Over time, pieces of the outer key are sent each time some event happens. Until the final piece is released, i.e. until the threshold is met, the recipient does not have the outer key. Once the threshold is met they do, and they can use the outer key to unlock the inner key, which in turn unlocks all the content ever encrypted with that inner key. The actual maths is more complicated and has protections to ensure a partial key can’t be easily brute-forced, but that’s not relevant here — what matters is what the encryption delivers (there’s a toy code sketch of the pieces idea just after this list).
  3. End-to-End Encryption (E2E) — a quick reminder that with end-to-end encryption the two end-points of a communication can share information in such a way that no other party, even one that sees every bit transmitted from start to finish, can decrypt the messages. This is achieved using asymmetric encryption. Each party has a key pair with one key kept secret and never shared (the private key), and the other (the public key) shared freely. Whatever is encrypted with a public key can only be decrypted with the matching private key, and vice-versa. Each end can use the other’s public key to send data that only they can decrypt. Anyone or anything intercepting the encrypted messages has neither private key, so they can read nothing.
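
To make the threshold idea concrete, here’s a tiny sketch in Python. It is not Apple’s actual scheme, which uses much more sophisticated mathematics; it is the simplest construction with the property described in point 2: the outer key is broken into pieces, and until every piece has been handed over, the pieces collected so far are indistinguishable from random noise. The function names and the number of pieces are my own inventions for the demo.

    # A toy illustration of "the outer key is broken into pieces" (NOT Apple's
    # real construction). Until all the pieces are combined, what the recipient
    # holds looks like, and is as useless as, random noise.
    import secrets

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(outer_key: bytes, pieces: int) -> list[bytes]:
        """Split outer_key into `pieces` random-looking shares; XOR-ing all of
        them together recovers the key, any incomplete subset reveals nothing."""
        shares = [secrets.token_bytes(len(outer_key)) for _ in range(pieces - 1)]
        last = outer_key
        for share in shares:
            last = xor_bytes(last, share)
        return shares + [last]

    def combine(shares: list[bytes]) -> bytes:
        combined = bytes(len(shares[0]))
        for share in shares:
            combined = xor_bytes(combined, share)
        return combined

    outer_key = secrets.token_bytes(32)
    pieces = split_key(outer_key, pieces=5)     # imagine a threshold of 5 events
    assert combine(pieces) == outer_key         # all 5 pieces: key recovered
    assert combine(pieces[:4]) != outer_key     # only 4 pieces: still just noise

The same basic idea, implemented with real threshold cryptography rather than this toy XOR trick, is what drives the iCloud Photos safety tickets described under tool 3 below.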

An Important Apple Concept — iCloud Families

Apple IDs can be arranged into families. Each family member has their own Apple ID. A family is created by an iCloud user who becomes that family’s organiser, and they can invite other Apple IDs into the family. Organisers can also create special limited Apple IDs for kids. These child Apple IDs are associated with the family and must contain a date of birth, so Apple always know the child’s actual age.

iCloud families are how Apple manage many of their existing parental controls, replacing the older, more primitive approach of relying on a parent PIN on a child’s devices.

Important Child Protection Context

We don’t use terms like child porn (or worse, kiddie porn) anymore; instead, we use the term CSAM, which stands for Child Sexual Abuse Material. It covers depictions, in any form, showing children engaged in sexual activity of any kind.

In America, there is a quasi-governmental agency known as the National Center for Missing & Exploited Children, or NCMEC (pronounced Nic-mec). NCMEC maintain a database of known CSAM. They have a special license from the US government to store this otherwise illegal material. NCMEC apply image hashing functions to this database of known CSAM and share those hashes with tech companies. The idea is that companies can filter CSAM without needing a copy of the illegal material they’re matching against.
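
To illustrate the kind of matching this enables, here’s a small Python sketch. It is not NeuralHash or any real CSAM-scanning code; it uses the classic “average hash” trick to show how an image hash can survive re-scaling, then checks the result against a set of known hashes, the way a company might check against hashes supplied by NCMEC. The hash values and any filenames you pass it are placeholders of my own.

    # A toy perceptual hash (the classic "average hash"), to show how the same
    # photo at different sizes can produce the same hash, unlike a cryptographic
    # hash of the file's bytes. This is NOT Apple's NeuralHash.
    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Shrink the image to a tiny greyscale grid, then record which pixels
        are brighter than average; scaled copies of one photo give the same bits."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        average = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > average else 0)
        return bits

    # Matching needs no copy of the original material, just its hashes.
    known_hashes = {0x1234_5678_90AB_CDEF}  # placeholder, not real data

    def is_known(path: str) -> bool:
        return average_hash(path) in known_hashes

Real systems compare such hashes a little more forgivingly (e.g. tolerating a few differing bits), but exact set membership keeps the sketch simple.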

Tool 1 — Siri/Search Protections

This is the simplest new tool to explain and is completely without controversy. Rather than wasting my time re-wording something short and simple, here’s Apple’s description:

Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.

Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.

Tool 2 — CSAM Protections in the Messages App (Parental Controls)

If, and only if, a parent enables it, Apple will use on-device machine learning on children’s iPhones to scan images received, or about to be sent, in the Messages app for explicit imagery.

If the machine learning detects that an image might be explicit, it blurs it and presents a warning to the kid that the image might be inappropriate. Children can click past the warning, but, depending on their age, their parents may be informed that they did so. If the child is 12 or under and their parents have chosen to be notified, they will be; otherwise, they won’t.

There are some really important subtleties in that description, and some very important details in what Apple are actually doing, that I want to highlight (a rough code sketch of the overall flow follows the list):

  1. This is a parental control feature — it is limited to devices registered to children’s Apple IDs, and, like all other parental controls, this will be off by default.
  2. The warnings are not shared with anyone, not Apple, not the parents.
  3. The notifications that a child under 13 bypassed a warning are sent from the child’s iPhone to parents using end-to-end encryption, so no one but the child and parent know a warning has been bypassed. Apple do not know, so they can’t report anything to anyone.
  4. This is a feature of the Messages app, not the iMessage protocol. This has some very important effects:
    1. Everything happens on the child’s phone, nothing happens in the cloud
    2. This scanning is 100% agnostic of how the images are sent or received — MMS, the iMessage protocol, it doesn’t matter!
    3. There is zero change to the iMessage protocol, it remains end-to-end encrypted
  5. The ML is looking for possibly explicit images in general, not for CSAM specifically, so it is not generating hashes and checking them against the NCMEC database.
  6. Because this is a scan for explicit images, a positive match does not imply the image is illegal; all it implies is that the image might be inappropriate for a child. This is no different from other parental controls like web filters.
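
To tie those points together, here’s a rough Python sketch of the decision flow as described. All the names are mine, not Apple’s, and the classifier is a stub; the point is simply where the decisions happen and who gets told what.

    # Everything below runs on the child's device. The only thing that ever
    # leaves the device is an end-to-end encrypted notification to the parents,
    # and only for children aged 12 or under whose parents opted in.
    from dataclasses import dataclass

    @dataclass
    class ChildAccount:
        age: int
        notify_parents: bool  # a parental control, off by default

    def looks_explicit(image: bytes) -> bool:
        """Stand-in for the on-device ML classifier. Note: it's a generic
        nudity detector, not a hash lookup against any CSAM database."""
        return False  # placeholder so the sketch runs

    def handle_image(image: bytes, child: ChildAccount, child_taps_through: bool) -> str:
        if not looks_explicit(image):
            return "image shown normally"
        if not child_taps_through:
            return "image blurred and warning shown; nothing reported to anyone"
        if child.age <= 12 and child.notify_parents:
            # Sent device-to-device over E2E encryption; Apple never see it.
            return "image shown after warning; parents notified"
        return "image shown after warning; no one notified"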

As implemented there is no privacy concern here.

The one valid concern is that Apple could be compelled to alter the technology to scan for other things on other iPhones. But the danger of bad laws existed before these parental controls, and it always will. Without a crystal ball, there’s no way to know what will happen. All we know is that Apple have a track record of standing up to government demands.

Tool 3 — CSAM Scanning of iCloud Photos Uploads

Apple will embed the hashes from the NCMEC CSAM database into every iPhone. On US iPhones, photos queued for upload to iCloud Photos will be hashed, and that hash will be compared to the hashes from the NCMEC database. The result of that comparison will be encrypted using threshold encryption and attached to the photo as metadata which Apple are calling an image safety ticket. Every photo will get an encrypted ticket; most will simply record that the image did not match the DB, but each time one does match, a piece of the secret will be included, nudging the account towards the threshold.

The threshold encryption ensures that until the threshold is met, Apple can’t read any of the safety tickets. Then, when the threshold is met, Apple can read the tickets, and only the tickets. Let me be very clear here: even when the threshold is met, Apple still cannot see the photos themselves; they only get the data about the match, plus the low-res thumbnail each ticket contains. Before Apple send the tickets on to US law enforcement, an Apple employee will manually review the safety tickets to double-check there really is something to report, and the user will have a right to appeal.

Apple are promising a false-positive rate of less than one in a trillion per account per year.

Apple have also designed their cryptography to ensure there is no way for the encrypted scan results to leak information about the number of matches there have been. This is a huge privacy protection, and it also stops criminals abusing iPhones to check whether specific images are in NCMEC’s database.

Some very important details to note (pulled together in a code sketch after this list):

  1. The scanning only happens on photos queued to be uploaded to iCloud Photos
  2. The scanning is performed entirely on-device, not in the cloud
  3. The scanning is not looking for generic patterns of things that look like they might be CSAM; it is looking for exact matches to specific known CSAM.
  4. Apple have used cryptography to ensure they can’t tell there has been even one match in an account until that account crosses a threshold of matches — this aligns with how we know CSAM spreads. Only a tiny proportion of people share CSAM, but those that do share a lot of CSAM. Apple are not looking for the odd stray image, they’re looking for large caches of CSAM.
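
Here’s a bare-bones Python sketch of the device-side flow those points describe. Everything in it is my own naming with made-up values: the hash function is a stand-in for NeuralHash, the threshold is invented, and the key pieces would really come from proper threshold cryptography (the toy XOR splitting sketched earlier gives the flavour), not from a list of pre-generated random bytes.

    # Only photos queued for iCloud Photos are ever hashed, the work happens on
    # the device, and the comparison is an exact match against specific known
    # hashes, not a guess at what "looks like" CSAM.
    import hashlib
    import secrets

    THRESHOLD = 5  # made-up demo value

    def fake_image_hash(photo: bytes) -> str:
        """Placeholder for a perceptual hash like NeuralHash."""
        return hashlib.sha256(photo).hexdigest()

    # In the real design these pieces come from threshold encryption; random
    # bytes stand in for them here.
    unused_key_pieces = [secrets.token_bytes(32) for _ in range(THRESHOLD)]

    def make_safety_ticket(photo: bytes, queued_for_icloud_photos: bool,
                           known_csam_hashes: set[str]) -> bytes | None:
        """A ticket is attached only to photos headed for iCloud Photos. A match
        releases the next piece of the account's outer key; a non-match attaches
        same-sized random noise, so no single ticket tells Apple anything."""
        if not queued_for_icloud_photos:
            return None  # photos not being uploaded are never scanned at all
        if fake_image_hash(photo) in known_csam_hashes and unused_key_pieces:
            return unused_key_pieces.pop()
        return secrets.token_bytes(32)

In Apple’s published design even the device doesn’t learn whether an individual photo matched (the comparison is wrapped in private set intersection cryptography), which is a stronger property than this toy captures.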

Notice that Apple are not relying on policy to protect user privacy, they’re relying on cryptography!

The one place where humans enter into this is the point at which the database of hashes is chosen. By design, Apple cannot see the images the hashes describe, so Apple have to trust the organisations they partner with to only give them hashes of actual CSAM. For now, that’s just NCMEC, so the thing to watch is who else Apple chooses to partner with.

FWIW — my gut reaction was great concern, but the deeper I looked into the details, the more reassured I became. My considered opinion is that what Apple are doing is not just OK, but actually a good thing. Assuming Apple do what they say they will do, and assuming they do not partner with any dubious organisations for additional hashes, I don’t see any problems.

Links

❗ Action Alerts

Worthy Warnings

Notable News

Top Tips

Excellent Explainers

Interesting Insights

Palate Cleansers

Legend

When the textual description of a link is part of the link, it is the title of the page being linked to; when the text describing a link is not part of the link, it is a description written by Bart.

Emoji Meaning
🎧 A link to audio content, probably a podcast.
❗ A call to action.
(a country’s flag) The story is particularly relevant to people living in a specific country, or, the organisation the story is about is affiliated with the government of a specific country.
📊 A link to graphical content, probably a chart, graph, or diagram.
🧯 A story that has been over-hyped in the media, or, “no need to light your hair on fire” 🙂
💵 A link to an article behind a paywall.
📌 A pinned story, i.e. one to keep an eye on that’s likely to develop into something significant in the future.
🎩 A tip of the hat to thank a member of the community for bringing the story to our attention.

