
Security Bits — 30 October 2022

Feedback & Followups

🧯 Deep Dive 1 — An Over-hyped (but Interesting) Messaging App Location Leak

Something we come across often in this segment is cool security research that gets over-hyped in terms of its actual effect on user safety. There are perverse incentives at play all along the chain so that probably doesn’t come as a surprise. Corporate researchers are incentivised to make their work seem as impressive as possible to drive business, academic researchers to drive funding, PR people to boost their company/university, and news media to boost clicks.

Some very interesting research has been released that can theoretically be leveraged to infer people’s location against their will, but there are so many caveats that it’s just not a real threat to regular folks, and it probably never will be because the fix is trivially easy for messaging platforms to implement.

It is worth exploring though because it’s a very nice example of a so-called side channel attack.

It is possible to share your location in a chat using encrypted messaging services, but you have to do it explicitly, and that data is sent very securely, so an attacker can't simply eavesdrop on the network traffic to read people's messages, including any location-sharing events. So the primary channel is well protected.

But there are always things attackers can see — data about the data, or about the sending of the data, or the processing of the data. These indirect sources of information are referred to as side channels.

Amateurish websites often have glaringly obvious side channels, like returning different error messages when a username doesn't exist versus when the username exists but the password is wrong. Less amateurish sites can have subtler side channels, where the error message is identical in both cases, but the time taken to return it differs enough to be measured, giving the game away anyway.
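
To make that concrete, here's a minimal Python sketch (all names and values are hypothetical, not taken from any real site) of how such a login timing leak arises, and the usual fix of doing the expensive password-hashing work on every attempt so both failure paths take roughly the same time:

```python
import hashlib
import hmac

# Hypothetical user store: username -> (salt, PBKDF2 hash of the password).
_SALT = b"per-user-salt"
USERS = {
    "alice": (_SALT, hashlib.pbkdf2_hmac("sha256", b"correct horse", _SALT, 100_000)),
}

def leaky_login(username: str, password: str) -> bool:
    """Returns almost instantly for unknown usernames, so the response
    time alone tells an attacker whether the account exists."""
    if username not in USERS:
        return False
    salt, stored = USERS[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

def constant_ish_login(username: str, password: str) -> bool:
    """Always runs the expensive hash, even for unknown usernames,
    so both failure paths take roughly the same time."""
    salt, stored = USERS.get(username, (b"dummy-salt", bytes(32)))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return username in USERS and hmac.compare_digest(candidate, stored)
```

The specific hashing scheme isn't the point; the point is that the observable behaviour, both the message and the time it takes to arrive, is the same whether the username exists or not.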

What security researchers have found in Signal, WhatsApp & Threema is a more complex version of that kind of timing side channel, and when all the conditions line up, it can reveal a user's location with about 80% accuracy.

To understand what the researchers did, it's important to understand that large cloud services are not served by single servers; the workload is shared in two ways. Firstly, different tasks are performed by different groups of servers, and secondly, each group of servers is duplicated across the world so there should always be one near every user. It's also important to understand that it's easy for anyone to watch the network traffic to and from their own devices.

When you use any modern messaging app you get a little icon to show when a message has been read. The notifications for that icon arrive at the device in a different stream of data than the messages themselves, so if you watch your network traffic, you can pick out the packets that contain the signals powering those icons.

What the researchers wondered was whether the timings of those packets could be meaningful enough to deduce location data, and they are … sorta, and sometimes.
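
As a rough illustration of the kind of measurement involved, here's a hedged Python sketch. The probe callable is hypothetical, standing in for whatever mechanism actually observes when the read-receipt packet comes back (in the research, that observation came from passively watching the attacker's own network traffic):

```python
import statistics
import time
from typing import Callable

def sample_receipt_delays(probe: Callable[[], float], probes: int = 20,
                          gap_seconds: float = 1.0) -> list[float]:
    """Collect round-trip delays between sending a message and seeing the
    read receipt come back. `probe` is a hypothetical callable that sends
    one message, blocks until the receipt packet is observed, and returns
    the elapsed time in seconds."""
    delays = []
    for _ in range(probes):
        delays.append(probe())
        time.sleep(gap_seconds)  # space the probes out so they look like normal chat
    return delays

if __name__ == "__main__":
    import random
    # Fake probe simulating ~45 ms of network and handling delay.
    fake_probe = lambda: 0.045 + random.gauss(0, 0.003)
    delays = sample_receipt_delays(fake_probe, probes=10, gap_seconds=0)
    print(f"median delay: {statistics.median(delays) * 1000:.1f} ms")
```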

Each of the messaging systems they examined had large distributed server deployments that don’t change very much — if they have servers in New York today, they’ll probably still have those servers tomorrow, next week, next month, or next year. They also discovered that the network speeds to and from these servers are consistent, so if it takes 5ms to send a packet from Philadelphia to the New York servers today, it’ll take the same tomorrow, and next week, and so on. Furthermore, they discovered that routing of traffic to the servers is also stable, so if everyone in Atlantic City is routed to the New York servers today, they’ll be routed there tomorrow, and next week … Finally, they discovered that the servers are consistently efficient, so any delays observed are down to the path the packets took.

So, what all that means is that if you’re at your house, the time it takes for the little read checkmarks to appear will be different from the time it takes when you’re in the office, and both of those times will be pretty consistent over the long term.

So, if an attacker can benchmark the timings at times when they know where you are, they can then check if you’re at that location at any time in the future by matching the timings they see to those recorded timings.
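
Here's a toy Python sketch of that matching step, with made-up numbers. The real research uses proper statistical classification, but the idea of comparing fresh timings against previously recorded per-location fingerprints is the same:

```python
import statistics

# Hypothetical fingerprints recorded earlier, when the attacker knew where
# the target was: location name -> list of read-receipt delays in seconds.
FINGERPRINTS = {
    "home":   [0.061, 0.059, 0.063, 0.060],
    "office": [0.043, 0.045, 0.044, 0.046],
}

def guess_location(observed: list[float], tolerance: float = 0.005) -> str | None:
    """Compare fresh delay measurements against the recorded fingerprints
    and return the closest match, or None if nothing is close enough."""
    observed_median = statistics.median(observed)
    best, best_gap = None, tolerance
    for place, samples in FINGERPRINTS.items():
        gap = abs(observed_median - statistics.median(samples))
        if gap < best_gap:
            best, best_gap = place, gap
    return best

# guess_location([0.060, 0.062, 0.059]) -> "home"
```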

For this to work all the following must be true:

  1. You must have had a previous conversation with the attacker
  2. You must have a conversation with the attacker at a time when they know where you are, so they can start mapping important locations for you
  3. You must have your phone on and your messaging app open at the time the attacker wants to check whether you are at one of the known locations.

So, no one can use this attack to find arbitrary locations for anyone; all an attacker can do is be 80% sure you're back at a location they were able to verify you were at before. For an attacker to be able to match you back to a location, they need to have a conversation with you over the messaging app at a time they know you are at that location, so they need to have regular conversations with you. Finally, if your phone is asleep, or the messaging app is not running, the notifications will go via push notification, so they will be massively delayed, and the attack fails utterly!

OK, so right now, in some very limited circumstances, an attacker can be 80% sure you're back at a place they recorded you being at before. So how can app developers prevent these kinds of attacks? Simple — add a small random wait to status update messages! The signal is weak, so adding in just a little noise is all it takes!
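
A minimal sketch of what that mitigation might look like, assuming a hypothetical send_packet function in the client (this is not any app's actual code, just the general idea):

```python
import random
import time

def send_read_receipt(send_packet, receipt) -> None:
    """Wait a small random amount of time before sending each read receipt.
    The jitter is barely noticeable to users, but it swamps the
    millisecond-scale timing differences the attack depends on.
    `send_packet` and `receipt` stand in for whatever the real client uses."""
    time.sleep(random.uniform(0.0, 2.0))  # random delay between 0 and 2 seconds
    send_packet(receipt)
```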

Of the three apps tested, Threema is the only one to have responded so far, and they’ve updated their code to add the randomised delay and will be pushing it out to users in a future software update. The others are likely to follow suit.

So, despite what you may have read, no, messaging apps are not really leaking your location!

Links

Deep Dive 2 — Apple Improves its Engagement With the Security Community

Of all the major OS vendors, Apple tends to come in for the most criticism from the security community because they tend towards secrecy, and they're quite fond of doing things their own way. Over the years, though, they have been proactively reaching out to the community and improving their policies and practices. That continued this week with a handful of small but meaningful changes.

Firstly, Apple have released a new security research portal — https://security.apple.com

The portal is surprisingly approachable, with lots of interesting information about the security features Apple builds into its products, and its various security-related policies and programs.

One of the things Apple did was clarify the differences between software updates and software upgrades, and in the process, they made official something they've been doing for as long as I've followed Apple, but which has been getting some undeserved bad press.

We’ve always known that the newest Apple operating systems are the most secure because each new release adds new security features and improvements to existing security features. We’ve also always seen that the newest OSes get software updates the quickest, and that their updates patch the most bugs. Older OS updates often lag behind by a few days, and they don’t usually cover as many bugs.

All bugs in all OSes get triaged, and there are always bugs that don’t meet the bar for patching. We’ve seen that Apple apply a different bar for the older OSes, only patching the more serious bugs, and now we have a support document from Apple that explicitly says that’s what they do. This changes nothing — Apple have always patched the bugs that pose real risk to regular users, and they’re continuing to do so. Apple have never back-ported every fix to their older OSes, and they’re not starting now. Their newer OSes were always inherently more secure, and that continues to be the case.

As I see it, Apple being more open about what they do is a good thing. The next thing they could do would be to follow Microsoft’s lead and share the criteria they use to triage bugs.

Finally, Apple have opened up applications for the special security research version of the iPhone promised earlier in the year. Apple call this the Apple Security Research Device, but it’s essentially an iPhone with a bunch of restrictions removed so security researchers can get low-level access to the OS without needing to resort to jailbreaking. Apple are restricting access to these devices to verified security researchers, and numbers are limited.

Links

❗ Action Alerts

Worthy Warnings

Notable News

  • Another Log4Shell-esque bug has been found in another commonly used Java library, this time in Apache Commons Text. The bug has been patched, and it’s more difficult to exploit reliably than Log4Shell, so while it means a lot of Java-based servers and apps need to be patched, it’s not the same kind of drop-everything panic as Log4Shell — nakedsecurity.sophos.com/… (Editorial by Bart: But do still buy your friendly neighbourhood sysadmin a coffee 😉)
  • OpenSSL have pre-announced a critical bug fix to be released on 1 Nov, but not given any specific details — isc.sans.edu/… (Editorial by Bart: so buy your sysadmin a second coffee!)
  • DuckDuckGo have released a public beta of their Mac browser. One of the nicer features is automatic handling of cookie preference popovers on websites — appleinsider.com/… (Editorial by Bart: I’ve installed it and first impressions are good)
  • 🇺🇸 The US state of New York has fined the fashion brand SHEIN $1.9m as a result of a data breach in 2018. They didn’t have adequate protection in place, and tried to cover up the breach — nakedsecurity.sophos.com/…
  • 🇺🇸 Women in Cryptology – USPS celebrates WW2 codebreakers — nakedsecurity.sophos.com/…

Palate Cleansers

Legend

When the textual description of a link is part of the link it is the title of the page being linked to, when the text describing a link is not part of the link it is a description written by Bart.

Emoji Meaning
🎧 A link to audio content, probably a podcast.
❗ A call to action.
🇺🇸 (or another country’s flag) The story is particularly relevant to people living in a specific country, or, the organisation the story is about is affiliated with the government of a specific country.
📊 A link to graphical content, probably a chart, graph, or diagram.
🧯 A story that has been over-hyped in the media, or, “no need to light your hair on fire” 🙂
💵 A link to an article behind a paywall.
📌 A pinned story, i.e. one to keep an eye on that’s likely to develop into something significant in the future.
🎩 A tip of the hat to thank a member of the community for bringing the story to our attention.
