According to an Apple representative, the changes are the result of feedback that both companies received about the specifications in the original proposal and how they might be improved. The companies also released a “Frequently Asked Questions” (FAQ) document, which rehashed most of the information that had already been made public.
Some of the changes address privacy concerns raised after the proposal’s initial release. Under the new specifications, daily tracing keys, which identify a day’s worth of contact traces, will now be randomly generated rather than mathematically derived from a long-term private key tied to the individual user.
The daily tracing key is shared with the central database only if a user decides to report a positive diagnosis. Under the old protocol, cryptography experts worried that attackers would be able to link those keys to a specific user. Now that the keys are randomly generated, connecting a user to a diagnosis should be more difficult. The daily tracing key has also been renamed the “temporary tracing key,” and the long-term tracing key from the original specification has been removed.
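The difference between the two approaches can be sketched in a few lines. This is an illustrative sketch only, not the published specification: the function names, the 16-byte key size, and the HMAC-based derivation are stand-ins for the spec’s exact key-derivation function.

```python
import hashlib
import hmac
import os

# Old approach (simplified): each daily key is mathematically derived from a
# single long-term tracing key, so all of a user's daily keys are linked.
def derived_daily_key(tracing_key: bytes, day_number: int) -> bytes:
    # HMAC-based derivation; the label and layout here are invented for
    # illustration and do not match the original spec's KDF exactly.
    info = b"CT-DTK" + day_number.to_bytes(4, "little")
    return hmac.new(tracing_key, info, hashlib.sha256).digest()[:16]

# New approach: each temporary tracing key is generated independently at
# random, so keys from different days cannot be linked to one another
# (or back to a user) by anyone who sees them in the central database.
def random_temporary_key() -> bytes:
    return os.urandom(16)
```

The privacy gain is visible in the structure: `derived_daily_key` is deterministic, so anyone holding the long-term key can regenerate and correlate every daily key, while `random_temporary_key` produces values with no mathematical relationship between days.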
Metadata associated with the system’s Bluetooth transmissions is also given specific protections under the new encryption specification. Along with randomized codes, devices broadcast which version of the tool they are running and their base transmit power level (used to calculate proximity). Because this information could be used to fingerprint specific users, the engineers laid out a new scheme for encrypting the metadata so that it is difficult to decode in transit.
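The idea, roughly, is that the broadcast metadata is encrypted under a key derived from the device’s own temporary key, with a keystream tied to the current rolling identifier. The sketch below is an assumption-laden illustration: the actual specification uses AES in counter mode, whereas HMAC-SHA256 stands in here only to keep the example dependency-free, and the label string and key sizes are invented.

```python
import hashlib
import hmac

def encrypt_metadata(temp_key: bytes, rolling_id: bytes,
                     metadata: bytes) -> bytes:
    # Derive a metadata key from the temporary tracing key
    # (stand-in for the spec's key-derivation step).
    meta_key = hmac.new(temp_key, b"CT-AEM", hashlib.sha256).digest()
    # Generate a keystream bound to the current rolling identifier, then
    # XOR it with the metadata. Because XOR is its own inverse, calling
    # this function again with the ciphertext decrypts it.
    stream = hmac.new(meta_key, rolling_id, hashlib.sha256).digest()
    return bytes(m ^ s for m, s in zip(metadata, stream))
```

An eavesdropper who sees only the broadcast cannot read the version string or power level, yet a phone that later obtains a reported temporary key can derive `meta_key` and recover the metadata it needs for proximity calculations.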
While Friday’s changes addressed a number of issues, they left open the question of how health authorities will verify positive diagnoses to keep trolls from submitting false positives. Apple and Google will leave it to individual app developers to solve this problem.
Given the wide variation in health systems around the world, engineers working on the proposal have said they felt it would be better for local authorities to develop their own verification systems for distributing tests.
As much as this makes sense from a development perspective, it raises concerns of its own. Letting each developer build its own filtering for false positives and troll reports means the accuracy of contact tracing data will vary from app to app. Moreover, because developers must filter that data themselves, they will need access to it, which is a privacy risk in its own right.
Another decision Apple and Google are leaving to others is when to declare the outbreak contained. That call will have to be made on a region-by-region basis, though it’s unclear how public health officials would reach such a determination.
Apple and Google’s engineers state that the application programming interfaces (APIs) for their contact tracing system aren’t meant to be maintained indefinitely. However, nothing stops unscrupulous actors or governments from claiming that the outbreak hasn’t ended in their region and continuing to track positive patients for as long as they can. Even if Apple and Google have the final say, it still leaves an opening that can be abused.
According to the FAQ, Apple and Google will be vetting the applications submitted by developers.
“Apps will receive approval based on a specific set of criteria designed to ensure they are only administered in conjunction with public health authorities, meet our privacy requirements, and protect user data,” they state.
However, it remains to be seen whether or not they’ll be able to keep the system secure this way.