Agenda+: Discuss accountability designs for aggregated designs
One aspect considered a significant factor in Apple's proposed design for aggregated attribution measurement (what I'm calling AAAAA here) was that it gave people the ability to observe what was going on. After some discussion, it became clear that this wasn't necessarily a feature that would be presented through the user interface of browsers. Still, the ability to observe the system in operation was considered a useful tool in ensuring that the system as a whole was trustworthy. That is, while we might not expect most people to investigate what is going on, the option to do so was perhaps important in choosing a design.
After some investigation into the subject, I (with substantial assistance from @benjaminsavage and @bmcase) have reached some conclusions, which are presented in this document. There, I look at the accountability options available in a setting where attribution occurs on-device and compare those with IPA, where attribution happens off-device.
The conclusion I reach is that while there are some minor differences, each of the proposals offers a fairly similar level of accountability guarantees. Most of the places where there are shortfalls in accountability correspond to aspects of the design where we consider flexibility and usability for attribution to be very important, so there are very few options for making a substantive improvement in explaining to a regular person what is going on.
The baseline information that we can share is, however, fairly good, and all of the options we are considering appear to offer similar capabilities in this regard. We should have ways of presenting people with information on how attribution might affect their privacy.
I'd like some time to discuss these findings and whether there are any opportunities we might take to improve the transparency of the different designs in operation.
Time
One hour, ideally. We could probably manage in 40 minutes if time is short. This is a fairly substantial topic, so while the document is short, there are lots of details to work through.
Links
The bulk of what we'll be discussing is in this document.