This repository has been archived by the owner on Jul 31, 2024. It is now read-only.

Retiring the Mozilla Location Service #2065

Open
mostlygeek opened this issue Mar 13, 2024 · 69 comments

Comments

@mostlygeek
Contributor

The accuracy of the Mozilla Location Service (MLS) has steadily declined. With no plans to restart the stumbler program or increase investment in MLS, we have decided to retire the service.

In 2013, Mozilla launched MLS as an open service to provide geolocation lookups based on publicly observable radio signals. The service received community submissions of GPS data from the open source MozStumbler Android app. In 2019, Skyhook Holdings, Inc contacted Mozilla and alleged that MLS infringed a number of its patents. We reached an agreement with Skyhook that avoided litigation. This required us to make changes to our MLS policies and made it difficult for us to invest in and expand MLS. In early 2021, we retired the MozStumbler program. 

We are grateful for the community's contributions to MLS, both to the code and to the dataset. To minimize disruption and give people time to make alternative arrangements, we have created a schedule that implements the retirement in stages. The retirement plan can be tracked in this issue.

There will be five stages. 

  1. As of today (Mar 13th, 2024) we will stop granting new API access keys. All pending applications will be rejected. 
  2. On March 27th, 2024 we will stop accepting POST data submissions to the API. All submissions will receive a 403 response and the submitted data will be discarded. Additionally, we will stop publishing new exports of cell data for download. 
  3. On April 10th, 2024 the cell data downloads will be deleted and will no longer be available. 
  4. On June 12th, 2024 third party API keys will be removed and the service will only be available for Mozilla’s use cases. 
  5. On July 31st, 2024 this source repo (https://github.com/mozilla/ichnaea) will be archived. 
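
For context, a submission to the POST endpoint being shut off in stage 2 looks roughly like the sketch below. The field names are recalled from the Ichnaea geosubmit v2 documentation and should be treated as approximate rather than authoritative.

```python
import json

# Sketch of a geosubmit v2 submission body (field names approximate).
# After March 27th, 2024, POSTing this returns HTTP 403 and the data
# is discarded.
payload = {
    "items": [
        {
            "timestamp": 1710288000000,  # milliseconds since the Unix epoch
            "position": {"latitude": 52.52, "longitude": 13.40, "accuracy": 10.0},
            "wifiAccessPoints": [
                {"macAddress": "01:23:45:67:89:ab", "signalStrength": -61},
            ],
            "cellTowers": [
                {"radioType": "lte", "mobileCountryCode": 262,
                 "mobileNetworkCode": 1, "locationAreaCode": 5126,
                 "cellId": 21532831, "signalStrength": -95},
            ],
        }
    ]
}

body = json.dumps(payload)
# e.g. requests.post("https://location.services.mozilla.com/v2/geosubmit",
#                    data=body)  # now rejected with 403
```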

The source code for the MLS application, Mozilla Ichnaea, will remain available under the Apache License 2.0.

@mostlygeek pinned this issue Mar 13, 2024
@heftig

heftig commented Mar 13, 2024

Firefox still uses MLS for browser.region.network.url; will that also move to Google Location Services?

@alexcottner
Collaborator

alexcottner commented Mar 13, 2024

Firefox still uses MLS for browser.region.network.url; will that also move to Google Location Services?

This endpoint will be migrated to another service (classify-client) that will return the expected response. We'll adjust DNS entries when it's time to make that move, so Firefox won't see any difference.

@heftig

heftig commented Mar 13, 2024

Will downstream builds of Firefox need to obtain API keys for this new service?

@rtsisyk

rtsisyk commented Mar 14, 2024

Does anyone want to collaborate to run MLS in some jurisdiction that isn't concerned about patents? Drop a message to roman@organicmaps.app

@mar-v-in

mar-v-in commented Mar 14, 2024

@rtsisyk

Do we have any insights on which patents are applicable in this context and which jurisdictions they apply in? As you might know, Europe is generally far less into software-related patents, so the "some jurisdiction" might be very easy to find.

The other question is whether Mozilla would be willing to hand over the Wi-Fi dataset to a new organization running an ichnaea server, or if we'd have to start from scratch.

@DylanVanAssche

The Bluetooth beacons would also be useful. I know Mozilla did not publish the Wi-Fi and Bluetooth beacons for privacy reasons, but handing them over would be super beneficial. We would be set back by a decade if we have to start from scratch.

@thestinger

thestinger commented Mar 16, 2024

The GrapheneOS Foundation has been planning to host a network location service for GrapheneOS and projects collaborating with us for a while now. We've received significant funding we can put to use to make a high-quality, modern implementation on both the client and server side. A new unified app (cellular, Wi-Fi, Bluetooth beacons) for gathering data to publish as fully open data could also be part of it. We also plan to build a SUPL implementation into the same service as an alternative to our Google SUPL proxy, replacing it as the default in the long term.

The main issue is obtaining high-quality, reliable data to run the service. It will be necessary to get far more users helping than Mozilla had submitting data; submissions seem to have been dying off for a while now. Simply accepting all user submissions also makes it easy to poison the database, which becomes a major concern for a less research-focused project that's meant to be widely used in production. We have a plan for dealing with this using hardware-based attestation, able to support submissions from any modern Android device with the security model intact and recent security patches. Other mitigations are also needed. OpenCelliD exists, but it would be very easy for people to ruin it if they haven't already, and we're dealing with those kinds of attacks on a regular basis, so we know we can't use something that isn't resistant to them. Users' experience with the existing services indicates to us that the existing data is problematic.

We're not aware of sources for Wi-Fi and Bluetooth data. It would be best to gather it with a unified app handling all of it, with the resulting database being completely open data. We've never understood the privacy concern with providing a map of networks that are publicly broadcasting and are meant to be static landmarks. If they move around regularly, they shouldn't be included anyway, and those should be possible to distinguish. It seems highly unlikely that Mozilla would pass this data on to anyone else, based on how they see it. They may also be forbidden from doing so by the settlement they reached.
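
One way such moving networks could be distinguished, as a rough sketch: drop any BSSID whose observations are spread over more than a few hundred meters. The function names and the 500 m threshold below are invented for illustration, not from any existing implementation.

```python
import math
from collections import defaultdict

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def stationary_networks(observations, max_spread_m=500.0):
    """Keep only BSSIDs whose observations all fall within max_spread_m
    of each other; anything seen in widely separated places (phone
    hotspots, transit Wi-Fi) is treated as mobile and excluded.
    observations: iterable of (bssid, lat, lon) tuples."""
    by_bssid = defaultdict(list)
    for bssid, lat, lon in observations:
        by_bssid[bssid].append((lat, lon))
    keep = set()
    for bssid, points in by_bssid.items():
        spread = max(
            (haversine_m(a[0], a[1], b[0], b[1])
             for a in points for b in points),
            default=0.0,
        )
        if spread <= max_spread_m:
            keep.add(bssid)
    return keep
```

A real pipeline would use a cheaper spread estimate than all-pairs distances, but the filtering idea is the same.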

The key is figuring out how to get a large number of users to run an app submitting cellular, Wi-Fi and Bluetooth beacon data in a way that makes malicious tampering difficult. The people who care about this tend to care about their privacy and probably don't want to be submitting this kind of data during regular use of their devices, especially if the data is kept tied to the submitting accounts for anti-abuse. Publishing it all as open data available for any purpose is crucial for getting people interested in submitting it; anyone being able to use it for anything, without constraints, is a strong incentive to contribute. Mozilla wasn't making the most valuable data available to others. Having that data public also allows people to use it locally, which is important for privacy. Sending your location in real time to a service isn't great regardless of who is hosting it, even if you can do it without an account via a VPN / Tor, particularly if relatively few people use it.

We're not at all happy with the existing approaches in this area and if anyone wants to build something better, we're interested in funding serious work on building it as remote work by people with experience working on similar things. This has been something we haven't considered a near term project but if there's suddenly a bunch of interest in it, perhaps it could be.

Relying on another centralized service that's not publishing the data would be a shame.

@badrihippo

What's the plan for KaiOS devices? IIRC they rely on MLS for location. I hope mine continues to function!

@vfosnar

vfosnar commented Mar 16, 2024

@thestinger

We have a plan for dealing with this using hardware-based attestation able to support submitting data from any modern Android device with the security model intact and recent security patches.

This feels bad. I can't see this being implemented in an open-source/community-friendly way.

We've never understood the privacy concern with providing a map of networks that are publicly broadcasting and are meant to be static landmarks.

+1

The people who care about this tend to care about their privacy and probably don't want to be submitting this kind of data during regular use of their devices, especially if the data is kept tied to the accounts submitting it for anti-abuse.

Hot take: I'm okay with that. Fighting abuse while collecting only anonymous data is impossible. I would be fine with data connected to my account as long as I can trust the provider (clear privacy policy, legal limits on the use of the provided data), there is a good anonymization process for the data (Wi-Fi endpoints only published after multiple accounts have seen them independently), and, obviously, both the backend and clients are open source.

@badrihippo

Hot take: I'm okay with that. Fighting abuse while collecting only anonymous data is impossible. I would be fine with data connected to my account as long as I can trust the provider (clear privacy policy, legal limits on the use of the provided data), there is a good anonymization process for the data (Wi-Fi endpoints only published after multiple accounts have seen them independently), and, obviously, both the backend and clients are open source.

I'm thinking of this as an "OpenStreetMap of location data". When people contribute locations to OSM, they are by nature implying that they were in the area—and they're fine with that.

Of course, location data is very different because the sharing process will likely be automated, not submitted through a conscious process. I can't speak for everyone, but I personally would be happy to submit this data to a trusted organisation. To make things even better, perhaps the account data can be de-linked after a reasonably long time, when most of the anti-abuse use-case has expired.

@cookieguru

We're not aware of sources for Wi-Fi and Bluetooth data

wigle.net

@oguz-ismail

good
firefox next

@thestinger

thestinger commented Mar 16, 2024

We're not aware of sources for Wi-Fi and Bluetooth data

wigle.net

Unfortunately, this isn't open data for arbitrary use. It's not clear what they would allow, but it's non-commercial use only, which suggests you couldn't build a service on it that is usable for anything, including commercial purposes.

@Goldmaster

I am a little surprised, as it helps improve accuracy when GPS reception is poor. It doesn't help that the option to help map was removed from the browser rewrite and that the stumbler app was never updated or made compatible with modern Android versions.

@thestinger

@vfosnar

We have a plan for dealing with this using hardware-based attestation able to support submitting data from any modern Android device with the security model intact and recent security patches.

This feels bad. I can't see this being implemented in an open-source/community-friendly way.

Data could be accepted from anywhere and simply not trusted until confirmed, with attestation being one way to establish high confidence. It's possible to support any alternate OS as long as it implements a security model that gives the app strong confidence it's not getting poisoned data. Apple and Google have a massive amount of data being submitted and can use lots of signals to determine whether it's valid. For a service with very little data, it's very easy for people to mess with it by submitting poisoned data. It's too easy to submit fake data to a service like OpenCelliD, and even easier with Wi-Fi networks and Bluetooth beacons, which brings the overall data into question. Making it significantly harder will get rid of nearly all the poisoned data even though it remains theoretically possible. The rest can be handled with moderation.

Hot take: I'm okay with that. Fighting abuse while collecting only anonymous data is impossible. I would be fine with data connected to my account as long as I can trust the provider (clear privacy policy, legal limits on the use of the provided data), there is a good anonymization process for the data (Wi-Fi endpoints only published after multiple accounts have seen them independently), and, obviously, both the backend and clients are open source.

There could be an opt-in to the level of privacy provided where the default could be something like not using the data until 3+ accounts have seen the networks, but with the option for less. Hardware attestation can give high confidence in the data being valid without needing to confirm it from several accounts believed to be separate people, and any app can use hardware attestation without any special privileges since it's privacy preserving, so I really think that's a good way to avoid needing to do something privacy invasive to confirm accounts are legitimate.
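
The "3+ accounts" threshold with extra weight for attested submissions could be sketched like this; all names and weights are hypothetical, not an existing implementation.

```python
from collections import defaultdict

class SightingLedger:
    """Only expose a network once it has been reported with enough
    total weight: k distinct accounts by default (3, matching the
    "3+ accounts" idea above), with hardware-attested submissions
    counting for more. Illustrative sketch, not a real anti-abuse
    system."""

    def __init__(self, k=3, attested_weight=3):
        self.k = k
        self.attested_weight = attested_weight
        self.weight = defaultdict(float)      # bssid -> accumulated weight
        self.reporters = defaultdict(set)     # bssid -> accounts that reported it

    def report(self, bssid, account_id, attested=False):
        if account_id in self.reporters[bssid]:
            return  # one vote per account per network
        self.reporters[bssid].add(account_id)
        self.weight[bssid] += self.attested_weight if attested else 1

    def usable(self, bssid):
        """True once the network has enough independent confirmation."""
        return self.weight[bssid] >= self.k
```

The `k` parameter is where a per-user privacy control could plug in: contributors wanting more caution raise it, those wanting faster coverage lower it.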

@cookieguru

We're not aware of sources for Wi-Fi and Bluetooth data

wigle.net

Unfortunately, this isn't open data for any usage

That is incorrect; the EULA very explicitly states that you have a "right to use the maps and access point database...solely for your personal, research or educational, non-commercial purposes".

It's not clear what they would allow but it's non-commercial usage only which somewhat implies not making a service usable for anything including commercial usage.

Your misuse of the apostrophe makes it difficult to decipher the intent, but if you're using the data for commercial purposes then you by definition have the funds to license the data. Whether the license fees are feasible for your project can only be speculated.

It certainly sounds like you're forging ahead on this path

@thestinger

@cookieguru

That is incorrect, the EULA very explicitly states that you have a "right to use the maps and access point database...solely for your personal, research or educational, non-commercial purposes"

That's what I said: it's not open data usable for any purpose. A non-commercial usage restriction would prevent hosting a service which can itself be used commercially. Normally, you can't simply bypass a license by wrapping it behind something. They'd need to be asked what they would permit, and there's no way to make an open-data service from licensing proprietary data. We want anyone to be able to host it.

Your misuse of the apostrophe makes it difficult to decipher the intent, but if you're using the data for commercial purposes then you by definition have the funds to license the data. Whether or not it the license fees are feasible for your project can only be speculated.

Writing "it is" as "it's" twice in a row is not misuse of the apostrophe, and I'm not sure how it would make it harder to understand.

but if you're using the data for commercial purposes then you by definition have the funds to license the data

Publishing a service usable by anyone for any purpose, for free, is likely considered commercial use of the data, because people would be using the service for commercial purposes. You cannot simply wrap it up in a non-profit service that's used commercially. They may also have an issue with taking donations to support a service.

We want an open source service with open data. Paying for data with heavily restricted usage terms wouldn't work.

It certainly sounds like you're forging ahead on this path

Mozilla's location service wasn't even an implementation of what we want, because data submission was no longer being maintained or promoted and the Wi-Fi/Bluetooth data wasn't open. It has degraded over time. I hadn't seen the legal situation it was in before yesterday, and that partially explains things. I think it would have rotted away and been discontinued either way, but the settlement probably accelerated that. The death of the service was probably inevitable once Firefox OS was gone. You can't really expect them to maintain a service they don't need and which has little to do with their main project.

Wigle is a proprietary service with proprietary data. Making an open source and open data service is not reinventing the wheel. Mozilla's service had largely proprietary, non-published data too, for the Wi-Fi/Bluetooth part of it. The service itself and the cell data were published.

@maciejsszmigiero

Note that Geoclue has always allowed submitting data to MLS from systems where it can access a GNSS receiver.

It is opt-in, however, both for privacy reasons and because that's what our users expect.

@woj-tek

woj-tek commented Mar 16, 2024

Hot take: I'm okay with that. Fighting abuse while collecting only anonymous data is impossible. I would be fine with data connected to my account as long as I can trust the provider (clear privacy policy, legal limits on the use of the provided data), there is a good anonymization process for the data (Wi-Fi endpoints only published after multiple accounts have seen them independently), and, obviously, both the backend and clients are open source.

There could be an opt-in to the level of privacy provided where the default could be something like not using the data until 3+ accounts have seen the networks, but with the option for less. Hardware attestation can give high confidence in the data being valid without needing to confirm it from several accounts believed to be separate people, and any app can use hardware attestation without any special privileges since it's privacy preserving, so I really think that's a good way to avoid needing to do something privacy invasive to confirm accounts are legitimate.

my 3c: I try to be privacy-focused, but at the same time I do (try to) contribute to OpenStreetMap, which requires an account. I think there could be a group of people who wouldn't mind using an account to help collect the data (assuming it would only be used for verification during submission and would be anonymised/erased).

@aembleton

my 3c: I try to be privacy focused but at the same time I do (try to) contribute to openstreetmap, which requires account.

The data collection could even be built into something like StreetComplete. As you're using it to update OSM, it could listen for SSIDs and feed those back. StreetComplete is already using the GPS, so it wouldn't hurt battery life.

@mar-v-in

mar-v-in commented Mar 16, 2024

For users who are very privacy-aware, I suggest prefetching the cell towers in their region and not using accurate Wi-Fi-based location at all. Prefetched cell tower data would also be sufficient for faster GPS via local SUPL, which doesn't need high accuracy and can handle inaccuracies of tens of kilometers just fine.
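
A coarse lookup against prefetched cell data can be as simple as a signal-weighted centroid of the known towers. This toy sketch (the weighting formula is invented for illustration) shows why kilometer-range accuracy is achievable fully offline:

```python
def approximate_position(observed, tower_db):
    """Coarse offline locate from prefetched cell data.

    observed: list of (cell_key, signal_dbm) for currently visible cells.
    tower_db: dict mapping cell_key -> (lat, lon), downloaded in advance.
    Returns a (lat, lon) estimate or None if no tower is known.
    Accuracy of a few kilometers is fine for SUPL-style assisted GPS."""
    total = 0.0
    lat = lon = 0.0
    for key, dbm in observed:
        if key not in tower_db:
            continue
        # Toy model: stronger signal (less negative dBm) -> larger weight.
        w = 1.0 / max(1.0, -dbm - 40)
        t_lat, t_lon = tower_db[key]
        lat += w * t_lat
        lon += w * t_lon
        total += w
    if total == 0.0:
        return None
    return (lat / total, lon / total)
```

A real implementation would account for each tower's coverage radius and timing advance where available, but even this crude centroid is sufficient to seed a SUPL request.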

I doubt that protecting a location service with device attestation makes a lot of sense for two reasons:

  1. You want people who are able and willing to collect Wi-Fi locations using highly sensitive long-distance hardware to do so and contribute. This hardware is very unlikely to pass any attestation, but can be an extremely good source.
  2. Even when device attestation is used, it is "trivial" to introduce wrong records, as we're talking about uploading data that was previously retrieved from public radio airspace. Setting up a device that appears as a hotspot with arbitrary strength and MAC address is easy enough for people who are willing to abuse the system.

Publishing the raw data (both raw data of submissions and raw locations of wifi hotspots) can be a serious privacy issue - entirely independent of what privacy laws might state. As you are concerned with abuse, please also consider the abuse risk of this data. For example, stalkers will be able to follow their victims even when they move to a different location for as long as they continue to use the same wifi hardware.

Requiring an account for uploading data is not generally a bad idea; however, the account setup should be as easy as possible (e.g. just generating a unique random ID client-side). If passive contributing (that is, uploading data about Wi-Fi networks when GPS is already in use anyway) is easy to activate (e.g. just ticking a box), this would allow users to contribute without any downsides. A unique random ID would still be useful for abuse prevention, as continued and proven-correct submissions from a unique ID could be used to increase trust in its data; the "account" would only be used to gain trust, not to ban bad actors (which is hard anyway, as they will likely be able to create a new identity).
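
A minimal sketch of such a client-side pseudonymous identity, with trust that accrues from proven-correct submissions. All names and the smoothing choice are illustrative assumptions, not part of any existing service:

```python
import secrets

def new_contributor_id():
    """Client-side pseudonymous "account": a random 128-bit token,
    generated locally, with no email or personal data involved."""
    return secrets.token_hex(16)

class TrustTracker:
    """Server-side trust bookkeeping: trust grows with submissions
    that later prove correct (e.g. confirmed by other contributors)
    and is only used to weight new data, not to identify anyone.
    Illustrative sketch."""

    def __init__(self):
        self.confirmed = {}  # id -> submissions later confirmed correct
        self.total = {}      # id -> total submissions

    def record(self, cid, was_confirmed):
        self.total[cid] = self.total.get(cid, 0) + 1
        if was_confirmed:
            self.confirmed[cid] = self.confirmed.get(cid, 0) + 1

    def trust(self, cid):
        # Laplace-smoothed confirmation rate; unknown IDs start at 0.5,
        # so a fresh identity carries no special weight either way.
        return (self.confirmed.get(cid, 0) + 1) / (self.total.get(cid, 0) + 2)
```

Because a banned actor can always mint a new ID, the design only rewards sustained good behavior rather than trying to punish bad actors, which matches the point above.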

@thestinger

I doubt that protecting a location service with device attestation makes a lot of sense for two reasons

Any service we provide needs to be heavily protected from abuse, as do the people involved in it. All of our services are targeted with attacks including denial of service, Child Sex Abuse Material spam, gore spam and harassment. If the service has any form of comments or discussions related to it, all of that is relevant too.

This gives us the advantage of already being prepared for the abuse that will eventually target a service like this once it becomes popular. We're already thinking about it and preparing for it before even starting.

You want people who are able and willing to collect Wi-Fi locations using highly sensitive long-distance hardware to do so and contribute. This hardware is very unlikely to pass any attestation, but can be an extremely good source.

It's possible to have both the concept of trusted contributors who have an established account with a history of submitting valid data and can apply to submit data from a non-verified device. The quality of the resulting data heavily depends on not simply accepting any data submission and treating all data submissions as equal. The data submitted by different hardware will also vary in how it needs to be treated.

Even when device attestation is used, it is "trivial" to introduce wrong records, as we're talking about uploading data that was previously retrieved from public radio airspace. Setting up a device that appears as a hotspot with arbitrary strength and MAC address is easy enough for people who are willing to abuse the system.

Hardware attestation provides a baseline which allows deploying other mitigations against abuse. People doing this also have a limit to how much money they're willing to spend on new phones.

Publishing the raw data (both raw data of submissions and raw locations of wifi hotspots) can be a serious privacy issue - entirely independent of what privacy laws might state. As you are concerned with abuse, please also consider the abuse risk of this data. For example, stalkers will be able to follow their victims even when they move to a different location for as long as they continue to use the same wifi hardware.

Wigle supports lookups by SSID/MAC already:

https://wigle.net/

Therefore, it doesn't seem to be a valid reason to oppose publishing open data. Many people believe in gathering and publishing this as data in the public domain usable for any purpose. A service making the data available to everyone has a much higher chance of success than one hoarding it and building a business model around it. That's not what we want to do. Submitting data to companies profiting off it without compensating the people submitting it or giving them rights to the overall data doesn't seem right.

Some of the main perpetrators of the attacks on GrapheneOS and myself are among the main developers of your project. It was one of your supporters who did multiple swatting attacks targeting me in April 2023 with the clear aim of having me killed by law enforcement. It was carefully crafted to maximize the chance of that outcome. Perhaps it was one of the people who make commits to your project who did it. You probably wouldn't kick them out even if we obtained proof of that, based on past experience. It doesn't make any sense for someone like yourself, who says you only care about the code and will allow Nazis to contribute to your project if they write good code, to start pretending to care about abuse.

fossfreedom added a commit to BuddiesOfBudgie/budgie-control-center that referenced this issue Mar 16, 2024
Mozilla will deprecate its service at the end of March 2024.
Thus location support will be broken until a replacement
solution is found.
mozilla/ichnaea#2065
@mar-v-in

Wigle supports lookups by SSID/MAC already:

https://wigle.net/

Therefore, it doesn't seem to be a valid reason to oppose publishing open data. Many people believe in gathering and publishing this as data in the public domain usable for any purpose. A service making the data available to everyone has a much higher chance of success than one hoarding it and building a business model around it.

Someone else doing it this way doesn't mean it's a good idea ;) There are likely reasons why Mozilla's database had many more contributors even though the Wigle database is more open. I personally would not want to contribute to a database that can be easily abused by people with malicious intent.

And I wasn't suggesting making a business model out of it, nor "hoarding" it. Making data not open to everyone is not the same as keeping data private. Data could be provided to others under clear rules that prevent abuse, or make it sufficiently hard, while not impacting its usefulness. For example, a rule could be that a full dump of raw data is provided to researchers if usage is monitored by an IRB and all copies of the data are removed within a year after the research was conducted. I'm not saying it's easy to come up with appropriate rules for this, but I would argue it would be worth it to prevent abuse.

Off-topic, on accusations: As I previously said, I feel sorry for what happened to you, and you are welcome to provide detailed accusations and supporting material by email to me (admin at microg.org) or to some independent third party. These can of course lead to a ban of individuals from the project. I don't think it's fair to claim wrongdoing by me personally because of a conflict you seem to have with an unnamed contributor to my project. I'd also point out that the project has only a single "main developer" besides myself, and I doubt you are referring to that person, so maybe you should downgrade your wording to "minor contributor". And to reiterate the obvious: I don't want any contributions from Nazis and will happily ban them, no matter how good or how much code they contribute. But I also don't think this is the venue to discuss any of this.

@alexcottner
Collaborator

Could Mozilla provide full cell data exports about all the cells in their database before closing everything down?

Great suggestion, we took a full dump of the cell tower data and have made it available for download here.

@leag1234

e Foundation (based in the EU; behind the /e/OS project), like other projects, is obviously looking for alternatives to MLS as well. We are ready to join and help credible initiatives in the field.

Also, there are remaining sub-grants of up to 50K€ available from the MobiFree project that we have initiated (https://mobifree.org/). This would probably fit nicely with the development of (part of) a new location service.

Also, e Foundation is ready to host such service.

Feel free to contact me about it.

@lucasmz-dev

lucasmz-dev commented Mar 25, 2024

Whatever service "we" come up with should, I believe, have some properties, or at least try to achieve them:

  • Cryptographic privacy. Hashing, for example of SSID + MAC + other fingerprintable details, can make networks hard to look up casually in a public database while keeping them findable by people who already know the details and just want the location. Lookup could work the way leaked-password services do: hash the query, send only a fraction of the hash, and have the service reply with every record matching that fraction, making it harder for the host to learn the user's location.

  • Resistance to patents. As Mozilla has experienced, patents can sabotage a project; federation and decentralization can help here. I do see an issue where independent datasets could be a problem, as people would contribute to different sources that never get merged, leaving none of them accurate or with enough data.

  • Easily contributable.

  • Contributions should be bundled. This is a client-side concern, but submissions should by default be batched, e.g. uploaded every 7 days, so that a current location can't be revealed and tracking is not possible; the more data there is around an area, the more often it could update.

  • Some resistance to profiling. It shouldn't be possible to inspect areas with more data and figure out that a person in a certain house, who goes to a certain school and a certain clinic, is contributing to a low-contributor area. It's easy to connect the dots that way in a public database, as more frequently visited places end up with more data around them. In a place where only one person is contributing, you can easily work out what they're doing and where they're going, even without knowing the dates. Perhaps client software should refrain from logging nearby places and detect when someone is stationary.
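
The hashed-lookup idea in the first bullet, borrowed from how k-anonymity leaked-password APIs work, could be sketched as follows. Function names are invented for illustration, and note that the MAC address space is small enough that hashing alone is weak protection; this only raises the bar against casual browsing:

```python
import hashlib

def network_key(ssid, bssid):
    """Hash SSID+BSSID so the published index is not directly browsable
    by name, while anyone who already knows a network can still derive
    its key and look it up. SHA-256 is illustrative; since MACs are
    enumerable, this is obfuscation, not strong protection."""
    return hashlib.sha256(f"{ssid}|{bssid.lower()}".encode()).hexdigest()

def prefix_query(key, db, prefix_len=5):
    """k-anonymity lookup in the style of leaked-password range APIs:
    the client sends only the first few hex digits of its key and
    receives every record sharing that prefix, so the server cannot
    tell which specific network was being resolved."""
    prefix = key[:prefix_len]
    return {k: v for k, v in db.items() if k.startswith(prefix)}
```

The client then picks its own record out of the returned bucket locally; the server only ever sees the shared prefix.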

@FedericoCeratto

@LucasMZReal Indeed. As mentioned before there are various cryptographic methods to protect the SSIDs and macaddrs and prevent users to have their devices and traveling habits exposed. https://ieeexplore.ieee.org/document/9079655 is an example. Bundling a whole dataset might be impractical, but fetching and caching large areas e.g. a whole city or region should be easy.
Ideally the server-side component should rely on open data and not encourage user lock-in, so that multiple organizations could run the location service together.

@thestinger

We're currently looking for someone to hire to work on a fully open system with open data for mapping networks to power both local location detection and services providing it including one hosted by GrapheneOS for both GrapheneOS users and individuals. This will involve creating the app for gathering data, the service for collecting/processing it and an app for consuming the data as an Android network location service with either locally downloaded data or a hosted service. Privacy and security will be heavily taken into account, and it will not have the massive privacy/security flaws of microG including the long history of location data leaks to apps or unnecessarily tying network-based location to Google service compatibility.

@LucasMZReal

Cryptographic privacy (hashing for example, SSID + MAC + other fingerprintable details, can help with making SSIDs not possible to easily look for in a public database, but findable by people who already know the details and just wanna find the location). It would also be possible for lookup, similar to how leaked password services work, where they hash the content, and then only share a fraction of the data, and the service replies with all content that matches that fraction; that way you can make it harder for the hosting to figure out the user location.

Contributions should be bundled, this is a client thing, but submissions should be bundled IMO by default for every 7 days, so that a current location can't be revealed, and tracking is not possible, the more data there is around an area, the more it could update.

Some resistance to profiling; It shouldn't be possible to check areas with more data and found out a person in a certain house, that goes to a certain school, goes to a certain clinic, is contributing to a low-contributor area. It's easy to connect dots that way in a public database, as more frequently visited places end up having more data around them. In a place where only one person is contributing, you can figure out easily what it is they're doing and where they're going, even if you're not knowing the dates. Perhaps client software should refrain from logging close-by places and detect when someone's still.

Privacy for people gathering data can be improved by not using the data until it gets confirmed by multiple separate people. People could have a privacy control to raise or lower this threshold. Delaying usage of the data based on a timer could also help with both mitigating data poisoning and protecting privacy. Publishing the resulting data derived from the raw submissions shouldn't hurt the privacy of people gathering data in practice. It's a very theoretical problem as long as basic privacy mitigations are put in place. Once there's a solid map with most of the networks, it also becomes much less of an issue. Trying to track someone who submits data by placing new networks along the paths they might travel doesn't seem like a realistic attack, and it would be easier to do it other ways if someone specific were being targeted. There's also no need to publish all of the raw data directly, such as timestamps.

The raw data doesn't need to exist on the service in unencrypted form. It can be submitted encrypted with a public key for processing. This can avoid having the data on an internet-facing service, making it far less likely the raw data would ever be leaked. It also doesn't need to be persisted forever.

Hashing will not protect people submitting data to a service in order to obtain location from it. It will know their location from the results. The best approach to this is the resulting mapping data being open data that's possible to download and use locally with that used as the main approach rather than querying a service. Not publishing the mapping data for local use substantially hurts privacy and the concept that users need to submit location to a service in real time for privacy reasons doesn't pass muster. People need their privacy protected, not static landmarks that are already mapped with the data already available to query at https://www.wigle.net/ and elsewhere. The users of the service are the main group who need their privacy protected along with the people submitting data.

If someone knows the SSID/MAC they can look it up if queries with one network are supported even with closed data. https://www.wigle.net/ and other services provide the ability to query the data for one network already. It won't be a new capability for another service to provide this functionality. A MAC address is 48-bit, with 24 bits used for a vendor prefix (OUI) and the remaining 24 bits generally not randomized but rather incremented. SSID is a label for humans and isn't generated like a password. SSID can be used to opt-out of mapping via _nomap but that's incredibly uncommon.
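To illustrate why the MAC keyspace is smaller than it looks: the first half of a 48-bit MAC is a registered vendor prefix (OUI), leaving only 24 bits that are assigned per device, often sequentially. A quick sketch (the example address is made up):

```python
def split_mac(mac: str):
    """Split a MAC address into its vendor prefix (OUI) and device part.
    A MAC is 48 bits: the first 24 bits identify the vendor, the last
    24 bits are assigned by the vendor, often incremented sequentially."""
    octets = mac.lower().split(":")
    return ":".join(octets[:3]), ":".join(octets[3:])


oui, device = split_mac("F4:F5:E8:12:34:56")
print(oui)     # "f4:f5:e8" (vendor prefix, public OUI registry)
print(device)  # "12:34:56" (per-device part, only 2^24 possibilities per vendor)
```

This structure is why hashed MACs can still be brute-forced per vendor, and why the raw data compresses well when sorted.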

See https://www.wigle.net/ for an example of what's already available to query. A new service should take into account what's already available. It should be possible to explain what the privacy concerns with providing open data actually are. If there aren't any concrete concerns, they shouldn't result in crippling what a service provides by withholding open data.

Resistance to patents: as Mozilla has experienced, patents can sabotage the project. Federation and decentralization can help here, though I see an issue where independent data could be a problem: people would contribute to different sources that wouldn't be merged, and none of them would be accurate or have enough data.

We'll be making our own service for GrapheneOS with open data published for others to use locally or host themselves. We plan to implement significant mitigations for protecting the privacy of people gathering data. We also intend to add significant mitigations against the data being poisoned, which will make the mapping data that's published much more useful. We haven't decided on how this should be licensed. We generally prefer permissive licensing so that people can use it for anything, but giving everything to a proprietary service without open data could cripple it early on before it has a chance to reach critical mass.

Publishing open data usable locally by apps, by other services, etc. is also something we think is important, and it's a strong motivation to do it ourselves to make sure it happens. The overall opposition to open data permitting people to query the Wi-Fi data locally on their device is very strange, particularly when services like https://www.wigle.net/ already exist, which is not acknowledged by people opposed to providing the resulting data (not the raw data, which may compromise the privacy of people submitting it).

@thestinger

@FedericoCeratto

@LucasMZReal Indeed. As mentioned before, there are various cryptographic methods to protect the SSIDs and MAC addresses and prevent users from having their devices and traveling habits exposed. https://ieeexplore.ieee.org/document/9079655 is an example. Bundling the whole dataset might be impractical, but fetching and caching large areas, e.g. a whole city or region, should be easy.

The best way to avoid a service receiving people's data in real time is doing it locally on the user's device based on downloading data for a region. Users can be offered a way to control storage usage vs. privacy. Making the data format extremely efficient should avoid needing much of a compromise on this. There are not a lot of cell towers, and even downloading a database for the entire world could be extremely efficient and comparable to a large app. There are far more Wi-Fi networks, but downloading them for a whole region is still completely practical if it's stored efficiently. It doesn't really seem necessary to even have the SSIDs for this, but rather simply MAC + coordinates. A naive approach would be a massive hash table stored in zstd-compressed blocks, but I'm sure there are much better ways to do it than that. It's a massive set of 64-bit integer keys with two 32-bit integer values each. That's something which can be done super efficiently. It only needs to be queried by key for the purpose of local location detection; there's no need to query by location.
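A rough sketch of that kind of key-value layout, using fixed-width records bucketed by MAC prefix, with each bucket compressed (zlib here as a stand-in for zstd; the bucket size, record format and example MACs are all illustrative):

```python
import struct
import zlib


def to_fixed(deg: float) -> int:
    """Store degrees as a 32-bit fixed-point integer (1e-7 deg, roughly cm resolution)."""
    return int(round(deg * 1e7))


def build_blocks(entries, bucket_bits=8):
    """entries: iterable of (mac_as_int, lat_deg, lon_deg).
    Groups records into buckets by the top bits of the 48-bit MAC, packs
    each bucket as fixed-width 16-byte records (8-byte key + two 4-byte
    fixed-point coordinates), then compresses each bucket."""
    buckets = {}
    for mac, lat, lon in sorted(entries):
        b = mac >> (48 - bucket_bits)  # bucket index from the MAC prefix
        buckets.setdefault(b, bytearray()).extend(
            struct.pack("<Qii", mac, to_fixed(lat), to_fixed(lon)))
    return {b: zlib.compress(bytes(raw)) for b, raw in buckets.items()}


def lookup(blocks, mac, bucket_bits=8):
    """Decompress only the one bucket that can contain the key, then scan it."""
    comp = blocks.get(mac >> (48 - bucket_bits))
    if comp is None:
        return None
    raw = zlib.decompress(comp)
    for off in range(0, len(raw), 16):
        k, lat, lon = struct.unpack_from("<Qii", raw, off)
        if k == mac:
            return lat / 1e7, lon / 1e7
    return None


blocks = build_blocks([(0x001122334455, 52.5200, 13.4050),
                       (0x001122334456, 52.5201, 13.4051)])
print(lookup(blocks, 0x001122334455))
```

Real designs would likely sort within buckets for binary search and use delta compression, but even this naive shape shows that only the key-to-coordinates direction is needed.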

@lucasmz-dev

Point is, I'm not contributing to a project that ignores privacy because "someone else is already doing it". These services need contributors, and I'm not about to be one of them for a service that's just as problematic.

And especially not to a project that won't help others, and undermines them.

@mar-v-in

mar-v-in commented Mar 25, 2024

long history of location data leaks to apps

I'm not aware of any location data leaking from microG. Mind opening an issue so it can be fixed?

tying network-based location to Google service compatibility

microG already provides a system network location provider, so it's not tied to using the Google services, but can also be used from the AOSP network location provider interface (for apps that support that). It's also a separate Gradle module, so you can easily build it independent of the rest of microG.

Of course only providing that interface is not enough as most apps use Google's location API, that's why GrapheneOS essentially did the same as microG here and provided an alternative implementation of Google's AIDL interface (using binder redirection instead of signature spoofing, both of which are techniques to make an app call a service that is not the one their code intended to call).

If someone knows the SSID/MAC they can look it up if queries with one network are supported even with closed data. https://www.wigle.net/ and other services provide the ability

I don't think the database of Wigle can be used as a valid example to prove that the data is generally accessible. I just tested against my personal wifi hotspot and there is no match in Wigle, but Apple, Google and Mozilla all have it in their databases. Wigle's database is pretty large, but that's also because it has a ton of stale and outdated records. Mozilla Location Service had 50 times more submissions in half the lifetime, and we still get repeated complaints from users about the quality of its database.

SSID can be used to opt-out of mapping via _nomap but that's incredibly uncommon.

People want their devices to be able to geolocate themselves when at home; that doesn't mean they want every stranger who somehow managed to discover their mac address to be able to find them. Also, adding _nomap is really bad UX, and many users aren't even aware of how they can change their SSID.

but downloading them for a whole region is still completely practical if it's stored efficiently

What is a region in your mind here? It is estimated that in the western world, we have more than one wifi cell on average per person (some cells are shared, but many access points nowadays have at least 2 cells, one for 2.4GHz and one for 5GHz). For the metropolitan region of NYC, that would mean more than 20M wifi cells. Then if you store their address (6 bytes), latitude (4 bytes), longitude (4 bytes), altitude (2 bytes) and signal strength modifier (1 byte) - which is the bare minimum of what you need for accurate indoor locating - you end up with more than 300MB of raw data for a single region. Full countries are probably off the table, at least larger ones.
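For reference, the arithmetic behind that estimate:

```python
# Back-of-the-envelope estimate from the comment above:
# 20M access points, 17 bytes each
# (6 MAC + 4 latitude + 4 longitude + 2 altitude + 1 signal strength).
record_bytes = 6 + 4 + 4 + 2 + 1
aps = 20_000_000
total_mb = aps * record_bytes / 1_000_000
print(total_mb)  # 340.0 MB of raw, uncompressed data for one metro region
```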

Now I do recognize that for performance and privacy reasons, it does make a lot of sense to have some data on the device. I personally see value in Apple's approach: When Apple devices request a location from the Apple servers based on wifi networks and the location can be resolved (i.e. the request is either for public hotspots or for at least three non-open access points), the servers provide the location data of the requested access points as well as the closest 100 access points nearby. This allows the Apple devices to cache that data locally and handle further location updates to some degree (e.g. when the user walks a little bit and therefore the signal strength of cells changes and/or new cells show up) completely offline, while still only storing a small amount of data locally.


I see there are some different approaches being discussed here and I think that makes a lot of sense. However, I'd certainly wish for us to agree on a single common design and shared data used by all of us - in the interest of good data quality and the best service for all of our users and the users of other free software systems that need a replacement for Mozilla Location Service. I particularly want to always include geoclue, which may not be as vocal here but has a significant userbase. We really should make sure we don't end up in an XKCD#927-like situation.

And if financial resources are the issue here, I'm certainly willing to throw some money in the pot for GrapheneOS (or anyone else) to hire someone that works on providing this common solution for all of us.

@maciejsszmigiero

The main problem here is securing funding for running the service and also for possible future R&D efforts.

If there was enough funding then it would be enough to simply keep running this software (ichnaea) on rented infrastructure to at least keep the service running past June.

This doesn't exclude further improvements in the future, if resources permit.

@thestinger

@LucasMZReal

And especially not to a project that won't help others, and undermines them.

You're once again spreading misinformation about GrapheneOS and lying about us. You've done this consistently and across platforms along with your colleagues. You've repeatedly spread fabricated stories about us and misinformation about the GrapheneOS project, and supported harassment. It is you undermining others, not us. We worked with CalyxOS early on despite the involvement in the 2018 takeover attempt against GrapheneOS by the lead developer and several other project members. We helped CalyxOS get through breaking their update path and helped them find a way to get users updated automatically despite breaking their OS update client. What we got in return was Calyx leadership and most of the members of the project doubling down on spreading misinformation about GrapheneOS and heavily participating in harassment targeting myself and other project members. We have years of evidence of this happening. Every GrapheneOS user active in our rooms sees the raids from your community members. Calyx project members actively helped with making Techlore's highly dishonest harassment video targeting me and helped with subsequent harassment. Please do not try to pretend that you are not involved in this abuse. Your project has a long history of abuse, as do you.

Calyx also has a history of covering up privacy and security vulnerabilities. Each CalyxOS release announcement even includes objectively false claims about privacy/security patches, falsely claiming to provide all open source patches and downplaying frequently delayed and missed patches. There were cover-ups with your leaky firewall feature, your leaky VPN tethering feature and other features you provide. Meanwhile, you've repeatedly spread blatant misinformation about GrapheneOS, claiming the actual privacy and security hardening work we do isn't useful despite repeatedly plagiarizing it. Calyx's Chromium fork has extensive plagiarism from GrapheneOS, as do the camera app and various other features included in the OS. We will take legal action in the future based on this. microG will also face legal action from GrapheneOS due to plagiarized code. This is the consequence of allowing contributions from people known to do this.

Point is, I'm not contributing to a project that ignores privacy because "someone else is already doing it". These services need contributors, and I'm not about to be one of them for a service that's just as problematic.

We aren't ignoring privacy; rather, we are focused on truly protecting people's privacy instead of giving the false appearance of doing so while doing massive amounts of false marketing and making underhanded attacks on GrapheneOS, as the projects you work on are known for doing.

@mar-v-in

mar-v-in commented Mar 25, 2024

The main problem here is securing funding for running the service and also for possible future R&D efforts.

From what I read here and also heard from other sources, funding doesn't seem to be the main issue. /e/ offered to help with the hosting, and for R&D there are likely funds available through NGI Mobifree or some other public funding source.

The main issues seem to be:

  • What can legally be done? Patents seem to be an issue when running Ichnaea. This may lead to a requirement to ensure fully non-commercial operations or other restrictions, which some might be unwilling to accept.
  • What do we want to do? Just running a new instance of Ichnaea would be easy, but also would result in insufficient data quality if we can't all agree on submitting to such a service. Also Ichnaea has some downsides and this might be the point to fix them (as it might be impossible to fix them later on without loss of data).

@thestinger

@mar-v-in

I'm not aware of any location data leaking from microG. Mind opening an issue so it can be fixed?

You must be aware you were leaking data to apps for years due to not implementing proxy AppOps enforcement. It wasn't the only issue. We're open to collaboration if microG developers stop spreading misinformation about GrapheneOS, stop engaging in and encouraging harassment targeting our team / community members and start working towards repairing the massive amount of harm caused to us. If you want to play games denying it, that's simply making it harder to ever make things right in the future. These attacks being made right here by other members of your project simply reinforce this.

microG already provides a system network location provider, so it's not tied to using the Google services, but can also be used from the AOSP network location provider interface (for apps that support that). It's also a separate Gradle module, so you can easily build it independent of the rest of microG

We plan to make an OS location provider with a specific set of functionality. You may remember years ago that I reported a few issues in the location provider which existed at the time which as far as I know you fixed. I didn't stop finding issues but rather I don't report vulnerabilities to groups targeting me the way you folks have been doing for years now.

Of course only providing that interface is not enough as most apps use Google's location API, that's why GrapheneOS essentially did the same as microG here and provided an alternative implementation of Google's AIDL interface (using binder redirection instead of signature spoofing, both of which are techniques to make an app call a service that is not the one their code intended to call).

The approach currently used in GrapheneOS properly enforces the permission model and attributes power usage / permission usage to the app since the app is sending the requests to the OS itself. We previously used the AppOps proxy / attribution APIs and found major issues with those along with it being overly complex and more prone to bugs.

I don't think the database of Wigle can be used as a valid example to prove that the data is generally accessible. I just tested against my personal wifi hotspot and there is no match in Wigle, but Apple, Google and Mozilla all have it in their databases. Wigle's database is pretty large, but that's also because it has a ton of stale and outdated records. Mozilla Location Service had 50 times more submissions in half the lifetime, and we still get repeated complaints from users about the quality of its database.

There are other larger databases available. Needing to pay for access to them doesn't mean they aren't available.

People want their devices to be able to geolocate themselves when at home; that doesn't mean they want every stranger who somehow managed to discover their mac address to be able to find them. Also, adding _nomap is really bad UX, and many users aren't even aware of how they can change their SSID.

How are you suggesting someone discovers it beyond physical access? APs being static landmarks broadcasting to a large area over public airwaves with some identifiers is the whole reason that using them for location detection works at all. The case where it matters is when the AP is moved, since it can be used to see that someone who was at a location moved to another. This is also why Wi-Fi hotspots are horrible for privacy. They're missing the Bluetooth LE attempt at making this more private, which is at least a serious attempt at doing it despite major flaws.

What is a region in your mind here? It is estimated that in the western world, we have more than one wifi cell on average per person (some cells are shared, but many access points nowadays have at least 2 cells, one for 2.4GHz and one for 5GHz). For the metropolitan region of NYC, that would mean more than 20M wifi cells. Then if you store their address (6 bytes), latitude (4 bytes), longitude (4 bytes), altitude (2 bytes) and signal strength modifier (1 byte) - which is the bare minimum of what you need for accurate indoor locating - you end up with more than 300MB of raw data for a single large region.

That's the raw data size before any data-sharing or compression methods are used. Latitude, longitude and altitude can be relative values rather than absolute. MAC addresses are not very random in practice due to vendor prefixes and incrementing values. Variable-length integers and many ways of sharing data between similar integers exist. It doesn't all need to be stored in a flat table. A lot of improvements can be made to use far less space per entry overall. Generating specialized compressed databases for user-selected regions is just compute time and could be done locally on the user's device. It shouldn't take that long. It doesn't need to be a hard-wired set of regions / granularities.
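One example of the kind of integer-sharing mentioned above: sort the records, then delta-encode each field with zigzag + variable-length integers, so incremented MACs and clustered coordinates shrink to a byte or two per field. This is an illustrative sketch, not a finished format; the example entries are made up:

```python
def varint(n: int) -> bytes:
    """LEB128-style unsigned variable-length integer (7 bits per byte)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)


def zigzag(n: int) -> int:
    """Map signed deltas to unsigned so small magnitudes stay small."""
    return (n << 1) ^ (n >> 63)


def encode(entries):
    """entries: sorted (mac, lat_fixed, lon_fixed) triples.
    Delta-encodes each field against the previous record; nearby APs with
    incremented MACs and clustered coordinates yield tiny deltas."""
    prev = (0, 0, 0)
    out = bytearray()
    for rec in entries:
        for p, c in zip(prev, rec):
            out += varint(zigzag(c - p))
        prev = rec
    return bytes(out)


# Coordinates as 1e-7-degree fixed point, MACs from one vendor block.
entries = [(0x001122334455, 525200000, 134050000),
           (0x001122334456, 525200123, 134050456),   # next MAC, ~10 m away
           (0x001122334460, 525201000, 134051000)]
blob = encode(entries)
print(len(blob), "bytes vs", 16 * len(entries), "bytes raw")
```

After the first full record, each additional nearby AP costs only a few bytes instead of 16, before any general-purpose compression is even applied.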

Now I do recognize that for performance and privacy reasons, it does make a lot of sense to have some data on the device. I personally see value in Apple's approach: When Apple devices request a location from the Apple servers based on wifi networks and the location can be resolved (i.e. the request is either for public hotspots or for at least three non-open access points), the servers provide the location data of the requested access points as well as the closest 100 access points nearby. This allows the Apple devices to cache that data locally and handle further location updates to some degree (e.g. when the user walks a little bit and therefore the signal strength of cells changes and/or new cells show up) completely offline, while still only storing a small amount of data locally.

There is value in their approach but a properly optimized local implementation hasn't even been attempted even for cell towers.

I see there are some different approaches being discussed here and I think that makes a lot of sense. However, I'd certainly wish for us to agree on a single common design and shared data used by all of us - in the interest of good data quality and the best service for all of our users and the users of other free software systems that need a replacement for Mozilla Location Service. I particularly want to always include geoclue, which may not be as vocal here but has a significant userbase. We really should make sure we don't end up in an XKCD#927-like situation.

We're going to be working on it, and we're open to collaboration, but we want open data and we want a focus on local location lookups via a highly optimized data format

It would be unreasonable to expect us to work with groups trying to harm our project and team with misinformation and harassment but we've always been open to those groups making amends for what they've done over the years. If people who have engaged in these attacks want to be on good terms with us, they're welcome to contact us and start working towards repairing the harm they've caused. Several people involved in this are instead doubling down on it, including right here. If people want to make the usual dishonest attacks on our project and downvote my comments mentioning this, that's fine. I've added @LucasMZReal to a list of people who won't be involved.

And if financial resources are the issue here, I'm certainly willing to throw some money in the pot for GrapheneOS (or anyone else) to hire someone that works on providing this common solution for all of us.

We can afford to hire a single full time engineer to work on this already. Hiring multiple people is not really an option and the person we hire is going to need to have a good track record and confidence in their ability to get this done. There are other ways to approach it than just hiring someone such as purchasing something.

@thestinger

@maciejsszmigiero

The main problem here is securing funding for running the service and also for possible future R&D efforts.

If there was enough funding then it would be enough to simply keep running this software (ichnaea) on rented infrastructure to at least keep the service running past June.

This doesn't exclude further improvements in the future, if resources permit.

If we come up with a clear proposal with specific people who want to work on it with a proven track record, then it would be entirely possible to get a substantial amount of funding for the overall project to hire multiple developers rather than only one person. We may not even need to spend any of our current significant funding on it.

Paying the hosting infrastructure costs of something like this for a couple million users is nothing if it's implemented well: less than $100/month at that initial scale. The significant cost is paying engineers to develop the overall server and client side software. The hard part is really finding the right people to do it, not finding the funds to pay them. We already have the problem of funding those people essentially solved, as long as it doesn't require more than 2 developers working on it.

Focusing on local database lookups would make it mostly a matter of providing high amounts of cheap, unmetered bandwidth distributed around the world with automatic geo-based load balancing. That can be done extremely cheaply with the same approach we're already using at a smaller scale for our app and OS updates.

We're not willing to spend millions of dollars of funding on this to the detriment of the rest of our project, but certainly paying a couple people to work on it is doable. It's not going to become our main focus but we can certainly afford to make and run these kinds of apps/services, and we could get a lot of people outside the usual audience for it interested via certain connections.

Acquiring an existing project/service is an option. We'll put in whatever it takes to avoid a better successor being a proprietary service without high quality open data for users to query locally.

@thestinger

thestinger commented Mar 25, 2024

@mar-v-in I think it would be better to talk via direct message if you actually want to work anything out even if that simply means coming to a mutual understanding about a few things.

@spaetz

spaetz commented Mar 25, 2024

Just a bystander, but may I suggest that a few issues in here should be discussed in separate rooms/fora, in order to not derail from the core issue at hand. I don't think now (and this issue) is the right time and place to discuss implementation details of MLS v2.

I think the questions of which patents need to be cared about and which jurisdictions are (most) vulnerable are pretty urgent ones to clarify.

There is a foundation (thanks /e/) willing to help run it, so I would suggest one starts to set up an instance that can be used as a drop-in replacement first, and goes from there.

@thestinger

Just a bystander, but may I suggest that a few issues in here should be discussed in separate rooms/fora, in order to not derail from the core issue at hand. I don't think now (and this issue) is the right time and place to discuss implementation details of MLS v2.

There's no way to simply continue the current service as is without access to the current data. Mozilla considered the Wi-Fi data sensitive, so it's not available. That also means there was an understanding with people submitting it that it wouldn't be published or shared. Since the Wi-Fi data isn't available, doing location detection based on Wi-Fi will require either purchasing access to existing data or beginning to gather it again as soon as possible. This time around, we strongly believe it should be open data. Simply hosting a new service accepting submissions with the same policies is likely to go badly. There are more than the legal issues. There's also the question of whether it was a good approach, now that all the work people did for multiple years on collecting Wi-Fi data may be lost.

GrapheneOS Foundation was already in the process of looking into our options for setting up a network location service, SUPL service and geocoding service, because we don't use services hosted by third parties in GrapheneOS. Hosting our own geocoding service and figuring out a way to make it work locally was the priority for us. We currently host a SUPL proxy and have experimented with the different SUPL modes, along with looking into how it works on the client side, to figure out the privacy differences between those in practice and whether our SUPL proxy and client configuration work as intended.

Our network location service was simply going to be based on cellular data due to the lack of publicly usable Wi-Fi data. It's not expected to provide any value for navigation, etc. but rather will be useful for things like weather apps. It will be disabled by default in GrapheneOS since we don't want to send location data to a service by default. We want a local option using a compressed cell tower database, since we strongly believe local data should be the main approach. We want to replace our SUPL proxy with a service hosted by us directly. We were already doing the early work of setting up a geocoding service and will probably finish that first before starting on the rest.

I think the questions of which patents need to be cared about and which jurisdictions are (most) vulnerable are pretty urgent ones to clarify.

There are a lot more companies currently and historically involved in location detection based on cellular, Wi-Fi and Bluetooth. There are going to be a lot of patents involved, as with many other areas.

There is a foundation (thanks /e/) willing to help run it, so I would suggest one starts to set up an instance that can be used as a drop-in replacement first, and goes from there.

This is a "non-profit" which exists to prop up a for-profit company enriching the people involved with it. It's also becoming clear that it would be open in name only, with proprietary data they can use for their commercial ventures. Completely unacceptable to us, and we're going to push back strongly against it. A real non-profit which isn't simply part of a business model enriching the founders and their associates should be responsible for it... and a T-Mobile MVNO which pretends to be a charity while scamming people isn't one of those.

@leag1234

There is a foundation (thanks /e/) willing to help run it, so I would suggest one starts to set up an instance that can be used as a drop-in replacement first, and goes from there.

This is a "non-profit" which exists to prop up a for-profit company enriching the people involved with it. It's also becoming clear that it would be open in name only, with proprietary data they can use for their commercial ventures. Completely unacceptable to us, and we're going to push back strongly against it. A real non-profit which isn't simply part of a business model enriching the founders and their associates should be responsible for it... and a T-Mobile MVNO which pretends to be a charity while scamming people isn't one of those.

Again you and your toxic attitude are attacking us.

I won't comment more: you are well known for your paranoia and for attacking many projects, including ours. And as I've told you before when you threatened to sue us for such and such unrealistic reasons (these are endless and non-constructive discussions), we just prefer to stay away from you.

Regarding e Foundation and your defamatory claims, the open source community and people will judge for themselves.

@mar-v-in

mar-v-in commented Mar 25, 2024

Off-Topic microG / GrapheneOS

You must be aware you were leaking data to apps for years due to not implementing proxy AppOps enforcement.

I'm a bit surprised by you pulling out this issue that was fixed 4 years ago and only affected apps which did not target SDK 23 or higher (which at the time was already a requirement for apps uploaded to Play Store). Your phrasing made me think there might be a new issue, but if this is what you were referring to, all good.

The approach currently used in GrapheneOS properly enforces the permission model and attributes power usage / permission usage to the app since the app is sending the requests to the OS itself. We previously used the AppOps proxy / attribution APIs and found major issues with those along with it being overly complex and more prone to bugs.

Yes, as a privileged system component (being the OS itself) this is obviously easier. As microG is targeting to also run unprivileged / user-installed, this is unfortunately not that easy for us. I would see some benefits for splitting microG into multiple smaller pieces, which would make it easier and more reasonable to have some of them run privileged.

I think it would be better to talk via direct message if you actually want to work anything out even if that simply means coming to a mutual understanding about a few things.

I wrote you on Matrix hoping to resolve this conflict outside this venue.

How are you suggesting someone discovers it beyond physical access?

Examples could be:

  • Software leaking MAC addresses to unauthorized third-party software on the same device
  • Software on nearby devices leaking MAC addresses to third-party software running on adjacent devices

I'm just being realistic here: today it's pretty easy to get access to this identifier if you can run software on someone's device, and that itself is also pretty easy, since people these days use apps and websites running code with significant privileges on the systems where it executes.

A drop-in replacement would require access to the current data.
There's no way to simply continue the current service as is without access to the current data.

I think he was referring to an API drop-in, a server that implements the same API as MLS, so that systems that currently use MLS can switch easily. Even if the data in such a service is lower quality and/or it's only used for cell tower data and the IP-based fallback, it's probably better than nothing.

And of course it is possible to spin up the Ichnaea server software on a new machine without any of the wifi data and hope for users to contribute to it. It might not be the best approach technologically or when we look at our users' privacy, but it might be the best approach when we also consider the privacy of the people whose data is collected, given the legal situation: there seems to be consensus among privacy lawyers that Wi-Fi MAC addresses are to be considered PII under the GDPR, which likely makes sharing them as open data illegal.
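To illustrate what an API drop-in involves: the Ichnaea v1/geolocate endpoint accepts a JSON body of observed cell towers and/or Wi-Fi access points and returns a position estimate. A minimal sketch of building such a request with only the standard library follows; the host, API key, and cell tower values here are placeholders for illustration, not a real service.

```python
import json
import urllib.request

def build_geolocate_request(base_url, api_key, cell_towers=None, wifi_aps=None):
    """Build a urllib Request for an MLS-compatible v1/geolocate endpoint.

    The request shape follows the Ichnaea geolocate API; the host and
    key are whatever replacement service a client is pointed at.
    """
    body = {}
    if cell_towers:
        body["cellTowers"] = cell_towers
    if wifi_aps:
        body["wifiAccessPoints"] = wifi_aps
    data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/geolocate?key={api_key}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example payload: one observed cell tower (all values illustrative).
req = build_geolocate_request(
    "https://example.invalid",  # hypothetical replacement host
    "test-key",
    cell_towers=[{
        "radioType": "lte",
        "mobileCountryCode": 262,
        "mobileNetworkCode": 1,
        "locationAreaCode": 5126,
        "cellId": 26511,
    }],
)
print(req.get_full_url())
# → https://example.invalid/v1/geolocate?key=test-key
```

A client that speaks this format can switch providers by changing only the base URL and key, which is what makes a drop-in replacement attractive even with a smaller dataset behind it.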

@alexcottner
Collaborator

Hi everyone,

This is a polite reminder that this is our professional working environment as much as it is our issue tracker. I'm asking everyone to keep comments focused on this issue and to refrain from discussing projects outside of MLS or this repo. I also encourage you to review our Community Participation Guidelines.

Thank you.

@thestinger

@alexcottner Please ban the people engaging in harassment targeting me across platforms. I can provide extensive proof of the harassment, bullying and fabricated stories from these people. I can show their regular chats with people who have openly done things such as telling me to kill myself.

@leag1234

Again you and your toxic attitude are attacking us.

I have archives showing you falsely claiming GrapheneOS isn't open source and engaging in harassment targeting me including trying to portray me as crazy.

I won't comment further: you are well known for your paranoia and for attacking many projects, including ours. And as I've told you before when you threatened to sue us for various unrealistic reasons (leading to endless, non-constructive discussions), we simply prefer to stay away from you.

This is an example of you engaging in name calling and pushing fabricated claims about me as you have done many times. You have targeted other projects including DivestOS in the same way, and the DivestOS developer is being targeted with harassment too.

Regarding e Foundation and your defamatory claims, the open source community and the public will judge for themselves.

The open source community at large recognizes that you're grifters. Engaging in Kiwi Farms style harassment targeting privacy and security researchers including at both DivestOS and GrapheneOS demonstrates who you are.

@thestinger

@alexcottner It should not be permitted to repeatedly post claims about someone, calling them paranoid, insane, delusional, schizophrenic, etc. People who spread harassment/bullying videos from Kiwi Farms members shouldn't be participating here with the expectation that the targets of their harassment stay silent about it.

@leag1234

leag1234 commented Mar 25, 2024

@leag1234

Again you and your toxic attitude are attacking us.

I have archives showing you falsely claiming GrapheneOS isn't open source and engaging in harassment targeting me including trying to portray me as crazy.

I won't comment further: you are well known for your paranoia and for attacking many projects, including ours. And as I've told you before when you threatened to sue us for various unrealistic reasons (leading to endless, non-constructive discussions), we simply prefer to stay away from you.

This is an example of you engaging in name calling and pushing fabricated claims about me as you have done many times. You have targeted other projects including DivestOS in the same way, and the DivestOS developer is being targeted with harassment too.

Regarding e Foundation and your defamatory claims, the open source community and the public will judge for themselves.

The open source community at large recognizes that you're grifters. Engaging in Kiwi Farms style harassment targeting privacy and security researchers including at both DivestOS and GrapheneOS demonstrates who you are.

As usual...

I would be interested in having a look at your "archives" and would be (not) happy to discuss all this somewhere else, because this thread is about MLS.

@mozilla mozilla locked as too heated and limited conversation to collaborators Mar 25, 2024
@alexcottner
Collaborator

Update - submission of data has now been disabled, completing stage 2 of the retirement process. Clients trying to submit data will see a 403 response with the body shown below.

{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "Forbidden",
        "message": "Forbidden"
      }
    ],
    "code": 403,
    "message": "Forbidden"
  }
}
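For clients that still attempt submissions, one possible way to handle this is to recognize the rejection and disable further uploads rather than retrying. A minimal sketch, assuming the exact 403 body shown above:

```python
import json

def submission_rejected(status_code, body_text):
    """Return True when a geosubmit response indicates the service has
    stopped accepting data (the 403 body MLS now returns). A client
    seeing this should stop queueing uploads instead of retrying."""
    if status_code != 403:
        return False
    try:
        error = json.loads(body_text).get("error", {})
    except ValueError:
        return True  # 403 with a non-JSON body: still treat as rejected
    return error.get("code") == 403 and error.get("message") == "Forbidden"

# The body matches the response documented above.
body = (
    '{"error": {"errors": [{"domain": "global", "reason": "Forbidden",'
    ' "message": "Forbidden"}], "code": 403, "message": "Forbidden"}}'
)
print(submission_rejected(403, body))
# → True
```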

@alexcottner
Collaborator

Will downstream builds of Firefox need to obtain API keys for this new service?

@heftig - We'd like to implement a simple plan that will allow downstream builds of Firefox to self-identify with the classify-client service. Please see this GitHub issue for details.

@alexcottner
Collaborator

Step 3 is now complete. The downloads page has been updated, leaving only the final cell export listed.

@alexcottner
Collaborator

Step 4, disabling of third party API keys, begins today. Requests to MLS will receive a 4xx error response, depending on the endpoint called (e.g. v1/geolocate requests will receive a 404). This shift will happen gradually over the next few days, with the full migration targeted for Friday.
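For clients migrating away, one hedged sketch of coping with this staged behavior: treat any 4xx from a geolocate endpoint as "key disabled or retired" and fall through to the next configured MLS-compatible host. The URLs a client would pass in are placeholders for whatever services it is configured with.

```python
import urllib.error
import urllib.request

def is_retired_response(status_code):
    """True when a status code matches the staged shutdown behavior:
    third-party requests now get a 4xx (e.g. 404 for v1/geolocate)."""
    return 400 <= status_code < 500

def geolocate_with_fallback(urls, payload):
    """POST the same geolocate payload to each URL in turn, moving on
    whenever a host answers with a 4xx. Returns the first successful
    response body, or None if every host rejected the request."""
    for url in urls:
        req = urllib.request.Request(
            url, data=payload, method="POST",
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if is_retired_response(err.code):
                continue  # key disabled / endpoint retired: try fallback
            raise

print(is_retired_response(404), is_retired_response(200))
# → True False
```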
