
Ethical considerations #4

Closed
skullface opened this issue Jun 20, 2020 · 15 comments · Fixed by #11

Comments


skullface commented Jun 20, 2020

In the current climate of mass law-enforcement and government surveillance, alongside mass protest, in the United States, I find the release of this technology disturbing.

I urge you to read the four core principles of the Algorithmic Justice League:

  • Affirmative consent: Everyone should have a real choice in how and whether they interact with AI systems.
  • Meaningful transparency: It is of vital public interest that people are able to understand the processes of creating and deploying AI in a meaningful way, and that we have full understanding of what AI can and cannot do.
  • Continuous oversight and accountability: Politicians and policymakers need to create robust mechanisms that protect people from the harms of AI and related systems both by continuously monitoring and limiting the worst abuses and holding companies and other institutions accountable when harms occur. Everyone, especially those who are most impacted, must have access to redress from AI harms. Moreover, institutions and decision makers that utilize AI technologies must be subject to accountability that goes beyond self-regulation.
  • Actionable critique: We aim to end harmful practices in AI, rather than name and shame. We do this by conducting research and translating what we’ve learned into principles, best practices and recommendations that we use as the basis for our advocacy, education and awareness-building efforts. We are focused on shifting industry practices among those creating and commercializing today’s systems.

How does this work interact with those principles? We can see what this technology does, but what is its intended use? What are the worst-case scenarios for the use of this openly-available technology around the world? How will further technology built on top of this work be used?

PULSE does not contain a publicly available license to determine what type of use is acceptable or not. I urge you to consider licensing this derivative work under stipulations that require any end users to Do No Harm, like the Hippocratic License.

(In case there is any ambiguity, I am speaking as an individual in the technology industry, not on behalf of or with any relation to my employer.)


maferVV commented Jun 20, 2020

As someone who actively participates in local protests and human-rights discourse online, I'll try to explain why I disagree with this in as good faith as possible.
First, I'll assume this software has the potential to work correctly, even though as of today it doesn't seem to identify faces correctly. For example:
[example screenshots of incorrect results]
I agree that institutions or organizations using this software to depixelate protesters' faces is dangerous and can result in their well-being being harmed. It is crucial to conceal the identity of protesters who might be targeted by the police and other fascist-aligned organizations. However, depixelation can also work to identify anti-protesters and police behaving violently, and even to go back to white supremacist protesters and see if we can identify their faces:
https://www.phillyvoice.com/comcast-fires-employee-proud-boys-alt-right-philadelphia-rally/
As you can read in the article, exposing fascists to their bosses makes them less likely to openly express Nazi apologia and stops that discourse from being normalized. This is done by watching footage of Nazi rallies and cross-referencing faces with social media profiles. For online activists who dedicate their free time to exposing Nazis to their employers, this app is just another tool in their arsenal.
This application is just a tool, and as such its impact depends on the moral ends of the person using it. It's not inherently unethical.


voidn commented Jun 21, 2020

It's impossible to add detail back to an image that's been treated like the example inputs. For any image this outputs to be useful to law enforcement to any degree, it would need to provide specific identifying features in its result that aren't already available in the pixelated image, so that another facial recognition database could match them.

Simply put, it produces an averaged approximation of identifiable features: skin tone, and facial ratios like eye width and mouth/nose position. Even when it identifies these features correctly, they're often averaged to a degree that makes the result borderline useless to any recognition software it could be fed into.

Facial recognition software looks for specifics in facial features to identify a particular person or group of people, whereas this makes drastic generalizations just to produce a recognizable face at all. They'd be better off using the pixelated image than the resulting average.

Now, if it were applied to a reasonably clear video or stream of facial captures, it might stand a chance by comparing the different frames for reference, but that just isn't the case (and likely won't be, because of how difficult it is to get useful data from high-movement video that's often captured at a resolution where faces aren't recognizable anyway).

@nk-fouque

I very much agree with what was said by @voidn.
However, I still fear that people will try to use it that way, and law enforcement might be tempted to use it as evidence.
That is why I think there should be a very explicit note explaining what the AI can and can't do.

@VilterPD

I'm quite sure this is not usable for a dystopian future, but it is fun. No, you cannot use it for law enforcement or suppression, as there are just too many possibilities. To actually match a face, you would have to be extremely lucky, and most likely be wrong. This is maybe a way to show how someone might look, but never definitively. That is simply impossible; you just cannot "enhance" as in the movies.

@Moredread

Just because it isn't good enough for evil purposes yet doesn't mean it doesn't have the potential to cause harm in the future. We have to start the discourse now, while we still have the chance to change things.

Releasing potentially harmful software without at least critically commenting on the issue is, IMO, very problematic and normalizes the view that software is harmless, or at least ethically neutral.

@Whanos, asking for a discussion of the ethical implications of your code is not censorship. I agree, though, that just hiding problematic technology doesn't help, and showing that it is within the reach of hobbyists is in itself an important statement. But just posting it as a "cool" project is too dangerous.


woctezuma commented Jun 22, 2020

This tool cannot be used to identify people from low-quality images.

First, there is no one-to-one mapping from a low-quality face image to a high-quality face image, as shown in Figure 3 of the article.

Second, it becomes evident that this tool is merely a toy once you have played with the Colab notebook.

For instance, if I downsample this 1024x1024 image (the ground truth), on which the face was properly aligned:
[ground truth image]

And then feed this 16x16 input to the algorithm:
[16x16 input image]

Then I receive this 1024x1024 output, which has nothing in common with the ground truth:
[1024x1024 output image]
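
For anyone who wants to repeat this kind of sanity check, here is a minimal sketch of the downsampling step, assuming Pillow is installed and you have an aligned 1024x1024 face crop saved as face_1024.png (the filenames are placeholders, not files shipped with this repository):

```python
# Minimal sketch of the experiment described above: downsample an aligned
# 1024x1024 face crop to a 16x16 "pixelated" input. Filenames are hypothetical.
from PIL import Image

ground_truth = Image.open("face_1024.png").convert("RGB")
assert ground_truth.size == (1024, 1024), "expects an aligned 1024x1024 crop"

# Heavy downsampling: this is the only information the depixelizer ever sees.
low_res = ground_truth.resize((16, 16), Image.BICUBIC)
low_res.save("face_16.png")

# face_16.png is what gets fed to the Colab notebook. The 1024x1024 face it
# returns is a synthetic face whose downscaled version merely resembles
# face_16.png; it is not a reconstruction of ground_truth.
```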

Finally, have a look at this GitHub issue and this Twitter thread. The model checkpoint seems to generate only faces of seemingly "white" people. In its current form, this tool is just a toy, and a toy with very limited capabilities.

@tayhengee

Just because it isn't good enough for evil purposes yet doesn't mean it doesn't have the potential to cause harm in the future. We have to start the discourse now, while we still have the chance to change things.

Releasing potentially harmful software without at least critically commenting on the issue is, IMO, very problematic and normalizes the view that software is harmless, or at least ethically neutral.

@Whanos, asking for a discussion of the ethical implications of your code is not censorship. I agree, though, that just hiding problematic technology doesn't help, and showing that it is within the reach of hobbyists is in itself an important statement. But just posting it as a "cool" project is too dangerous.

First of all, there's a branch of ML research that studies bias and its consequences. You can't expect someone who invents a depixelizer or a GAN to always look into the problem of a biased dataset; their contribution is the tool and the algorithm. People who use the algorithm should take on the ethical and other practical considerations. I just want to highlight that there's nothing wrong with training on that dataset; the choice comes down to the authors' own preferences and the availability of the data. We should be grateful when people share knowledge! The situation now is that the contributors are giving away free pizza, and all the people who got the pizza are complaining that it isn't vegan.

Muhammad ibn Musa al-Khwarizmi, the father of algebra, couldn't have known that algebra would contribute to the problems we are facing right now, given that deep learning models contain tons of algebra. We can't force Muhammad ibn Musa al-Khwarizmi to think through the possible implications in ALL ASPECTS, such as the economy, social well-being, etc. Those are not his areas of expertise; you can't force an artist to do the math, and the same goes for the inventors of AI algorithms, who just explore the tech! Now you know we can actually restore a low-res photo to a decent-looking picture, thanks to their hard work. If you can create an unbiased model, that would be a great contribution, or perhaps a guideline for building one.

Just raising some ideas, and hoping people calm down and think critically. Nothing personal!


voidn commented Jun 22, 2020

@woctezuma To expand on this, it's also worth mentioning that AI systems like this often come to incorrect solutions because of either a loosely defined notion of "success" or simply using unexpected methods. Often, a system will come up with a result that's "good enough", or altogether cheat and shortcut its way to doing the least amount of work possible for a minimum viable product (surprisingly/unsurprisingly mimicking human behavior) in order to register a "success". At least this gives us some really fun results to laugh at.

In this case, it's difficult to pin down more specifically what a "good" or "real" success is, because of what we're doing. For each unique image input, there are thousands of possible, and probably equally plausible, results that could be generated, like you mentioned. But because we're limited to only our pixelated input, even a more robustly trained AI would be limited to saying "any of these couple thousand generated images could be a match", because they're all equally likely to produce the downscaled result.

...our technique can create a set of images, each of which is visually convincing, yet look different from each other, where (without ground truth) any of the images could plausibly have been the source of the low-resolution input.

The information required for details is often not present in the LR image and must be ‘imagined’ in.
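
To make that many-to-one point concrete, here is a small self-contained toy (numpy only, and emphatically not the PULSE code) that constructs several different high-resolution images which all average-pool down to the exact same 16x16 input:

```python
# Toy demonstration that many distinct HR images share one LR image.
# Pixelation is modelled as simple average pooling over 64x64 blocks.
import numpy as np

rng = np.random.default_rng(0)
SCALE = 64                        # 16 * 64 = 1024
lr = rng.random((16, 16))         # stand-in for a 16x16 pixelated (grayscale) face

def make_hr_candidate(lr, scale):
    """Upsample lr and add block-wise zero-mean detail, so that average
    pooling the result gives back lr exactly (up to floating-point error)."""
    hr = np.kron(lr, np.ones((scale, scale)))        # nearest-neighbour upsample
    detail = rng.normal(0.0, 0.05, hr.shape)         # arbitrary high-frequency detail
    block_means = detail.reshape(16, scale, 16, scale).mean(axis=(1, 3))
    detail -= np.kron(block_means, np.ones((scale, scale)))  # zero mean per block
    return hr + detail

def pixelate(hr, scale):
    """Average pooling: a simple model of mosaic pixelation."""
    return hr.reshape(16, scale, 16, scale).mean(axis=(1, 3))

a = make_hr_candidate(lr, SCALE)
b = make_hr_candidate(lr, SCALE)

print(np.abs(a - b).max())                    # the HR candidates differ substantially...
print(np.abs(pixelate(a, SCALE) - lr).max())  # ...yet both match the LR input
print(np.abs(pixelate(b, SCALE) - lr).max())  # (both differences are ~1e-16)
```

Real generated faces obviously have structure instead of noise, but the same logic applies: the 16x16 input simply cannot tell the candidates apart.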

A lot of the hubbub here seems to be caused by people not understanding what this system actually does, and not understanding the limitations of not only this, but related tech. If pixelation is doubted to be effective, there are other methods that are more bulletproof.
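
For what it's worth, here is a hedged sketch of one of those more bulletproof options: overwriting the face region with a solid fill, which, unlike a mosaic, retains nothing of the original pixel values (the filename and box coordinates below are made-up placeholders):

```python
# Sketch of redaction by solid fill instead of pixelation.
# Filename and coordinates are hypothetical examples.
from PIL import Image, ImageDraw

img = Image.open("protest_photo.jpg").convert("RGB")
draw = ImageDraw.Draw(img)

face_box = (300, 120, 420, 260)           # (left, top, right, bottom) of the face
draw.rectangle(face_box, fill=(0, 0, 0))  # solid block: no averaged colours survive

img.save("protest_photo_redacted.jpg")
```

A pixelated face still carries the block-averaged colours of the original; a solid fill carries nothing, so there is nothing for any "depixelizer" to work from.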


@nk-fouque

A lot of the hubbub here seems to be caused by people not understanding what this system actually does, and not understanding the limitations of not only this, but related tech. If pixelation is doubted to be effective, there are other methods that are more bulletproof.

My personal concern comes precisely from the fact that people will not understand what this system does and its limitations, and therefore they will try to use it in incorrect ways (say, thinking they can "enhance" pixelated pictures), which will lead to real consequences.

I know everything is pretty clear from the article, but I still think there should be a prominent disclaimer somewhere you can't miss, because people are lazy and will not bother to read the article if they can just read the title as "this is called a depixelizer, so I can depixelize a pixelated image".

@benjaffe

I'll chime in just to say, regardless of your opinion on the ethics of writing code based on biased data sets (not stating mine here), this topic should be mentioned specifically in the Readme.

When a repo like this gets attention, people will try to use it in unintended ways. At the very least, the limitations of something like this should be stated upfront and clearly, so people don't either (1) apply it blindly as a free solution to a complex problem, or (2) demonize contributors for toying around with code in a field where discrimination and harmful applications are so rampant.

@victorca25

A lot of the hubbub here seems to be caused by people not understanding what this system actually does, and not understanding the limitations of not only this, but related tech. If pixelation is doubted to be effective, there are other methods that are more bulletproof.

That is correct. The problem is that most people have opinions on things they don't understand.


ghost commented Jun 23, 2020

I'll just say this: whether or not this software is capable of actual harm, the README provides an example that, quite frankly, makes it look like the software can accurately de-pixelize a photo.

In this issue, I can clearly see examples of it failing badly. Those and other trials should be in the README as well, to deter people from actually using it for bad purposes and, ultimately, to give people peace of mind that accurate de-pixelizers are years, perhaps decades, away from being usable.


woctezuma commented Jun 23, 2020

For information, this repository merely contains a Colab interface for the code available in the PULSE repository. There is no need to write a disclaimer from scratch; the simplest solution would be to copy the disclaimer from the original PULSE repository:

NOTE

We have noticed a lot of concern that PULSE will be used to identify individuals whose faces have been blurred out. We want to emphasize that this is impossible - PULSE makes imaginary faces of people who do not exist, which should not be confused for real people. It will not help identify or reconstruct the original image.

We also want to address concerns of bias in PULSE. We have now included a new section in the paper and an accompanying model card directly addressing this bias.

Moreover, assuming people who arrive at this repository from social media want to learn about the algorithm, there is this IEEE Tech Talk, which covers the matter and should help them understand the situation a bit better. People might not read it, but at least they would have quality educational material.


ghost commented Jun 23, 2020

@woctezuma

I've gone ahead and removed my custom disclaimer in favor of PULSE's; see commits b663f9d and 40817cb.

@krainboltgreene

I'm not at all shocked that the conclusion of this thread was "we're so ineffective that there's no way the police would use this", as if the police have never hired psychics, touted DNA forensics, or used fingerprint analysis.
