---
blogpost: true
date: May 21st, 2024
author: Jessica Woodgate
category: Write Up
tags: computer vision, workers' rights, automation
---

# Data Ethics Club: [Amazon’s Just Walk Out technology relies on hundreds of workers in India watching you shop](https://www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4)

```{admonition} What's this?
This is a summary of Wednesday 8th May’s Data Ethics Club discussion, where we spoke and wrote about the claims that [Amazon’s Just Walk Out technology relies on hundreds of workers in India watching you shop](https://www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4).
The article summary was written by Vanessa Hanschke and edited by Jessica Woodgate. The discussion summary was written by Jessica Woodgate, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club".
Huw Day, Vanessa Hanschke, Amy Joint, Nina Di Cara and Natalie Thurlby helped with the final edit.
```

## Article Summary

Amazon’s Just Walk Out stores have been touted as providing a seamless shopping experience in which customers pay automatically by placing items in their physical shopping basket. The basket’s contents are analysed using computer vision, so customers avoid interacting with a cashier entirely. However, according to [a report in The Information](https://www.theinformation.com/articles/how-amazons-big-bet-on-just-walk-out-stumbled), an unnamed source claims that in 2022 as many as 700 out of every 1,000 sales had to be reviewed by Amazon’s team in India. In response, Dilip Kumar, the vice president of AWS Applications, writes that [“the erroneous reports that Just Walk Out technology relies on human reviewers watching from afar is untrue. Most AI systems, including the underlying ML models behind these technologies, are continuously improved by annotating synthetic (AI generated) and real shopping data. Our associates are responsible for this labelling and annotation step”](https://www.aboutamazon.com/news/retail/amazon-just-walk-out-dash-cart-grocery-shopping-checkout-stores). The story broke alongside Amazon’s announcement that it is [replacing Just Walk Out with Dash Carts](https://www.theinformation.com/articles/amazons-grocery-stores-to-drop-just-walk-out-checkout-tech?utm_campaign=Editorial&utm_content=Article&utm_medium=organic_social&utm_source=twitter) (essentially a smart shopping cart with a display, scanner, and scales).

Just Walk Out is currently implemented in 27 of 44 Amazon Fresh stores and in some Whole Foods stores. Other startups are trialling similar technology in Aldi stores.

If you fancy some extra reading, [this article by The Verge](https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots) goes into more detail about the enormous amount of human labour behind AI development.

## Discussion Summary

### What do you think about Amazon selling human reviewers as computer vision?

Passing off human reviewers as computer vision leans on, and encourages, public misconceptions about the limits of computer vision and of AI generally. The Information claimed that [sometimes it can take hours for customers to receive their receipts](https://www.theinformation.com/articles/how-amazons-big-bet-on-just-walk-out-stumbled), lending support to the suggestion that human assessment was required. Marketing tools as far more capable than they actually are raises transparency issues. Just Walk Out is not an isolated incident; there are many examples of the power of “AI” coming down to copious amounts of low-paid human labour. In particular, companies commonly hire people to perform lengthy and tedious annotation work, often in low-paid and insecure positions. [“Behind even the most impressive AI system are people – huge numbers of people labelling data to train it and clarifying data when it gets confused... Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behaviour, and even less is known about the people doing the shaping... These AI jobs are... work that people want to automate, and often think is already automated, yet still requires a human stand-in.”](https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots) The underbelly of the industry consists of plugging the gaps in AI capabilities with cheap labour dispersed across the world. We would not, therefore, be surprised if the claims in the article were true.
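
To make the idea of “plugging the gaps” concrete, here is a minimal sketch of how a human-in-the-loop pipeline of this kind might route low-confidence computer-vision predictions to reviewers. Everything here (the `Prediction` type, the 0.9 threshold, the `ask_human_reviewer` stand-in) is an assumption for illustration, not Amazon’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    item: str
    confidence: float  # model confidence in [0, 1]

def ask_human_reviewer(prediction: Prediction) -> str:
    # Stand-in for the outsourced review/annotation step discussed above.
    print(f"Escalating '{prediction.item}' (confidence {prediction.confidence:.2f})")
    return prediction.item

def resolve(prediction: Prediction, threshold: float = 0.9) -> str:
    """Return the final label, escalating uncertain cases to a human."""
    if prediction.confidence >= threshold:
        return prediction.item  # trust the model
    return ask_human_reviewer(prediction)  # fall back to manual review

# With a strict threshold, a large share of transactions could end up
# with human reviewers, consistent with the "700 out of 1,000" claim.
print(resolve(Prediction("oat milk", 0.62)))
```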

The employment of data labourers raises concerns about workers' rights, as well as privacy and consent implications for customers. On the workers' side, even though [annotation is creating many jobs, those jobs are often low-paid and insecure](https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots). Companies should be transparent about their employment of annotators, so that we can ensure workers are being treated in line with adequate labour laws. In India, for example, there [aren’t currently any specific laws regulating AI](https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives) and [labour regulations are not always enforced](https://www.india-briefing.com/news/labor-laws-india-guide-federal-state-industry-specific-regulations-18133.html/). For customers, the right to make informed decisions is withdrawn when they are misled by deceptive marketing. A step towards transparency would be regulation requiring companies to disclose the amount of human labour that goes into a technology. This does not seem an unreasonable request, and it would pay dividends in improving trust and public awareness of the reality of AI capabilities.

Public education concerning AI is not aided by mainstream media; whilst this article generated some noise around Amazon, it was perhaps lost in the general news cycle. Mainstream coverage of AI tends to overhype the technology, on the grounds that more detail is [“deemed too complex for a mainstream audience”; this cultivates alarmism, and a dichotomy where AI research is either bad and uncontrollable or responsible, vague, and non-committal](https://theconversation.com/news-coverage-of-artificial-intelligence-reflects-business-and-government-hype-not-critical-voices-203633).

### If Amazon’s response to the claim is true, would that change the way you think of the technology? Why or why not?

Generally, our reaction to Amazon’s response is one of suspicion; [it would not be the first instance of the company lying](https://www.reuters.com/technology/five-us-lawmakers-accuse-amazon-possibly-lying-congress-following-reuters-report-2021-10-18/). There is precedent for companies [AI washing](https://fortune.com/2024/03/18/ai-washing-sec-charges-companies-false-misleading-statments/) and [exaggerating the abilities of their technology](https://www.wsj.com/articles/ai-startup-boom-raises-questions-of-exaggerated-tech-savvy-11565775004). [“Move fast and break things” is still a prevalent attitude](https://www.businessinsider.com/meta-mark-zuckerberg-new-values-move-fast-and-break-things-2022-2), as there is more money to be made by releasing technology early and dealing with the consequences as they arise. Making mistakes is less consequential if you are rich, summarised as [“fines are nothing but a price tag for the wealthy”](https://medium.com/bouncin-and-behavin-blogs/fines-are-nothing-but-a-price-tag-for-the-wealthy-74cb3754de0c). Financial motivations thus make it plausible that big companies would deploy underdeveloped technologies.

Regardless of whether or not Amazon’s claim (that humans play a minor role in the system) is true, we see various ethical issues with the collection and usage of data. Privacy and security issues arise around anonymisation, the length of data storage, and data transfer between jurisdictions. Amazon claim that [“Just Walk Out technology… doesn’t use or collect any biometric information… simply links a customer with their payment instrument. When shoppers enter the store, the technology assigns them a temporary numeric code, which serves as the shopper’s unique digital signature for that shopping trip. The system preserves the code throughout the shopper’s time in the store. When they exit, the code disappears, and if they come back, they get a new code.”](https://www.aboutamazon.com/news/retail/how-does-amazon-just-walk-out-work) Whilst this tells us that biometric data isn’t collected, it says nothing about how the data which is collected is handled. A lot of information can be inferred from the products that people purchase, some of which is very personal, such as [finding out people are pregnant](https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/?sh=7fa0a2996668).
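
For intuition, here is a minimal sketch of the per-visit token scheme Amazon describes: a fresh random code is issued on entry, linked only to a payment instrument, and discarded on exit. All names here (`enter_store`, `active_sessions`, and so on) are illustrative assumptions, not Amazon’s implementation.

```python
import secrets

active_sessions: dict[str, str] = {}  # temporary code -> payment instrument

def enter_store(payment_token: str) -> str:
    code = secrets.token_hex(8)  # a fresh random code every visit
    active_sessions[code] = payment_token
    return code

def charge(payment_token: str, amount: float) -> None:
    print(f"Charging {amount:.2f} to {payment_token}")

def exit_store(code: str, total: float) -> None:
    payment_token = active_sessions.pop(code)  # the code disappears on exit
    charge(payment_token, total)

code = enter_store("card-ending-4242")
exit_store(code, 13.37)  # a returning shopper would get a new, unrelated code
```

Note that even under this scheme, the purchases made during a trip are still tied to a payment instrument, which is what allows the kinds of inference described above.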

Other ethical issues arise with the development of the system, including the data used for training and the validation procedure. Amazon state that Just Walk Out was [trained using synthetic data](https://www.aboutamazon.com/news/retail/how-does-amazon-just-walk-out-work). Synthetic data [aims to preserve the statistical properties of the original dataset whilst preventing privacy attacks; however, it has been found that it is still possible to deanonymise synthetic data](https://www.usenix.org/conference/usenixsecurity22/presentation/stadler).
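
As a toy illustration of the synthetic-data idea: fit the statistics of some “real” records, then sample new records from the fitted model. Real generators are far more sophisticated; this sketch (with made-up shopping data) only shows the principle, and, as the linked paper argues, such data can still leak information about individuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "real" shopping data: basket value and number of items per visit.
real = rng.multivariate_normal(mean=[25.0, 8.0],
                               cov=[[16.0, 5.0], [5.0, 4.0]],
                               size=1000)

# Fit the statistics of the real data...
mean_hat = real.mean(axis=0)
cov_hat = np.cov(real, rowvar=False)

# ...and sample synthetic records from the fitted distribution.
synthetic = rng.multivariate_normal(mean_hat, cov_hat, size=1000)

print("real mean     :", real.mean(axis=0).round(2))
print("synthetic mean:", synthetic.mean(axis=0).round(2))
```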

We wondered whether our reaction would change if stores implementing Just Walk Out had signage explicitly informing customers how data is handled and how the system operates (e.g. you are being watched; the data will be stored for this amount of time), or if impact assessments were performed. We were not sure that being properly informed about company practices would make us more willing to support them (which is probably why companies are not transparent in the first place).

### What do you think of the various types of automation in grocery stores? Are they convenient, desirable, or worse for the shopping experience? Do you make use of them?

Automation stemming from AI hype can lead to technology being deployed before it is ready, resulting in more mistakes and less convenience. The fascination with flashy new technology sometimes detracts from simple and inexpensive, but boring, solutions. We wondered how reliable systems like Just Walk Out are, including how easy it is for people to dispute mistakes in their bills. When contemplating a new technology, we should ask clearly why we need it and what metric we will use to assess it. Involving humans and customers more heavily in the design and rollout of technology would benefit its alignment with people's actual needs. However, this can mean that pipelines take longer, potentially confining more ethical processes to smaller and more socially responsible companies.

In the context of grocery stores, whilst automation (e.g. self-checkout machines) can be more convenient for some people, it is not desirable for everyone. Having social contact in stores means a lot to some people; [humans desire connection, yet the drive for efficiency and automation seems to be increasing isolation](https://theconversation.com/how-the-digitalisation-of-everything-is-making-us-more-lonely-90870). There should be a balance between efficiency and catering for everyone’s needs in an accessible way. In some stores, this has led to a [scaling back of self-checkout machines](https://www.cbc.ca/news/business/some-retailers-scaling-back-self-checkouts-1.7034047), or the [introduction of “relaxed” checkout lanes](https://www.cbc.ca/news/canada/edmonton/grocery-slow-check-out-lane-1.6724938) specifically designed for people with [special needs or who want to take their time and chat with the cashier](https://www.cbc.ca/news/business/grocery-checkout-supermarket-shopping-loblaws-superstore-metro-sobeys-dementia-autism-social-anxiety-1.3954847). Other examples of improving accessibility include having a [“quiet hour”](https://www.forbes.com/sites/katehardcastle/2021/10/28/quiet-hours-are-an-important-first-step-to-retail-inclusivity/?sh=33ec3dcd7130) where lights are dimmed and checkout noise is lowered. This can make shopping much easier for autistic people, for example, by fostering a calmer experience. The more we turn to automation in previously social settings, the harder it will be to meet the needs of different groups.

On the other hand, we did consider some use cases where automation can be beneficial. People who receive benefits or food bank support may prefer being processed automatically, as they would not have to publicly disclose that they are claiming benefits. In such instances, automation could be privacy-enhancing.

### Bonus question: What change would you like to see on the basis of this piece? Who has the power to make that change?

Primarily, we would like to see more transparency from companies developing and deploying AI. In pursuit of improved transparency, the [Data Workers Inquiry](https://www.youtube.com/watch?v=tAMqrXlEPDI) at the [Weizenbaum Institute](https://www.weizenbaum-institut.de/en) is talking to workers involved in AI moderation and data labour to improve understanding of their perspectives. Making workers [visible is the first step, followed by pressure on politicians and companies](https://www.weizenbaum-institut.de/en/news/detail/datenarbeiterinnen-die-arbeitsbedingungen-und-bedeutung-der-menschen-hinter-ki/).

## Attendees
- Huw Day, Data Scientist, Jean Golding Institute, [@disco_huw](https://twitter.com/disco_huw)
- Paul Matthews, Lecturer, UWE Bristol
- [Kamilla Wells](https://www.linkedin.com/in/kamilla-wells/), Citizen Developer, Australian Public Service, Brisbane
- Euan Bennet, Lecturer, University of Glasgow, [@DrEuanBennet](https://twitter.com/DrEuanBennet)
- Melanie Stefan, Computational Neuro person, Medical School Berlin
