
To make an AI chat bot behave, Kenyan workers say they were ‘mentally scarred’ by graphic text

ChatGPT has impressed millions with its ability to string together coherent, sometimes even accurate, sentences, blurbs, scripts, and more. To write like a human, the AI bot was trained with machine learning algorithms on a massive catalogue of material scoured from the web. But the development of ChatGPT wasn’t all automated: Human labour was required to stop ChatGPT falling into the same trap as its predecessor GPT-3, which was capable of making inappropriate, sometimes even racist, comments.

According to a recent investigation by Time, ChatGPT creator OpenAI outsourced this unsavory data processing task to Kenyan workers, many of whom reportedly earn less than $2 an hour.

ChatGPT is trained on datasets so immense that they can’t be closely curated by hand, and the same is true of image generation tools such as DALL-E (also operated by OpenAI), Stable Diffusion, and Midjourney. Without that training ChatGPT wouldn’t work at all, but not all of the text on the internet leads to the kind of comments you want your AI bot making.

The outsourced work involved labelling examples of the kind of offensive text that might show up in the training material. A collection of these labelled text samples was then fed into another AI, training it to notice and remove similar offensive text from ChatGPT’s responses to users.
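To make that second step concrete, here is a deliberately tiny sketch in Python, assuming scikit-learn: a classifier is fitted on human-labelled snippets and then used to flag similar text. Every example string, the model choice, and the 0.5 threshold are hypothetical illustrations; OpenAI’s production system is far larger and has not been published.

```python
# Illustrative sketch only: a toy "harmful text" classifier trained on
# hand-labelled snippets, standing in for the much larger system the
# article describes. All example strings, the model choice, and the
# 0.5 threshold are hypothetical; OpenAI's real pipeline is not public.
from typing import Optional

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled snippets: 1 = flagged as harmful by a human reviewer, 0 = safe.
texts = [
    "a graphic description of violence",
    "step by step instructions for self harm",
    "a recipe for banana bread",
    "a review of a new laptop",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def filter_response(candidate: str, threshold: float = 0.5) -> Optional[str]:
    """Return the candidate response, or None if the classifier flags it."""
    p_harmful = classifier.predict_proba([candidate])[0][1]
    return None if p_harmful >= threshold else candidate

# A benign response passes through; text resembling the flagged
# examples would be suppressed.
print(filter_response("a review of a new phone"))
```

The sketch matters only for its shape, not its scale: human labels go in, and an automated filter comes out.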

Training the AI to avoid inappropriate language and themes keeps ChatGPT’s output cleaner and makes the bot harder to misuse for producing disturbing content. But in this effort to improve the bot, OpenAI exposed low-paid workers in Kenya to some of the worst material on the web.

“To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021,” Time reports. “Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”

The Time report says that one worker suffered from recurring visions as a result of the content they encountered on the job. All four of the workers Time spoke to said they were “mentally scarred by the work.”

There were reportedly around 36 workers employed to carry out the task on OpenAI’s behalf, each expected to “read and label between 150 and 250 passages of text per nine-hour shift.”

The company responsible for the outsourcing work is called Sama, a San Francisco-based firm with workers in Kenya, Uganda, and India. Time reports that OpenAI signed three contracts for the labelling work in late 2021, worth around $200,000 in total.

Sama says its employees could attend individual and group sessions with professional mental health therapists at any time. However, the workers Time spoke to say only group sessions were available to them.

“Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” an OpenAI spokesperson told Time regarding the outsourced data processing work. “Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”

According to Time, the nature of Sama’s work for OpenAI took a different turn in February 2022 when it began collecting “sexual and violent images,” some of which would be deemed illegal in the US. OpenAI said that labelling harmful images was “a necessary step” in making its tools safe to use, but that it never intended for the most extreme category of images to be collected by Sama and that this was a miscommunication.

Sama ultimately terminated its contract with OpenAI early. The report suggests that the Sama team raised concerns over the content of the images, which eventually led to the collapse of the two companies’ deal. In the aftermath, some Sama workers were moved to lower-paying contracts, while others had their positions terminated entirely. The full Time report goes into much greater detail on OpenAI’s relationship with Sama.

OpenAI is currently valued in the billions of dollars. Microsoft is reportedly looking to sink more money into the AI firm, despite its own recent mass layoffs, and has announced plans to integrate OpenAI technologies into its services.

Moderation work has long involved some degree of human suffering: A 2019 report on the mental wellbeing of employees on moderation teams contracted by Facebook described long-lasting trauma symptoms as a result of the work.

OpenAI’s labelling needs are also a facet of a larger ethical crisis growing at the center of AI research: the problem of what to use as training material. Machines can’t learn to behave like humans without human-made material, but not everyone wants their work to be fed to an algorithm, and last year artists started labelling their work “no AI” in an attempt to ward off companies gathering training data for image generators. Now here’s the reverse problem: material that bot makers don’t want influencing their AI. Again, the task of rearing respectful AI bots comes down to people, in this case workers paid to read the web’s most disturbing content.
