Digital Mental Health Company Draws Ire for Not Informing Users About GPT-3 Technology

8 February 2023

La Cartita

Image by Kohji Asakawa from Pixabay

Online outrage bubbled up after the company disclosed it had used GPT-3, the large language model that powers many chatbot applications, to field mental health messages. Additionally, much of the academic world viewed the trial on 4,000 people, i.e. users of Koko's peer-to-peer counseling service, as experimentation performed without consent.

Experimental Setup

First, a user would submit a message of any kind. A co-pilot, a plug-in that sits alongside the chat window and monitors the conversation, would then propose a relevant response to the peer supporter. If the suggested output looked appropriate, the plugin user would fire it off to a recipient who was unaware that AI had drafted it, as in the sketch below.
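For a concrete picture of this human-in-the-loop flow, here is a minimal sketch in Python. The function names and the `draft_reply` stub are illustrative assumptions, not Koko's actual code; the stub stands in for whatever GPT-3 call the company used.

```python
from typing import Callable, Optional


def draft_reply(user_message: str) -> str:
    """Placeholder for a GPT-3 completion that drafts a supportive response.

    This is an assumption for illustration; Koko's real pipeline is not public.
    """
    return f"That sounds really hard. Thank you for sharing that with me."


def copilot_turn(user_message: str, approve: Callable[[str], bool]) -> Optional[str]:
    """Show an AI-drafted reply to the human supporter; send it only if approved."""
    suggestion = draft_reply(user_message)
    if approve(suggestion):   # the human supporter stays in the loop
        return suggestion     # this is what the recipient actually receives
    return None               # supporter discards the draft and writes their own


if __name__ == "__main__":
    # Example: a supporter who approves any non-empty suggestion.
    sent = copilot_turn(
        "I've been feeling overwhelmed lately.",
        approve=lambda s: bool(s.strip()),
    )
    print(sent)
```

The key design point is that the model never messages the recipient directly; every draft passes through an approval step, which matches the "supervised by humans" framing quoted below.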

According to Koko’s Chief Executive: “Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute.”

The company tested the application on 4,000 people and asked them to rate the responses. Ratings were high, but the experience felt less dynamic than expected, so the project was never put into production.

GPT-3 and Public Reaction

As mentioned above, GPT-3 can power a chat application that answers prompts with human-like text, but the prompts have to be specific to the task at hand. Koko was able to respond coherently to user input, yet the backlash was considerable.
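As an illustration of what "specific to the task" can mean, the hypothetical template below constrains the model to the peer-support setting. The wording is an assumption for illustration, not Koko's actual prompt.

```python
# Hypothetical task-specific prompt template for drafting peer-support replies.
# The instructions and example message are illustrative only.
PROMPT_TEMPLATE = (
    "You are assisting a peer supporter on a mental health platform.\n"
    "Draft a short, empathetic reply to the message below.\n"
    "Do not give medical advice; suggest professional help for anything serious.\n\n"
    "Message: {message}\n"
    "Draft reply:"
)

print(PROMPT_TEMPLATE.format(message="I've been feeling overwhelmed lately."))
```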

Koko

Ultimately, Koko is a free mental health service. The company tested a co-pilot approach, with humans supervising the AI as needed, in messages sent via Koko peer support. The possibility that no help at all is worse than some automated, triaged help may explain the motivation behind the project.