Generative AI may look like magic, but behind these systems are armies of employees at companies like Google and OpenAI, known as “prompt engineers” and analysts, who rate the accuracy of chatbots’ outputs to improve the AI.
But a new internal guideline that Google passed down to contractors working on Gemini, seen by TechCrunch, has raised concerns that Gemini could be more prone to giving regular people inaccurate information on highly sensitive topics, like healthcare.
To improve Gemini, contractors working with GlobalLogic, an outsourcing firm owned by Hitachi, are routinely asked to evaluate AI-generated responses according to factors like “truthfulness.”
Until recently, these contractors could “skip” certain prompts, and thus opt out of evaluating the AI-written responses to them, if a prompt fell well outside their domain expertise. For example, a contractor with no scientific background could skip a prompt asking a niche question about cardiology.
But last week, GlobalLogic announced a change handed down from Google: contractors are no longer allowed to skip such prompts, regardless of their own expertise.
Internal correspondence seen by TechCrunch shows that previously, the guidelines read: “If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task.”
But now the guidelines read: “You should not skip prompts that require specialized domain knowledge.” Instead, contractors are being told to “rate the parts of the prompt you understand” and include a note that they don’t have domain knowledge.
This has raised direct concerns about Gemini’s accuracy on certain topics, as contractors are sometimes tasked with evaluating highly technical AI responses about issues, like rare diseases, in which they have no background.
“I thought the point of skipping was to increase accuracy by giving it to someone better?” one contractor noted in internal correspondence, seen by TechCrunch.
Contractors can now only skip prompts in two cases: if they’re “completely missing information” like the full prompt or response, or if they contain harmful content that requires special consent forms to evaluate, the new guidelines show.
Google did not respond to TechCrunch’s requests for comment by press time. After this story was published, Google, which did not dispute our reporting, told TechCrunch that the company was “constantly working to improve factual accuracy in Gemini.”
“Raters perform a wide range of tasks across many different Google products and platforms,” said Google spokesperson Shira McNamara. “They do not solely review answers for content, they also provide valuable feedback on style, format, and other factors. The ratings they provide do not directly impact our algorithms, but when taken in aggregate, are a helpful data point to help us measure how well our systems are working.”
Updated with post-publication comment from Google.