
Scathing study exposes Google’s harmful approach to AI development


A study published earlier this week by Surge AI appears to lay bare one of the biggest problems plaguing the AI industry: bullshit, exploitative data-labeling practices.

Last year, Google built a dataset called “GoEmotions.” It was billed as a “fine-grained emotion dataset” — basically a ready-to-train-on dataset for building AI that can recognize emotional sentiment in text. Per a Google blog post:

“In ‘GoEmotions: A Dataset of Fine-Grained Emotions’, we describe GoEmotions, a human-annotated dataset of 58k Reddit comments extracted from popular English-language subreddits and labeled with 27 emotion categories. As the largest fully annotated English language fine-grained emotion dataset…”
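For readers who want to see what the dataset actually contains, here is a minimal sketch of loading and inspecting GoEmotions. It assumes the Hugging Face "datasets" package and the publicly hosted "go_emotions" dataset with its "simplified" configuration; those identifiers are assumptions on our part, not details taken from the article or from Google's post.

```python
# Minimal sketch (assumptions noted above): load GoEmotions via the Hugging Face
# "datasets" package and print one human-labeled Reddit comment.
from datasets import load_dataset

# In the "simplified" configuration, each example pairs a Reddit comment with
# one or more label IDs drawn from the 27 emotion categories plus "neutral".
ds = load_dataset("go_emotions", "simplified")

# Map numeric label IDs back to their emotion names for the first example.
label_names = ds["train"].features["labels"].feature.names
example = ds["train"][0]

print(example["text"])
print([label_names[i] for i in example["labels"]])
```

Inspecting raw examples like this is exactly where the study's criticism lands: the quality of those human-assigned labels is what the training pipeline ultimately inherits.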

This story continues at The Next Web

Read full article: The Next Web (https://ift.tt/izMtSRV)
