A class action lawsuit has been filed on behalf of LinkedIn Premium users, alleging that the Microsoft-owned professional networking platform was secretly feeding users' private DMs and InMail messages into its AI training programs. The first whispers of the LinkedIn scandal started circulating a few months back, like a bad rumor you couldn't quite shake off, and now the lawsuit brings some serious revelations. According to papers filed in a California federal court, LinkedIn has allegedly been sharing user messages with third-party companies.
This revelation comes on the back of some questionable moves by LinkedIn. In August last year, the platform introduced a new privacy setting allowing users to opt out of data sharing for AI training; if you did not change this setting, however, you were opted in by default. The automatic opt-in also allowed user data to be shared with third parties for AI training. Unless you were particularly privacy-savvy, your data was likely up for grabs.
LinkedIn is also accused of attempting to conceal its actions by quietly updating its privacy policy in September to explicitly state that user data could be used for AI training. It is further claimed that the platform's FAQ was updated to state that users may opt out of sharing their data for AI training, but that this does not affect data LinkedIn has already harvested.
LinkedIn, of course, has denied the allegations, calling them “false and without merit” in a statement to the BBC.
So far, the company's actions speak louder than its words. Why the need for such a clandestine opt-out setting in the first place? Why the sudden changes to the privacy policy and FAQ? These are the moves of a company trying to cover its tracks.
This lawsuit is more than just a legal spat between LinkedIn and its Premium subscribers; it raises the question of why data privacy must take on paramount importance in the age of AI. Where do we draw the line? What data is fair game for AI training? And who gets to decide?