
LinkedIn Is Using Your Data to Train AI: What This Means for Your Privacy and How to Opt Out

If you’ve recently logged on to LinkedIn, chances are your user profile and other personal data have been accessed by the business and employment-focused social media platform’s new artificial intelligence (AI) feature. When you opened LinkedIn’s homepage, you were likely greeted by the prompt: “Start a post, try writing with AI.” Then, when you scrolled through the “Jobs” section, an AI prompt might have told you that it had already scanned the job posts and compiled the ones that best matched your LinkedIn profile. Sounds convenient, right? But at what cost?

When LinkedIn first introduced its AI-powered tools in October 2023, the company said they were designed to help businesses and individuals connect and grow on its platform. For instance, Sales Navigator’s AI-assisted search and AccountIQ were intended to make lead prospecting and account research more effective. Other tools that the firm launched at the time were Recruiter 2024, LinkedIn Learning AI-powered Coaching Experience and Accelerate for Campaign Manager—all three were said to promote greater productivity and work prioritization.

However, the business-centric social network did not stop at providing light AI-powered assistance to its patrons. Just a month later, LinkedIn unveiled its AI-powered LinkedIn Premium experience for paying members of the site. The idea behind this new product was to create a sort of copilot for Premium subscribers whenever they access the platform, helping them stay ahead in their professional lives with exclusive assistance when planning a career change or learning and developing new skills. Essentially, this AI-powered feature would meticulously analyze each user’s input, data and activity, and match the results with the best opportunities available.

From early praise to troubling reality

The introduction of AI on LinkedIn’s platform was met with praise from users in the early stages of its implementation, especially when the company, led by CEO Ryan Roslansky, started rolling out supplementary tools, such as the AI-powered writing tool and its very own chatbot. The ingenious move to bank on AI for its platform’s overall improvement paid off, with Chief Operating Officer Daniel Shapero proudly announcing in March that LinkedIn Premium customers were up 25% year-over-year, lifting the organization’s annual revenue to $1.7 billion.

However, tucked beneath the overwhelming success of LinkedIn’s new venture were rumblings of how its AI technology was compromising user data while training its AI model. 


Why LinkedIn’s use of user data in AI training is a big problem

AI models thrive on data. These virtual brains acquire knowledge through data inputs, so they can identify patterns, make predictions and produce outputs that seem intelligent using what they learn. It goes without saying that the more data an AI system has, the better it can run such functions. LinkedIn, as the world’s largest professional network with over 1 billion members across the globe, possesses a massive trove of user data, making it an ideal source for training AI models. Considering that LinkedIn is a wholly owned subsidiary of Microsoft—the biggest backer of the AI research organization OpenAI—incorporating AI into its system was never a matter of if, but when.

Since rolling out AI features on its platform last year, LinkedIn has seemingly shied away from admitting that it’s secretly collecting data from users’ posts, articles, preferences and other activities. But last week, the organization updated its website’s generative AI FAQs to indicate that its system collects user data and uses it to “improve or develop” its services. Not only that, the company took it upon itself to automatically enroll everyone in this feature, sparking backlash on social media from users who were not in favor of their personal data being harvested without their consent. 

An invasion of user privacy

Among those not pleased by LinkedIn’s invasion of user privacy was Women in Security and Privacy chair Rachel Tobac, who explained on X why people should opt out of the new AI feature. According to her, since generative AI tools come up with outputs based on inputs they are trained on, people may soon find their original content being “reused,” “rehashed” or completely plagiarized by AI. “It’s likely that elements of your writing, photos or videos will be melted together with other people’s content to build AI outputs,” Tobac wrote. 

Meanwhile, LinkedIn spokesman Greg Snapper has said that using customer data to train the platform’s AI could actually prove beneficial in the long run. Not only can this “help people all over the world create economic opportunity,” it can also “help a lot of people at scale” when done right, according to him. LinkedIn chief privacy officer Kalinda Raina also addressed the issue in a video post shared on LinkedIn’s website last week, saying the organization is only using its clients’ data to “improve both security and our products in the generative AI space and beyond.” Despite the clarifications, however, many still feel unsettled by the prospect of not being able to keep their user data private. 

How to opt out of LinkedIn’s AI training feature

To appease its customers following the backlash, LinkedIn has vowed to update its user agreement, with changes going into effect on Nov. 20. It has also clarified practices covered by its privacy policy and added an opt-out setting for its AI training feature. To turn off LinkedIn’s AI training, simply follow these steps on the desktop version of the platform:

  • Log into your LinkedIn account
  • Click on the “Me” button/profile avatar
  • Head to “Settings & Privacy”
  • Click on “Data privacy”
  • Locate and press “Data for Generative AI Improvement” 
  • Flip the toggle switch for “Use my data for training content creation AI models” to opt out of the feature

Once the feature is off, LinkedIn will no longer collect your data for AI training. Unfortunately, the opt-out is not retroactive, according to The Washington Post, meaning data collected before the feature is deactivated remains in the system. While there’s no way to undo that, users can request that LinkedIn delete specific data or activity using the LinkedIn Data Deletion Form.

Photo by Tada Images/Shutterstock.com
