LinkedIn, the Microsoft-owned professional networking platform, recently updated its privacy policy to reflect its use of members’ personal data for training AI models. The update, which took effect on September 18, 2024, has sparked significant concern about the platform’s data handling practices. Before announcing the change, LinkedIn had already been collecting personal data from posts and profiles for AI training without notifying users. The resulting backlash prompted the company to modify its terms of service to provide greater transparency.
What Has Changed?
The key update LinkedIn introduced involves clearer communication about how it uses personal data for AI training. The company stated in a blog post:
“On September 18, 2024, we added examples and other details to our Privacy Policy to clarify how we use personal data to develop and provide AI-powered services and share data with our affiliates, and to provide additional links to information that may be relevant to individuals in certain regions.”
These changes come after it was revealed that LinkedIn had already been collecting personal information for AI purposes without users’ explicit consent. The updated privacy policy now provides greater clarity on how the company gathers data and employs it to develop and enhance AI-driven tools, including features like generative AI, content recommendations, and content moderation.
While these AI-powered features can improve the user experience, the sticking point is that LinkedIn users are opted in by default: unless they take specific steps to opt out, their personal data is automatically used for AI training. (LinkedIn has said it does not train content-generating AI models on data from members in the EU, EEA, or Switzerland, where stricter privacy rules apply.)
What Does This Mean for Users?
The implications for LinkedIn’s more than one billion members worldwide are significant. The new policy grants LinkedIn and its affiliates the right to use a wide range of member data, including profile information, posts, interactions, and potentially even private messages, to train AI models. In other words, the content and activity you share on LinkedIn, from publishing posts to engaging with others, can be used to improve the AI-driven systems that recommend connections, curate feeds, and generate automated content.
For many users, the automatic opt-in presents a challenge. Data privacy is an increasingly sensitive issue, and many individuals may be uncomfortable with their personal information being used in ways they didn’t explicitly approve. Moreover, LinkedIn’s lack of proactive communication regarding these changes has added to the frustration.
How to Opt Out
While LinkedIn’s new policy opts users in by default, the platform does offer the option to opt out of this data-sharing practice. Here’s how you can manage your privacy settings to protect your data from being used for AI training:
- Access Privacy Settings: First, log in to your LinkedIn account and navigate to your account settings. This can be done by clicking on your profile picture and selecting “Settings & Privacy” from the drop-down menu.
- Data Privacy Section: In the settings menu, open the “Data privacy” tab. This section collects the options governing how your data is handled.
- Manage AI Settings: Under “Data privacy,” look for the setting labeled “Data for Generative AI Improvement.” This is where LinkedIn lets members control whether their personal data is used to train content-creating AI models.
- Opt Out: Toggle off “Use my data for training content creation AI models.” This prevents LinkedIn from using your profile, posts, and interactions to train its generative AI models going forward, although it does not undo any training that has already taken place. (If you prefer to jump straight to the relevant page, the short helper script below shows one way to do so.)
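For convenience, here is a minimal Python sketch that simply opens the relevant settings pages in your default browser. The URLs are assumptions based on how LinkedIn’s settings were organized at the time of writing and may change; the toggle itself still has to be switched off manually once the page loads.

```python
# Minimal convenience sketch: open the LinkedIn settings pages where the
# AI-training toggle has been reported to live. Both URLs are assumptions
# based on the site layout at the time of writing and may change; the toggle
# itself must still be switched off manually in the browser.
import webbrowser

SETTINGS_PAGES = [
    # "Settings & Privacy" -> "Data privacy" overview (assumed URL).
    "https://www.linkedin.com/mypreferences/d/categories/privacy",
    # Reported location of the "Data for Generative AI Improvement" setting (assumed URL).
    "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement",
]

for url in SETTINGS_PAGES:
    print(f"Opening {url} ...")
    webbrowser.open(url)  # launches the system's default browser

print("Toggle off 'Use my data for training content creation AI models' on the AI page.")
```

The script only opens pages; it does not (and cannot) change the setting for you, since the opt-out must be confirmed while logged in to your account.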
Why This Matters
LinkedIn’s policy update reflects a broader trend in the tech industry where platforms are increasingly integrating AI into their services. Companies like LinkedIn, which collect vast amounts of user data, are leveraging AI to enhance user experiences, improve targeted advertising, and refine their content moderation systems. While these developments may offer benefits, they also raise crucial questions about data privacy, consent, and user autonomy.
The automatic opt-in approach adopted by LinkedIn is particularly controversial. Many users feel they should have been asked to opt in, rather than being enrolled by default. Additionally, the platform’s initial failure to communicate its data collection for AI training has eroded trust among some users.
This situation serves as a reminder of the importance of regularly reviewing the privacy settings of online platforms. It also highlights the growing need for transparency and user control in the age of AI. As more companies integrate AI into their services, users must remain vigilant about how their data is being collected and used.
Conclusion
LinkedIn’s decision to update its privacy policy and opt users into AI data sharing by default underscores the complex relationship between data privacy and AI innovation. While AI can significantly improve the user experience, it’s essential that platforms provide users with clear, upfront information about how their data is being used. If you’re concerned about LinkedIn’s data-sharing practices, it’s vital to take control of your privacy settings and opt out if necessary.