From Gmail to Word, Your Privacy Settings and AI in a New Relationship

As AI becomes more integrated into our daily tools, privacy concerns are rising, prompting experts to urge users to review their settings and data-sharing practices.

New York: The start of the year is a good time to think about your online safety. We all know we should change our passwords and update our software. But there’s a new worry popping up: how AI is sneaking into our everyday apps and what that means for our privacy.

Lynette Owens from Trend Micro points out that as AI gets mixed into our favorite tools, we need to ask some tough questions about privacy. Many apps we use, like email and social media, might not clearly say if they’re using our data to train AI. This can leave us open to having our personal info used without us even knowing.

Owens stresses that it’s time for every app to take a good look at what data they collect and how they use it. She believes we need to be more aware of what we’re sharing and how it might be used to train AI models.

AI is already part of our daily lives. For instance, Gmail uses it for spam filtering and predictive text. Streaming services like Netflix analyze what we watch to suggest new shows. While these features are handy, we should think about what data is being collected and how it’s used.

Microsoft’s connected experiences feature has been under the microscope lately. It’s been around since 2019, but some reports suggest it’s new or that its settings have changed. Privacy experts worry that even if the settings haven’t changed, the way our data is used could be broader than we think.

A Microsoft spokesperson said the company does not use customer data from Microsoft 365 to train AI models, though customers can consent to having their data used for specific purposes. The feature supports things like real-time collaboration and cloud storage.

Ted Miracco, CEO of Approov, says features like this can be a double-edged sword. They promise better productivity but also raise privacy concerns. The default settings might opt people into data collection without them realizing it.

Kaveh Vadat from RiseOpp believes companies should be more transparent about how they handle personal data. He argues that having features enabled by default puts the burden on users to change their settings, which can feel intrusive.

Jochem Hummel from Warwick Business School thinks that while companies benefit from data sharing, users should have the choice to opt-in for privacy reasons. He notes that younger people seem less concerned about privacy, embracing these tools more readily.

Kevin Smith from Colby College points out that many privacy concerns have been around for years, but AI has brought them to the forefront. He emphasizes the need to manage how AI models handle personal data.

Turning off AI features is often buried in app settings. For example, in Microsoft Word, you can find the option under privacy settings. In Gmail, it’s in the general settings. While it’s easy to do, some experts argue that it shouldn’t be the user’s responsibility to deactivate these settings.

Wes Chaar, a data privacy expert, highlights the lack of clear communication about what these features entail. He compares it to inviting a helpful assistant into your home, only to find out they’ve been taking notes on your private conversations.

Ultimately, the current digital landscape leaves users vulnerable without strong systems to protect their data. Without clear consent and control, people risk having their information used in ways they never expected.
