Granola Privacy Alert: AI Notes Viewable via Link and Used for Training by Default
Industry News · Granola · AI Privacy · Data Security

Users of the AI-powered note-taking application Granola are being advised to review their privacy settings following revelations about how the service handles data access and usage. Although the company markets the product as "private by default," the platform currently allows anyone with a note's link to view it. Granola also uses customer notes to train its internal AI models unless individuals manually opt out. Because the app is positioned as an AI notepad for professionals, these default configurations have raised concerns about the actual level of privacy afforded to its user base. This report, based on coverage from The Verge, examines the gap between the marketing claims and the functional reality of Granola's data handling policies.

Key Takeaways

  • Link Accessibility: Despite the product's privacy claims, anyone who possesses the link to a note can view it.
  • AI Training Defaults: Granola utilizes user-generated notes for internal AI training by default.
  • Opt-Out Requirement: Users must manually change their settings to prevent their data from being used for AI model development.
  • Privacy Discrepancy: There is a notable gap between Granola's "private by default" marketing and its actual data sharing and training configurations.

In-Depth Analysis

The Reality of "Private by Default" Claims

Granola markets itself as an AI notepad designed for professional use, emphasizing a commitment to privacy. However, the current technical implementation reveals that notes are accessible to anyone who has the corresponding link. This configuration challenges the traditional definition of "private," as it relies on the secrecy of a URL rather than restricted access controls or authentication. For users handling sensitive professional information, this default state poses a potential risk if links are shared inadvertently or discovered by unauthorized parties.
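
The distinction matters in practice: a link whose secrecy is the only barrier can be tested by simply fetching it with no credentials attached. The Python sketch below illustrates that check; the URL format and domain are hypothetical stand-ins, not Granola's actual link scheme, which the report does not document.

```python
# Minimal sketch: does a shared link return content with no authentication?
# The URL below is a hypothetical placeholder, not Granola's real link format.
import requests


def is_publicly_viewable(note_url: str) -> bool:
    """Fetch the URL with no session cookies or auth headers attached.

    A 200 response means anyone holding the link can read the note;
    a 401 or 403 would indicate genuine access control behind the link.
    """
    response = requests.get(note_url, allow_redirects=True, timeout=10)
    return response.status_code == 200


if __name__ == "__main__":
    url = "https://notes.example.com/d/abc123"  # hypothetical shared-note link
    print("Publicly viewable:", is_publicly_viewable(url))
```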

Data Utilization for AI Development

Beyond the visibility of notes, Granola's policy on internal AI training has come under scrutiny. The platform automatically enrolls users in a program in which their notes are used to train the company's internal AI models. While many AI companies seek user data to improve their algorithms, making this the default setting, combined with link-based accessibility, reflects an industry trend in which user data serves as a primary resource for product iteration. Users who want to keep their notes fully confidential must navigate the application's settings and explicitly opt out of training use.
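
To make the pattern concrete, the issue is an opt-out default rather than an opt-in one: the permissive value applies to anyone who never touches the settings. The sketch below illustrates that logic in Python; the setting names are hypothetical illustrations, not Granola's actual configuration keys.

```python
# Minimal sketch of an opt-out-by-default setting. The key names are
# hypothetical illustrations, not Granola's real configuration schema.

DEFAULT_SETTINGS = {
    "allow_training_use": True,  # opt-OUT model: enabled unless disabled
    "link_sharing": True,
}


def may_use_for_training(user_settings: dict) -> bool:
    """Notes are eligible for training unless the user has explicitly
    flipped the setting off; with no action taken, the default applies."""
    merged = {**DEFAULT_SETTINGS, **user_settings}
    return merged["allow_training_use"]


# A user who never opened the settings page:
print(may_use_for_training({}))  # True
# A user who manually opted out:
print(may_use_for_training({"allow_training_use": False}))  # False
```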

Industry Impact

The situation with Granola underscores a growing tension in the AI software industry between user privacy and the data requirements of machine learning. As more "AI-first" productivity tools enter the market, the definition of "private by default" is becoming increasingly fluid. This case serves as a significant example for the industry, suggesting that transparency regarding link-based sharing and AI training opt-outs is critical for maintaining user trust. It also highlights the responsibility of users to audit the privacy settings of AI tools, even when those tools are marketed as secure professional solutions.

Frequently Asked Questions

Question: Can anyone see my Granola notes without my permission?

Based on the report, anyone who obtains the specific link to your note can view its content, as this is the default setting for the application.

Question: Does Granola use my personal notes to train their AI?

Yes, Granola uses notes for internal AI training by default. Users must manually opt out if they do not want their data used for this purpose.

Question: How does Granola describe its own privacy policy?

Granola describes its notes as being "private by default," despite the link-sharing and AI training configurations currently in place.

Related News

OpenAI Expands Media Footprint with Acquisition of Technology Talk Show TBPN
Industry News

OpenAI has officially acquired the technology talk show TBPN, marking a strategic move into the media and content space. While the acquisition has been confirmed, OpenAI has not disclosed the financial terms of the deal. Furthermore, the future of TBPN’s existing distribution channels remains uncertain, as the company has not yet clarified whether the show will continue its current presence on major platforms including YouTube, X (formerly Twitter), and various podcast networks. This acquisition highlights OpenAI's growing interest in controlling tech-centric narratives and engaging directly with audiences through established media properties, though specific integration plans and the long-term status of the show's accessibility are currently unavailable.

Open Models Reach Parity with Closed Frontier Models in Core AI Agent Tasks and Efficiency
Industry News

A recent evaluation by LangChain reveals that open models, specifically GLM-5 and MiniMax M2.7, have crossed a significant performance threshold. These models now match the capabilities of closed frontier models in critical agent-related functions, including file operations, tool utilization, and instruction following. Beyond performance parity, these open-source alternatives offer substantial advantages in cost-effectiveness and reduced latency. This shift marks a turning point for developers and enterprises looking to deploy sophisticated AI agents without the high overhead typically associated with proprietary closed-source systems. The findings suggest that the gap between open and closed models is closing rapidly in the domain of functional AI tasks.

Inside the Erosion of Trust in Azure: A Former Core Engineer Reveals Costly Strategic Missteps
Industry News

Axel Rietschin, a former senior engineer within Microsoft's Azure Core team, has begun a series detailing the internal decisions and complacency that he claims eroded trust in the Azure cloud platform. Rietschin, who contributed to foundational technologies like the Azure Boost offload card and the Windows Container platform, suggests that these failures led to Microsoft nearly losing its largest customer, OpenAI, and damaging its relationship with the US government. Drawing on over a decade of experience within the Windows and Core OS teams, the author provides an insider's perspective on the technical and organizational mishaps that he characterizes as some of the most preventable and costly errors of the 21st century, potentially impacting trillions in value.