Granola Privacy Alert: AI Notes Viewable via Link and Used for Training by Default
Industry News · Granola · AI Privacy · Data Security


Users of the AI-powered note-taking application Granola are being advised to review their privacy settings following revelations about how their data is accessed and used. Although the company markets the service as "private by default," the platform currently allows anyone with a note's link to view it. Furthermore, Granola uses notes for internal AI training unless individuals manually opt out. Given that Granola is positioned as an AI notepad for professionals, these default configurations have raised concerns about the actual level of privacy afforded to its user base. This report, based on coverage by The Verge, examines the discrepancy between the marketing claims and the functional reality of Granola's data handling policies.

Source: The Verge

Key Takeaways

  • Link Accessibility: Despite claims of being private, any individual possessing a specific link can view a user's Granola notes.
  • AI Training Defaults: Granola utilizes user-generated notes for internal AI training by default.
  • Opt-Out Requirement: Users must manually change their settings to prevent their data from being used for AI model development.
  • Privacy Discrepancy: There is a notable gap between Granola's "private by default" marketing and its actual data sharing and training configurations.

In-Depth Analysis

The Reality of 'Private by Default' Claims

Granola markets itself as an AI notepad designed for professional use, emphasizing a commitment to privacy. However, the current technical implementation reveals that notes are accessible to anyone who has the corresponding link. This configuration challenges the traditional definition of "private," as it relies on the secrecy of a URL rather than restricted access controls or authentication. For users handling sensitive professional information, this default state poses a potential risk if links are shared inadvertently or discovered by unauthorized parties.
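To make the distinction concrete, link-based sharing of this kind is often called a "capability URL": possession of the link is the only credential, and no login or per-user permission check stands between the link holder and the content. The sketch below is purely illustrative and not Granola's actual implementation; the function name, base URL, and token length are assumptions chosen to show the pattern.

```python
import secrets

def make_share_link(note_id: str,
                    base_url: str = "https://example.com/notes") -> str:
    """Build a capability URL: privacy rests solely on the token's secrecy.

    Anyone who obtains the full link can read the note, because the
    server (hypothetically) checks only the token, not who is asking.
    """
    # token_urlsafe(32) draws 32 random bytes (~256 bits of entropy)
    # and encodes them as a URL-safe base64 string.
    token = secrets.token_urlsafe(32)
    return f"{base_url}/{note_id}?token={token}"

link = make_share_link("meeting-2024-06-01")
```

The weakness described in the article is inherent to this design: if the link is forwarded, logged, or leaked, the note is exposed, because nothing in the URL ties access to an authenticated identity.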

Data Utilization for AI Development

Beyond the visibility of notes, Granola's policy on internal AI training has come under scrutiny. The platform opts users in by default to having their notes used to train the company's internal AI models. While many AI companies seek user data to improve their algorithms, making this a default setting, combined with link-based accessibility, reflects an industry trend in which user data serves as a primary resource for product iteration. Users who wish to keep their notes fully confidential must navigate the application's settings and explicitly opt out of AI training.

Industry Impact

The situation with Granola underscores a growing tension in the AI software industry between user privacy and the data requirements of machine learning. As more "AI-first" productivity tools enter the market, the definition of "private by default" is becoming increasingly fluid. This case serves as a significant example for the industry, suggesting that transparency regarding link-based sharing and AI training opt-outs is critical for maintaining user trust. It also highlights the responsibility of users to audit the privacy settings of AI tools, even when those tools are marketed as secure professional solutions.

Frequently Asked Questions

Question: Can anyone see my Granola notes without my permission?

Based on the report, anyone who obtains the specific link to your note can view its content, as this is the default setting for the application.

Question: Does Granola use my personal notes to train their AI?

Yes, Granola uses notes for internal AI training by default. Users must manually opt out if they do not want their data used for this purpose.

Question: How does Granola describe its own privacy policy?

Granola describes its notes as being "private by default," despite the link-sharing and AI training configurations currently in place.

Related News

Langfuse: An Open Source LLM Engineering Platform for Observability and Prompt Management
Industry News


Langfuse has emerged as a comprehensive open-source engineering platform specifically designed for Large Language Model (LLM) applications. Originating from the Y Combinator W23 cohort, the platform provides a robust suite of tools including LLM observability, metrics tracking, evaluation frameworks, and prompt management. It also features a dedicated playground and dataset management capabilities. Langfuse is built with broad compatibility in mind, offering seamless integration with industry-standard tools such as OpenTelemetry, Langchain, the OpenAI SDK, and LiteLLM. By focusing on the critical infrastructure needs of AI developers, Langfuse aims to streamline the lifecycle of LLM application development from initial testing to production monitoring.

OpenMetadata: A Unified Platform for Data Discovery, Observability, and Governance Solutions
Industry News


OpenMetadata has emerged as a comprehensive open-source solution designed to streamline how organizations manage their data ecosystems. By providing a unified metadata platform, it addresses the critical needs of data discovery, observability, and governance. The platform is built upon a centralized metadata repository that serves as a single source of truth, complemented by advanced features such as deep column-level lineage and tools for seamless team collaboration. As data environments become increasingly complex, OpenMetadata aims to simplify the management of data assets by integrating these essential functions into a cohesive framework, allowing teams to better understand, monitor, and control their data lifecycle through a standardized metadata approach.

U.S. Soldier Charged with Insider Trading on Polymarket Using Classified Military Information
Industry News


Gannon Ken Van Dyke, a U.S. Army soldier, has been indicted for allegedly using classified government information to profit from bets on the prediction market platform Polymarket. According to the U.S. Attorney's Office for the Southern District of New York, Van Dyke participated in the planning of 'Operation Absolute Resolve,' a military mission to capture Nicolás Maduro. He is accused of leveraging his access to sensitive details regarding the timing and outcome of this operation to place illegal wagers. The charges include commodities fraud, wire fraud, theft of nonpublic government information, and making unlawful monetary transactions. This case marks a significant legal action against insider trading within decentralized prediction markets involving national security secrets.