Q: Is my document sent to the AI model?
A: By default, your document is not sent to the AI model (or any other third party). The AI features are entirely opt-in: you control whether your document is processed by the AI model.
If you choose to use the AI features, only the minimal sections of your document required by that feature are sent for processing, so only the necessary information is shared. For details on what each AI feature sends, see the next question.
Q: If I work with the AI features, what gets sent?
- Research: When you use the AI research feature, only the note in which you're asking the research question is sent for analysis. This allows the AI model to provide relevant information and suggestions based on the content of your note.
- AI proofreader: If you use the AI proofreader, only the specific section you are proofreading is sent for processing. Your other content remains private and is not accessed by the AI model.
- AI comments: When using AI comments, only the paragraphs in which you've highlighted text will be sent to the AI model, along with the title of the document. The rest of your document remains confidential.
- AI tone adjustments: If you make use of AI tone adjustments, only the paragraph that requires adjustment will be sent to the AI model. Your other content remains private and is not shared.
Q: What model is used?
A: We use GPT-4 or, in some cases, GPT-3.5 as the underlying AI models for our features. These models provide advanced natural language processing capabilities and assist users with a variety of tasks.
Q: Can the information provided by the model be trusted?
A: Keep in mind that the AI model is a "research preview". While it can serve as a valuable starting point for your work, you should independently verify the information it provides. AI models can sometimes generate incorrect or misleading information, known as "hallucinations". Ongoing research and development efforts aim to mitigate these issues and improve the reliability of the model's output.