Part 2: Accounts and Credentials
Audit your AI tool tiers today. For each AI tool you use regularly:
- Find the privacy settings — specifically, whether your conversations are used for training
- Check your subscription tier and verify what the data handling terms actually say
- Opt out of training data collection if you are on a tier that uses conversations by default
- If you use AI for work with sensitive data, check whether your organisation has enterprise agreements with any of these tools
For OpenAI (ChatGPT): Settings → Data Controls → “Improve the model for everyone” — turn this off if it is on. Note this applies to your account; if you use a shared workspace account, check workspace settings separately.
For Anthropic (Claude): Consumer accounts do not use conversations for training by default on paid tiers. Check the privacy centre for your current tier’s specifics.
For Google Gemini: Review Google’s data settings in your account — Gemini activity is stored in your Google Activity by default. This can be turned off.
API key hygiene:
If you use any AI via API access (this includes tools like the Claude API, OpenAI API, or any tool that gives you an API key):
- Do not put API keys in code files you commit to any repository, even private ones
- Use environment variables or a secrets manager (1Password, AWS Secrets Manager, Doppler) instead
- Rotate API keys at least quarterly — monthly if you use them heavily
- Set usage limits on your API accounts so a compromised key has a spending ceiling
- Check your API provider’s dashboard for any usage spikes you did not cause
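The environment-variable approach above can be sketched in a few lines of Python. The variable name `OPENAI_API_KEY` is the convention used by the OpenAI SDK; any name works, as long as the value lives in your shell profile, an untracked .env file, or a secrets manager rather than in committed code:

```python
import os

def get_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment instead of hardcoding it.

    The key is set outside the codebase -- exported in your shell,
    loaded from a .gitignore'd .env file, or injected at runtime by a
    secrets manager such as 1Password, AWS Secrets Manager, or Doppler.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set. Export it or load it from your "
            "secrets manager before running this script."
        )
    return key
```

Failing loudly when the variable is missing is deliberate: a script that silently falls back to a key baked into the file is exactly the pattern that ends up in a repository.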
If you have committed an API key to a repository at any point: assume it is compromised, revoke it immediately from the provider’s dashboard, and generate a new one. Key-scanning tools can process public GitHub repositories in minutes, and private repositories that were ever briefly public have likely been scanned.
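To see what those scanners see, here is a minimal sketch of pattern-based key detection. The two regexes below match the common `sk-`-prefixed secret-key shape and AWS access key IDs; they are illustrative assumptions, not an exhaustive rule set — dedicated tools ship hundreds of provider-specific patterns:

```python
import re

# Illustrative patterns only: real secret scanners maintain large,
# provider-specific rule sets and also check entropy, not just shape.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # sk- prefixed secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
]

def find_key_like_strings(text: str) -> list[str]:
    """Return substrings of `text` that look like leaked API keys."""
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Piping your full git history (e.g. the output of `git log -p --all`) through a function like this is a quick self-audit, but a hit in history still means the key is compromised — revoke it; rewriting history does not un-leak it.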
Password and account security:
- Use a password manager. If you do not have one, this matters more than anything else in this checklist. 1Password and Bitwarden are both solid options. Browser-saved passwords are specifically targeted by the infostealer malware that swept through AI platform credentials in 2023 — 664,000 ChatGPT credentials were compromised in that year alone because they were stored in browsers.
- Enable two-factor authentication on all AI platform accounts. Use an authenticator app rather than SMS where possible.
- Regularly review browser extensions. Malicious ChatGPT-related extensions proliferated rapidly in 2023, growing from 11 to over 200 in three months. Check what extensions you have installed, verify each one is from a legitimate source, and remove anything you do not actively use or recognise.