How We Use AI: Nordic Startup News Transparency Policy
Last updated: November 2025
Our Commitment
At Nordic Startup News, we use AI as a tool to enhance our journalism — not replace it. This document explains exactly how, when, and why we use AI in our work.
Our promise: Every piece of content will clearly disclose what role AI played in its creation.
Why We Use AI
As a small, independent team, AI helps us:
- Research more thoroughly and quickly
- Draft and iterate on content faster
- Translate between languages more accurately
- Experiment with new formats
- Maintain quality with limited resources
What AI doesn’t do: Make editorial decisions, determine our coverage priorities, or replace human judgment on accuracy, fairness, or ethics.
Our Labeling System
Every article on NSN includes one of these labels:
- AI-Assisted Research — We used AI tools to gather information, find sources, or summarize documents. All facts verified by human editors.
- AI-Assisted Draft — AI generated an initial draft based on our outline. Substantial human editing, fact-checking, and rewriting.
- AI-Assisted Translation — Original written in one language, AI translated to another. Human review and editing.
- Human Written — No AI used in creation. Traditional research and writing process.
- AI Experiment — Experimental piece testing AI capabilities. Process and tools fully documented.
What Tools We Use
We’re tool-agnostic. Current tools in our stack:
- Research: Perplexity, Claude, ChatGPT
- Writing: Claude (long-form), ChatGPT (rapid drafting)
- Translation: DeepL, GPT-4
- Image Generation: We do NOT use AI-generated images without explicit labeling
Our Editorial Process
- Human Planning: We decide what to cover and why
- AI Research/Drafting: AI helps gather info or create first draft
- Human Editing: Substantial human review and rewriting
- Fact-Checking: All facts verified by human editors
- Ethical Review: We review for bias, fairness, cultural sensitivity
- Labeling: Clear disclosure of AI’s role
- Publication: Human editor gives final approval
Our Standards
- Accuracy: All facts verified by humans. We don't rely on AI for fact-checking.
- Bias: We review AI content for potential biases. Human editors make final calls.
- Privacy: We don’t feed private information to AI tools.
- Attribution: We cite sources, not AI tools.
- Errors: If AI contributes to an error, we disclose that. We take responsibility.
Sponsored Content
When we create sponsored content, we clearly label it, disclose if AI was used, maintain the same editorial standards, and separate it from editorial content.
For Our Readers
We want to hear from you. If something seems off, tell us. If you want to know more about our process, ask. If you think we’re using AI wrong, we want to know.
Why Transparency Matters
Many media outlets either hide their AI use entirely or publish fully AI-generated content. We're doing neither. We believe readers deserve to know how their news is made. AI is a powerful tool when used responsibly. Transparency builds trust.
This transparency policy itself was drafted with AI assistance (Claude) and edited by the NSN team.
