From the nutritional barometer
to a barometer of epistemic clarity
A proposal to introduce an indicator of informational reliability and transparency, starting with articles and other textual publications.
- Project name: Info Score.
- Promise: to make the methodological reliability of textual content visible.
- Starting point: articles and textual publications.
- Position of principle: AI assists, explains and leaves a trace; it does not, on its own, decree what is true.
- Key to credibility: transparency of criteria, right of appeal, independent governance.
1. Manifesto, restated
We live in an age in which publishing has become almost as simple as breathing. Legacy media, new media, platforms, influencers, institutions and ordinary citizens now take part in a worldwide circulation of information of unprecedented density. This democratic conquest is immense. But it comes with a growing disturbance: the public sphere increasingly mixes fact, opinion, commentary, hypothesis, performance, indignation — and at times sheer manipulation.
Within this profusion, the ordinary citizen does not merely lack time; they lack, above all, legible instruments to understand the nature of what they read. A piece of information may be accurate, incomplete, slanted, premature, emotional, taken out of context, or methodologically fragile. Yet it all tends to appear in the same graphic form, the same persuasive mise-en-scène, the same intensity of distribution.
Our societies have learned to protect bodies through simple markers. The nutritional barometer, without thinking on the consumer's behalf, makes excesses or balances visible. It offers a mediation of clarity. We propose to perform a similar gesture for intellectual life: to create an indicator of informational reliability and transparency — in other words, an Info Score.
This Info Score would not pretend to replace human judgment. It would be neither a ministry of truth nor a censorship machine. Its ambition would be more sober and more precious: to make visible what an item of information rests upon. The tool would not ask the reader to believe a label; it would show them the methodological elements that make a piece of content solid, fragile, prudent or misleading.
The application envisaged would first analyse articles and textual posts. The user would submit a link or a text. The system would identify the nature of the content, locate the main claims, examine the quality of the sources cited, the distinction between fact and commentary, the degree of emotional language, the acknowledgement of uncertainty — and then return an argued, intelligible score.
Artificial intelligence would here be an engine of assistance, not an absolute judge. It can help to extract, classify, compare, synthesise and explain. But it must remain embedded in an architecture of evidence, public rules, logging and appeal. Info Score will earn its legitimacy not from a technological veneer, but from the transparency of its criteria, the auditability of its decisions, and the existence of a pluralist governance.
Such a step has major civic utility. It can fortify critical thinking, hold content producers accountable, slow down the mechanisms of toxic virality, and restore a hygiene of reading in a world saturated with signs. Where so many systems aim to maximise attention, we propose an instrument oriented toward clarification.
The project does not promise infallibility. It proposes something more realistic and perhaps more civilising: a public method for reading better. In a healthy democracy, citizens are not told what to think; they are given markers to better discern what is presented to them as true. Info Score would therefore not close the debate. It would make the debate more conscious, more demanding, and more free.
Guiding principle: We do not tell the public what to think. We show what an item of information rests upon, where it is solid, where it is fragile, and what still needs to be verified.
Initial Info Score reference
| Score | At a glance | Main criteria |
|---|---|---|
| A | Highly reliable | Sources identified, facts cross-checked, clear distinction between information and analysis. |
| B | Generally reliable | Solid base, but a few minor areas to clarify or update. |
| C | Needs verification | Plausible information, but incomplete, partial, or not yet sufficiently cross-checked. |
| D | Low reliability | Fragile sources, slanted framing, confusion between fact, hypothesis and commentary. |
| E | Highly subjective | Strong opacity, high emotional charge, risk of manipulation or deception. |
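To make the table above concrete, here is a minimal sketch of how five 0-100 sub-scores could be weighted and mapped onto the A-E grades. The sub-score names, weights, and grade thresholds are illustrative placeholders, not the project's adopted criteria, which the dossier says must be defined publicly and collectively.

```python
# Illustrative sketch: aggregate 0-100 sub-scores into an A-E Info Score.
# Weights and grade bands below are placeholders, not adopted criteria.

WEIGHTS = {
    "sources": 0.30,
    "cross_checking": 0.25,
    "clarity": 0.15,
    "neutrality": 0.15,
    "uncertainty": 0.15,
}

# Lower bound (inclusive) of the weighted average for each grade.
GRADE_BANDS = [(80, "A"), (65, "B"), (50, "C"), (35, "D"), (0, "E")]

def info_score(sub_scores: dict[str, float]) -> tuple[str, float]:
    """Return (grade, weighted average) from 0-100 sub-scores."""
    total = sum(WEIGHTS[name] * sub_scores[name] for name in WEIGHTS)
    for floor, grade in GRADE_BANDS:
        if total >= floor:
            return grade, round(total, 1)
    return "E", round(total, 1)

# A well-sourced, clearly framed article lands near the top of the scale.
grade, total = info_score({
    "sources": 90, "cross_checking": 85,
    "clarity": 80, "neutrality": 75, "uncertainty": 70,
})
```

Keeping the weights and bands in plain data structures, rather than buried in code, is what makes the rules easy to publish, version, and contest, as the governance section requires.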
Complete application schema — MVP for articles and textual posts
The objective is not to manufacture a machine that decrees the truth, but a tool to support critical reading that makes the methodological soundness of a piece of content visible.
Essential modules
| Module | Role | Inputs / outputs | Priority |
|---|---|---|---|
| Web interface | Collect the link or text; display the score and explanation. | Input: URL / text. Output: legible analysis report. | Very high |
| Content ingestion | Download the content, clean the text, extract title, author and date. | Input: web page. Output: normalised text. | Very high |
| AI analysis engine | Identify claims, sources, tone, uncertainty and nature of the content. | Input: text. Output: variables and argued summary. | Very high |
| Scoring engine | Compute sub-scores then the final A-E grade with public rules. | Input: variables. Output: score + sub-scores. | Very high |
| Evidence layer | Preserve the elements justifying the grade: quotes, metadata, inconsistencies. | Input: analysis. Output: evidence file. | High |
| Administration console | Review borderline cases, track appeals, version the rules. | Input: flagged cases. Output: decisions and log. | Medium |
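The ingestion module's job of "clean the text, extract title" can be sketched with the standard library alone. This is a toy version: a production module would also handle encodings, boilerplate removal, and author/date metadata, as the table indicates.

```python
# Minimal ingestion sketch: pull out the <title> and the visible text of a
# page, skipping scripts and styles, using only the standard library.
from html.parser import HTMLParser

class ArticleExtractor(HTMLParser):
    """Collect the <title> and the visible text fragments."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.chunks = []        # visible text fragments
        self._in_title = False
        self._skip = 0          # nesting depth of <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif not self._skip:
            self.chunks.append(data)

def normalise(html: str) -> tuple[str, str]:
    """Return (title, whitespace-normalised body text)."""
    parser = ArticleExtractor()
    parser.feed(html)
    return parser.title.strip(), " ".join(" ".join(parser.chunks).split())
```

The normalised text this produces is what the AI analysis engine would receive as input.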
User journey
- The user pastes the link of an article or textual post, or copies the text directly.
- The application extracts the main content and detects the available metadata.
- The AI engine identifies the key claims, the type of content and the language register.
- The scoring engine assigns sub-scores: sources, cross-checking, clarity, neutrality, uncertainty.
- The interface returns an Info Score, a plain-language summary, and a section on “what still needs to be verified”.
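Two of the signals mentioned in the journey, emotional language and acknowledgement of uncertainty, can be illustrated with naive word-density heuristics. The word lists below are tiny illustrative samples, not a real lexicon; in practice this step would rely on a trained model or a curated, multilingual lexicon.

```python
# Toy sketch of two language signals the analysis engine might compute:
# density of charged vocabulary and density of hedging/uncertainty markers.
# The word lists are illustrative samples only.
import re

EMOTIONAL = {"outrageous", "shocking", "disaster", "scandal", "incredible"}
HEDGES = {"may", "might", "reportedly", "allegedly", "suggests", "preliminary"}

def language_signals(text: str) -> dict[str, float]:
    """Return the share of emotional and hedging words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"emotional_density": 0.0, "hedge_density": 0.0}
    n = len(words)
    return {
        "emotional_density": sum(w in EMOTIONAL for w in words) / n,
        "hedge_density": sum(w in HEDGES for w in words) / n,
    }
```

Even this crude measure shows the intended direction: high emotional density pushes the neutrality sub-score down, while explicit hedging is treated as a sign of methodological honesty, not weakness.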
Recommended simple stack to start
- Web frontend first, before a mobile app, to test the service quickly and at moderate cost.
- Separate backend API for ingestion, analysis, scoring and logging.
- Relational database for analyses, scoring rules and appeals.
- Document storage for captures, extracted texts and evidence files.
- Complete logging so that each score can be explained and audited.
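The last point, complete logging so that each score can be explained and audited, can be sketched as a record stored alongside every analysis. The field names are illustrative; the essential idea is that each score carries the rule version, sub-scores, and supporting quotes that produced it.

```python
# Sketch of an auditable score record: every grade is stored with the
# rule version, sub-scores, and supporting evidence, so it can be
# replayed and contested later. Field names are illustrative.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ScoreRecord:
    url: str
    rules_version: str            # which public rule set produced the score
    sub_scores: dict
    grade: str
    evidence_quotes: list = field(default_factory=list)
    scored_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        """Stable hash of the record, usable as an audit reference."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:16]
```

Because the fingerprint is derived from the full record, any later tampering with an archived score would be detectable, which supports the right of appeal described in the governance section.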
2. A civic startup project
Info Score can be carried as a civic startup: an entrepreneurial project in service of a public good. Its ambition is not to capture attention, but to make the informational space more legible, more accountable, and more educational.
Mission
To help any person assess more lucidly the methodological reliability of textual content, and to understand, in plain language, what sustains or weakens an item of information.
Public problem addressed
- Massive confusion between information, analysis, opinion and manipulation.
- Poor legibility of sources and of the degree of uncertainty.
- Viral sharing of content before verification.
- Need for a simple tool for citizens, teachers, associations and newsrooms.
Value proposition
- A score that is simple to read.
- A detailed but accessible explanation.
- Public, contestable criteria.
- A logic of civic education rather than sanction.
Target publics at launch
- Citizens wishing to quickly check an article or a post.
- Teachers and media-literacy educators.
- Associations fighting disinformation.
- Newsrooms or collectives wishing to audit their methodological clarity.
MVP — what we build first
- Analysis of pasted texts and links to public articles.
- Detection of content type: information, analysis, opinion, satire, unknown.
- Sub-scores: sources, cross-checking, language, clarity, uncertainty.
- Final report with summary, score, caveats and items to verify.
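The content-type detection named above can be illustrated with naive surface cues. Real detection would rely on a classifier; the keyword rules below are placeholders showing only the intended output categories.

```python
# Naive illustration of content-type detection from surface cues.
# The keyword rules are placeholders, not a real classifier.
def detect_content_type(text: str) -> str:
    """Return one of: information, analysis, opinion, satire, unknown."""
    t = text.lower()
    if any(cue in t for cue in ("in my opinion", "i believe", "we should")):
        return "opinion"
    if any(cue in t for cue in ("satire", "parody")):
        return "satire"
    if any(cue in t for cue in ("our analysis", "this suggests")):
        return "analysis"
    if any(cue in t for cue in ("according to", "reported", "announced")):
        return "information"
    return "unknown"
```

The explicit "unknown" category matters: when the system cannot tell what kind of content it is reading, it should say so rather than guess.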
Proposed governance
- Public, versioned scoring criteria.
- Log explaining each automated decision.
- Possibility of contestation and human review.
- Pluralist committee bringing together journalists, educators, jurists, technicians and citizens.
Possible distribution of roles among founding friends
| Area | Mission | Ideal profile |
|---|---|---|
| Vision & partnerships | Carry the manifesto, federate allies, seek funding and support. | Strategic profile, strong network, confident public speaker. |
| Product & UX | Design the user journey, test the legibility of screens and reports. | Pedagogical sense, design, attention to clarity. |
| Tech & AI | Build ingestion, analysis, scoring, traceability. | Full-stack developer or technical pair. |
| Method & governance | Draft criteria, manage appeals, ensure ethics. | Journalist, jurist, researcher, educator. |
12-month roadmap
- Months 1-2: clarification of the manifesto, choice of criteria, mock-ups and first user tests.
- Months 3-4: development of the web MVP for articles and textual posts.
- Months 5-6: pilot phase with a small circle of users, collection of feedback, refinement of the score.
- Months 7-9: publication of a beta, setting up the governance committee and an appeal mechanism.
- Months 10-12: educational, civil-society or media partnerships, then reflection on extending toward images.
Invitation to join the venture
We are looking for friends able to combine intellectual rigour, civic sense, taste for precision and the joy of building. Info Score is not only a product: it is a proposal for democratic health.
This dossier constitutes a starting base. The criteria, the architecture and the governance must be tested, amended and refined collectively before any public launch.