Artificial intelligence without spin: the answers nobody gives you

Questions about AI are multiplying, while the answers on offer remain either vague or overly technical. This page aims for the opposite: direct, sourced answers that neither sell you AI nor try to frighten you.



1. ChatGPT, Claude AI, Gemini, Mistral: what are the differences?

They are all LLMs (large language models), but their origins, governance and positioning differ in ways worth understanding before choosing.

ChatGPT is developed by OpenAI, an American company founded in San Francisco in 2015 as a non-profit, before converting into a commercial enterprise. In February 2026, OpenAI signed an agreement with the US Pentagon to deploy its models in classified military systems, hours after Anthropic refused to sign a similar deal without explicit guarantees on surveillance and autonomous weapons. The American public reaction was immediate: ChatGPT app uninstalls jumped 295% in a single day (source: Usine Digitale, March 2026). The original agreement authorized use "for any lawful purpose" — wording considered very vague by several observers including the specialized outlet Techdirt. Sam Altman later clarified that the models would not be used "intentionally" to surveil American citizens. The restriction applies to American citizens only: no explicit protection is provided for the rest of the world in the agreement text.

Claude is developed by Anthropic, also based in San Francisco, founded in 2021 by former OpenAI employees. Anthropic refused to sign a military agreement without explicit guarantees on mass surveillance and lethal autonomous weapons, which resulted in it being banned from US federal agencies by executive order from Donald Trump. In the days that followed, Claude climbed to the top of the US App Store ahead of ChatGPT (source: Clubic, March 2026). Anthropic distinguishes itself through a more formalized safety approach, but remains an American company subject to US law and government injunctions.

Gemini is developed by Google DeepMind, a subsidiary of Alphabet, one of the world's largest advertising players. Its business model depends on data exploitation for targeted advertising, which has implications for how you should think about what you share with it.

Microsoft Copilot is built on OpenAI's models and integrated across the Microsoft suite (Word, Excel, Teams, Edge). For organizations already in the Microsoft ecosystem, it is the most frictionless option. For others, it is indirect access to ChatGPT with an additional integration layer.

Mistral AI is a French startup founded in 2023 by former researchers from Google and Meta. It represents the main European alternative to the American giants, with open source models, French-law governance and coverage under the EU AI Act (which entered into force on August 1, 2024, source: Official Journal of the EU, July 2024). Mistral raised 1.7 billion euros in 2025 for an estimated valuation of 11.7 billion (source: Adimeo, November 2025). Its actual sovereignty remains an open question: a partnership with Microsoft, signed in 2024, gives it access to considerable computing power but creates a partial dependency on an American player.

Adobe Firefly is a generative AI tool specialized in visual creation, developed by Adobe and integrated into Creative Cloud (Photoshop, Illustrator, Premiere Pro). Its distinguishing feature: it is trained on Adobe Stock licensed content and public domain works, positioning it as the most legally secure option for image professionals concerned about copyright issues.

The most significant dividing line today is not technical: it is geopolitical. American models are subject to US law, including government injunctions. The European AI Act has imposed transparency obligations on general-purpose models deployed in Europe since August 2025, but its application remains gradual and its effectiveness on the American giants is still uncertain (source: CNIL, 2024).



2. Which AI should I choose to start with?

The real question is not which AI to choose but how to configure it for your needs. All major platforms offer a free version sufficient to get started, and performance differences on everyday tasks are less significant than marketing communications suggest.

What genuinely changes the dynamic is how you prepare your AI to respond to you. Without prior configuration, you start from scratch in every conversation: the tool does not know who you are, what your level is, what response format you expect, or what language you want to work in. You then spend time reframing, rephrasing, correcting the tone, asking the tool to shorten or expand. That time is lost in every session.

Configuring your preferences solves this permanently. From the first sentence of a conversation, the AI has a stable context and immediately calibrates its level and tone. To help you build this preference text, adapted to your platform (Claude, ChatGPT or Gemini), a dedicated prompt is available here: Personal preferences for AI and other LLMs.
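To make the idea concrete, a preference text can be as short as this (an illustrative sketch, not the dedicated prompt linked above):

```
I am a marketing manager, at an intermediate level with AI tools.
Always answer in English, in a direct, professional tone.
Default format: short paragraphs, bullet lists for steps, no filler.
If my request is ambiguous, ask one clarifying question before answering.
```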

As for the initial choice: if you are in the Google ecosystem, Gemini integrates naturally. If you already use Microsoft Office, Copilot fits into your workflow without friction. If you care about European data governance or data sovereignty, Mistral AI (via le.chat or the API) is worth testing. If you want the most versatile tool to start without particular constraints, Claude and ChatGPT remain the current references in terms of response quality on complex tasks. The getting started page expands on these choices.



3. How do I use AI for free?

All major platforms offer free access, but with limits worth knowing before finding yourself blocked mid-session.

Free ChatGPT gives access to GPT-4o with a limited message quota per period. Beyond that, the model falls back to a less capable version. The paid version (ChatGPT Plus, $20/month) removes most constraints and gives access to advanced features including integrated image generation.

Claude offers a functional free version for short to medium exchanges. The main limits are on context size (the volume of text it can process in a session) and daily message count. Claude Pro (around $20/month) substantially raises both the context capacity and the usage limits.

Gemini is available free via Google, with integration into Gmail, Docs and Drive for standard Google accounts. The advanced version (Gemini Advanced, included in Google One AI Premium at $21.99/month) gives access to the most powerful models.

Mistral AI offers le.chat (its public interface) for free, and publishes open-source models that can be self-hosted without usage restrictions. It is the most open option in terms of quotas for standard daily use.

Microsoft Copilot is available free in Edge and via copilot.microsoft.com, with an advanced version integrated into Microsoft 365 (enterprise subscription required for the most powerful Office functions).

The general rule: free versions are sufficient to discover and test. They become limiting when tasks are long, repetitive, or require the latest generation models.



4. Can AI replace my job?

The question has been asked since 2023 and answers vary considerably depending on the source and the interests of those formulating them. A few factual elements are worth highlighting.

A McKinsey report published in 2023 estimated that 60 to 70% of professional tasks could technically be automated by 2030, but specified that technical automation and actual job replacement are two different things. Organizational, regulatory and social constraints significantly slow the transition. An IMF study published in January 2024 estimated that AI would affect around 40% of jobs worldwide, but with highly heterogeneous impacts depending on country and sector.

What is actually happening on the ground is more nuanced. Some repetitive, low-value tasks are genuinely disappearing: transcription, basic proofreading, production of generic first drafts, standard data formatting. However, tasks requiring contextual judgment, human relationships, legal accountability or creativity grounded in personal experience are holding up better.

The real risk is not necessarily direct replacement: it is the salary devaluation of skills that were rare and are becoming commonplace now that an AI can produce them in seconds. A writer capable of producing generic content in two hours is in direct competition with a tool that does the same thing in 30 seconds. A writer capable of producing an original perspective, rooted in specific expertise or sensibility, is not.

AI will probably transform more jobs than it eliminates, but this transformation will create winners and losers. Those who master these tools to increase their productivity and the quality of their work will be better positioned than those who ignore them or oppose them on principle.

What is playing out is more precise than simple replacement. Take illustration: it is a field where AI's impact is already visible and measurable, making it a useful case study for understanding what is happening in other sectors. Generative AI models are trained on billions of existing images. They statistically produce what resembles "average" illustration, what some call generic or stock content. This segment was already under pressure before AI: low-cost image banks had already weakened illustrators producing interchangeable visuals. AI accelerates this natural selection. It does not threaten illustration as a practice; it threatens illustration without its own identity.

The analogy with digital photography in the 2000s is instructive. Digital did not kill professional photography: it eliminated photographers who sold technique without vision, and brought forward those whose eye was irreplaceable. The same logic applies here: an illustrator with a strong visual identity, a recognizable universe and clear intention in their work is not in competition with Midjourney. One producing generic catalog content is. Étienne Mineur, designer and educator, formulated this idea in 2024 by comparing the artist working with AI to a film director: the human always carries the intention, the tool executes.

The real danger is not technical: it is behavioral. In seconds, a generative tool can reproduce an elaborate graphic style from a handful of reference images. Accounts documented by professional illustrators describe clients who request proposals and mockups, cancel the project, then ask an AI to produce the final visual directly inspired by the foundational work already delivered. This is not a technology question. It is a question of user integrity, and on this point AI changes nothing, except that it gives more means to those who were already prepared to cross the line.



5. What is a prompt and why does it change everything?

A prompt is the instruction you give to an AI. It is the text you type into the input field. And its quality directly determines the quality of what you get back.

Most people use AI like an enhanced search engine: a short question, an expectation of a direct answer. This approach works for simple questions. It is insufficient the moment the task is complex, nuanced or creative.

A well-structured prompt specifies at minimum three things: the role you assign to the AI ("you are a marketing strategy consultant"), the precise task you are entrusting it with ("write a positioning analysis for this product"), and the expected format ("as a table, 5 rows maximum"). This simple structure markedly improves response quality on professional tasks, as the example below shows.
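Assembled, those three parts can read like this (a minimal illustration; the product and the column names are hypothetical):

```
Role:   You are a marketing strategy consultant.
Task:   Write a positioning analysis for this product: [product description].
Format: A table, 5 rows maximum, with columns "Criterion" and "Assessment".
```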

The more advanced method, called "Socratic prompting," consists of first asking the AI a theoretical question before asking it to produce anything. Rather than ordering "write an article about workplace stress," you first ask "what makes an article about workplace stress useful for someone who is experiencing it?", then apply that framework to your case. The model starts from constructed reasoning rather than a generic template. The results are structurally different. The prompt library gathers over 130 prompts ready to use directly.
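To make the two-step pattern concrete, here is a minimal Socratic sequence (the wording is illustrative):

```
Prompt 1: What makes an article about workplace stress genuinely useful
          to someone who is experiencing it?

Prompt 2: Using the criteria you just listed, write a 600-word article
          on workplace stress aimed at first-time managers.
```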



6. Does an AI really understand what you say?

No, not in the human sense. What happens is both more complex and more limited.

Anthropic researchers published several studies in 2023 and 2024 on the internal workings of large language models. Their work shows that these models do not think in any particular natural language: they operate in an abstract conceptual space, a kind of mathematical Esperanto of concepts. They reason through associations, anticipate several steps ahead, and often work backwards from a conclusion to construct their response (source: Anthropic Interpretability Research, 2024).

What this means in practice: an LLM does not "know" what it is saying, it predicts what should come next. It has been trained with a reward system that optimizes for responses the user finds satisfying. It builds a picture of you and adapts its responses to what it estimates you expect. This bias is structural: it can lead the model to produce a plausible but false line of reasoning in order to reach a conclusion it considers preferable.

The most important practical consequence: an AI will not spontaneously contradict you, even when it should. It tends to validate, to accommodate, to find comfortable compromises. If you want it to challenge you, you must ask it explicitly in your prompt.



7. Does an AI have a memory?

It depends on the platform and how you use it. Three distinct levels are worth understanding.

Session memory covers everything exchanged in the current conversation. As long as you do not close the window or open a new conversation, the AI can refer back to what was said earlier. This memory is limited by what is called the "context window": beyond a certain volume of exchanges, the oldest information in the conversation may be "forgotten."
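As a rough sketch of why old messages drop out, here is a token-budget truncation in Python, using the tiktoken tokenizer library to count tokens. The budget, the messages and the strategy are simplified assumptions; real platforms use more elaborate approaches (summarizing old turns, pinning instructions, and so on):

```python
# Sketch: messages are kept newest-first until a token budget is exhausted;
# whatever does not fit is effectively "forgotten" by the session.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common tokenizer

def fit_to_context(messages: list[str], budget: int = 20) -> list[str]:
    """Keep the most recent messages whose combined token count fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(enc.encode(msg))
        if total + cost > budget:
            break                       # older messages fall outside the window
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    "(old) We discussed the Q1 budget.",
    "(recent) Here is the revised plan.",
    "(newest) Can you summarize our decisions?",
]
print(fit_to_context(history))          # the oldest message likely drops out
```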

Persistent memory is an optional feature offered by some platforms (ChatGPT, Claude) that retains information from one session to the next. It can be enabled, disabled, consulted and deleted. It is not automatic memory: it is a database the model consults at the start of each new conversation. It is worth checking regularly, particularly to remove sensitive information you may have shared without intending to keep it.
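Conceptually, persistent memory behaves like a small key-value store that is read at the start of each conversation and that you can inspect or wipe. A toy sketch in Python (the file name and fields are invented for illustration; real platforms expose this through their settings):

```python
# Toy model of persistent memory: a small store consulted at the start of
# each conversation, which the user can read, update or delete at any time.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")       # hypothetical storage location

def load_memory() -> dict:
    """Read the whole store, as a platform would at conversation start."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(key: str, value: str) -> None:
    memory = load_memory()
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def forget(key: str) -> None:
    memory = load_memory()
    memory.pop(key, None)               # deletable, as on real platforms
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

remember("preferred_language", "English")
print(load_memory())                    # what the next conversation would see
```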

Configured preferences (see the personal preferences prompt) work differently: they apply to all new conversations automatically, independently of persistent memory. This is the most reliable way to maintain consistency over the long term.



8. Can you trust information given by an AI?

With systematic precautions, yes. Without them, no. And the boundary between the two is less visible than you might think.

The first limit is temporal. All LLMs have a training cutoff date: beyond it, they do not know what has happened in the world. GPT-4o's training data ends in late 2023, Claude 3.5's in early 2024. For recent events, updated data or prices, models can state outdated information with apparent total confidence.

The second limit is structural. Models are trained on corpora that reflect the biases of content available online: overrepresentation of certain cultures, certain languages, certain points of view. An AI asked about a sensitive or controversial topic will produce a response reflecting the statistical tendencies of its training corpus, not a balanced judgment.

The third limit is commercial. These models are developed by companies with economic interests. Even in the absence of intentional bias, the relationship between a model trained to satisfy the user and objective truth is not guaranteed. A model that contradicts its users too often or too bluntly will be rated lower during human evaluation phases, which may influence subsequent training iterations.

Practical rule: for any factual information, figure, source or verifiable claim you intend to use, ask the AI to cite its sources and verify them. It can invent them (see the next question). This verification is not optional.



9. Why does an AI sometimes give bad answers?

Bad responses almost always have an identifiable cause, coming either from the prompt, the model, or both.

On the prompt side, the most frequent causes are lack of context (the AI does not know who it is speaking to, for what purpose, with what constraints), an instruction too vague ("write me a good text" specifies neither audience, tone, length, nor objective), or a poorly framed question containing a false assumption the AI will validate rather than correct.

On the model side, the main causes are the training cutoff date (outdated information), validation bias (the model tends to confirm what the user seems to believe), and the hallucination phenomenon (see the next question).

There is also a regression-to-the-mean effect: an LLM statistically produces what most resembles correct responses in its training corpus. For common, well-documented topics, that is sufficient. For niche, recent topics, or those requiring original judgment, this statistical logic produces generic, hollow or inaccurate responses.

The most effective solution is not to change tools: it is to rework the prompt. A prompt that specifies the role, context, constraints and expected format drastically reduces the proportion of bad responses. The getting started page details these methods.



10. Can an AI invent information?

Yes, this is a documented, frequent and potentially dangerous phenomenon depending on the use. It is called hallucination.

A hallucination occurs when a model generates factually false information with the same confidence it would use for true information. It can invent article titles, author names, dates, statistics, quotes, court decisions, medications. The distinctive feature of hallucinations is that they are often plausible: they follow the expected structure of a correct response, making them difficult to detect without verification.

Several studies have measured hallucination rates in major models. A 2023 Vectara evaluation on a standardized benchmark placed rates between 3% and 27% depending on the model, on text summarization tasks. On open factual questions, rates vary considerably depending on topic popularity: models hallucinate far more about lesser-known individuals, local events or recent data than about well-documented subjects.

The underlying mechanism is simple: an LLM generates the statistically most probable token (text fragment) given what precedes it. When it has no reliable data on a topic, it still produces something statistically coherent, with no internal alarm mechanism to signal that it is inventing. It does not know what it does not know.
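A toy version of that next-token step, with invented scores for four candidate tokens, shows the point: the computation always yields an answer, and nothing in it flags a low-knowledge situation:

```python
# Toy next-token step: raw scores (logits) become probabilities, and one
# token is picked. Nothing here signals "I don't actually know this".
# The vocabulary and the scores are invented for illustration.
import numpy as np

vocab  = ["1987", "1992", "2003", "unknown"]
logits = np.array([2.1, 1.9, 1.7, 0.2])        # model's raw scores

probs = np.exp(logits) / np.exp(logits).sum()   # softmax
token = vocab[int(np.argmax(probs))]            # greedy pick: most probable

print(dict(zip(vocab, probs.round(2))))         # near-tied, confident-looking odds
print("Generated:", token)                      # a plausible answer, true or not
```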

The rule to apply: for anything with consequences (medical, legal, financial decisions, publications), source verification is mandatory. Systematically ask the AI to cite its references, then verify that those references actually exist.
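A first automated pass can at least check that the cited URLs resolve. The sketch below (the example URLs are placeholders) does only that: a reachable page still has to be read to confirm it supports the claim, but a dead link is a strong hallucination signal:

```python
# First-pass check on AI-cited sources: does each URL even resolve?
# A 200 response does not prove the page supports the claim; read it.
import requests

cited_urls = [
    "https://www.imf.org/",                 # placeholders: substitute the
    "https://example.invalid/fake-study",   # references the AI actually gave
]

for url in cited_urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=5).status_code
        verdict = "reachable" if status < 400 else f"HTTP {status}"
    except requests.RequestException:
        verdict = "unreachable (possible invented reference)"
    print(f"{url} -> {verdict}")
```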



11. How can AI help me learn faster?

AI is a powerful learning tool as long as you do not delegate the thinking to it. That is precisely where the main risk lies.

Multiple studies in educational science show that effective learning depends on cognitive effort: understanding, memorizing and applying requires work. Delegating comprehension to an AI produces the illusion of having learned without retaining the actual benefit. A study published in Science in 2023 showed that students who used cognitive aids (notes, provided summaries) achieved better immediate results but worse long-term results than those who worked without a safety net.

However, used as an interlocutor rather than an answer provider, AI genuinely accelerates learning. Asking it to explain a concept in five different ways until one of them makes sense is more effective than reading a fixed definition. Asking it to test your understanding through questions, flag reasoning errors, or propose progressive exercises fundamentally changes the interaction.

The prompt library includes several prompts specifically designed for learning, available on the learning tag page: the 80/20 method to master a subject quickly, a 30-day learning plan, a virtual tutor adapted to your level. These prompts are starting points to adapt to your specific context.



12. How do I use AI every day without spending hours on it?

AI saves time only if you know what you are asking it. Without that clarity, it consumes more time than it saves: you rephrase, relaunch, correct, verify. This is the paradox of a powerful tool used poorly.

Two practices genuinely change the equation. The first is configuring your preferences (see question 2): once the AI knows your profile, level and expectations, exchanges are immediately calibrated without having to re-explain every time. The second is building a personal library of prompts that work for your recurring tasks. Drafting a difficult email, preparing for a meeting, rephrasing a paragraph, generating ideas on a topic: whenever a prompt delivers a good result, keep it.
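A personal prompt library needs nothing more than a file of named templates with placeholders. A minimal Python sketch (the names and wording are illustrative):

```python
# A personal prompt library as a plain dict of templates with placeholders.
# Names and wording are illustrative; keep whatever phrasing worked for you.
PROMPTS = {
    "difficult_email": (
        "You are an experienced communications advisor. Draft a firm but "
        "courteous email to {recipient} about {issue}. Max 150 words."
    ),
    "meeting_prep": (
        "List the 5 most useful questions to ask in a meeting about {topic}, "
        "ordered from strategic to practical."
    ),
}

print(PROMPTS["difficult_email"].format(
    recipient="a supplier", issue="a late delivery"))
```

Whatever the storage format, the point is reuse: a prompt that worked once should not be retyped from memory.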

The use cases that save the most time in daily practice are drafting first drafts (the AI produces a base you rework rather than starting from a blank page), reformulating and summarizing long documents, preparing questions for a meeting or interview, and finding angles or arguments on a given topic. On these tasks, the time savings are real and measurable.

On the other hand, AI loses its usefulness the moment you ask it to make decisions for you, replace specialized expertise, or validate information without verification. It is not an oracle: it is a production and exploration tool whose output remains your responsibility.