
- Written by max
No medium has ever made knowledge as accessible as GPTs
Some communication experts and tech leaders, like Sam Altman and Demis Hassabis, emphasize how GPTs (Generative Pre-trained Transformers) ultimately democratize access to information, especially specialized knowledge—that is, information that typically requires the mediation of an expert or specialist.
Drawing a parallel between this transformation and Marshall McLuhan's theories, we can consider GPTs a new medium that radically changes the rules of access to information, making knowledge "conversational."
Compared to books, which are static, or classic search engines, which demand the meticulous work of sifting through links, Large Language Models act as an active and tailored medium, marking the shift from simple information scanning to genuine dialogue. Experts describe this transition as the move "from retrieval to understanding." The "syntactic barrier" definitively falls: it is no longer necessary to master a field's jargon to understand its concepts, because the tool adapts to the user (for example, by explaining a physics concept through a soccer metaphor). Knowledge becomes "just-in-time" and liquid, extractable at the exact moment it is needed, without having to study entire manuals. For the first time in history, the medium does not offer a monologue but transforms into a Socratic tutor with whom it is possible to converse, rebut, and dissect every concept until it is fully understood.
This extreme efficiency, however, raises the paradox of progress: just as the advent of GPS has eroded our ability to read physical maps, the extreme accessibility of knowledge risks atrophying our capacity to make the effort that understanding requires. On one hand, there is enthusiasm for an unprecedented cognitive democratization, which breaks down the wall of technical language and transforms knowledge from an inaccessible castle into an instant public service. On the other hand, the suspicion of laziness emerges: pre-chewed, "fast-food" knowledge may prevent the brain from building the neural pathways needed to process ideas deeply. It is the difference between comfortably observing a landscape from a helicopter and earning the view by climbing, step by step.
Are we perhaps moving from the era of "knowing things" to the era of "knowing how to ask," where the most precious human skill would be critical thinking, the ability to formulate doubt and to check whether the machine's answer is truthful information or a convincing hallucination? This new human intelligence rests on three pillars: layered curiosity (not settling for the first answer, but investigating the sources and the "whys"), connecting the dots (the human transversality capable of linking distant domains such as physics and poetry, where AI remains sectoral), and taste and intuition (the ability to give meaning and purpose to the ocean of information AI scans). Humans cease to be walking encyclopedias and become true directors of knowledge.
This scenario triggers what sociologists call cognitive polarization, outlining a genuinely bimodal dystopia of access to knowledge. Society would thus distribute itself into two opposite and distant poles. The majority, the mass of passive users who delegate critical thinking to the machine, uses LLMs as a shortcut to avoid effort and risks a severe cultural flattening. The smaller peak is the cognitive elite, a few people who use the LLM as a mental exoskeleton, reinvesting the time saved on information retrieval to develop superior syntheses and produce innovation at previously unthinkable speed. The tool inexorably widens this gap: it empowers those who are already curious and makes the lazy completely dependent, erecting a new, nearly insurmountable class barrier based not on economic wealth but on the ability to think.
The gap becomes even more evident when analyzing the historical evolution of the effort required for knowledge acquisition. In past eras, dominated by oral tradition, manuscripts, and print, access required great energy and very long times (years or months), but the authority of the source—be it the sage, the sacred text, or the editorial filter—guaranteed a very high perception of reliability, nullifying the burden of verification for the learner. The advent of radio and TV provided a shower of knowledge, while the Internet and search engines lowered access times to a few seconds, but introduced information overload and the need to distinguish sources. Today, LLMs completely eliminate the "friction cost": they process the final synthesis in milliseconds. However, if we analyze reliability, the perspective is reversed. The tool is intrinsically unreliable in terms of pure accuracy, as it generates probabilities, not certainties. The user's role should therefore evolve into that of a validator, on whom the burden of critical fact-checking rests.
This paradox generates an epistemological drift in which the ability to inhabit knowledge definitively splits in two. On one side stand the few "savants" who make doubt their method: they know that artificial intelligence calculates probabilities and does not think, and they use it to accelerate synthesis while governing the statistical chaos through cross-verification. On the other, the mass of the "ignorant" mistakes the AI's grammatical fluency for absolute authority: if an output is immediate and does not stutter, it must be true. This attitude leaves the majority vulnerable to a new form of functional illiteracy and algorithmic manipulation, turning accessibility into the opium of the people 2.0, a convenience that atrophies the exercise of critical thought.
The repercussions of this bimodality are also powerfully reflected in market and societal dynamics, as illustrated by the predicted evolution of the influencer sector for the 2025-2026 biennium. AI will not replace every public figure, but it will split the market into two distinct segments. In aesthetic, commercial, and standardized sectors (fitness, gaming, fast fashion), avatars and virtual influencers will take over: they cost half as much, are immune to scandal, and offer unwavering brand consistency, displacing the mass-market product influencers. Conversely, in the niche of values and trust, human thinker-influencers, prized by the elite for their ethics, charisma, and vulnerability, will see their value rise, reaching engagement levels almost triple those of their artificial competitors and using AI only as a tool to amplify their own voice.
The final risk of this bimodal dystopia is that the masses will fall victim to so-called "algorithmic empathy": blindly interacting with synthetic entities perceived as human, they will let the perfection of artificial intelligence render transparency and truth entirely irrelevant. Only a small elite of skeptics will continue to search for the real human spark beyond the screen.
Written with the aid of AI
Translated from Italian by an automatic translation service