The new knowledge monopoly: how Silicon Valley took control of our thoughts
Thinking in public: working ideas out, out loud.
I’ve been thinking a lot recently about power. And more specifically, about the power that comes from controlling mass-adopted AI models.
A lot gets written about who controls AI: which governments are leading the AI arms race; which corporations are winning the talent wars; who’s controlling the world’s compute. And the battle for mainstream dominance is just as fierce. OpenAI reported last month that they’d hit 700 million weekly active users and are on track for one billion users by the end of this year. For anyone not clear on the maths: one billion users, against a global population of roughly eight billion, means that by the end of this year, 1 out of every 8 people in the world could be using OpenAI’s large language models alone.
But of course, that 1-in-8 figure isn’t distributed evenly…
In a recent article for The Conversation, Dr. Kimberley Hardcastle discussed how AI (and large language models more specifically) is changing the way people study and learn.
“[W]hat’s being overlooked is how evolving generative AI systems are fundamentally changing our relationship with knowledge itself: how we produce, understand and use knowledge.”
The issue isn’t as simple (or as easy to address) as ‘students are asking ChatGPT to do their homework’. Rather, the more we collectively turn to tools like ChatGPT for information or advice, the more we hand control over how our knowledge is produced to a very select group of people. Dr. Hardcastle articulates this more eloquently than I could:
“When we outsource thought unthinkingly to machines, we hand unprecedented power to shape knowledge to the technology companies developing this evolving technology.”
And it’s not just students and the formal pursuit of knowledge where we’re going to see this effect. Take, as an example, the world of dating. You might chuckle to yourself, but bear with me here.
Numerous studies and reports suggest that one of the most common uses of tools like ChatGPT is for something akin to therapy. People are turning to ChatGPT and its ilk for advice on just about everything, including relationships. Stats also show that:
a) Men are more likely to use ChatGPT than women;1
b) Men are nearly 3x more likely to ask ChatGPT for dating advice2; and
c) Men are far less likely to turn to traditional therapy than women.3
And while this might sound positive at first read (‘great, guys are getting help somewhere, at least!’), it starts to raise some questions once you consider how large language models generate responses and the data on which they’ve been trained.
How Large Language Models (LLMs) learn
That all the major AI developers have trained their frontier models on virtually all the text available on the internet is one of the world’s worst-kept secrets.
At an event earlier this year on safe and trusted AI systems, I listened as an AI researcher from Meta talked about using ‘anything you can find online’ (I might be paraphrasing here, but not a lot…) during the pre-training stages of LLM development. It was such a cavalier comment, and I don’t think anyone else even blinked. But the ease (bordering on excitement) with which this was discussed raised serious ethical and safeguarding questions for me.
Set aside, for a minute, issues of copyright. Just think what text on the internet might say about relationships (and in particular, about women). LLMs are usually built with safeguards in place to prevent problematic outputs, but these safeguards are being built by the same teams that scraped the internet for training data in the first place.
Of the billions of pieces of text (or tokens, if you want to get technical), how many might - if strung together in the most probabilistic way - present dating advice based on a warped view of relationship dynamics? And how easy is it for the very people most likely to receive this advice to spot any bias, bent or bigotry, when it is framed in language that sounds oh-so-agreeable? Especially if they never engage with professional support, and so have nothing to compare it to.
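To make ‘the most probabilistic way’ concrete, here’s a toy sketch in Python. It’s entirely my own illustration, not how any production LLM actually works - real models use neural networks trained on trillions of tokens, not raw counts - but it demonstrates the core point: a model that learns only the statistical patterns of its training text will reproduce whichever view dominates that text.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for scraped web text. Note the
# unhealthy take simply appears more often than the healthy one.
corpus = (
    "good partners communicate openly . "
    "good partners never compromise . "
    "good partners never compromise . "
    "good partners never compromise ."
).split()

# Count which token follows which (a bigram model - vastly simpler
# than a transformer, but the principle is the same: next-token
# probabilities are learned from the training text).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_probable_continuation(prompt: str, steps: int = 3) -> str:
    """Greedily append the statistically likeliest next token."""
    tokens = prompt.split()
    for _ in range(steps):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

# The output reflects whichever view was most common in the data,
# not whichever view is healthiest.
print(most_probable_continuation("good partners"))
# -> "good partners never compromise ."
```

Scale that up to the whole internet, and the most common opinion online quietly becomes the default answer.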
We, as users, have to trust that the teams building the models we use are both aware of these issues and are building appropriate guardrails to safeguard against them. But that trust has not yet been earned.
When teams of incredibly clever, highly paid engineers are tasked with building AI models optimised for performance, what thought is given to their responsibility to ensure the outputs generated do not spread one particular worldview, set of ideologies, or rhetoric? Just from my interactions with that researcher from Meta, it is easy for me to imagine an environment where enthusiasm for the technical outweighs any concern for the ethical.
And that’s before we get onto potential scenarios where the figures leading these companies actively endorse specific ideologies and appear to be comfortable calling for censorship of content that does not fit with their worldview…
Playing the thought experiment through to the end
Mass adoption of LLMs and GenAI, therefore, runs the risk that our very thoughts are mediated (at best; shaped at worst) by a handful of very powerful companies. Companies controlled by an even smaller group of powerful individuals. History itself could be written without us - the public - ever being given a chance to collectively validate it.
Combine this with the fact that mass adoption of GenAI means more people are likely to outsource both writing and reading to machines. The processes of reading and writing are vital for deep thinking, according to computer scientist and best-selling author Cal Newport (cited in The End of Thinking). And while the productivity gains from AI tools are incredibly tempting*, never have we more needed a global population skilled at critical thinking.

This is where the tension sits. We’re trading away the very processes that enable us to think critically about what we’re consuming. Every time we let an AI summarise an article, write an email, or provide advice, we’re not just saving time - we’re ceding a tiny bit of our intellectual autonomy to companies whose motivations we can only guess at.
When those models are built by teams who treat the entire internet as fair game for training data, we’re letting a very specific subset of people – with their own blind spots, biases, and business pressures – become the architects of collective knowledge.
So what do we do with this? I’m not suggesting we abandon these tools entirely (that would make me a hypocrite). But perhaps we need to start treating our interactions with AI less like using a calculator and more like consulting an advisor whose credentials we haven’t properly vetted. One who might have read every toxic Reddit thread, every piece of misinformation, and every bit of propaganda ever posted online - and is now using that to influence how we think about everything from our relationships to our reality.
The fact that 1 in 8 of the world’s citizens could soon be using AI should give us leverage. We must demand more from those making decisions about AI - especially when it comes to how AI creates meaning. We should be entitled to more transparency about how results are generated and what data models have been trained on. And in an ideal world we would have more choice, not less, about the models we use and the companies we patronise.
As always, I come back to the fundamental question of who’s in the room when these decisions are being made. We don’t need more over-eager engineers focused solely on model optimisation; we need voices that represent all of us, asking for the caution and consideration we deserve.
Got you thinking? Check out the articles that influenced me:
How generative AI is really changing education – by outsourcing the production of knowledge to big tech - Dr. Kimberley Hardcastle in The Conversation
*Full disclosure, I actively use AI to both assist with some writing tasks, and to summarise some materials for me to read.
1. https://techcrunch.com/2025/01/29/chatgpts-mobile-users-are-85-male-report-says/
2. https://www.9news.com.au/national/artificial-intelligence-dating-advice-young-people-are-turning-to-chatgpt-for-dating-advice-so-we-put-the-ai-to-the-test/973b35ec-3dd8-48e3-94ac-c002fa31c44a
3. https://www.mentalhealth.org.uk/explore-mental-health/statistics/men-women-statistics