AI, Humans, Thinking and Conversations
A conversation over the kitchen table this weekend started with the need to keep a mindful eye on AI-generated content, and finished up being a conversation about what sort of conversation we were having. And whilst I would never claim to be ‘of the moment’ (I can just imagine my children’s faces…), I think this conversation was a particularly pertinent one.
Pilot v Passenger
We started off discussing the level of confidence we can have in AI-generated content. Adoption of AI tools and programmes has, I would say, been pretty widespread amongst my peers (I am 53 in two days’ time). Whilst we recognise the dangers and potential pitfalls, we are seeing huge benefit in areas such as efficiency, ‘polishing’, and supporting and directing/prompting our thinking. Organisations across industries are rightly trialling various tools and agents, in a bid to take advantage of the immense opportunity provided by AI. Our kitchen table conversation, however, provided an immediate surprise. The younger members around the table (early to mid-twenties) were unexpectedly cautious about the increasingly widespread use of AI in the workplace. You might expect that this is because they are the group most likely to be negatively impacted (now that much of the usual work for juniors can be done by AI), but it wasn’t that simple. They expressed a genuine concern for the erosion of the ability to think. A genuine concern that it’s now possible to start off a document, piece of work or project without properly understanding what you are trying to do, or how to do it. And they were concerned by our generation’s seeming reliance on the accuracy of AI-generated content, without casting a proper and critical eye over it.
This led me to reference the excellent article that has just been published in the Harvard Business Review, coining the term ‘workslop’ and arguing that it is destroying productivity (and a big thank you to our Global Head of Faculty at O Shaped, Carrie Fletcher, for bringing this to my attention). The article argues that despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is that AI tools are being used to produce “workslop” - content that appears polished but lacks real substance, offloading cognitive labour onto coworkers. Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration issues. The authors urge leaders to consider whether indiscriminate organisational mandates, combined with too little guidance on quality standards, are encouraging this. To counteract the dangers of workslop, they advocate for leaders modelling purposeful AI use, establishing clear norms, and encouraging a “pilot mindset” that combines high agency with optimism - promoting AI as a collaborative tool, not a shortcut.
Returning to our kitchen table, I was able to share an example where, on being asked to summarise key quotes from a set of interviews into an executive summary, the AI tool in question not only made up a quote but also referenced an interviewee who didn’t exist. This led to an interesting (at least to me) discussion on the distinction between pilot and passenger in our use of AI tools, with the younger generation concerned that we were all being too passenger-like in our use. The HBR article states that how ‘pilots’ use AI is critical. Pilots are much more likely than passengers to use AI to enhance their own creativity, for example. Passengers, in turn, are much more likely than pilots to use AI in order to avoid doing work. Pilots use AI purposefully to achieve their goals.
The Three Types of Conversation
As the conversation continued, and debate and challenge reigned, it brought to mind another fascinating distinction that I had recently read about. Charles Duhigg, journalist and author of the bestseller The Power of Habit, has written another bestseller: Supercommunicators: How to Unlock the Secret Language of Connection. Reflecting on the nuanced, friendly yet incisive debate we were having, I found myself wondering whether it would ever be possible to have this sort of conversation with generative AI.
Duhigg’s premise is that when we're having a discussion, we tend to think that we know what that discussion is about. For example: what the outcome of the meeting needs to be; who is going to pick the kids up from football; or how our day has been. But neuroscientists can now see people's brain activity as they're communicating, and what they have found is that we're actually having many different kinds of conversations at once, each using different parts of the brain. In general, these different kinds of conversations tend to fall into one of three buckets.
There are practical conversations, where we're making plans or solving problems. There are emotional conversations, where I tell you what I'm feeling, and I don't want you to solve my feelings; I want you to empathise. And finally, there are social conversations, which are about how you and I relate to each other, how we relate to society, how we think of other people. Basically, what researchers have found is that any good discussion will actually have all three conversations within it. But if you and I are having different kinds of conversations at the same moment, it's very hard for us to fully hear each other, and it's very hard for us to feel connected.
How many times have you ‘downloaded’ to someone at the end of the day, just wanting to get things off your chest, only for them to try and come up with solutions? Which is very much not what you were looking for! Or how about a work meeting that is ostensibly a practical conversation, but which is actually being overshadowed by an emotional conversation that really needs to be had on that level before the practical stuff can be considered?
Duhigg’s advice is that a meta conversation, where we're talking about how we communicate with each other, is really, really important. The best communicators, the consistent supercommunicators, will engage in lots of meta conversation without you even realising it, because they pose it as, “Hey, that sounds like something that was hard. Did it bother you a lot? Tell me about that.” What they're really asking is: was this an emotional issue for you, or was this a practical issue?
My takeaway on reading this: on the odd occasion my husband comes home and complains about something, I need to ask him: ‘Is this just a vent because you need to get something off your chest, or do you actually want me to come up with solutions?’ I’m hoping it will do wonders for marital harmony…
And my takeaway from the conversation around the kitchen table was: as humans we are complex, messy, wonderful and weird, with our brains wired over millennia. That’s going to be very hard for any AI tool to match in its entirety…
To read the HBR article referred to, see https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
For a summary of a conversation between Charles Duhigg and David Epstein, with more detail on Supercommunicators, see https://davidepstein.substack.com/p/how-to-be-a-supercommunicator