Welcome to our second edition of 2024! This edition: how to optimize your "invisible work", why humans often "over-trust" AI, and a great interview about optimizing work environments for the humans in them.
Let's get going!
Invisible Work
Video Length: 2 minutes 06 seconds
Is your calendar an accurate reflection of the work you do every day? If you are like the roughly 80% of people who do unstructured, ambiguous work, the answer is no. Most people don't formally allocate enough time for routine tasks like checking email, scheduling meetings, etc., despite the fact that these things happen every day!
It turns out that this behavior not only makes us feel constantly overwhelmed - it is also really inefficient: failing to acknowledge invisible work means we end up spending more time doing it!
In this edition, my animated twin explains how to identify this work and handle it more efficiently, so that every week goes more smoothly. Check him out in the video below:
AI - An Issue of Trust
This week, I ran a LinkedIn Live Event alongside BillionMinds advisor Sean Alexander (a recording of the event is available here).
In the event, Sean and I looked at the challenges of building a workforce that can maximise the benefits and minimise the risks of AI. But there was one area a few of you asked us to go into in more detail: the challenge of "over-trust" - the situation where humans assume AI is right when it may well not be.
You might have read about the case of lawyer Zachariah Crabill, who used ChatGPT to help him write a motion. ChatGPT saved him time, but it cost him his job - it turned out that the motion included several entirely fake case citations.
Why did this happen? Well, to answer that fully, it's important to look at two things: 1) the technology being used, and 2) the humans interacting with it.
Let's start with the technology (I'll deliberately simplify here - after all, this is not a technical article). My favourite way to think of AI is as a system (usually with a degree of automation) that uses data to make predictions. Generative AI systems like ChatGPT use the words in your input, together with an enormous underlying dataset (the information the system "learns" from), to predict a response. Any answer it gives you is, in a way, a double guess - the first guess being what you meant when you asked the question, and the second guess being what a "good" answer to that question looks like.
Either or both guesses can be wildly off, so Generative AI is not actually intelligent in any usual sense of the word. Intelligence implies the application of knowledge, skills, and judgement. Generative AI is just a pretty good guesser.
None of this would be a problem if we truly internalized it when interacting with Generative AI, and changed our own behavior accordingly. But most of us don't - at least some of the time.
The underlying reasons for this are complicated, but they seem to be related to the helpful way ChatGPT presents answers. Classic IT systems rely on you to "ask questions" in a predictable way (using menus, or perhaps a formula, as in Excel). This allows the system to respond predictably, in ways you don't need to double-check. If you get the question right, you can be confident the answer will be right too.
More recently, systems like Google Search have predicted what we mean when we ask for information, but have usually provided us with a list of possible answers in the form of human-generated content (for example, a ranked list of websites). As humans, we can then sort through this list and use our judgement as to what is most appropriate.
But Generative AI skips that step and jumps straight to what the system thinks is likely to be the best answer, constructing its best guess on the fly. Presented this way, it can seem every bit as authoritative as Excel. For some people, this "authority" is so baked in that they quickly come to trust it more than information from a human. After all, we intrinsically know that humans are fallible, and we are used to technology systems not making those kinds of mistakes.
Today, ChatGPT carries an out-of-the-box warning: "ChatGPT can make mistakes. Consider checking important information." But every day, humans fail to fully heed that warning - for the same reason Zachariah Crabill didn't. As tired, overworked humans, we are motivated to lean on Generative AI to do our work for us. But as valuable as these systems are, we need to shift the way we use them, so that they provide possibilities and we provide our uniquely human intelligence.
The Whole Human at Work
Talking of intelligence - Laura Hamill, PhD, has a whole heap of it. Dr. Laura co-founded Limeade and has spent decades researching how to create highly effective workplaces built on a foundation of highly engaged employees. She recently joined me on our Humanity Working podcast to talk about her findings. In a wide-ranging discussion, we talked about how work has changed in many organizations, to the point where there isn't even full agreement on what someone should do in a job. We also looked at how unrealistic expectations can hurt both employers and employees. This episode is definitely worth a listen (which you can do by finding Humanity Working on your favourite podcasting platform) or a watch (which you can do by clicking below).
Thanks for Reading!
Please let me know your thoughts in the comments (I will respond).
If you liked this newsletter, be sure to subscribe! I also post regularly outside of this newsletter - you can make sure you miss nothing by following me and "ringing" the 🔔 in the top right corner of my profile to be notified when I post.
You can also subscribe to our YouTube channel and follow me on X.