Our Skimming Future

AI breakthroughs and announcements have been coming thick and fast for the last year. Starting last summer, DALL-E, Midjourney, and Stable Diffusion have allowed us to build impressive images in response to written prompts. Then in the fall came OpenAI's public access to ChatGPT, then Bing Chat, and now GPT-4, all of which have impressed us with their ability to create blurry JPEGs of the written word. There has been a lot of consternation about the implications of these tools for careers and society. There have likely been converts to the rapture-for-nerds >>AHEM<< singularity as apocalyptic fears are stoked. There has been some breathless writing about chat "personalities" that speaks to anxieties about potential consciousness in Large Language Models. Plus the bugaboo of misinformation is a constant concern for journalists. I think there will be real consequences, good and bad, but I am not inclined to worry about paperclip problems or sentient machines.

Yesterday (March 14, 2023) Google announced forthcoming generative AI features for their suite of productivity tools, and tomorrow Microsoft is expected to announce similar features for their Office products. It is these efforts that concern me, and my concern has little to do with the capabilities of the machines and everything to do with humans and our tendencies. Here's where I am coming from:

In knowledge work, when you are collaborating with others, focus and attention are the most precious commodities of talented people. Over my working life, technological aids have come into the modern workplace promising efficiency but fracturing the attention and focus of those who work there. Chat notifications, emails, shoulder-taps, meetings where everyone is managing demands while trying to focus on the subject at hand: all of it makes knowledge workers less able to actually do the things they are ostensibly employed to do. Not only that, but all of these demands and distractions happen in environments structured by human relationships and power dynamics. Not only do I have to respond, more than once, to an email thread that has gone off the rails; I have to do it in a way that makes me look smart and self-sufficient.

There is a partial solution to these issues, one that allows people to get into the details and focus: writing. Writing is a tool for communication. More importantly, it is a tool for forming ideas, whether or not the words are ever read by others. Writing is a process by which our thoughts can be faced, evaluated, edited, and organized. It is a truism that in order to find out what you think about something, you have to write about it.

Amazon is famous for requiring a write-up for every meeting, which all attendees then read during the first few minutes of the meeting. This serves two purposes.

  1. It forces the person calling the meeting to form a cohesive context, agenda, and plan. Ideas that were inchoate at first are made concrete as they pass through the mind of the person and onto the page.
  2. Collaborating around written documents makes the ideas more comprehensible to others. It also forces them to sit and read and be prepared in the presence of their peers.

Edward Tufte's excellent workshops on presenting information recommend a practice similar to Amazon's to avoid the scourge of bullet-point slide decks. Within product discussions, design prototypes can serve a purpose similar to writing, cutting through distractions and abstractions to make things concrete.

My concern with tools that make it trivial to write smart-sounding text or build competent designs isn't that they can't do the job, but that over time we won't be able to do the job anymore. It is already commonplace for people to skim the communications they receive at work without really grappling with them. What happens when the people who create those communications can do the creative equivalent of skimming? Instead of doing the mental work to turn a notion into an idea, you can write two bullet points and let the AI generate the rest. Before, it was likely that many people wouldn't really pay attention to what you wrote; now you are barely paying attention to what you write.

Perhaps we are moving into higher levels of abstraction, and we just have to learn to think differently and leave the details to the machines. But in every organization I've been in, a web of people sits below the person tasked with operating in abstraction, doing a tremendous amount of work to turn abstract directives, and the unconsidered effects thrown off in meetings, into something concrete. What happens when we all feel we can operate at the executive level and rely on machines to handle the details? Is that even feasible? And if it is, what will it do to us? I would wager that more endeavors have been derailed by failing to consider the details than by a bad abstraction.

I am not against generative AI. There are exciting use cases, many of them mundane, that virtually every digital product could take advantage of. Using ChatGPT to bootstrap coding projects is a game changer. The process of researching and distilling information will also be greatly enhanced. There is a risk, though, that we end up even less patient with reading anything longer than a summary than we already are.

The downside I'm pointing out is a potential future where everyone pretends to speak and write, and then pretends to listen and read. Meanwhile, the skills that built the huge corpus the AI learned from in the first place begin to atrophy, and the blurry JPEG of the web becomes inexorably more diffuse.
