The Purpose of Things Isn't to Stop Doing Things.
Some thoughts on the dangerous idea that we should outsource many aspects of our lives to technology.
As a writer and, more broadly, a human with varying interests, I have found ChatGPT to be an incredibly useful tool. A few examples:
When I want a synonym for a word I’ve used too many times, ChatGPT makes for an excellent thesaurus. Just yesterday, I asked it for synonyms for “bombastic,” and it gave me Pompous, Grandiloquent, Highfalutin, Pretentious, Overblown, Turgid, Florid, Ostentatious, Inflated, and Magniloquent.
When I’m reading a newsletter in Spanish, I can copy and paste sentences and paragraphs to double-check meanings and context, minimizing lost reading time.
When I’m researching a subject and need help understanding a complex process, such as the difference between French AI startup Mistral’s “sliding window attention” model and the models used by OpenAI and Meta, ChatGPT can provide a pretty accurate overview that points me in the right direction.
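(As an aside, the core of that particular distinction is simple enough to sketch in a few lines of code. The toy Python below is my own illustration, not anything from Mistral’s or OpenAI’s codebases: it contrasts a standard full causal attention mask with a sliding-window causal mask, where each token only attends to a fixed number of recent tokens.)

```python
import numpy as np

def full_causal_mask(seq_len: int) -> np.ndarray:
    # Standard causal attention: token i can attend to every token j <= i.
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # Sliding-window attention: token i attends only to the last `window` tokens,
    # i.e. tokens j with i - window < j <= i.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

print(full_causal_mask(6).astype(int))
print(sliding_window_mask(6, window=3).astype(int))
```

The practical difference is that the sliding-window mask caps how many tokens each position looks back at, so the per-token attention cost stays roughly constant as the context grows instead of scaling with the full sequence length.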
From writing to reading to learning, ChatGPT helps me do a “thing” more effectively, but it does not replace the “thing” I am doing. One issue I have with the AI boom is the growing idea that AI shouldn’t just be a tool, but that it should become the output itself. Case in point: a message I received earlier this week:
Hey Jack,
I've been following your content for a while now, and I'm always impressed by your work. Your writing style is truly unique and captivating – it really draws me in and doesn't let go until I've finished reading.
Given your innate abilities, I believe you have an exciting opportunity: creating a GPT model that writes in your distinctive style, tone, and format.
Here's the idea:
1.) Load a substantial amount of your past content (long, medium, and short-form) into a GPT model's backend.
2.) Configure the model to learn from and emulate this content.
3.) Use it by simply prompting it with topics you're interested in.
This isn't meant to replace you, but rather to expedite your writing process significantly. You could skip drafts 1 & 2 or maybe just pick up complementary ideas!
Bonus suggestion: Include engagement metrics (likes, comments, etc.) for each piece of content. This could help the AI learn which components generate the most success, potentially uncovering patterns we might not even be aware of.
I'm sure you're already exploring similar ideas, but if you ever want to nerd out, just let me know. I'd be excited to be a part of your process, even if only for a moment!
Best,
[Redacted]
Disclaimer: Some of this message was created using AI
I do love the irony of someone using AI to send a message encouraging someone else to write with AI. Anyway, while I appreciate messages from readers, this particular reader missed the point of writing.
The problem with taking an AI-first approach to tasks is that it robs you of everything that you would have gained by doing the work yourself.
I don’t write to simply generate a 1,200-word output. I consider writing to be an extension of my curiosity, and the writing process itself is what turns a rough idea into a finished product. I begin with a vague idea based on some observation of the world, and I put that on paper. As I’m writing that idea, two distant synapses in my brain connect, bridging seemingly unrelated ideas. Maybe an anecdote from my time playing football relates to risk-taking in financial markets. Maybe a conversation I had at the bar the previous weekend sends me in a new direction entirely. As I continue down this path, the story evolves until it hardly resembles the original idea. Writing is a metamorphosis that turns vague abstractions into novel ideas, but you have to go through the writing process to connect the various points along the way.
Sure, if I pasted all of my blogs into ChatGPT and each week said, “write a 1,200-word blog post about risk,” it could generate something that somewhat matched my tone, but it wouldn’t be me, and it would deprive me of the benefits of writing the piece myself.
Writing leads to clearer thinking because the act of writing reveals gaps in your reasoning. A half-formed idea can sound quite clear in your head until you try to put it on paper, but once you struggle to support that idea, you realize you have more work to do. If I outsourced the writing to JackGPT, I would sacrifice the deeper understanding derived from doing the work myself for some marginally quicker output.
This idea of outsourcing everything to AI wasn’t just a one-off recommendation by one of my readers, either. Last month, Zoom’s CEO Eric Yuan echoed this sentiment in an interview with The Verge’s Nilay Patel, saying he wants AI clones to take our places in most meetings. Some quotes:
NP: When you think about the elevator pitch for Zoom, you had the founder, and you had to go raise money once upon a time. In the beginning, it was very simple, right? Videoconferencing is very hard. It requires some dedicated hardware and expensive connections, and Zoom is going to be as simple to use as a consumer app. It’s videoconferencing, but simple. What’s the elevator pitch now?
EY: What we are doing now, it’s really looking at your entire schedule, how to leverage Zoom Workplace to help you out. Essentially, you can leave Zoom Workplace, and Zoom Workplace can help you get most of your work done, right? That’s our pitch. We are not there yet.
Today for this session, ideally, I do not need to join. I can send a digital version of myself to join so I can go to the beach. Or I do not need to check my emails; the digital version of myself can read most of the emails. Maybe one or two emails will tell me, “Eric, it’s hard for the digital version to reply. Can you do that?” Again, today we all spend a lot of time either making phone calls, joining meetings, sending emails, deleting some spam emails and replying to some text messages, still very busy. How [do we] leverage AI, how do we leverage Zoom Workplace, to fully automate that kind of work? That’s something that is very important for us.
NP: We have a big audience of product managers, engineers, and designers. I think what you’re saying is they’re going to send AI avatars to their stand-ups every morning.
EY: More than that. It’s not only for meetings. Even for my emails. I truly hate reading email every morning, and ideally, my AI version for myself reads most of the emails. We are not there yet…
NP: I’m assuming when you looked at your calendar today and saw a Decoder session, you were going to come to that on your own. What would you have sent an AI avatar to instead?
EY: I think an AI avatar is essentially just an AI version of myself, right?
NP: Sure.
EY: Essentially, in order to listen to the call but also to interact with a participant in a meaningful way. Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process. We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs. Everyone shares the same LLM. It doesn’t make any sense. I should have my own LLM — Eric’s LLM, Nilay’s LLM. All of us, we will have our own LLM. Essentially, that’s the foundation for the digital twin. Then I can count on my digital twin. Sometimes I want to join, so I join. If I do not want to join, I can send a digital twin to join. That’s the future.
Maybe I’m in the minority on this, but I don’t think it’s in a company’s best interest for an AI “clone” of its CEO to respond to emails and attend conference calls with decision-making authority! CEOs have to make decisions, and those decisions can’t simply be probability-weighted responses based on past actions. What’s the point of even having a CEO then? Zoom’s stock is down 40% since before the pandemic; you might as well use an AI based on Steve Jobs or Elon Musk to run the company instead.
An over-dependence on artificial intelligence is often just lethargy disguised as efficiency, and the “outsource everything to AI” crowd ignores the fact that the work is often more valuable than the output.
I have two predictions regarding the broader use of AI as a crutch:
First, folks who are willing to go out of their way to add a human touch to their work will only become more valuable as more people elect to outsource their work to AI. A thoughtful email (or, even better, a handwritten note) will stand out in a sea of AI-generated messages.
Second, the ability to discern value and insight from a flood of information will grow more and more important as the cost of producing data approaches zero. Models and reports that took days to build can be AI-generated in seconds, but what are you going to do with their results? The world will only grow noisier, and your ability to answer that question is what really matters.
The purpose of “things” shouldn’t be to eventually replace ourselves doing those “things.” We should take pride in doing good work, not shirk it.
- Jack
I appreciate reader feedback, so if you enjoyed today’s piece, let me know with a like or comment at the bottom of this page!