Preparing for AI

AI is everywhere—we’re in the middle of a technology shift that’s as big as (and possibly bigger than) the arrival of the web in the 1990s. Even though ChatGPT appeared almost two years ago, we still feel unprepared: we read that AI will change every job, but we don’t know what that means or how to prepare.

Here are a few ideas about preparing for that shift. The primary thing you can do is understand what AI can and can’t do, and in particular, understand what you can do better than AI. It’s frequently said that AI won’t take your job, but people who don’t use AI will lose their jobs to people who do. That’s true as far as it goes, though it carries a whiff of blaming the victim. The deeper truth is that the people in danger are those who can’t add value to what AI produces, whether they use AI or not. If you just reproduce AI’s results, you’re very replaceable.


How can you partner with AI to deliver better results than either you or AI could on your own? AI isn’t magic. It isn’t some superhuman intelligence, despite the pronouncements of a few billionaires who have a vested interest in convincing you to give up and let AI do everything—or to crawl into a shell because you’re scared of what AI can do. So, here are a few basic ideas about how you can be better than AI.

First, realize that AI is best used as an assistant. It can give you a quick first draft of a report—but you can probably improve upon it, even if writing isn’t one of your strengths. Having a starting point is invaluable. AI is very good at telling you how to approach learning something new. It’s very good at summarizing books, podcasts, and videos, particularly if you start by asking it to make an outline, and then use the outline to focus on the parts that are most important. Shortly after ChatGPT was released, someone said that it was like a very eager intern: it can do a lot of stuff fast, but not particularly well. ChatGPT and the other AI services have gotten better over the past two years, but that’s still true.
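The outline-then-summarize approach described above can be sketched as two prompt-building steps. This is a minimal illustration, not an official workflow: the function names and prompt wording are my own, and actually sending each prompt to a chat service is left to whatever client you use.

```python
# Sketch of the two-pass summarization flow: first ask for an outline,
# then ask for a summary focused on the points you judged most important.

def outline_prompt(source_text: str) -> str:
    """First pass: ask the model for a detailed outline, not a summary."""
    return (
        "Make a detailed outline of the following text, "
        "with one bullet per major point:\n\n" + source_text
    )

def focused_summary_prompt(source_text: str, outline_points: list[str]) -> str:
    """Second pass: summarize only the outline points you picked."""
    chosen = "\n".join(f"- {p}" for p in outline_points)
    return (
        "Summarize the following text, focusing on these points:\n"
        f"{chosen}\n\nText:\n{source_text}"
    )

# Build the prompts offline; submit them with your own API client.
p1 = outline_prompt("the book or transcript you want summarized")
p2 = focused_summary_prompt(
    "the same text", ["The author's main argument"]
)
```

The point of the second step is that you, not the model, decide which parts of the outline matter.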

Second, realize that AI isn’t very good at being creative. It can tell you how to do something, but it’s not good at telling you what to do. It’s good at combining ideas that people have already had but not good at breaking new ground.

So, beyond the abstract ideas above, what do you need to know to use AI effectively?

It’s all about writing effective prompts. (“Prompts” implies chat and dialogue, but we’re using it for any kind of interaction, even (especially) if you’re writing software that generates or modifies prompts.) Good prompts can be very long and detailed—the more detailed, the better. An AI isn’t like a human assistant who will get bored if you have to spell out what you want in great detail—for an AI, that’s a good idea.

You have to learn a few basic prompting techniques.
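As an illustration of what such techniques look like in practice, here is a sketch that combines three widely used ones: assigning the model a role, supplying a few worked examples ("few-shot" prompting), and specifying the output format. These are generic techniques, not a list taken from this article, and the wording is illustrative.

```python
# Sketch of three common prompting techniques assembled into one prompt:
# role assignment, few-shot examples, and an explicit output format.

def build_prompt(role: str, examples: list[tuple[str, str]], task: str) -> str:
    parts = [f"You are {role}."]                   # role assignment
    for question, answer in examples:              # few-shot examples
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {task}")                     # the actual task
    parts.append("Answer in one short sentence.")  # output format
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a patient technical editor",
    examples=[
        ("What does 'latency' mean?", "The delay before a response arrives.")
    ],
    task="What does 'throughput' mean?",
)
```

Notice how much of the prompt is context rather than the question itself; as the article says, spelling things out in detail is a feature, not a burden.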

You also have to learn to check whatever output the AI gives you. We’ve all heard of “hallucination”: when an AI gives you output that has no basis in fact. I like to differentiate “hallucination” from simple errors (an incorrect result), but both happen and the distinction is, at best, technical. It’s not clear what causes hallucination, though it’s more likely to occur in situations where the AI can’t come up with an “answer” to a question.

Checking an AI’s response is an important discipline that hasn’t gotten the attention it deserves. It’s often called “critical thinking,” but that’s not right. Critical thinking is about investigating the underpinnings of ideas: the assumptions and preconceived notions behind them. Checking an AI is more like being a fact-checker for someone writing an important article.

Checking the AI is a strenuous test of your own knowledge. AI might be able to help. Google’s Gemini has an option for checking its output; it will highlight portions of the output and give links that support, refute, or provide (neutral) information about facts it cites. ChatGPT can be induced to do something similar. But it’s important not to rely on an AI’s ability to check itself: all AIs can make subtle errors that are hard to detect, and they can and will make mistakes when checking their own output. This is laborious work, but it’s very important to keep a human in the loop. If you trust AI too much, it will eventually be wrong at the most embarrassing and dangerous time possible.
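One practical way to make this fact-checking discipline concrete is to pull the checkable items (URLs, numbers, quoted claims) out of a model’s answer and verify each one against a primary source. The regular expressions below are illustrative, not exhaustive; a minimal sketch:

```python
import re

def checkable_items(text: str) -> list[str]:
    """Extract the concrete, verifiable bits of a model's answer:
    cited URLs, numeric claims, and quoted statements."""
    urls = [u.rstrip(".,") for u in re.findall(r"https?://\S+", text)]
    numbers = re.findall(r"\d+(?:[.,]\d+)*%?", text)
    quotes = re.findall(r'"([^"]+)"', text)
    return urls + numbers + quotes

output = (
    'Revenue grew 40% in 2023, per https://example.com/report. '
    '"Every job will change."'
)
items = checkable_items(output)
for item in items:
    print("verify:", item)
```

The output is a worklist for a human, not a verdict: each extracted item still has to be checked by you.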

Finally, you have to learn what information you should and shouldn’t give to an AI. How will the AI use the prompts you submit? Many AI services use that information to train future versions of the model. For most conversations, that’s OK, but be careful about personal or confidential information. Your employer may have a policy on what can and can’t be sent to an AI or on which models have been approved for company use. Some of the models let you control whether they will use your data for training; make sure you know what the options are and that they’re set correctly.
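A simple safeguard is to scrub obviously personal data from a prompt before it leaves your machine. Real data-loss-prevention tools do far more than this; the two patterns below (email addresses and North-American-style phone numbers) are only illustrative:

```python
import re

def redact(text: str) -> str:
    """Replace obvious personal identifiers before sending text to an AI."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567 about the merger."
scrubbed = redact(prompt)
print(scrubbed)
```

Even with a scrubber in place, the substance of the prompt (here, “the merger”) may itself be confidential, so a filter like this complements, rather than replaces, your employer’s policy.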

That’s a start at what you need to learn to use AI effectively. There’s a lot more detail—it’s worth taking a few courses, such as those found in O’Reilly’s AI Academy—but this advice will get you started. More than anything else, use AI as an assistant, not as a crutch. Let AI help you be creative, but make sure that it’s your creativity. Don’t just parrot what an AI told you. That’s how to succeed with AI.
