To be clear, I’m not suggesting that anyone stop using their favorite large language model (LLM) conversational UI or GenAI-enabled products because there are downsides, just as I wouldn’t tell you to stop driving a car because it’s possible to get into an accident. With cars, however, we’re taught about the risks of driving, there are clear rules of the road, and a license is required to ensure the safe operation of a vehicle. In the world of GenAI, the risks are not widely understood, the rules are not yet defined, and anyone can use an LLM or GenAI-enabled tool.
Risk #1 — Trusting AI and LLMs the way we trust search engines
We’ve all been using search engines for as long as they’ve been around. For some of you, that’s your entire life. And search engines do what individuals can’t: they find answers to our questions quickly and accurately, across sites all over the globe.
I believe that because of that experience, we want to trust the answers we get back from GenAI just as much as we trust search engine results. Not only does GenAI have the same ability to search, but it can also summarize, analyze, and reach conclusions that no search engine or individual could. However, each of those functions requires judgment calls, and GenAI is subject to errors in judgment.
Risk #2 — AI doesn’t know when it’s wrong
To be fair, this applies to people as well. But when we get advice from a person, we view them as a single person, with a single perspective and knowledge base. When we get advice from GenAI-enabled tools or LLMs, we know that the knowledge base they pull from is vast, drawing on far more data than any individual could know or synthesize. As a result, we are more likely to have high confidence that the answer is correct.
Of course, we’ve all heard stories of GenAI hallucinations, where the model simply makes things up, or of LLMs giving clearly inaccurate information. One personal example: last year I was using an LLM to help me revise a eulogy for my stepmother. I told it what the task was: “I have written a speech for her service that I’d like your help with. Right now, it’s too long, so I’d like your help to shorten it by 25%.” But before I could paste in the eulogy I’d written, the LLM had written a eulogy of its own, for a stepmother with a different name and characteristics. When I pointed out its mistake, it apologized, but at least in that case I could identify the errors myself.
Imagine you are doing research on a brand-new persona and the LLM gives you inaccurate or incomplete information. How will you know? It’s a new persona. The truth is that you won’t know, and neither will the LLM. And while you can ask it what sources it used or how it reached that result, like its human counterpart, it may not actually know.
Risk #3 — AI is biased
Every GenAI system requires training. Just like living things, these systems don’t start out fully formed; they need to be exposed to data to learn, to recognize patterns, and then to create output in a particular way.
While these systems are trained on very large data sets, they are always incomplete in some way, which is one kind of bias. Imagine if everything you’d read about the world was from 1870 and before — your view of how the world works would be biased by what you’d been exposed to.
In addition, the training data may seem, or aspire, to be purely objective, but that data was created by people, and people are biased not only by their beliefs but by their cultures, perspectives, and lived experiences. Imagine if the training data for a GenAI system referenced only a single religion, a single part of the globe, or a single gender. While that may sound far-fetched, GenAI can pick up on nuances of bigotry and misogyny as patterns that become part of its reality.
Now consider what happens when GenAI systems are used as training data for other GenAI systems — that bias is compounded. If you are using a GenAI capability to help build a workflow or a user journey, it may be hard to spot the age bias, technological bias, or cultural bias in the results.
Risk #4 — Use and release of intellectual property
The makers of LLMs and GenAI tools can train their systems, and make them better over time, by exposing them to your real-world problems and data.
Before you drop your company slide deck or survey results into any LLM or AI-enabled tool, you need to be aware of two things:
- The data you share may be used to train the GenAI system — if it is, you may be inadvertently releasing company secrets or intellectual property, which can jeopardize your job.
- The results you get back may inadvertently expose you to using another organization’s intellectual property, such as source code, copyrighted text, images, or patented ideas, which can lead to lawsuits for you and your company. And even when it’s not illegal, it can be awkward: the tool might generate a design for you based on another company’s design system.
Risk #5 — Volume and specificity do not equal completeness or accuracy
I recently spoke with someone developing a GenAI-powered tool for persona and user journey generation (there are a LOT of them, BTW). As a test, I asked them to create a technical persona that I knew very well, having invested years of research in learning everything about it. In a very short time, the product produced a huge volume of skills, jobs to be done, frustrations, and other relevant data.
And if I hadn’t known that persona so well, I’d have been excited. But one of the key things this persona actually does was missing from the output. If I hadn’t spent years researching the persona, I would have believed the tool’s version was complete and accurate. Given the volume and specificity of the data in the results, someone with less experience or knowledge would have no reason to believe the persona was incomplete.
As Kyle Soucy notes in her recent article on using AI for personas and journeys:
“It is crucial to manually review and adjust these AI-generated maps to ensure they incorporate human insights and accurately reflect real user experiences to provide a more actionable and comprehensive view of the user journey. While AI can assist in forecasting and visualizing user behavior, the strategic inclusion of human insights remains invaluable in crafting a journey map that truly reflects and improves the user journey.”