7 Deadly AI Sins for UX Professionals

Summary: Succumbing to AI temptations weakens your UX skills. Strive for the AI virtues to keep yourself strong as you use AI in your work.

The 7 deadly sins form a centuries-old religious framework that warns against timeless human temptations that invariably lead to weakness and trouble. Instead, humankind is urged to embrace the 7 opposing virtues, which require extra effort but ultimately strengthen and protect the individual.

In this article, I describe 7 deadly sins for UX professionals using AI in their work and the accompanying 7 virtues they ought to strive for instead.

The 7 AI Sins for UX

We’ve now collectively spent nearly 400 hours with over 3,000 UX professionals from around the globe talking about AI. Obviously, tools, models, and use cases keep changing. However, the fundamental temptations we see UX professionals struggling with are becoming clear, and they are unlikely to change any time soon.

Succumbing to these temptations repeatedly will weaken any UX professional and ultimately undermine the tasks and projects they undertake. Practitioners must expend the extra effort to embrace the "virtues" that will protect their professional growth and the quality of their work.

7 AI Sins → 7 AI Virtues

  • Outsourced thinking → Ownership
  • Wasted time → Automation
  • Lost details → Selectivity
  • Isolated ideation → Inclusion
  • Naïve trust → Skepticism
  • Bland taste → Originality
  • Defensive outlook → Experimentation

1. Outsourced Thinking

There is no comfort in a growth zone, and no growth in a comfort zone. If you want to learn, you must exercise your mind in new ways. There is no way around this.

This is why repeatedly outsourcing your thinking to AI is so dangerous. It involves regularly engaging in any of the following:

  • Taking its recommendations at face value
  • Letting it synthesize multiple inputs for you
  • Asking it to provide the substance of what you create
  • Relying on it as a final quality check
  • Letting it provide first drafts or initial ideas

This proverbial "brain drain" is among the top concerns UX practitioners raise with us. My answer? Ask yourself this very simple question: If AI disappeared tomorrow, could you still confidently do what you do?

Virtue: Ownership

While AI can, and probably should, have a place in many tasks, turning to it first relinquishes ownership of the direction you take and masks the weaknesses in your thinking and abilities.

Strive to think independently first — even if just for a few moments — before you ask AI. Be honest with yourself and note your gut feelings, first impressions, and the holes in your abilities.

  • How would you word things?
  • What do you think is important to prioritize?
  • What ideas do you have for solving the problem?

Think first, ask second.

The key is to use AI as a partner, not a guide. We see wise UX folks doing this, like one individual who has worked in UX for over 18 years and who shared with us:

“As you move up within the design realm, especially as you get into more and more leadership roles […], your peer group becomes smaller and smaller. […] I don’t really get to speak to people that have the same level of knowledge as I do as often. […] AI fills that void of conversation that I need to have.”

Whether you’ve been around forever or are brand new to the industry, whether you work with many UX colleagues or alone: just make sure you’re still thinking for yourself and bringing your ideas to AI for another perspective. It’s a rubber duck to help you process your thinking — not a north star to tell you where to go. You must be ready to push back and recognize when it goes wrong.

2. Wasted Time

While AI is often touted as a productivity tool, figuring out how to get the AI to do what you need often takes longer than doing the task all by yourself. AI doesn’t exist in a time vacuum. Any time you spend tinkering with it could always have been spent accomplishing something else instead.

People waste time with AI because:

  1. It’s hard to know upfront how long a task will take with AI, and even after you’ve sunk in a lot of time, you still don’t know whether you’re close to figuring it out.
  2. It’s interesting! Many of us actually enjoy playing with AI and seeing what it can do.

The problem is that most UX work is done on someone else’s paid time, and ultimately, we must produce specific outputs by fast-approaching deadlines, whether AI helped us achieve them or not. If it’s not helping, it’s hurting.

Virtue: Automation

Because it’s tough to know whether investing time into using AI will make you more efficient, you have to be discerning about when you dive in. Experience shows that repetitive tasks benefit the most from AI, because you can invest once in creating an AI bot (like a custom GPT) or a reusable prompt and then recoup that investment every time the task comes around. In contrast, going straight to AI for one-off things often involves a lot of investment for very little gain. Look for opportunities to speed up tasks that you know you’ll be doing again, and again, and again.
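
To make the "invest once, reuse often" idea concrete, here is a minimal sketch in Python of a reusable prompt template for one recurring task: summarizing usability-session notes. The task, the SESSION_SUMMARY_PROMPT wording, and the render_prompt helper are all hypothetical illustrations, not a prescribed workflow; the point is simply that the effort of wording the prompt well is paid once and amortized across every session.

```python
# A hypothetical reusable prompt for a recurring UX task: turning raw
# usability-session notes into a consistently structured summary.
# The template is written and refined once; only the notes change per run.

SESSION_SUMMARY_PROMPT = """\
You are helping a UX researcher summarize a usability-test session.

Session notes:
{notes}

Produce:
1. The top 3 usability issues observed, each with a severity (low/medium/high).
2. One direct participant quote per issue, if available.
3. Open questions to probe in the next session.
Do not invent issues that are not supported by the notes."""


def render_prompt(notes: str) -> str:
    """Fill the reusable template with this session's notes."""
    return SESSION_SUMMARY_PROMPT.format(notes=notes.strip())


if __name__ == "__main__":
    example_notes = (
        "P4 could not find the export button and gave up after 90 seconds; "
        "said 'I expected it under Share.'"
    )
    print(render_prompt(example_notes))
```

The same pattern scales up: a custom GPT is, in essence, a carefully worded template like this plus standing instructions, hosted where the whole team can reuse it.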

3. Lost Details

When you rely on AI summaries of emails, project briefs, research data, or anything else, you sacrifice details and depth. What you gain in efficiency, you lose in richness and nuance. When you are responsible for the details, you should engage with all the information instead of relying on AI summaries. I don’t know one manager who will accept an excuse like “I’m so sorry I didn’t deliver this on time! The AI didn’t include the deadline in its summary of your email…”

Virtue: Selectivity

Do you need to know every detail about everything that crosses your path? No. Sometimes you only need to be informed rather than responsible, and in those cases, a summary is good enough. And the truth is, an AI summary is generally far better than your own quick scan. The key is recognizing when you will be held accountable for understanding the details and avoiding overreliance on AI in those situations.

4. Isolated Ideation

AI is undoubtedly powerful for ideation, but the ideation process is what aligns the people responsible for implementing the outputs. When you spend less time collaboratively thinking, discussing, and prioritizing with other people, they will be less committed to the outputs — especially if they were generated by AI.

Virtue: Inclusion

The truth is that people like the look of their own fingerprints. When they’ve been involved in creating something, they are more committed to implementing it. This doesn’t mean we need to avoid AI in the ideation process; it just means we need to keep including the relevant humans.

What if, rather than having AI suggest all the solutions, you used it to tee up the conversation with other humans? For example, it could help you create some sharp How Might We questions that you’ll use to stimulate important conversations with your colleagues. Or have people work together in groups to collectively write the AI prompts for possible solutions. AI can do a lot of heavy lifting — just don’t let it crowd out the important people you need as your allies.

5. Naïve Trust

Most people know they can’t fully trust information they get from AI, but, when you watch them, very few diligently verify its outputs. It’s easier to just take them as trustworthy. This is extremely predictable: the interaction cost and cognitive effort required to verify AI outputs are high compared with simply cutting your losses and using what it gives you.

Virtue: Skepticism

I’ll spare you a diatribe about the danger of hallucinations, but it is worth your time to find legitimate and trustworthy sources of guidance and inspiration.

For example, if you’re looking to learn about the difference between usability testing and user interviews, AI honestly does a great job and can tailor the information to you (though it learned what it knows from what we at NN/g and others have written). We already see many UX folks turning to it for this kind of help.

However, AI guidance is weak when you are looking for excellent examples of specific and niche things, or when you want to be a true expert and learn from the original sources. And “ChatGPT said we should do this” probably isn’t a very persuasive argument with your stakeholders.

For example, we recently created a new course on writing tasks for usability testing. Try as we might, we could not get AI to write tasks that met our standards. The tasks it created would not have yielded good research data. The more important truth and quality are, the more discerning you need to be about where you get your information from.

6. Bland Taste

Relying too much on AI to suggest a style, or failing to sufficiently refine the style it provides, ultimately results in something generic and bland. This applies to everything from writing tone to UI layouts to survey questions.

It’s not that what AI gives you is always “wrong.” It’s just … unoriginal. Frontier LLMs have been shown to come up with far more ideas, and sometimes even better ideas, than humans. But the more ideas you get from it, the more you’ll see how much they fall into the same patterns and mimic the ideas of others.

The most interesting, clever, original creations still come from humans, not AI.

Virtue: Originality

As AI gives more people the technical capabilities to produce work outside their realm of expertise, ultimately, taste and discernment will differentiate the very best. Very few employers are looking for team members who are really good at repurposing what everyone else has done — something AI makes easy. Sure, you can provide an example for it to mimic, but this is not the same thing as coming up with something original and fresh.

Your future value will directly relate to your ability to take a step beyond what others have done to create something new. Take a side-step and do it a little bit differently by modifying the AI outputs or using them as inspiration for your own creativity.

7. Defensive Outlook

Most humans inherently resist change and like to maintain the status quo. It’s easy to wish AI wasn’t changing things and hope it won’t affect what you do, but the truth is that things in this industry never stay totally still. It’s those who adapt that do best. Give up your pride and self-protection. Give it an honest chance.

Virtue: Experimentation

That said, we see too many organizations where UX folks are getting pushed off the AI diving board before they’re sure they can swim. Constantly playing with new AI tools is a luxury that few UX people enjoy. Our suggestion is to look for inefficiencies, bottlenecks, and opportunities for automation, then give AI a chance there. We like Dr. Ethan Mollick’s suggestion to “bring AI to the table”: give it an opportunity on many of your work tasks, just to see what it can offer you. If it proves unhelpful, don’t force it.

Here’s an internal NN/g example: if you’ve ever taken a live, online course with us, you might have taken the associated exam to work toward UX certification. We end up writing and revising thousands of exam questions, which has been a major bottleneck at times.

So, we trained a custom GPT on all our best practices for writing exam questions. Though simple to create, this little exam bot has saved us countless hours of tedious question drafting and freed us to spend more time carefully revising and editing questions to make sure they fairly assess the material we teach.

Conclusion

It’s not about avoiding AI. It’s about maintaining your own growth and the quality of your work as you use AI. AI will constantly be changing. Never let yourself slip into repeatedly committing the sins that weaken you and your UX skills.

Reference

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.
