Testing your IA with Treejack
To start, I have a confession to make: I love Treejack. So please know that this blog post is completely biased. I love Treejack because, for me, it is permanently tangled with the fabulous job of creating awesome information architectures (IA). And no, this is not a paid endorsement :)
It gives you certainty in a world of ambiguity
It is super easy to use
It helps you produce a great case to convince stakeholders
What is Treejack?
Treejack is an IA testing tool created by Kiwi company Optimal Workshop. It lets you validate and find problems in a draft IA in collaboration with users. You ask users to do things in your draft IA and if they can’t do them, you know you need to make improvements.
Content IA vs Product IA
I am assuming that if you have got this far you know what IA is, but it is worth making a distinction between content IA and product IA. In the world of digital content, IA refers to the global nav, the left-hand nav and sometimes even the global footer. In the world of product, IA refers to toolbars and other interaction points or affordances. A word of warning: Treejack is better suited to digital content than to product. That does not mean you can’t use it for product IA, but some workarounds are necessary.
When it comes to information categorisation you have a lot of choices. You need to decide which organisational scheme you want to use: for example, subject, user, version, product, cloud/server, location, time, numbers, categories or totally random - like alphabetical :)
There are many, many factors that influence the human desire to categorise things including culture, community, brain function, interest level, task type and time.
A ten year old child may classify animals in a familiar, everyday way. But when you think about it, that is just one culturally and historically accepted means of categorisation. The fictitious Borges taxonomy, taken from an ancient Chinese encyclopædia entitled Celestial Emporium of Benevolent Knowledge, could be just as valid :)
As long as the categorisation scheme you choose makes sense to your target users and has a strong information scent (by which I mean clear wayfinding clues), the sky is the limit. Use the same names for products, order functions in the same way and provide consistency between products wherever possible.
Cognitive load and mental models
In the ‘olden days’ of paper library cards we could only use one classification system per book, per card (yes, there are exceptions), but in today's digital world faceted classification is possible. You can facilitate users' journeys via many different paths. One destination can have many different labels. When we add content modelling and syndication into the mix this becomes even more powerful. Our job as architects is to identify the most likely path for the most users.
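To make the idea of faceted classification concrete, here is a minimal sketch in Python. The book titles are borrowed from this post's reading list, but the facet names and values are invented for illustration; the point is only that the same destination is reachable through several different facets.

```python
# A minimal sketch of faceted classification: each item carries several
# facets, and any facet can serve as a path to the same destination.
# Facet names and values below are invented for illustration.
books = [
    {"title": "Ambient Findability", "subject": "findability",
     "audience": "designers", "format": "paperback"},
    {"title": "Card Sorting", "subject": "research methods",
     "audience": "researchers", "format": "paperback"},
    {"title": "Content Everywhere", "subject": "content strategy",
     "audience": "writers", "format": "ebook"},
]

def find(catalogue, **facets):
    """Return every item matching all of the given facet values."""
    return [item for item in catalogue
            if all(item.get(key) == value for key, value in facets.items())]

# The same book is reachable through more than one facet (path):
by_subject = find(books, subject="findability")   # one match
by_format = find(books, format="paperback")       # two matches
```

Contrast this with a paper library card, where each card pins a book to exactly one place in one scheme.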
The art of labelling
You need to decide whether you are going to go creative or practical when choosing your IA labels. Can I please beg you, from the bottom of my secret librarian heart, to go practical. Labels are not a place for coolness, humour or being clever :) They work best as exact, well-thought-out descriptions of what they contain (be it content or an interaction point). If you have a bunch of content about apples, please label it “apples”, not “keeping the doctor away”. If this really hurts, let's talk about it more :)
Why Treejack is awesome
As designers, content strategists and writers we are no strangers to ambiguity. However, ambiguity in IA is a big no-no. Evidence shows that users are happy to click five to seven layers deep into an IA if they have a very strong information scent. Treejack is a surefire way to be confident that you have removed that ambiguity for users.
What Treejack does
Treejack gives you confidence in your decision making, which can result in increased momentum. You can see problems and make fixes more quickly. For example, it helps you identify honey pots: labels and clusters of content that (wrongly) attract people and take them off course. Think of them as the sirens of your digital landscape. Treejack also helps you identify poor labelling, similar labels, or places where there are too many options. In one example, "Products and applications" was a honey pot; both that label and "Working with us" needed to be reconsidered.
Treejack is also a great way to secure stakeholder buy-in and sign-off because of the way it visualises results. The results diagrams tell a cohesive and strong story. It is really easy to show how and why your IA works well for users.
What Treejack does not do
You may find that there are three or four really great ways to arrange your IA. Treejack is not going to help you decide which is the best of those great ways; instead it highlights when you have a weak IA. Nor is Treejack a substitute for extensive user research and analysis. It will not help you with developing personas, understanding user scenarios, or understanding how your users naturally group information, concepts or tasks. Optimal Workshop (maker of Treejack) has other tools that may help with that, for example their card sorting tool Optimal Sort. What Treejack does really well is help you eliminate problems in your IA quickly and easily.
Tips and tricks
My first piece of advice is to collate and version all the ingredients you need for your survey. You will probably run at least three surveys over the same chunk of IA, and you will need to be clear about where you are up to and what you have changed. Once you have identified your typical user, survey in small groups, and repeat two or three times on problem areas only.
What you will need:
IA - The tab-separated hierarchy you want to survey
Candidates - The users you are going to invite to use your IA
Scenarios - The tasks you are going to ask candidates to complete
Profile questions - The demographic information you collect on candidates to prove your survey is valid
I prefer to generate these in Mindmeister or Excel/Google Sheets so I can copy and paste easily after each survey. Your three main objectives here are communication, collaboration and versioning. See how I use Mindmeister to prepare an IA.
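Since the IA ingredient is just tab-separated text, here is a sketch of what a small chunk might look like before you paste it in. The labels are invented, and I'm assuming one extra tab per level of depth:

```text
Home
	Products
		Wiki
		Issue tracker
	Support
		Documentation
		Community forum
	Working with us
```

Keeping this in a spreadsheet, one level per column, makes it trivial to copy, paste and version between survey rounds.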
Similar to the usability testing method suggested by Steve Krug, you only need six to ten candidates per survey. After only six responses you will immediately see where your problem spots are. Don’t waste your candidates by asking for hundreds of responses. You just don’t need them.
I recommend you conduct at least three rounds of surveying to make sure you have not introduced new problems with your improvements. In the same way you can introduce new bugs when trying to resolve a bug in code, you can introduce IA problems in new places when you make your first round of changes after surveying. I recommend you don’t use the candidate service Treejack offers; instead, ask our research team how to get the details of customers.
Try conducting the survey in person with a candidate at least a couple of times. It is really illuminating to watch people use the survey, as opposed to just analysing the results remotely. It will give you great clues about how people hack the survey. For example, I once watched a candidate ignore the scenario task and spend a couple of minutes exploring the IA. I can now recognise that behaviour in the results and know to disregard those responses.
Getting your scenarios right can be a bit tricky. I have seen a survey round fail not because the IA did not work, but because the scenario was badly or confusingly worded. You also need to be very careful to map your scenarios to the right place in your IA. It is possible to map your scenario to multiple destinations but I don’t recommend this.
Your scenario needs to be open enough not to become a word-matching exercise, and closed enough that the survey is testing the IA, not the user's interpretation of the scenario.
Make sure you match your scenarios to your target audience. You can see the following scenarios are built for different audiences.
Some dummy scenarios:
"You have just been told you will be using Bamboo at a new job - where do you get a crash course in how and why to use it?"
"You heard about an Atlassian wiki product at a conference that people were saying was pretty good. You think it started with a C? Where would you find out what it is called?"
"Imagine you are looking for a program to help you manage a bunch of work coming into your unit. You want to be able to allocate and track the work. Where would you find out if Atlassian makes a product like that?"
Profile questions give you confidence that you have surveyed the right candidates. Ask your recruiter for a good spread of geography, role and experience. The profile questions help you understand and confirm your survey results. When you socialise your final IA you want to be able to say that you targeted representative users. Having a broad set of users is the only reason you might choose to survey more than ten people in one go.
Some example profile questions:
How long have you been in your current role?
How long have you used Confluence for?
What is your role?
What State do you work in?
Forming the right team
Back in 2012 I reworked an IA for Foxtel. It was a great project because it was super focussed and we had excellent team balance. I was the IA expert, working with two guys who were content and customer experts. It is key to find the right balance in your team. Find people who really know our users, who really know our content and who really know IA/Treejack.
Reading the results
The results diagrams in Treejack are fun. I remember the first time I saw one, I thought "how the hell do I read this?", but with practice they become very straightforward and easy to interpret.
For example, results that are mostly green show an IA chunk that doesn’t need much more work.
Results that are mostly red, show an IA chunk that will need a rework.
How do you test your IA?
I'd love to hear how you test your IA. Do you use similar methods? If you are in Sydney and want to talk about this more, come to the Sydney Content Strategy meetup in June.
Further reading
Women, Fire, and Dangerous Things by George Lakoff
Information Architecture by Christina Wodtke and Austin Govella
Information Architecture for the World Wide Web (polar bear book) by Rosenfeld and Morville
Card Sorting by Donna Spencer
Ambient Findability by Peter Morville
Content Everywhere by Sara Wachter-Boettcher
Content Strategy for the Web by Kristina Halvorson
Everything is Miscellaneous by David Weinberger