As explained in our previous post, we changed our concept from Rankr to Featr. To test the interface with actual users, we created a paper prototype. Such a prototype lets us get fast feedback and incorporate changes rapidly, because all we have to do is draw a new design. We'll now discuss our first iteration and evaluation.
Friday, 30 March 2012
Change of concept
In our early concept, named Rankr, the idea was to filter and rank incoming streams of messages automatically, so the user could process them efficiently. Because the system takes the burden of filtering and ranking off the user, we thought this should be the key idea to base our application on. On paper it sounded great: very efficient once realized and fully functional. But relying on this kind of automated system is also very optimistic, in the sense that we expect it to filter streams exactly the way the user wants. With current technology this can be done to some extent, but it also has many flaws and weaknesses. One such weakness is the system filtering messages, based on criteria given by the user, and putting them in places where they were not meant to go. This undesirable effect can effectively lose messages. So we decided not to use any form of automated filtering or ranking.
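To make that failure mode concrete, here is a minimal sketch of the kind of naive keyword-based filter we had in mind. This is our own illustration, not the actual Rankr design; the rules, folder names and messages are all invented. A message that merely mentions a trigger word gets routed to the wrong folder, and anything matching no rule silently leaves the main view.

```python
# Illustrative sketch only; not the actual Rankr implementation.
# Rules and folder names are hypothetical user-supplied criteria.
RULES = {
    "urgent": ["deadline", "asap"],
    "social": ["party", "lunch"],
}

def route(message: str) -> str:
    """Return the folder a message is filed under; 'archive' if no rule matches."""
    text = message.lower()
    for folder, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return folder
    return "archive"  # messages matching no rule silently leave the main view

print(route("Lunch tomorrow? No deadline pressure."))  # -> 'urgent': a social message, misfiled
print(route("Quick question about the report"))        # -> 'archive': easily lost
```

The first message is clearly social, but because it mentions "deadline" in passing it lands in the urgent folder; the second matches no rule at all and disappears into the archive. This is exactly the kind of silent message loss that made us drop automated filtering.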
Sunday, 11 March 2012
Application scenario
Introduction
Nowadays many people struggle with the information overload caused by all kinds of modern communication. Several principles have been proposed to cope with this huge activity stream. First, there is the issue of presenting information in a concise, hierarchical and yet informative way; to this end, filtering, clustering and ranking are essential. Another issue involves processing messages: some should be removed immediately, others filed for later. In this course we are given the task of developing a product that supports this functionality in a human-centered way.
Wednesday, 7 March 2012
Brainstorming techniques 1.0, guest star: prof M. Specht
In the last UI session we had an introduction to some brainstorming techniques by professor Markus Specht. The bottom line was to design a program around the keywords 'filter', 'information' and 'priority'. To get us on the right track we started with a core dump: we had to write down all associations with these words. This is a good way to generate a lot of different ideas in a short time span. We think this part of the session could be more productive if it were performed in an iterative fashion; that way, associations made by others can spawn new ideas.
In the next phase, three random associations were handed out to each group, which had to design an application based on those three words. In 15 minutes we had to come up with a concept, a name and a business model, which we then presented to our fellow students. Being forced to do this in such a short time limited us to the essence of the design process. The keywords gave a clear scope, yet the usefulness and attractiveness of the product had to be considered as well, so users would see why they should use it.
The following step was writing a user scenario for the product we designed. This helped us think about specific and concrete problems users could have with the product. The scenario was presented to the class, after which other students were invited to ask questions and point out strengths and weaknesses. These comments served as first feedback on the broad concept.
The last part was evaluating the other designs, by handing out a total of 10 points to the other teams. Unfortunately, we were running out of time and the evaluation had to be done with little consideration, so in our opinion this step didn't really indicate the quality of the designs.
Then again, professional situations are similar in that if you can't sell an idea in five minutes, you won't sell it at all. In real life, though, marketing strategies tend to be a little more thought through.
Things we've learnt about UI evaluation
One thing that really stands out after reading the comments on our Google+ evaluation is that we should have been more explicit about our underlying assumptions. For example, we didn't indicate how we recruited respondents for our QUIS survey or for the user interviews. We did explain why we chose the QUIS 5.0 questionnaire, but we didn't mention it in the extensive document. Nor did we indicate why we chose the actions the test users had to execute in the usability tests.
Also, choosing a general survey always carries the risk that some questions are not 100% applicable to the specific setting. As a result, we sometimes tried to extract information from questions that were not really relevant.
We also noticed the importance of good data representation, which helps the reader understand and verify the results of our tests. In this regard, we think the box plot with the scores of all the QUIS questions was a good choice, but it would have been more informative if we had placed the questions instead of numbers on the x-axis.
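As a quick illustration of that point, here is a small sketch of how such a box plot could be drawn with matplotlib, labelling the x-axis with (abbreviated) question anchors instead of bare numbers. The scores below are randomly generated placeholders, not the data from our actual survey of 11 respondents.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical QUIS-style ratings (1-9 scale) for illustration only;
# three example questions, 11 respondents each.
rng = np.random.default_rng(0)
questions = ["terrible /\nwonderful", "difficult /\neasy", "dull /\nstimulating"]
scores = [rng.integers(4, 10, size=11) for _ in questions]

fig, ax = plt.subplots()
ax.boxplot(scores)
ax.set_xticks(range(1, len(questions) + 1))
ax.set_xticklabels(questions)  # question anchors instead of bare numbers
ax.set_ylabel("score (1-9)")
ax.set_title("QUIS ratings per question")
plt.tight_layout()
plt.show()
```

With the question anchors directly under each box, the reader no longer has to look up what "question 7" means elsewhere in the document.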
We already realized beforehand that 11 users is a small test group from which to extract representative results from a questionnaire. Maybe we should have thought of some additional ways to reach more respondents.