Thursday, 31 May 2012
Final report
The final report of our User Interfaces project can be found here: Final report Team Chill.
Reflections UI course
This blogpost contains our reflections on the User Interfaces course:
Lieven
Last year, I took the course ‘HCI: principles and design’ at KTH (Stockholm). Iterative UI design using both paper and digital prototypes was part of that course, and we even had a small project to put it into practice. That was just enough to get started. The UI course I took here as part of my HCI programme went far deeper, mostly because we had to work on a real-life scale. The constant need for reporting and feedback from others helped us stay alert and question the course we were following at the time. Evaluating our project, I see two shortcomings, both mentioned before. For starters, we didn’t have that many test users. I do believe we extracted useful information from our test subjects, but I must admit I’m rather reluctant to prompt users too much, which might be another explanation for our limited test audience. Secondly, we focused too much on functionality and not enough on efficiency. As argued before, prototyping has its limits, but that shouldn’t have stopped us from at least trying to incorporate efficiency testing more.
Finally, the methodology and the user’s active role in it are two important things I take with me from this course, together with a reinforced striving to design in a user-centred way – after all, he or she is the one measure of quality that matters.
Yasin
In this course we learned how to design and evaluate a user-friendly application by applying methods that have proven successful over the past years. To achieve this, we iterated through test phases with users, looking for problems and solutions along the way, so that each iteration brought an improvement to our application. Overall, this process leads to the goal: an application that gives users a satisfying experience. By following this methodology we were able to design an application that provides a solution to information overload; without these methods we would probably have ended up with something that wasn’t really a good solution, and lost a lot of time in the process. The course itself is very structured and demands continuous involvement. By publishing results on our blog we got feedback and could apply changes to our design where needed. At first this seems time-consuming, but in the end we saw how important it really was: finding flaws in time, instead of waiting until the end, saves a lot of headaches. Anyone interested in human-centred design should definitely take a look at this course. It’s really worth it, and you’ll have a lot of fun thanks to its interactiveness. By reading and writing comments on other students’ blogs we were able to find issues both with their applications and with ours. Sometimes we compared our design with similar ones (e.g. team Sjiek’s Focus) and incorporated some of their ideas while giving them suggestions as well. In the end this helped us a lot, and our fellow students served as expert evaluators.
Ward
I think this course was very useful in that we were exposed to a real-life usability design process. At first I thought the evaluation techniques presented in the course and used during the project were rather theoretical, but as the Capgemini presentation showed, techniques like paper prototyping in an iterative design are used to develop real-life applications. In this course we were obliged to work actively on the project during the entire semester, and that is definitely a necessity if you want to create a good user interface.
I think a strong point of our project is that we followed a structured approach in each iteration. By continuously checking whether problems from the previous iteration were solved and whether new problems had arisen, I believe we were able to create a decent interface. Of course, if we had been creating a business application, more iterations would have followed. Looking back, I would also start testing the efficiency of the application sooner; sadly this wasn’t possible, since our implementation wasn’t ready at the time.
By being obliged to read and comment on other blogs, we were able to see where other groups were in the development process and what interesting insights they had obtained. The feedback we received from other groups in turn gave us additional helpful insights for our own development process.
Saturday, 19 May 2012
Rationale for our score
At the end of the presentation session, we had to grant each group a number of points. We had 10 points at our disposal... and three minutes to make up our minds, which proved rather short for a final evaluation in this course, although of course we had been able to follow everyone’s progress throughout the year via the blogs. Remarkably enough, the teams that got the highest scores both have a rather minimalistic design... contrary to ourselves. Something to think about.
Friday, 18 May 2012
Presentation of Featr
You can have a look at our presentation of last Tuesday here. It was perhaps a little long, but it does give a nice overview of the progress of our work.
Tuesday, 15 May 2012
Digital prototype iteration 2 (validation)
Introduction
This post describes the validation of the second iteration of our digital prototype. After the first iteration of the prototype, we caught some minor problems and suggested some solutions. In this iteration, we simply wanted to test whether those changes had the desired effect.
1. Method and set-up
The method and set-up are the same as in the previous iteration, except that we dropped the questionnaire, since the goal was only to test whether the small changes had the desired effect. We conducted user interviews in the hall of CW and had the participants carry out exactly the same tasks as in the previous iteration.
To conduct the interviews, we had our test subject sitting behind a computer in the hall of the department, carrying out tasks as instructed by a team member. A second team member took notes of the process, watching the screen and asking some general questions afterwards.
Our prototype is shown in the picture below.
2. Test subjects
As with the previous test, we used engineering students as test subjects, because they were the easiest to find and because in this iteration we are only testing functionality, not efficiency. For the tests of our implementation, we hope to find a more varied test panel.
After two test subjects, the same conclusions had come up twice, so we decided not to test any further.
3. Analysis and results
Firstly, we’ll look back on the changes and problems that came out of the previous iteration. Secondly, we’ll discuss new problems that arose in this iteration.
3.1 Review of the changes from the previous iteration
Changing priority names
Naming the priorities high, normal and low instead of 1, 2, 3 completely solved the problem of test users not knowing which priority was the highest. None of the users had problems when we asked them to label a message with the highest priority.
Recovery from error
When the test users were asked to undo a delete operation (bring back a message they had sent to trash), all of them still went looking in the trash to retrieve the message. When we asked whether they had considered using the ‘undo’ button, they indicated they would have used it if they had seen it. To solve this problem we will increase the size of the undo image and add the word ‘UNDO’ next to it.
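The undo behaviour we expected users to reach for boils down to a small stack of inverse actions. The following is only an illustrative sketch in plain JavaScript (the names `createUndoStack`, `deleteMessage` and the sample messages are made up for this example; our Axure prototype merely simulates undo):

```javascript
// Sketch of a simple undo stack: every operation records how to reverse itself.
function createUndoStack() {
  const inverses = [];
  return {
    record(inverseAction) { inverses.push(inverseAction); },
    undo() {
      const action = inverses.pop();
      if (action) action();
      return Boolean(action);          // false when there is nothing to undo
    },
  };
}

// Usage: deleting a message records how to bring it back.
const inbox = ['budget update', 'team lunch'];
const trash = [];
const history = createUndoStack();

function deleteMessage(index) {
  const [msg] = inbox.splice(index, 1);
  trash.push(msg);
  history.record(() => {               // inverse action: restore the message
    trash.pop();
    inbox.splice(index, 0, msg);
  });
}

deleteMessage(0);
history.undo();                        // 'budget update' is back in the inbox
```

The design choice here is that the delete operation itself knows how to reverse itself, so the undo button never has to inspect the trash.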
Shortcuts
Our digital prototype includes the ability to start a search by pressing Enter after typing the search term. All test users indeed used the Enter key to start their search, so this is definitely an improvement.
As for the Delete key, Axure didn’t offer the possibility to implement removal of messages on Delete. Since none of the test users tried to use the Delete key to remove any messages, this functionality wasn’t really missed. Of course it wouldn’t hurt to add it, but it’s not indispensable.
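The Enter shortcut amounts to a tiny bit of event handling. As a sketch in plain JavaScript (the element ids and the helper name are hypothetical; Axure generates its own markup, so this is not our prototype’s actual code):

```javascript
// Pure helper: should this keystroke in the search box start a search?
// We only trigger on Enter, and only when a non-empty term has been typed.
function shouldTriggerSearch(key, term) {
  return key === 'Enter' && term.trim().length > 0;
}

// Browser wiring (element ids are hypothetical), kept as a comment so the
// sketch also runs outside a browser:
// document.getElementById('search-box').addEventListener('keydown', (e) => {
//   if (shouldTriggerSearch(e.key, e.target.value)) {
//     e.preventDefault();                         // avoid a form submit
//     document.getElementById('search-button').click();
//   }
// });
```

Separating the decision (`shouldTriggerSearch`) from the DOM wiring keeps the shortcut logic testable on its own.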
Read messages
We tested whether it was clear to the users that read messages change colour by showing them a list of messages and asking which of them had already been read. It was clear to all test users that the least bright messages were the ones that had already been read.
Advanced search revisited
The advanced search panel is no longer shown automatically once you search for a term, and this proved to be an improvement: the test users’ uncertainty about whether they should press search again disappeared.
The advanced search panel itself now looks like the figure below. It was a lot clearer to the test users how it worked. The fact that the source selection is now done in the right bar, where it always happens, wasn’t clear to all test users, but once we explained it, they could see why it was logical to place it there. Everything else in the advanced search panel was clear to the test users in this iteration.
Where am I?
When we asked people after a certain task where they were in the application, they were able to answer right away. When we asked them how they knew where they were, they told us they used the breadcrumbs in the top bar. One test user did remark that the breadcrumbs could be a bit larger. We agreed, so we decided to make them a bit larger, so that people will definitely know where in the application they are.
3.2 New problems and solutions
Unclear time periods in the next week, next month and later tabs
We came across one new problem in the user test: it wasn’t clear to the users what time period we meant with the next week, next month and later tabs. We had already figured out ourselves that this could create problems, so we specifically asked the users what time period they thought the different tabs represented. They indicated that these time periods were indeed confusing. We asked whether it would be clearer if we named the tabs “>1 week”, “>2 weeks” and “>month”. They told us this would be clearer, but this is one of the things we would check in a following iteration, if we had the time.
4. Conclusion
This second iteration with our digital prototype seems to show that, aside from some very small remarks, we’re on the right track for the functionality of our application. Of course the efficiency still needs to be tested with our implementation; this will be done in the following week. The problems of the previous iteration have largely been solved. For the one newly discovered problem we suggested a solution to the test users, and they seemed to agree with it. Of course, to be absolutely sure we’d have to do a third iteration of the digital prototype, but we prefer testing our implementation to check the efficiency of our application.
Tuesday, 8 May 2012
Digital prototype iteration 1
Introduction
This post describes the first iteration of our digital prototype. Although it is quite static and only covers certain actions, it shows quite reliably what the final application will look like and gives a realistic idea of its functionality. First we’ll describe our method, set-up and test subjects; the method and set-up are almost exactly the same as in the previous iteration. Next, we describe the results of this iteration, both evaluating the changes made last time and addressing the new problems we met. Finally, we look ahead to the next iteration.
Monday, 7 May 2012
Status update 7/5
We've finished the usability tests with the first version of our digital prototype. A report will be posted this evening. We're going to apply the changes discussed in that report to the mockup and subject it to some more usability tests later this week.
We also decided to go ahead and create a real-life application with the Google Web Toolkit (GWT), so we can test the impact of a large number of messages and obtain a better measure of the efficiency of our design. We are going to finish the implementation before the presentation of May 15th. We actually hope to finish testing the implementation by then as well; in that case we could already present those newly obtained results.
An updated planning can be found below. Yasin put more effort into the implementation than planned, since he had to become familiar with the GWT API. For this reason we've reduced some of his effort on other tasks (e.g. reporting).
Friday, 4 May 2012
Status update 4/5
Our digital prototype is finally sufficiently functional to start user interviews. The user scenarios we used for our last paper prototype iteration are all supported by the mockup, though the behaviour of the mockup is still limited to what we want the users to achieve. For this reason, we are having some difficulties converting the mockup into a real-life application. The mockup is exported as a combination of HTML and JavaScript. We already found a library to deal with RSS feeds, so in principle it should be possible to combine it with the mockup (this was our initial idea).
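To give an idea of what combining feed data with the mockup could involve, here is a deliberately naive sketch in plain JavaScript that pulls item titles out of an RSS document. All names are hypothetical, and a real implementation would rely on the feed library (or a proper XML parser) rather than regexes:

```javascript
// Naive sketch: extract the <title> of each <item> from an RSS feed string.
// Regex-based parsing is fragile and only meant to illustrate the data flow.
function extractItemTitles(rssXml) {
  const titles = [];
  const itemRegex = /<item>([\s\S]*?)<\/item>/g;
  let match;
  while ((match = itemRegex.exec(rssXml)) !== null) {
    const title = /<title>([\s\S]*?)<\/title>/.exec(match[1]);
    if (title) titles.push(title[1].trim());
  }
  return titles;
}

const sampleFeed = `
  <rss><channel>
    <title>Example feed</title>
    <item><title>First post</title></item>
    <item><title>Second post</title></item>
  </channel></rss>`;

extractItemTitles(sampleFeed);   // ['First post', 'Second post']
```

Note that the channel’s own title is skipped, since only titles nested inside an `<item>` element are collected; the extracted titles could then be rendered into the mockup’s message list.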