Thursday, 31 May 2012

Final report

The final report of our User Interfaces project can be found here: Final report Team Chill.

Reflections UI course


This blogpost contains our reflections on the User Interfaces course:

Lieven
Last year, I took the course ‘HCI: principles and design’ at KTH (Stockholm). Iterative UI design using both paper and digital prototypes was part of that course, and we even had a small project to put it into practice. That was just enough to get started. The UI course I took here as part of my HCI programme went far deeper, mostly because we had to work on a real-life scale. The constant need for reporting and feedback from others helped us stay alert and question the course we were following at the time. Evaluating our project, I see two shortcomings, mentioned before. For starters, we didn’t have that many test users. I do believe, however, that we extracted useful information from our subjects. I must admit I’m rather reluctant to prompt users too much, which might be another explanation for our limited test audience. Secondly, we focused too much on functionality and not enough on efficiency. As argued before, prototyping has its limits, but that shouldn’t have stopped us from at least trying to incorporate it more.
Finally, the methodology and the user’s active role in it are two important things I take with me from this course, together with a reinforced striving to design in a user-centred way – after all, he or she is the one measure of quality that matters.

Yasin
In this course we learned how to evaluate and design a user-friendly application by applying methods that have proven successful over the past years. To achieve this, we iterated our test phases with users, looking for problems and solutions along the way, so that each iteration brought an improvement to our application. Overall, this process led us to our goal: an application that users find satisfying and pleasant to use. By following this methodology we were able to design an application that provided a solution to information overload. Without these methods we would probably have ended up making something that wasn't really a good solution, wasting a lot of time in the process. The course itself is very structured and demands continuous involvement. By publishing results via blogs we were able to get feedback and apply changes to our design where needed. At first this seems time-consuming, but in the end we saw how important it really was: finding flaws in time, instead of waiting till the end, saves a lot of headaches. Anyone interested in human-centred design should definitely take a look at this course. It’s really worth it, and you’ll have a lot of fun thanks to its interactiveness. By reading and writing comments on other students’ blogs we were able to find issues with their application and with ours. Sometimes we compared ourselves with similar designs (e.g. team Sjiek’s Focus) and incorporated some of their ideas while giving them suggestions as well. In the end this helped us a lot, and our student colleagues served as expert evaluators.

Ward
I think this course was very useful in that it exposed us to a real-life usability design process. At first I thought the evaluation techniques presented in the course and used during the project were rather theoretical, but as the Capgemini presentation showed, things like paper prototyping in an iterative design are used to develop real-life applications. In this course we were obliged to work actively on the project during the entire semester, and this is definitely a necessity if you want to create a good user interface.
I think a strong point of our project is that we followed a structured approach in each iteration. By continuously checking whether problems from the previous iteration were solved and whether new problems arose, I believe we were able to create a decent interface. Of course, if we had been creating a business application, some more iterations would have followed. Looking back, I would also start testing the efficiency of the application sooner. Sadly this wasn’t possible, since our implementation wasn’t ready at the time.
By being obliged to read and comment on other blogs, we were able to see where other groups were in the development process and what interesting insights they had obtained. The feedback we received from other groups, in turn, gave us some additional helpful input for our own development process.

Saturday, 19 May 2012

Rationale for our score

At the end of the presentation session, we had to grant each group a number of points. We had 10 points at our disposal... and three minutes to make up our minds, which proved rather short for a final evaluation in this course. Of course, we had been able to follow everyone's progress throughout the year via the blogs. Remarkably enough, the teams that got the highest scores both have a rather minimalistic design... contrary to ourselves. Something to think about.

Friday, 18 May 2012

Presentation of Featr

You can have a look at our presentation of last Tuesday here. It was perhaps a little long, but it does give a nice overview of the progress of our work.

Tuesday, 15 May 2012

Digital prototype iteration 2 (validation)


Introduction
This post describes the validation of the second iteration of our digital prototype. After the first iteration of the prototype, we caught some minor problems and suggested some solutions. In this iteration, we simply wanted to test whether those changes had the desired effect.

1. Method and set-up


The method and set-up are the same as in the previous iteration, except that we left out the questionnaire, since the goal was to test whether the small changes had the desired effect. We conducted some user interviews in the hall of CW and let the users do exactly the same tasks as in the previous iteration.
To conduct the interviews, we had our test subjects sitting behind a computer in the hall of the department, carrying out tasks as instructed by a team member. A second team member took notes of the process, watched the screen and asked some general questions afterwards.
Our prototype is shown in the picture below.


2. Test subjects
As with the previous test, we used engineering students as test subjects, because they were the easiest to find and we are only testing the functionality in this iteration, not the efficiency. For the tests of our implementation, we hope to find a more varied test panel.
After two test subjects had led us to the same conclusions, we decided not to test any further.

3. Analysis and results
Firstly, we'll look back on the changes and problems that came out of the previous iteration. Secondly, we'll discuss new problems that arose in this iteration.

3.1 Review on the changes from previous iteration
Changing priority names
Naming the priorities high, normal, low instead of 1,2,3 completely solved the problems of test users not knowing which was the highest priority. None of the users had problems when we asked them to label a message with the highest priority.

Recovery from error
When the test users were asked to undo a delete operation (restore a message they had sent to the trash), all of them still went looking in the trash to retrieve the message. When we asked whether they had considered using the ‘undo’ button, they indicated they would have used it if they had seen it. To solve this problem we will increase the size of the undo image and place the word ‘UNDO’ next to it.

Shortcuts
We included in our digital prototype the ability to start a search operation by pressing enter after entering the search term. All test users indeed used the enter key to start their search, so this is definitely an improvement.
As for the delete key, Axure didn’t offer the possibility to implement removal of messages on delete. Since none of the test users tried to use the delete key to remove any of the messages, this functionality wasn’t really missed. Of course it wouldn’t hurt to add it, but it’s not indispensable.

Read messages
We tested whether it was clear to the users that read messages change colour by showing them a list of messages and asking which of them had already been read. It was clear to all test users that the least bright messages were the ones that had already been read.

Advanced search revisited
The advanced search panel is no longer automatically shown once you search for a term, and this proved to be an improvement: the test users' uncertainty about whether they should press search again disappeared.
The advanced search panel itself now looks like the figure below. It was a lot clearer to the test users how it worked. The fact that source selection now happens in the right bar, where it always happens, wasn’t clear to all test users, but once we explained it, they could see why it was logical to place it there. Apart from that, everything in the advanced search panel was clear to the test users in this iteration.




Where am I?
When we asked people after a certain task where they were in the application, they were able to answer us right away. When we asked them how they knew, they told us they used the breadcrumbs on the top bar. One test user did remark that the breadcrumbs could be a bit larger. We actually agreed, so we decided to enlarge them so people will definitely know where in the application they are.

3.2 New problems and solutions

The unclearness of the next week, next month and later tabs
We came across one new problem in the user tests. It wasn’t clear to the users what time period we meant with the next week, next month and later tabs. We had already figured out ourselves that this could create problems, so we specifically asked the users what time period they thought the different tabs represented. They confirmed that these time periods were indeed confusing. We asked them whether it would be clearer if we named the tabs “>1 week”, “>2 weeks” and “>month”. They said it would be, but this is one of the things we would check in a following iteration, if we had the time.

4. Conclusion
This second iteration with our digital prototype seems to prove that, aside from some very small remarks, we're on the right track for the functionality of our application. Of course the efficiency still needs to be tested with our implementation. This will be done in the following week. The problems of the previous iteration have largely been solved. For the one newly discovered problem we suggested a solution to the test users and they seemed to agree with it. Of course, to be absolutely sure we’d have to do a third iteration of the digital prototype, but we prefer testing our implementation to check the efficiency of our application. 

Tuesday, 8 May 2012

Digital prototype iteration 1

Introduction

This post describes the first iteration of our digital prototype. Although it is quite static and only covers certain actions, it shows quite reliably what the final application will look like and gives a realistic idea of its functionality. First we'll describe our method, set-up and test subjects. The method and set-up are almost exactly the same as in the previous iteration.
Next, we describe the results of this iteration, both evaluating the changes made last time and addressing new problems we met. Finally, we look ahead to the next iteration.

Monday, 7 May 2012

Status update 7/5

We've finished the usability tests with the first version of our digital prototype. A report will be posted this evening. We're going to adapt the changes discussed in our report to the mockup and subject it to some usability tests later this week.

We also decided to go ahead and build a real-life application with the Google Web Toolkit (GWT), so we can test the impact of a large number of messages and obtain a better measure of the efficiency of our design. We are going to finish the implementation before the presentation of May 15th. We actually hope to finish testing the implementation by then as well; in that case we could already present those newly obtained results.


An updated planning can be found below. Yasin put more effort into the implementation than planned, since he had to familiarise himself with the GWT API. For this reason we've reduced some of his efforts in other tasks (e.g. reporting).

Friday, 4 May 2012

Status update 4/5


Our digital prototype is finally functional enough to start user interviews. The user scenarios we used for our last paper prototype iteration are all supported by the mockup, though the behaviour of the mockup is still limited to what we want the users to achieve. For this reason, we are having some difficulty converting the mockup into a real-life application. The mockup is exported as a combination of HTML and JavaScript. We have already found a library for dealing with RSS feeds, so in principle it should be possible to combine it with the mockup (this was our initial idea).
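To give an idea of what "dealing with RSS feeds" involves, here is a minimal sketch of the parsing step a feed library performs. This is purely illustrative Python, not the JavaScript library we actually found; the function name and the sample feed are made up.

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Extract title/link/date dicts from an RSS 2.0 feed string.

    Deliberately minimal: real-world feeds (Atom, namespaced RSS)
    need a proper library, which is exactly why we chose to use one.
    """
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):  # every <item> is one message
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "date": item.findtext("pubDate", default=""),
        })
    return items

# A tiny hand-made feed to illustrate the shape of the data.
SAMPLE = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

print([m["title"] for m in parse_rss(SAMPLE)])  # → ['First post', 'Second post']
```

Each parsed item would then be rendered as one message row in the mockup's message list.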

Monday, 30 April 2012

Session with Mr. Xavier Ochoa

During the HCI course session of 24 April we had the opportunity to meet Mr. Xavier Ochoa from Ecuador. Mr. Ochoa is a professor who works with his team in a research centre in Ecuador. He mainly focuses on developing new, low-budget forms of human-computer interaction technology.

He expressed that as developers it's our task to put humans at the centre and to pay attention to all kinds of audiences, solving their problems with the knowledge we have gathered.
During his presentation he showed the students a few prototypes his team has developed, targeted at people who can't afford expensive technologies. For instance, his team succeeded in building a prototype for movement detection. While this technology exists today, it normally relies on very sophisticated and expensive equipment; his team was able to build it using cheap devices like webcams and gloves. It's really astonishing to achieve the same result on a much lower budget. Providing solutions for such challenging tasks is something we as developers should aim for.
Making technology accessible to all people is another challenge he discussed. When developing a technology we should also keep people with physical limitations in mind. These new challenges bring new ways of thinking about and looking at technology: instead of being narrow-minded, we begin to take a broader view of future technologies.

I want to thank Mr. Xavier Ochoa for his great presentation and for drawing our attention to the possibilities we can achieve by thinking in different ways. I wish his team good luck with their research.

Google+ redesign

Hey everyone! We have seen some recent changes made to Google+. As you all know, Google is trying its best to establish itself as a serious player in the online social network market. Many years ago there was not much competition in this field; you may still remember the days when Myspace was the only dominant player. But as the years passed, other competitors appeared, like Facebook, which ultimately dethroned Myspace and buried it completely. So what was the problem with Myspace? Well, it couldn't see the needs of the new generation and was unable to adapt to change. At that time Facebook came along and offered some great new ways to socialise. It was a great success and still is today. The market has some other players as well, but these are in the minority. As you all know, Google has been very interested in diving into this market. It has tried many ways, but none of them succeeded; some say these were trial-and-error tactics to find the right solution. So last year Google announced its new social network, Google+. In its initial release it had many cool ideas, like circles, hangouts and a hybrid of Twitter and Facebook. These ideas sound great, but in order to be successful Google had to persuade people that its social application is different and innovative while still looking familiar.

Friday, 27 April 2012

Capgemini presentation: CHI in action


Last Tuesday professor Duval invited some people from Capgemini to our lesson to present a mobile project they were working on. The goal was showing that the things we’ve seen in our lessons do apply in the real world. And I must say, they succeeded.

Thursday, 26 April 2012

On the history of HCI

The day before yesterday, professor Duval told us a little about the history of HCI. As always, IT history sounds too incredible to be true. Yet, it is. A story of vision, opportunities and a glance at what might lie ahead.

Sunday, 22 April 2012

Paper prototype Iteration 2

0. Introduction
This blog post contains the second iteration of our paper prototype. For this iteration we evaluated the changes we discussed in the first iteration to see whether they’ve caused the desired effect.

1. Method
As in the first iteration, we did some user tests with the paper prototyping method. The purpose this time was to test whether the changes after the first iteration led to the desired improvements.

We combined the paper prototyping with a short survey conducted after each user interview. That way we hoped to get an honest opinion from the test users regarding the usefulness of the application and some design choices we made, something paper prototyping alone doesn't really investigate. The survey contained some general questions concerning the user-friendliness and usefulness of the application as well as some design choices. We added the design-choice questions to check whether some decisions we had doubts about were indeed right.

The usefulness and user-friendliness questions came from the CSUQ questionnaire. We chose this questionnaire because it is not particularly long and really focuses on comfort of use and efficiency of the interface. We dropped some questions, for example those on error handling, because they didn't really apply to our prototype and we didn't want to overburden our test users, as we also wanted to ask some questions concerning design choices. The complete survey can be found in our previous blog post.

Update planning & Gantt-chart

We have updated our Gantt-chart and made a plan for how many hours we will spend on the different tasks. Below you see these two tables for all tasks.


Figure1: Final Gantt-chart


Figure2: Planning

Sunday, 15 April 2012

Questionnaire

The questionnaire we want to use alongside our usability interviews is partially an adapted version of the CSUQ questionnaire. We chose CSUQ because it is not particularly long and really focuses on comfort of use and efficiency of the interface. As a complement to the interviews it gives more general and direct information than, for example, a QUIS questionnaire (at least for this application). The scores go from 1 to 7. We grouped the questions by theme, because that way we are better equipped to draw conclusions.
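As a rough illustration of how grouping by theme helps to draw conclusions, the sketch below averages the 1-7 scores per theme. The theme names and question ids here are made up for the example; they are not our actual survey items.

```python
# Hypothetical grouping of questionnaire items by theme.
THEMES = {
    "ease of use": ["q1", "q2"],
    "information quality": ["q3", "q4"],
}

def theme_scores(responses):
    """Average the 1-7 scores per theme over all respondents.

    `responses` is a list of dicts mapping question id -> score.
    """
    out = {}
    for theme, questions in THEMES.items():
        scores = [r[q] for r in responses for q in questions]
        out[theme] = round(sum(scores) / len(scores), 2)
    return out

# Two fabricated respondents, just to show the shape of the output.
responses = [
    {"q1": 6, "q2": 7, "q3": 4, "q4": 5},
    {"q1": 5, "q2": 6, "q3": 3, "q4": 4},
]
print(theme_scores(responses))  # → {'ease of use': 6.0, 'information quality': 4.0}
```

A per-theme average like this makes it immediately visible which aspect of the interface scores weakest, which is exactly why we grouped the questions.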

Wednesday, 11 April 2012

Planning

This post contains our planning for the remainder of this course. The most important aspect is perpetually improving our design in different iterations using UI evaluation techniques. There are two big course-defined deadlines remaining: a final presentation of our work is expected by May 15th, and a final report by June 1st.


Tuesday, 10 April 2012

Featr storyboard

Because our application changed from Rankr to Featr, it seemed useful to reiterate on the storyboard. We developed a new one, (mostly) according to our current version, though some minor improvements in the prototype have not been taken into account.

We explicitly started from our prototype. As a result, the storyboard is quite detailed and might even seem cluttered; this is due to the effort to fit everything into the small boxes while keeping it (more or less) readable. It does, however, give an idea of how to operate our application.

Thursday, 5 April 2012

Featr user scenario

John is a 40-year-old business manager. He receives on average 150 mails a day, is active on Twitter and Google+, follows blogs that could be interesting for his company and is in contact with his colleagues via Yammer. Handling all these streams of information separately would take him too much time, so he uses our Featr application to save time.

After his lunch break on Monday, John starts Featr and all new messages are shown on the main page. He first wants to process his mails, so on the right sidebar he selects only mail as a source of messages. He has 44 new mails. 8 of them concern a new project ‘X’ the company is starting, so he tags them with the name ‘project X’. There are also 3 very important mails from his boss; he gives these messages priority level 1. 11 mails absolutely need to be handled before leaving the workplace, so he drags them to ‘Today’ at the top of the left sidebar. The user-defined deadlines pop up and he drags them further to ‘At work’. Another 13 just need to be handled today, so he drags them to ‘Before sleeping’. The other 20 mails are dragged to ‘Tuesday’ - ‘At work’.

Next, John chooses to process all other incoming messages, so he checks the ‘All sources’ button on the right sidebar and drags the messages to the deadlines he chooses. John also drags some useless messages directly to the trash folder. When all messages are processed, John switches to the ToDo tab pane using the tabs on top and selects ‘Today’ - ‘At work’ on the left sidebar to see which messages need to be handled before he leaves the workplace. He first handles some messages from his boss which have priority level 1. John opens the messages in a pop-up by double-clicking them. When he wants to take an action, like responding to a mail, the application redirects him to his mail client. He then handles all the messages with a ‘project X’ tag. Next he needs to go to a meeting, so he will have to handle the other messages some other time.

Just before going home, he processes all new messages and handles everything that needed to be done before leaving the workplace. He then goes home to his wife and kids and has dinner. After reading a bed-time story to his children he handles all messages that he needed to look at before going to bed. He can then go to sleep knowing he has responded to everything that needed to be taken care of.
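To make the scenario concrete, here is a minimal sketch of the message model it implies: every message has a source, an optional priority, tags and a user-assigned deadline, and the sidebar is essentially a filter over that collection. This is purely illustrative Python; our actual implementation is built with GWT, and all names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    subject: str
    source: str              # e.g. "mail", "twitter", "yammer"
    priority: int = 2        # 1 = highest, 3 = lowest
    tags: list = field(default_factory=list)
    deadline: str = ""       # e.g. "Today / At work"

def select(messages, source=None, tag=None):
    """Filter the stream the way the right sidebar and the tags do."""
    return [m for m in messages
            if (source is None or m.source == source)
            and (tag is None or tag in m.tags)]

# A few messages from John's Monday, fabricated for the example.
inbox = [
    Message("Kick-off", "mail", priority=1, tags=["project X"]),
    Message("Status?", "yammer", tags=["project X"]),
    Message("Newsletter", "mail", priority=3),
]
print(len(select(inbox, source="mail")))    # → 2
print(len(select(inbox, tag="project X")))  # → 2
```

Dragging a message to ‘Today’ - ‘At work’ would simply set its `deadline` field, so the ToDo tab is just another `select` over the same list.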

Friday, 30 March 2012

Paper prototyping: the next iteration(s)

As explained in our previous post, we changed concept from Rankr to Featr.
To test the interface with actual users, we created a paper prototype. Such a prototype allows us to get fast feedback and incorporate changes rapidly, because all you have to do is draw a new design. We'll now discuss our first iteration and evaluation.

Change of concept

In our early concept, named Rankr, the idea was to filter and rank incoming streams of messages in an automated way so the user can process them efficiently. Because the system takes the burden of filtering and ranking from the user, we thought this should be the key idea to base our application upon. It sounded great on paper: very efficient once realised and fully functional. But relying on such an automated system is also very optimistic, in the sense that we expect it to filter streams exactly the way the user wants. With current technology this can be done to some extent, but it also has many flaws and weaknesses. One such weakness is the system filtering messages, based on criteria given by the user, into places where they are not meant to be; this undesirable effect can cause loss of messages. So we decided not to use any form of automated filtering or ranking.

Sunday, 11 March 2012

Application scenario

Introduction

Nowadays many people struggle with the information overload caused by all kinds of modern communication. Several principles have been presented to cope with this huge activity stream. First there is the issue of presenting information in a concise, hierarchical and yet informative way. To this end, filtering, clustering and ranking are essential. Another issue involves processing messages. Some should be removed immediately, others filed for later. In this course we are given the task to develop a product supporting this functionality in a human centered way.


Wednesday, 7 March 2012

Brainstorming techniques 1.0, guest star: prof M. Specht

In the last UI session we had an introduction to some brainstorming techniques by professor Markus Specht. The bottom line was to design a program around the keywords ‘filter’, ‘information’ and ‘priority’. To get us on the right track we started with a core dump: we had to write down all associations with these words. This is a good way to get a lot of different ideas in a short time span. We think this part of the session could perhaps be more productive if it were performed in an iterative fashion; that way, associations made by others can spawn new ideas.
In the next phase, three random associations were handed out to each group to design an application based on these three words. In 15 minutes we had to think out a concept, name and business model, which we had to present to our fellow students. By forcing us to do this in such a short time, we were limited to the essence of the design process. The key words gave a clear scope. Yet the usefulness and the attractiveness of the product had to be considered as well, so users would see why they should use the product.
The following step was writing down a user scenario for the product we designed. This helped us think about specific and concrete problems users could have with the product. This scenario was presented to the class, after which other students were invited to ask questions and point out strengths and weaknesses. These comments served as a first feedback about a broad concept.
The last part was evaluating the other designs by handing out a total of 10 points to the other teams. Unfortunately, we were running out of time and the evaluation had to be done with little consideration, so in our opinion this step didn't really indicate the quality of the designs.
But indeed, professional situations are very similar: when you can't sell an idea in five minutes, you won't sell it at all. Then again, in real-life situations, marketing strategies tend to be a little better thought through.

Things we've learnt about UI evaluation


One thing that really stands out after reading the comments on our Google+ evaluation is that we should have been more explicit about our underlying assumptions. For example, we didn't indicate how we invited respondents for our QUIS survey or for the user interviews. We did say why we chose the QUIS 5.0 questionnaire, but we didn't mention it in the extensive document. Nor did we indicate why we chose the actions the test users executed in the usability tests.

Also, choosing a general survey always carries the risk of including some questions that are not 100% applicable to the specific setting, so we sometimes tried to extract information from questions that were not really relevant.

We noticed the importance of good data representation, which helps the reader understand and verify the results of our tests. In this regard, we think the box plot with the scores of all the QUIS questions was a good choice, but it would have been more informative if we had placed the questions instead of numbers on the x-axis.

We had already realised beforehand that 11 users is a small test group from which to extract representative questionnaire results. Maybe we should have thought of additional ways to reach more respondents.

Friday, 24 February 2012

Google+ evaluation results

On the 20th of September 2011 after a test period of about three months, Google released Google+, its social network site, for the general public. It introduces some new concepts such as Circles to organise contacts and Hangouts for video chat. In order to get an idea of what users think of the application we conducted a small evaluation. This article presents a summary of our results. A more extensive document can be found here.

Saturday, 18 February 2012

QUIS survey

Our adapted QUIS survey is available now! The original approach is unchanged, but we added some questions on general information:
  • Age class
  • IT experience
  • Time spent on social networks
  • Time spent on Google+
  • Use of other Google+ products
Participation in our evaluation is quick and easy. You can find our form at www.evalgoogleplus.tk

UI annoyances

In daily life, we encounter usability issues all the time. This can be in software, but it is applicable to general design as well. This article highlights some examples of bad design choices, both in everyday objects and software.

UI Annoyances: BNP Paribas Fortis

The internet has become very popular thanks to new technological developments. People do their daily tasks online, which saves a lot of time and reduces overhead. One application we often make use of is internet banking. It has many benefits over old-fashioned banking, where people have to visit a bank in person to fulfil their needs, and it can be a useful alternative. However, as the remainder of the text points out, it has some issues of its own.

Thursday, 16 February 2012

Actions for test-users


We'll ask some test-users to perform the following actions in Google+ during the think-aloud interviews:

1. Register on Google+
2. Add one of us to the circle of acquaintances
3. Create a new circle, add someone to that circle and delete the circle
4. Start a videochat (without mentioning it's called a hangout)
5. Log out
6. Delete user profile (advanced)

These tasks must be completed without us giving any additional information. While the test-users perform these tasks, the sequence of screens, the thoughts of the test users while performing the tasks and the duration will be recorded.

Result first group meeting

We just finished our first group meeting to discuss the evaluation techniques we're going to use for Google+. We decided to send out a QUIS questionnaire with some additional questions about the participants' background. This is an excellent, well-established method to quickly gain insight into the user's experience of various aspects of an application. However, we also intend to cross-validate the results of the questionnaire with some 'lightweight' think-aloud interviews. Several users will be asked to perform typical tasks in Google+ while explaining to the interviewer what they're doing. Both the screen and audio will be recorded. These data allow us to compare tendencies in detailed cases with the larger picture. The questionnaire is being prepared and will be available before next week. We'll keep you posted.

Wednesday, 15 February 2012

Getting started

Welcome to our blog! We are CHIll, one of the Master student teams enrolled in the User Interfaces course at the KULeuven. In the coming months, we'll post regular updates on our work, based on the given topics. Up next: evaluation of Google+ from a user point of view. Any constructive comments you may want to share are a welcome addition. Follow us (and the other teams) on Twitter as well: #chikul12