Lokomotiv

Tuesday, 31 March 2015

Prototypes and usability testing.

The main aim of creating a prototype is to allow successful usability testing, through which important data can be gathered and problems with the product can be identified and removed, thus improving the product. The question then becomes: what should the nature of the prototype be? Naturally we want to constantly move towards a more sophisticated model, one that better represents an iteration of the final product, but most design processes cannot sustain (financially or time-wise) a constant stream of high-fidelity prototypes. What, then, ought we to consider when we think about usability testing without a high-fidelity prototype, and can such testing even be done? The literature says yes! Sauer et al. present us with a framework for conducting usability testing, and note that the fidelity level of the prototype is not superior in importance to other aspects of usability testing (Sauer, Seibel & Ruttinger, 2010). The framework consists of four main areas: user characteristics, system prototype, testing environment and task scenarios. It is clear that the system prototype is an important part of the equation; however, it is not the only one, and the other three can be strengthened to compensate for a low-fidelity prototype.


Boothe et al. provide us with an interesting study on the effects of prototype fidelity on the user's ability to detect usability problems and on the user's perception of the product's usability. Boothe et al. find that the medium of a prototype does not in fact alter the ability of users to detect usability problems. The study also notes that the medium of the prototype only affects the ability to detect severe usability problems in particular scenarios. Finally, the study finds that the medium does not greatly affect the users' perception of the product's usability (Boothe, Strawderman & Hosea, 2013). Boothe et al. focus on the same two mediums our group used, paper and computer, and as such these results are very useful to us.


In light of these two studies we decided to focus mainly on appropriate user characteristics and good task scenarios, allowing us to get away with a simple paper prototype rather than having to make a high-fidelity one, since Boothe et al. imply that the medium of the prototype is not paramount. We worked to find people close to our targeted user base, that is, close to our personas. Relative user competence was also gauged by having different variations of our personas (CS student, Arts student etc.) test our product. We designed the task scenarios to have a narrow breadth; that is, we do not expect the users to use our product in an environment where they have multiple tasks to consider, but rather one where they can focus on operating our product. The depth of the task scenarios was comparatively large, as we wanted the user to try to exhaust all the options and features of our product. Sadly we were unable to use the proper testing environment, since we didn't feel comfortable asking museum visitors to work with our prototype at the museum, so as not to cause a disturbance (and our presence was already somewhat frowned upon by the staff).

Boothe, C., Strawderman, L., & Hosea, E. (2013). The effects of prototype medium on usability testing. Applied Ergonomics, 44(6), 1033-1038.

Sauer, J., Seibel, K., & Ruttinger, B. (2010). The influence of user expertise and prototype fidelity in usability tests. Applied Ergonomics, 41(1), 130-140.

Think-aloud

The think-aloud took place at a cafeteria with a liberal arts student (English literature). The person did not quite fit the personas we made, since he is neither an avid museum visitor nor does he have the spare time both of our personas have, but he has a great interest in the arts and writes a lot, so he is at least in the same ballpark as the “Jonas” persona. The first aspect the user noted was the overall concept, wondering if he would ever really use one of these interactive maps, since he didn't really care about the number of people in the galleries.

After persuading him to take the product out on a test run anyway, he started pressing the floor buttons. He commented on how it seemed odd to display the museum from a side view, but noted that the symbols and the buttons were easy and intuitive to use. The reward system caused some problems, as the user seemed dismissive towards the idea, but he conceded that it too was quite easy to understand. We didn't go further than this, partly because of a lack of time and partly because our product doesn't really have many functions beyond the dynamic map.

The user seemed to fare well in using the system, but his attitude towards the necessity of such a system, and towards some of the features (the side view and the reward system), was less stellar. The main lessons from this exercise were 1) the overall layout is clear and intuitive and 2) some features are not exciting enough. Thus work continues on making the reward system and side view better.

Iteration based on the personas and requirements

After working with our personas we turned to the literature to find out how we ought to use the personas in the design process. One major area of discussion for us was the area of requirements, in particular user requirements in relation to our newly created personas. The main user requirements we established for our product were: 1) the user should not require prompts and instructions to use the product, and 2) the user should not have to spend a lot of time with the product. 1 and 2 are closely related but still decisively different. We identified 1 as a requirement because of the nature of the intended product: we want the product to be clear enough that the user doesn't have to have any particular skills to use it, nor should the user have to spend time learning to use it through prompts. 2, on the other hand, deals with the fact that since the product is to be used while in a museum, we don't want the use of the product to take up too much of the time the user could be spending admiring the artworks. Both 1 and 2 are in line with our personas: neither one has any particular interest in learning to use a new system, and both personas have come to the museum to be calm and escape the stress of normal life, not to spend a long time operating our product.
Thus, based on the user requirements identified above, we iterated our design. The main change of the product was decreasing the amount of information on the map screen and making all the symbols as easy to recognize as possible. This made it so that the user only needed to take a short glance at the main map to see which halls were clogged with people and which halls were not, thus helping to balance the load of visitors in the museum.

As we were working with user requirements we also realized that, due to the specific environment where the product was to be used (the museum), there were also some environmental requirements to consider. Chief among these was the fact that the product should not interfere with the audio and visual atmosphere of the museum. To achieve this we made the whole product completely quiet and reasoned that the panels ought only to be installed in specific locations outside the galleries, so as not to disturb the visitors.

After exercise 1

Exercise 1, field studies and understanding the user, and the subsequent interviews.

In the first exercise we were introduced to the idea of field studies. We discussed within our group different methods of gaining insight into the potential user base of our product. We were instructed to do interviews, and we set out to define how we were to conduct these interviews and to what extent we should rely upon them. The most important consideration we tackled was that of establishing the goals of our interview. The literature tells us that rather than going in trying to figure out potential product ideas from open-ended questions to the public, we ought to set clear goals, although we ought not be afraid to modify these goals if something new and exciting pops up. The goals we settled on were the following: identify when, why and how often the interviewee visits the chosen museum, and how the interviewee would characterize the nature of their visit. An added goal, and a more specific one relating to our product idea, was to get an understanding of how often the interviewee eats at the museum restaurant and to what extent possible queues affect the interviewee's eating habits there.

We also discussed how we ought to observe the potential users outside the formal interviews. We concluded that we ought to be as casual as possible, so as not to disturb the field environment, and as such try to make a good “quick and dirty” observation. We chose to focus on outsider observation, since our skills as observers were not up to conducting participant observation, and we didn't want to disturb the museum-goers with it either. We decided to do a single interview each and then spend the rest of the time acting as normal museum visitors, observing the other visitors. The goals for the observations were to focus on how the visitors dealt with crowded spaces, in particular their emotional reactions, along with investigating the nature of the visitors who chose to eat at the restaurant.

Finally, we decided on using audio recordings in addition to written notes. The audio recordings were used in the interviews, while the written notes were used to catalogue observations. We chose against using a camera, due to the intrusive nature of filming and the ban on filming inside the museum itself.

Wednesday, 25 March 2015

The Final Design

The Clickable Design
(Needs to be downloaded to be clickable)


Here follow the most important pictures of the final design, with discussion of their clickable buttons.

This is a sideview of the museum which shows all the exhibits present in the museum at once. This was chosen as the starting screen, which goes back to the choice of museum that will be discussed in the design decisions blog post. The human symbols represent the amount of people in each of the rooms; the meaning of the different ones will be discussed on the help page. If you want to know something about a certain exhibit, you press on it, and you will then get specific information about that exhibit.

Left Floor Buttons: Sends you to the floor overview of that floor (birdview).
Rooms: Sends you to specific information about the different exhibits.
Question Mark: Sends you to the help screen, showing the meaning of the symbols.
Refresh button: Updates the amount of people in the different rooms.
Pastry Button: Sends you to the Bull-point screen.
The symbol page gives a quick overview of the icons that appear in the birdview: stairs, elevator and emergency exit. It also shows what the different human symbols represent: green is for few people, yellow for more, and red for many people in that room. This is, again, used on the sideview screen, the main screen, to show the amount of people in the different rooms.
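To make the mapping concrete, here is a minimal sketch of how a room's visitor count could be translated into a symbol colour. The thresholds and the room capacity below are illustrative assumptions, not values from our actual prototype.

```typescript
// Sketch of the occupancy-to-colour mapping described above.
// Thresholds and capacities are made-up numbers for illustration.
type SymbolColour = "green" | "yellow" | "red";

interface Room {
  name: string;
  capacity: number;        // assumed comfortable maximum number of visitors
  currentVisitors: number;
}

function occupancyColour(room: Room): SymbolColour {
  const ratio = room.currentVisitors / room.capacity;
  if (ratio < 0.4) return "green";   // few people
  if (ratio < 0.8) return "yellow";  // getting busier
  return "red";                      // many people
}

// Example: a gallery with 12 of 30 visitors shows a green symbol.
console.log(occupancyColour({ name: "Gallery 1", capacity: 30, currentVisitors: 12 }));
```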

Back Arrow: Back to the sideview screen.
Information screen about the different exhibits. As mentioned, the human symbol represents the current amount of people in the room.
Back Arrow: Goes back to the sideview screen
Read More: Goes to the museum's website to show more information about the exhibit.



Birdview of the different floors; this is the first floor. It shows helpful information for when you are at the museum, like where the stairs and elevators are. Note that the floors are colour coded to match the sideview, so the same room always has the same colour.

Sideview: Goes to the sideview, the main screen.
Floor buttons: Goes to the other floors; you can't click on the floor that you are currently at.
The icons in the top right corner are the same as on the sideview.




An optional feature of the application is gathering points from different exhibits around the museum. The points are gathered by scanning QR codes at the exhibits, and the fewer people there are in your current room, the more points you get. The idea goes back to our wish to balance out the museum's visitors as much as possible across the museum; this way, some people might be encouraged to go to places where there are currently fewer people, or might, for example, wait to go to the cafeteria if there are a lot of people there. If you gather enough points you are rewarded with a pastry in the cafeteria. Some information about this can also be read in the app.
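As a rough sketch of this point-gathering rule: scanning a QR code in a less crowded room gives more points, and a pastry coupon is earned once a threshold is passed. The point values and the threshold below are made-up numbers, not decisions we have actually made.

```typescript
// Hypothetical point values per crowding level and an assumed pastry threshold.
type Crowding = "green" | "yellow" | "red";

const POINTS_BY_CROWDING: Record<Crowding, number> = {
  green: 30,   // few people: biggest reward
  yellow: 15,
  red: 5,      // crowded: smallest reward
};

const PASTRY_THRESHOLD = 100; // assumed number of points needed for a coupon

function scanQrCode(total: number, roomCrowding: Crowding): number {
  return total + POINTS_BY_CROWDING[roomCrowding];
}

function hasEarnedPastry(total: number): boolean {
  return total >= PASTRY_THRESHOLD;
}

// Example: four scans in quiet (green) rooms is enough for the coupon.
let points = 0;
for (const room of ["green", "green", "green", "green"] as Crowding[]) {
  points = scanQrCode(points, room);
}
console.log(points, hasEarnedPastry(points)); // 120 true
```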

Monday, 9 March 2015

Our Product after the evaluation session.

Our product is a simple map application/interface with load-balancing properties. The map gives the visitors a clear, easy to use graphical interface which not only gives a simple representation of the current population levels in each of the galleries and in the restaurant, but also gives some information about each of the exhibits. The interface is essentially a top-down view of the museum floors, with each room containing a coloured symbol (pictured below) representing the current population status. This symbol is colour coded and changes depending on the population level in each gallery. The pictures below show the different colours and what they represent.

The product comes in two forms: a larger map form, which can be displayed on panels in the museum, and a smartphone application. Another feature of the app is a collect-points-get-reward system where the user scans QR codes that are present in each exhibition. The codes are worth different amounts of points depending on the current people load (e.g. a “green” room is worth the most and a “red” one the least), and when the goal is reached a coupon is earned which can be redeemed for a pastry in the cafeteria. This is probably not something for every app user but will hopefully promote migration to less populated exhibitions.

From the evaluation session we got a number of great suggestions for improvement (these can be read about in an earlier blog post). The suggestions were particularly related to the area of buttons and availability. One of the main questions brought up was how the map was to be used if the user is colour blind. After reading up on Wikipedia we decided to keep colour-blind people in mind when deciding the final colour palette and layout, but not to create an entirely separate “colourblind mode”.

Another upgrade we made was adding a statistics button (a very small one in the corner) which displays past information about the population levels in the current exhibits. This lets people see which exhibits have been very popular in the past, a piece of information some people might really enjoy. We want to keep the overall display very simple and clean, but we felt that adding a small button in the corner would not be too intrusive. A button we added to the smartphone version is the refresh button. This button refreshes the population indicators and gives the user a sense of tactile control over the application that an automatically updating solution wouldn't provide.
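A hypothetical sketch of the data the statistics button could be built on: hourly visitor counts per exhibit, from which the most popular exhibits over time can be listed. The exhibit names and numbers here are made up for illustration.

```typescript
// Assumed shape of historical occupancy data behind the statistics screen.
interface HourlyCount {
  exhibit: string;
  hour: number;     // 0-23
  visitors: number;
}

// Average visitors per exhibit over the recorded history.
function averageVisitorsByExhibit(history: HourlyCount[]): Map<string, number> {
  const sums = new Map<string, { total: number; samples: number }>();
  for (const entry of history) {
    const current = sums.get(entry.exhibit) ?? { total: 0, samples: 0 };
    sums.set(entry.exhibit, {
      total: current.total + entry.visitors,
      samples: current.samples + 1,
    });
  }
  const averages = new Map<string, number>();
  for (const [exhibit, { total, samples }] of sums) {
    averages.set(exhibit, total / samples);
  }
  return averages;
}

// Example: the statistics screen could list exhibits sorted by this average.
const history: HourlyCount[] = [
  { exhibit: "Gallery 1", hour: 13, visitors: 42 },
  { exhibit: "Gallery 1", hour: 14, visitors: 30 },
  { exhibit: "Gallery 2", hour: 13, visitors: 8 },
];
console.log(averageVisitorsByExhibit(history)); // Gallery 1 → 36, Gallery 2 → 8
```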

The final upgrade that was proposed was a floor button which allows you to change between floors in a multi-storey museum. This was a great suggestion, since our group had completely forgotten that many museums actually have multiple floors, and having all the floors constantly shown on the screen would not only make the maps small but also leave the screens cluttered. This addition will no doubt make the map more legible and clearer to use.
All things considered the evaluation session was a great boon for our group. It gave us a chance to improve on key features and add vital upgrades that we might have missed had we not been able to have our product evaluated.

We noted these specific things in our design process.
We focused on one museum, which meant that our design was adjusted to fit that museum. Since our choice of museum was Fotografiska museet, which has three different floors, we thought an overview using a sideview would be a good way to show the entire museum at once. If we had chosen another museum, the option of only having the birdview, the view from above, would have been more viable, but in this scenario the sideview fitted us better.

We were thinking about adding more symbols to the prototype in the beginning, for example symbols representing whether exhibits were new or about to end soon, but we skipped this in the final design to make it look cleaner and to avoid unnecessary clutter that does not help with our load-balancing goal.
 

Tuesday, 3 March 2015

Reading Seminar 2 Conclusions

During this reading seminar we discussed the aspects of the evaluation process that are most relevant to our project. In particular we identified the following areas: bias control, controlled testing, different evaluation methods and usability paradigms. Of particular interest during the seminar was the area of bias control. To what extent ought we accept bias in the evaluation process? Can bias even be avoided?

As engineering students we are naturally attuned to abhor bias: it mucks up data and worsens not only our efficiency but also our productivity. However, this was an eye-opening seminar for many of us, as in the field of human-computer interaction avoiding bias, whether personal or user bias, is almost impossible. As such we had to start thinking about alternative models in which we not only acknowledge the bias but also work around it. In our case we are to let students from other groups evaluate our product, and as such we are unable to get a truly unbiased evaluation (seeing as it is highly unlikely that our peers would speak their mind if they thought ill of our product). Thus our teaching assistant advised us not to waste our precious design time worrying about bias, but rather to focus on making the prototypes better so that the effect of bias would be diminished.

Another area of discussion during the reading seminar was the extent to which already established frameworks could be applied to our evaluation process. As the efficiency and reliability of our evaluation process is paramount, it seemed to our group that using an already widely tested evaluation framework would be highly fortuitous. Among the frameworks we investigated, it was the DECIDE framework in particular that stood out to our group as the best mode of evaluation, and we aim to continue using this framework.

During the main introductory phase of gathering information about our target group we visited Fotografiska Muséet. The time of our field study happened to coincide with a rather idle period at the museum and therefore the number of visitors was quite low. This presented a problem for us since the idea for our product is mainly intended for periods of high pressure when there are queues and visitors might have a hard time finding a place to sit in the cafeteria for example.  Bearing this in mind, we’ve had to make sure that we kept the “intended” situation in mind when designing our prototype rather than the one we actually found ourselves in when conducting the research.

Evaluation Exercise

We got feedback on our project from another group. They instantly understood the main purpose of our prototype, which is a simple guide for museum visitors. They were then asked about our icons and how apparent their function was. Our load-balancing system, with the little man shifting colours, was simple to understand: when the man was red the room was full, and if it was green there was a lot of space in that room. They didn't understand it instantly, but after a few seconds they assumed it was representing the amount of people in the room.

Most of the evaluation was about adding features to the prototype to improve it. For example, orientation was an issue: whether you look at the map from a side view or from above, from a bird view. It was very difficult to understand where the view was coming from in the prototype's current state.

These were the things we will have to address:

  • Colorblind mode.
  • Floor selection (which floor is currently being shown).
  • Question mark button (which will show a “help” interface).
  • Refresh button (* A refresh button that refreshes the numbers, automatic for screens)
  • Symbols (various symbols for popular exhibits, new exhibits etc).
  • The “history” of the museum. (At which times is it crowded etc).
  • Add a clear way to distinguish between a side and bird view.
  • A queue timer to help visitors plan their schedule.
  • An intro animation to help with orientation (ties in with the different views).

* = There will be some differences between the application for mobile phones and the screens at the museum. The screens at the museum are focused on being more automatic, with refreshes and so on, but the app needs a refresh button so that the user's mobile data isn't used unnecessarily, since not all museums give out free wifi to their visitors.
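A rough sketch of these two refresh policies: the museum panels poll automatically, while the phone app only fetches new occupancy data when the user presses the refresh button. The polling interval, the API URL and the fetchOccupancy function are assumptions for illustration, not part of an existing backend.

```typescript
// Assumed shape of the occupancy data: visitor count per room name.
interface OccupancyData { [roomName: string]: number }

// Hypothetical fetch of current visitor counts from a museum backend.
async function fetchOccupancy(): Promise<OccupancyData> {
  const response = await fetch("https://example-museum.test/api/occupancy");
  return response.json();
}

// Museum panel: refresh automatically, e.g. every 30 seconds.
function startPanelAutoRefresh(render: (data: OccupancyData) => void): void {
  setInterval(async () => render(await fetchOccupancy()), 30_000);
}

// Phone app: refresh only when the user taps the refresh button,
// so no mobile data is used in the background.
async function onRefreshButtonPressed(render: (data: OccupancyData) => void): Promise<void> {
  render(await fetchOccupancy());
}
```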

The other half of the exercise had us evaluating a fellow group's project. What we were presented with was a quiz app that was meant to bridge the gap between an elderly audience and a younger one through the spirit of competition. In essence the app incorporated a QR-code system wherein one person created a lobby and others could join by scanning a user-generated QR code on the host's device. The questions were picked by the host and were relevant to the current exhibit.

Our suggestions were generally based on the 10 Usability Heuristics for User Interface Design developed by Jakob Nielsen. One improvement we proposed was a system where the QR codes were not used to host games but to hold information and questions about the museum, thus making a nationwide server for questions (something originally proposed) unnecessary; this also made scanning the host's QR code unnecessary, as others could connect via the museum's questions directly. This improvement relates to Nielsen's 6th principle (recognition rather than recall), as users no longer need to deal with the complex server system or the difficulty of sharing QR codes. Our suggestions also included general improvements to the layout of the system, in particular the lobby structure, so that using the system becomes simpler and the user's enjoyment increases. This is in line with Nielsen's 8th principle (aesthetic and minimalist design), as a clear menu helps improve the aesthetic experience of the application.

Tuesday, 24 February 2015

Reading Seminar 2


The most interesting part of the chapters was the part about the evaluation paradigms and techniques: how they are used together and when to use them.
Quick and dirty evaluation is when the focus is on fast input rather than carefully documented findings. It focuses a lot on how the user reacts in their natural environment.
Even though fast input is good, sometimes you need specific things tested. That's where usability testing is better, since it's strongly controlled by the evaluator and everything the test person does is documented: fixed user tests done in a more quantitative way.
If you as an evaluator need more input from the users' natural environment, then field studies are better, since they are about getting out in the field and realizing what the users need.
Predictive evaluation uses expert evaluators who, guided by already established heuristics, predict the problems an average user would run into. These tests aren't connected to an actual user at all; no test person is involved.

The techniques that go with these are observing, asking users and experts, user testing, as well as modelling users' task performance. Most of the techniques apply in all the paradigms, like observing the users in question (with the exception of predictive evaluation). But asking the users is more important in field studies, usability testing and quick and dirty evaluation, while asking experts is more important in predictive evaluation as well as quick and dirty.

Ethical issues are something that has to be kept in mind, but one should also focus on identifying the practical issues, like budget restrictions and the time frame of the project.

Petter Andersson

When evaluating a project and product it is important to keep and follow a set structure. There are of course many ways to approach such an evaluation but after reading through the chapters covering this topic I feel there were some good tips to take to heart. First and foremost, taking advantage of the DECIDE* framework is probably very helpful no matter the product you're trying to evaluate. The described way of iterating through each step several times seems like a good way to do things which will allow you to continuously improve and add to the "plan" before actually committing any time or work to something that might not work in the end.

When conducting various evaluation methods in the field there are several points to keep in mind. You've got to know whether you want an interview, for example, to be structured or open-ended. Perhaps in some situations there is more to gain by just observing users interact with the product instead of asking them about it.

Another important point is to use and analyse the data gathered from the evaluation in an efficient and correct manner. Depending on the way the data was collected, different methods should be used to analyse it. Some important concepts to consider are, for example, the reliability and validity of the data.

*DECIDE can be broken up into the following points:
1. Determine the goals
2. Explore the questions
3. Choose the evaluation methods
4. Identify the practical issues
5. Decide how to deal with the ethical issues
6. Evaluate, analyze, interpret, and present the data.
Ted Wanning

As I have understood it, the core principles of the chapters were evaluation, questioning the users and the running of tests. The core concept of the evaluation was first to acquire and stick to a rigid structure. For this purpose the book is intent on promoting the so-called DECIDE framework. What this means is that the framework can be summarised by the words Determine (goals), Explore (questions), Choose (evaluation methods), Identify (issues), Decide (dealing with ethical issues) and Evaluate (data). Using this framework, you iterate over the project's issues and solve them as they emerge. Questioning users seems to be the most effective way of improving one's project, since evaluating your own project can lead to bias. But for me, it felt like what the book brought up was just rehashing the subjects that have been discussed during the lectures, and doing quite an awful job at bringing up examples.

For example, the book brings up testing and how it should be done, but when actually applied in real life, testing as well as interviews tend to be rushed, with strange questions that promote vague answers. To summarise, I found the chapters to be great on the theory of evaluation and such things, but not so accomplished when it comes to applying it in actual work projects.
Axel Swaretz


These chapters in the book were about evaluation and testing of a project, both during early stages of development and at later points of fine-tuning. A lot of what was mentioned seemed pretty obvious to me, but I can see the point in stressing the importance of continuous evaluation during the entire development cycle, as well as actually taking the data that is collected seriously. One of our lecturers told us a story of him starting a new job at a major state agency where they had developed a new computer system and hired several consulting firms to evaluate it; they weren't actually looking for feedback but more for a seal of approval, and thus had hired several firms until they were satisfied. That's not what evaluation is for.

I said I thought what the book brought up felt obvious, but it's probably more the case that it made a lot of sense when reading through it. Something as seemingly simple as coming up with a few appropriate interview questions takes a lot of thought to actually be effective, which is a problem we ran into when conducting our field studies. I think the key to a successful evaluation is having a clear goal/direction, or else time and effort will be wasted; the book presents several “evaluation frameworks” to achieve this, such as DECIDE or GOMS.
David Sjöblom

In addition to identifying user needs and setting requirements on your product according to those needs, some kind of user testing is of great importance as well. User testing results in useful data which tells the developers how usable their product is for the target user group in its specific environment. In order to optimize the usability design of the product, it is a good idea to test it among the intended users. There are of course different ways of approaching user testing, and which method to use depends on what kind of data you want to retrieve. In other words, user testing discovers flaws or problems with the product as well as its advantages. It also makes the users feel more involved with the product and ensures that the product meets their needs (participatory design). It is not always true that the requirements set on the product in the early stages of the design process hold true in the end.

Therefore developers should follow user-centred design models, which include user testing. Using different techniques like the GOMS model, Fitts' Law and the keystroke-level model will help optimize the product by predicting user performance. Apart from the data gathered, additional insight is gained which further helps with the future development of the product.
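As a small illustration of what such a prediction looks like, here is a sketch using the Shannon formulation of Fitts' Law, MT = a + b * log2(D/W + 1). The device constants a and b below are made-up numbers, not measurements from our product.

```typescript
// Predicted movement time to hit a target, Shannon formulation of Fitts' Law.
// a and b are assumed device constants for illustration only.
function fittsMovementTimeMs(
  distancePx: number, // distance to the target
  widthPx: number,    // width of the target along the movement axis
  a = 100,            // assumed intercept in ms
  b = 150             // assumed slope in ms per bit
): number {
  const indexOfDifficulty = Math.log2(distancePx / widthPx + 1);
  return a + b * indexOfDifficulty;
}

// Example: a big, nearby button is predicted to be faster to hit than a small, distant one.
console.log(fittsMovementTimeMs(200, 100)); // ≈ 338 ms
console.log(fittsMovementTimeMs(800, 20));  // ≈ 904 ms
```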


          Robert Wörlund

The chapters deal with the evaluation of a product. The main topics brought up are which attributes of the product ought to be evaluated, how these attributes are to be tested, and how the data gathered from said tests ought to be evaluated. Of particular interest for our design process is the DECIDE framework, as it can be directly applied. This framework allows us to more fully explore the evaluation process and understand to a greater depth the failings and triumphs of our product.

At the current time the most interesting of the DECIDE steps is the evaluate-and-analyse-the-data step, seeing as we are nearing the point of having to do exactly that. Among the vital points of the analysis presented in the chapters (validity, reliability, bias, scope, and ecological validity), the one I found most relevant was bias. As no one in our group is a qualified researcher or an expert interviewer, the danger is that the results from our interviews are distorted by our actions. However, this problem might not be as fatal as we expect if we acknowledge it and try not to base our evaluation entirely on methods that are especially prone to bias (like interviews).
            Jonas Hongisto