The most interesting part of the chapters was the section about the evaluation paradigms and techniques: how they are used together and when to use each of them.
Quick and dirty evaluation is when the focus is on fast feedback rather than carefully documented findings. It concentrates a lot on how users react in their natural environment.
Even though fast feedback is good, sometimes you need specific things tested. That's where usability testing is better, since it is strongly controlled by the evaluator and everything the test participant does is documented. Its fixed user tests measure performance in a more quantitative way.
If you as an evaluator need more input from the users' natural environment, then field studies are better, since they are about getting out in the field and understanding what the users actually need.
Predictive evaluation instead uses expert evaluators who, guided by established heuristics, predict what the average user will need and identify problems with the existing solution. These evaluations are not connected to actual users at all; no test participants are involved.
The techniques that go with these paradigms are observing users, asking users, asking experts, user testing, and modeling users' task performance. Most of the techniques apply in all the paradigms, such as observing the users in question (the exception being predictive evaluation). Asking users is more important in field studies, usability testing, and quick and dirty evaluation, while asking experts is more important in predictive evaluation as well as quick and dirty evaluation.
Ethical issues are something that has to be kept in mind, but the focus should also be on identifying practical issues such as budget restrictions and the time frame of the project.
Petter Andersson
When evaluating a project and product it is important to set up and follow a clear structure. There are of course many ways to approach such an evaluation, but after reading through the chapters covering this topic I feel there were some good tips to take to heart. First and foremost, taking advantage of the DECIDE* framework is probably very helpful no matter what product you're trying to evaluate. The described way of iterating through each step several times seems like a good way to do things, as it allows you to continuously improve and add to the "plan" before actually committing any time or work to something that might not work in the end.
When conducting various evaluation methods in the field there are several points to keep in mind. You've got to know whether you want an interview, for example, to be structured or open-ended. Perhaps in some situations there is more to gain by just observing users interacting with the product instead of asking them about it.
Another important point is to use and analyse the data gathered from the evaluation in an efficient and correct manner. Depending on how the data was collected, different methods should be used to analyse it. Some important concepts to consider are, for example, the reliability and validity of the data.
*DECIDE can be broken up into the following points:
1. Determine the goals
2. Explore the questions
3. Choose the evaluation methods
4. Identify the practical issues
5. Decide how to deal with the ethical issues
6. Evaluate, analyze, interpret, and present the data
Ted Wanning
As I have understood it, the core principles of the chapters were evaluation, questioning the users, and the running of tests. The core concept of the evaluation was first to acquire and stick to a rigid structure. For this purpose the book is intent on promoting the so-called DECIDE framework, which can be summarised by the words Determine (goals), Explore (questions), Choose (evaluation methods), Identify (practical issues), Decide (how to deal with ethical issues) and Evaluate (the data). The framework is meant to be used as an iterative process, where you go over the project's issues and solve them as they emerge. Questioning users seems to be the most effective way of improving one's project, since evaluating your own project can lead to bias. But for me, I felt like what the book brought up was just a rehash of the subjects that have been discussed during the lectures, and it does quite an awful job of bringing up examples.
For example, the book brings up testing and how it should be done, but when actually applied in real life both the testing and the interviews tend to be rushed, with strange questions that promote vague answers. To summarise, I found the chapters to be great on the theory of evaluation and such things, but not so accomplished when it comes to applying it in actual work projects.
Axel Swaretz
These chapters in the book were about evaluation and testing of a project, both during early stages of development and at later points of fine-tuning. A lot of what was mentioned seemed pretty obvious to me, but I can see the value in stressing the importance of continuous evaluation during the entire development cycle, as well as actually taking the data that is collected seriously. One of our lecturers told us a story of when he started a new job at a major state agency: they had developed a new computer system and hired several consulting firms to evaluate it, but they weren't actually looking for feedback so much as for a seal of approval, and thus kept hiring firms until they were satisfied. That's not what evaluation is for.
I said I thought what the book brought up felt obvious, but it's probably more the case that it made a lot of sense when reading through it. Something as seemingly simple as coming up with a few appropriate interview questions takes a lot of thought to actually be effective, which is a problem we ran into when conducting our field studies. I think the key to a successful evaluation is having a clear goal and direction, or else time and effort will be wasted; the book presents several "evaluation frameworks" to achieve this, such as DECIDE or GOMS.
David Sjöblom
In addition to identifying user needs and setting requirements on your product according to those needs, some kind of user testing is of great importance as well. User testing yields useful data that tells the developers how usable their product is for the target user group in its specific environment. In order to optimize the usability of the product, it is a good idea to test it among the intended users. There are of course different ways of approaching user testing, and which method to use depends on what kind of data you want to retrieve. In other words, user testing uncovers flaws and problems with the product as well as its advantages. It also makes the users feel more involved with the product and helps ensure that the product meets their needs (participatory design). It is not always true that the requirements set on the product at the early stages of the design process hold true in the end.
Therefore developers should follow user-centered design models, which include user testing. Techniques like the GOMS model, Fitts' law, and the keystroke-level model will help optimize the product by predicting user performance. Apart from the data gathered, additional insight is gained which further helps with the future development of the product.
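To make this concrete, here is a minimal sketch of how two of these predictive models turn a task description into a time estimate. The Fitts' law coefficients and the keystroke-level operator times below are illustrative assumptions based on commonly cited textbook averages, not values taken from these chapters; in practice they would be fitted empirically for a given device and user group.

```python
import math

# Fitts' law (Shannon formulation): predicted movement time in seconds
# for pointing at a target of width `width` at distance `distance`.
# The coefficients a and b are illustrative placeholders; they are
# normally fitted empirically for a specific input device and users.
def fitts_movement_time(distance, width, a=0.05, b=0.15):
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Keystroke-level model: commonly cited average operator times (seconds).
KLM_OPERATORS = {
    "K": 0.28,  # keystroke or button press
    "P": 1.10,  # point with a mouse at a target on screen
    "H": 0.40,  # home the hand between keyboard and mouse
    "M": 1.35,  # mental preparation for the next action
}

def klm_estimate(sequence):
    """Sum the operator times for a task encoded as e.g. 'MHPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Example task: think, move hand to mouse, point at a menu item, click.
print(f"KLM estimate: {klm_estimate('MHPK'):.2f} s")
# Example pointing task: a 20-pixel-wide button 400 pixels away.
print(f"Fitts' law estimate: {fitts_movement_time(400, 20):.2f} s")
```

The appeal of these models is exactly what was said above about predictive evaluation: they give a rough performance prediction without involving a single test participant.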
The chapters deal with the evaluation of a product. The main topics brought up are which attributes of the product ought to be evaluated, how these attributes are to be tested, and how the data gathered from said tests ought to be analyzed. Of particular interest for our design process is the DECIDE framework, as it can be directly applied. This framework allows us to explore the evaluation process more fully and to understand the failings and triumphs of our product in greater depth.
At the current time, the most interesting of the DECIDE steps is the evaluate-and-analyze-the-data step, seeing as we are nearing the point of having to do exactly that. Among the vital points of the analysis presented in the chapters (validity, reliability, bias, scope, and ecological validity), the one I found most relevant was the bias aspect. As no one in our group is a qualified researcher or an expert interviewer, the danger is that the results from our interviews are distorted by our actions. However, this problem might not be as fatal as we expect if we acknowledge it and try not to base our evaluation entirely on methods that are especially prone to bias (like interviews).
Jonas Hongisto