Human Computer Interaction

KANESWARAN SACHCHITHANANTHAN
11 min read · Dec 27, 2020


What is Human-Computer Interaction (HCI)?

Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design.

Why does HCI matter?

Human-Computer Interaction enables UX and User Interface (UI) designers all over the world to produce better, more user-focused computers, helping every consumer of that product or service. From ensuring machines continue to operate in safe, secure and user-friendly ways, to allowing users of all abilities to interact with computers, HCI is invaluable in making sure that computers are designed for successful and intuitive human use.

Design rules for interactive systems

Learnability

Learnability means the ease with which new users can begin effective interaction and achieve maximal performance. The learnability of a product can be measured in four areas, namely effectiveness, efficiency, satisfaction and errors.
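
To make these measures more concrete, here is a minimal sketch in Python showing how effectiveness, efficiency and errors might be summarised from observed test sessions (satisfaction is usually gathered separately, for example with a questionnaire). The Session fields and the sample data are illustrative assumptions, not part of any standard tool.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One user's attempt at a task (hypothetical observation record)."""
    completed: bool      # did the user finish the task?
    time_seconds: float  # how long the attempt took
    errors: int          # number of errors made along the way

def usability_metrics(sessions: list[Session]) -> dict[str, float]:
    """Summarise effectiveness, efficiency and errors over a set of sessions."""
    n = len(sessions)
    successes = [s for s in sessions if s.completed]
    return {
        # Effectiveness: proportion of attempts that completed the task.
        "effectiveness": len(successes) / n,
        # Efficiency: mean completion time among successful attempts.
        "mean_time_s": sum(s.time_seconds for s in successes) / max(len(successes), 1),
        # Errors: average number of errors per attempt.
        "errors_per_attempt": sum(s.errors for s in sessions) / n,
    }

data = [Session(True, 42.0, 1), Session(True, 55.5, 0), Session(False, 120.0, 4)]
print(usability_metrics(data))
```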

Flexibility

Flexibility means the multiplicity of ways in which the user and the system exchange information. It should not be difficult for the user to get information from the system. Flexibility also includes the ability of the system to support user interaction for more than one task at a time.

Robustness

Robustness is the level of support provided to the user in determining successful achievement and assessment of goals. It is the extent to which the user can reach the intended goal after recognizing an error in the previous interaction.

Standards and Guideline for Interactive systems

Standards for interactive system design are usually set by national or international bodies to ensure compliance with a set of design rules by a large community. Standards can apply specifically to either the hardware or the software used to build the interactive system.

Shneiderman’s 8 Golden Rules

Ben Shneiderman, an American computer scientist, consolidated some implicit facts about interface design and came up with the following eight general guidelines:

1. Strive for consistency in action sequences, layout, terminology, command use and so on.

2. Enable frequent users to use shortcuts, such as abbreviations, special key sequences and macros, to perform regular, familiar actions more quickly.

3. Offer informative feedback for every user action, at a level appropriate to the magnitude of the action.

4. Design dialogs to yield closure so that the user knows when they have completed a task.

5. Offer error prevention and simple error handling so that, ideally, users are prevented from making mistakes and, if they do, they are offered clear and informative instructions to enable them to recover.

6. Permit easy reversal of actions in order to relieve anxiety and encourage exploration, since the user knows that they can always return to the previous state.

7. Support internal locus of control so that the user is in control of the system, which responds to their actions.

8. Reduce short-term memory load by keeping displays simple, consolidating multiple page displays and providing time for learning action sequences.

These rules provide a useful shorthand for the more detailed sets of principles described earlier. Like those principles, they are not applicable to every eventuality and need to be interpreted for each new situation. However, they are broadly useful and their application will only help most design projects.
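
Rule 6, easy reversal of actions, maps directly onto a familiar implementation pattern: keep a history of previous states and let the user pop back to them. The sketch below is a minimal Python illustration; the class and method names are invented for the example, not taken from any particular toolkit.

```python
class UndoableEditor:
    """Minimal text editor model that supports easy reversal of actions."""

    def __init__(self) -> None:
        self.text = ""
        self._history: list[str] = []  # previous states, most recent last

    def apply(self, new_text: str) -> None:
        """Apply an edit, remembering the previous state so it can be undone."""
        self._history.append(self.text)
        self.text = new_text

    def undo(self) -> None:
        """Return to the previous state, if there is one (reversal of actions)."""
        if self._history:
            self.text = self._history.pop()

editor = UndoableEditor()
editor.apply("Hello")
editor.apply("Hello, world")
editor.undo()
print(editor.text)  # -> "Hello"
```

Because every action can be taken back, the user is free to explore the interface without fear of getting stuck in an unrecoverable state.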

Norman’s 7 Principles

To improve the interaction between humans and computers, Donald Norman in 1988 proposed seven principles for transforming difficult tasks into simpler ones. Norman's seven principles are:

  • Use both knowledge in world & knowledge in the head.
  • Simplify task structures.
  • Make things visible.
  • Get the mapping right (User mental model = Conceptual model = Designed model).
  • Convert constraints into advantages (physical constraints, cultural constraints, technological constraints).
  • Design for error (see the sketch after this list).
  • When all else fails − Standardize.
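
Two of these principles, converting constraints into advantages and designing for error, translate naturally into code. The hedged Python sketch below replaces error-prone free-text input with a constrained set of valid options, so many slips cannot happen at all and the remaining errors produce a helpful message; the PaperSize values are made up for illustration.

```python
from enum import Enum

class PaperSize(Enum):
    """The only paper sizes the (hypothetical) print dialog supports."""
    A4 = "A4"
    A5 = "A5"
    LETTER = "Letter"

def choose_paper_size(user_choice: str) -> PaperSize:
    """Constrain input to valid options instead of accepting arbitrary text.

    The constraint becomes an advantage: invalid sizes cannot be selected
    silently, and the error message lists exactly what is allowed.
    """
    try:
        return PaperSize(user_choice)
    except ValueError:
        valid = ", ".join(size.value for size in PaperSize)
        raise ValueError(f"Unknown paper size {user_choice!r}; choose one of: {valid}")

print(choose_paper_size("A4"))   # PaperSize.A4
# choose_paper_size("A3")        # raises, listing the valid options
```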

Evaluation techniques for interactive systems

What is evaluation?

When we use a design process, we still need to assess our designs and test our systems to ensure that they actually behave as we expect and meet user requirements. This is the role of evaluation.

Evaluation should not be thought of as a single phase in the design process (still less as an activity tacked on the end of the process if time permits). Ideally, evaluation should occur throughout the design life cycle, with the results of the evaluation feeding back into modifications to the design.

Clearly, it is not usually possible to perform extensive experimental testing continuously throughout the design, but analytic and informal techniques can and should be used.

Goals of Evaluation

Evaluation has three main goals: to assess the extent and accessibility of the system’s functionality, to assess users’ experience of the interaction, and to identify any specific problems with the system. The design of the system should enable users to perform their intended tasks more easily. In addition to evaluating the system design in terms of its functional capabilities, it is important to assess the user’s experience of the interaction and its impact upon them. The final goal of evaluation is to identify specific problems with the design. These may be aspects of the design which, when used in their intended context, cause unexpected results or confusion amongst users.

Evaluation through expert analysis

Ideally, the first evaluation of a system should be performed before any implementation work has started. A number of methods have been proposed to evaluate interactive systems through expert analysis. These depend upon the designer, or a human factors expert, taking the design and assessing the impact that it will have upon a typical user. The basic intention is to identify any areas that are likely to cause difficulties because they violate known cognitive principles, or ignore accepted empirical results. These methods can be used at any stage in the development process, from a design specification, through storyboards and prototypes, to full implementations, making them flexible evaluation approaches. They are also relatively cheap, since they do not require user involvement. However, they do not assess actual use of the system, only whether or not a system upholds accepted usability principles.

  • Cognitive walkthrough: The origin of the cognitive walkthrough approach to evaluation is the code walkthrough familiar in software engineering. Walkthroughs require a detailed review of a sequence of actions. In the code walkthrough, the sequence represents a segment of the program code that is stepped through by the reviewers to check certain characteristics. The main focus of the cognitive walkthrough is to establish how easy a system is to learn. It is vital to document the cognitive walkthrough to keep a record of what is good and what needs improvement in the design; it is therefore a good idea to produce some standard evaluation forms for the walkthrough.
  • Heuristic evaluation: A heuristic is a guideline or general principle or rule of thumb that can guide a design decision or be used to critique a decision that has already been made. Heuristic evaluation can be performed on a design specification so it is useful for evaluating early design. But it can also be used on prototypes, storyboards and fully functioning systems. It is therefore a flexible, relatively cheap approach. Hence it is often considered a discount usability technique. The general idea behind heuristic evaluation is that several evaluators independently critique a system to come up with potential usability problems. It is important that there be several of these evaluators and that the evaluations be done independently.
  • Model-based evaluation: A third expert-based approach is the use of models. Certain cognitive and design models provide a means of combining design specification and evaluation into the same framework. Design methodologies, such as design rationale, also have a role to play in evaluation at the design stage. Design rationale provides a framework in which design options can be evaluated. By examining the criteria that are associated with each option in the design, and the evidence that is provided to support these criteria, informed judgments can be made in the design. Dialog models can also be used to evaluate dialog sequences for problems, such as unreachable states, circular dialogs and complexity. Models such as state transition networks are useful for evaluating dialog designs prior to implementation (a small example of such a check appears after this list).
  • Using previous studies in evaluation: Experimental psychology and human–computer interaction between them possess a wealth of experimental results and empirical evidence. Some of this is specific to a particular domain, but much deals with more generic issues and applies in a variety of situations. Examples of such issues are the usability of different menu types, the recall of command names, and the choice of icons. A final approach to expert evaluation exploits this inheritance, using previous results as evidence to support (or refute) aspects of the design. It is expensive to repeat experiments continually and an expert review of relevant literature can avoid the need to do so. It should be noted that experimental results cannot be expected to hold arbitrarily across contexts. The reviewer must therefore select evidence carefully, noting the experimental design chosen, the population of participants used, the analyses performed and the assumptions made.
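
As a concrete illustration of model-based evaluation, the sketch below checks a small, made-up dialog model (a state transition network expressed as a Python dictionary) for unreachable states and unintended dead ends before anything is implemented.

```python
from collections import deque

# A hypothetical dialog model: state -> {user action: next state}
DIALOG = {
    "start":    {"open file": "browsing", "quit": "exit"},
    "browsing": {"select": "editing", "cancel": "start"},
    "editing":  {"save": "browsing", "quit": "exit"},
    "exit":     {},
    "help":     {"close": "start"},   # note: no transition ever leads here
}

def reachable_states(dialog: dict, start: str) -> set:
    """Breadth-first search over the dialog graph from the start state."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in dialog[state].values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(DIALOG) - reachable_states(DIALOG, "start")
dead_ends = {s for s in DIALOG if not DIALOG[s] and s != "exit"}
print("Unreachable states:", unreachable)   # -> {'help'}
print("Unexpected dead ends:", dead_ends)   # -> set()
```

Problems like the unreachable "help" state are cheap to find at this stage and expensive to find once the system has been built.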

Evaluation through user participation

User participation in evaluation tends to occur in the later stages of development, when there is at least a working prototype of the system in place. This may range from a simulation of the system’s interactive capabilities, without its underlying functionality, through a basic functional prototype, to a fully implemented system. Techniques for evaluation through user participation include empirical or experimental methods, observational methods, query techniques, and methods that use physiological monitoring, such as eye tracking and measures of heart rate and skin conductance. Such evaluation may be carried out in two settings: laboratory studies and field studies.

Laboratory Study: In this type of evaluation, users are taken out of their normal work environment to take part in controlled tests, often in a specialist usability laboratory.

Field Study: This type of evaluation takes the designer or evaluator out into the user’s work environment in order to observe the system in action.

Empirical methods: experimental evaluation
One of the most powerful methods of evaluating a design or an aspect of a design is to use a controlled experiment. This provides empirical evidence to support a particular claim or hypothesis. It can be used to study a wide range of different issues at different levels of detail. The evaluator chooses a hypothesis to test, which can be determined by measuring some attribute of participant behavior. A number of experimental conditions are considered which differ only in the values of certain controlled variables.
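
To make this concrete, here is a hedged sketch of how such an experiment might be analysed: suppose task completion times were collected under two interface designs (the controlled variable), and the hypothesis is that one design is faster. With invented timing data, an independent-samples t-test from scipy.stats compares the two conditions.

```python
from scipy import stats

# Hypothetical task completion times (seconds) for two interface designs.
# The controlled variable is the interface; the measured attribute is time.
times_design_a = [41.2, 38.5, 45.0, 39.9, 43.1, 40.7, 44.4, 37.8]
times_design_b = [35.0, 33.2, 36.9, 31.8, 34.5, 32.7, 35.6, 33.9]

# Independent-samples t-test: is the difference in mean time significant?
t_stat, p_value = stats.ttest_ind(times_design_a, times_design_b)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the designs differ in completion time.")
else:
    print("No significant difference detected at the 5% level.")
```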

Observational techniques
In this method users are asked to complete a set of predetermined tasks, although, if observation is being carried out in their place of work, they may instead be observed going about their normal duties. The evaluator watches and records the users' actions, and users are often also asked to elaborate their actions by 'thinking aloud'.
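
Recording the users' actions can be supported by simple logging alongside the evaluator's notes. Purely as an illustrative sketch (the class and the logged actions are invented), a minimal action logger in Python might look like this:

```python
import time

class ActionLogger:
    """Records user actions with timestamps for later analysis."""

    def __init__(self) -> None:
        self.events: list[tuple[float, str]] = []

    def log(self, action: str) -> None:
        self.events.append((time.time(), action))

    def report(self) -> None:
        start = self.events[0][0] if self.events else 0.0
        for timestamp, action in self.events:
            print(f"{timestamp - start:6.1f}s  {action}")

# In a real session these calls would be spread over the observation period.
logger = ActionLogger()
logger.log("opened menu 'File'")
logger.log("selected 'Save As'")
logger.log("cancelled dialog")
logger.report()
```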

Query techniques
This relies on asking the user about the interface directly. Query techniques can be useful in eliciting detail of the user’s view of a system. They can be used in evaluation and more widely to collect information about user requirements and tasks. There are two main types of query technique: interviews and questionnaires.
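
As a concrete example of the questionnaire route, the sketch below scores responses to the System Usability Scale (SUS), a widely used ten-item usability questionnaire: odd-numbered (positively worded) items contribute their response minus one, even-numbered (negatively worded) items contribute five minus the response, and the sum is scaled to 0-100. The example responses are invented.

```python
def sus_score(responses: list[int]) -> float:
    """Score a single System Usability Scale questionnaire (10 items, 1-5 each)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly 10 responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale from 0-40 up to 0-100

# One respondent's (invented) answers to the ten SUS statements.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```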

Evaluation through monitoring physiological responses

Potentially this type of evaluation will allow the evaluators not only to see more clearly exactly what users do when they interact with computers, but also to measure how they feel. The two areas receiving the most attention to date are eye tracking and physiological measurement.

Universal Design for Interactive Systems

Universal design is the process of designing products so that they can be used by as many people as possible, in as many situations as possible.

Multi-modal interaction

A system needs to provide information through more than one medium, and this can be achieved through multi-modal interaction. Multi-modal interaction draws on the five senses, namely sight, sound, touch, taste and smell. However, taste and smell are so far little used in interfaces, although they may become more relevant in the future.

Sound in the interface
Sound is an important contributor to usability. There is experimental evidence to suggest that the addition of audio confirmation of modes, in the form of changes in keyclicks, reduces errors. The dual presentation of information through sound and vision supports universal design, by enabling access for users with visual and hearing impairments respectively. It also enables information to be accessed in poorly lit or noisy environments. Sound can convey transient information and does not take up screen space, making it potentially useful for mobile applications.

Touch in the interface
Touch is the only sense that can be used to both send and receive information. The use of touch in the interface is known as haptic interaction. Haptics is a generic term relating to touch, but it can be roughly divided into two areas: cutaneous perception, which is concerned with tactile sensations through the skin; and kinesthetics, which is the perception of movement and position.

Handwriting recognition
Like speech, we consider handwriting to be a very natural form of communication. The idea of being able to interpret handwritten input is very appealing, and handwriting appears to offer both textual and graphical input using the same tools.

Gesture recognition
Gesture is a component of human–computer interaction that has become the subject of attention in multi-modal systems. Being able to control the computer with certain movements of the hand would be advantageous in many situations where there is no possibility of typing, or when other senses are fully occupied. It could also support communication for people who have hearing loss, if signing could be ‘translated’ into speech or vice versa.

Designing Interfaces for diversity

Although we can make general observations about human capabilities, users in fact have different needs and limitations. Interfaces are usually designed to cater for the ‘average’ user, but unfortunately this may exclude people who are not ‘average’.

Designing for users with disabilities
It is estimated that at least 10% of the population of every country has a disability that will affect interaction with computers. Employers and manufacturers of computing equipment have not only a moral responsibility to provide accessible products, but often also a legal responsibility. In many countries, legislation now demands that the workplace must be designed to be accessible or at least adaptable to all — the design of software and hardware should not unnecessarily restrict the job prospects of people with disabilities.

Designing for different age groups
We have considered how people differ along a range of sensory, physical and cognitive abilities. However, there are other areas of diversity that impact upon the way we design interfaces. One of these is age. In particular, older people and children have specific needs when it comes to interactive technology.

Designing for cultural differences
The final area of diversity we will consider is cultural difference. Cultural difference is often used synonymously with national difference, but this is too simplistic: whilst national differences are clearly important, they are not the only dimension of culture that affects how users interpret and use an interface.

Thank you


KANESWARAN SACHCHITHANANTHAN

I am a student at the University of Kelaniya, studying Software Engineering.