Human–computer interaction in the context of Text user interface


⭐ Core Definition: Human–computer interaction

Human–computer interaction (HCI) is the process through which people operate and engage with computer systems. Research in HCI covers the design and use of computer technology, focusing on the interfaces between people (users) and computers. HCI researchers observe how people interact with computers and design technologies that allow humans to interact with computers in new ways. These include visual, auditory, and tactile (haptic) feedback systems, which serve as channels for interaction in both traditional interfaces and mobile computing contexts. A device that allows interaction between a human being and a computer is known as a "human–computer interface".

As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their 1983 book, The Psychology of Human–Computer Interaction. The first known use was in 1975 by Carlisle. The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.


In this Dossier

Human–computer interaction in the context of Computer science

Computer science is the study of computation, information, and automation. Included broadly in the sciences, computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). An expert in the field is known as a computer scientist.

Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks, and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-oriented processes such as problem-solving, decision-making, environmental adaptation, planning, and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.

View the full Wikipedia page for Computer science
↑ Return to Menu

Human–computer interaction in the context of Cursor (user interface)

In human–computer interaction, a cursor is an indicator used to show the current position on a computer monitor or other display device that will respond to input, such as a text cursor or a mouse pointer.

View the full Wikipedia page for Cursor (user interface)
↑ Return to Menu

Human–computer interaction in the context of Graphical user interface

A graphical user interface, or GUI, is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, as well as in smaller household, office, and industrial controls. The term GUI tends not to be applied to lower-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), nor to displays that are not flat screens, such as volumetric displays, because the term is restricted to two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

View the full Wikipedia page for Graphical user interface
↑ Return to Menu

Human–computer interaction in the context of Douglas Engelbart

Douglas Carl Engelbart (January 30, 1925 – July 2, 2013) was an American engineer, inventor, and pioneer in many aspects of computer science. He is best known for his work on founding the field of human–computer interaction, particularly while at his Augmentation Research Center Lab at SRI International, which resulted in the creation of the computer mouse and the development of hypertext, networked computers, and precursors to graphical user interfaces. These were demonstrated at The Mother of All Demos in 1968. Engelbart's law, the observation that the intrinsic rate of human performance is exponential, is named after him.

The "oN-Line System" (NLS) developed by the Augmentation Research Center under Engelbart's guidance with funding mostly from the Advanced Research Projects Agency (ARPA), later renamed Defense Advanced Research Projects Agency (DARPA), demonstrated many technologies, most of which are now in widespread use; it included the computer mouse, bitmapped screens, word processing, and hypertext; all of which were displayed at "The Mother of All Demos" in 1968. The lab was transferred from SRI to Tymshare in the late 1970s, which was acquired by McDonnell Douglas in 1984, and NLS was renamed Augment (now the Doug Engelbart Institute). At both Tymshare and McDonnell Douglas, Engelbart was limited by a lack of interest in his ideas and funding to pursue them and retired in 1986.

View the full Wikipedia page for Douglas Engelbart
↑ Return to Menu

Human–computer interaction in the context of Interactivity

Across the many fields concerned with interactivity, including information science, computer science, human-computer interaction, communication, and industrial design, there is little agreement over the meaning of the term "interactivity", but most definitions are related to interaction between users and computers and other machines through a user interface. Interactivity can however also refer to interaction between people. It nevertheless usually refers to interaction between people and computers – and sometimes to interaction between computers – through software, hardware, and networks.

Multiple views on interactivity exist; the "contingency view", for example, distinguishes three levels of interactivity.

View the full Wikipedia page for Interactivity
↑ Return to Menu

Human–computer interaction in the context of User interface

In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.

Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.

View the full Wikipedia page for User interface
↑ Return to Menu

Human–computer interaction in the context of Copy and paste

Cut, copy, and paste are essential commands of modern human–computer interaction and user interface design. They offer an interprocess communication technique for transferring data through a computer's user interface. The cut command removes the selected data from its original position, and the copy command creates a duplicate; in both cases the selected data is kept in temporary storage called the clipboard. Clipboard data is later inserted wherever a paste command is issued. The data remains available to any application supporting the feature, thus allowing easy data transfer between applications.
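
As an illustration of the copy-and-paste flow described above, here is a minimal sketch of programmatic clipboard access using Python's standard tkinter toolkit. It assumes a graphical environment is available, and the example string is arbitrary.

```python
# A minimal sketch of clipboard use via Python's standard tkinter toolkit
# (assumes a graphical environment is available).
import tkinter as tk

root = tk.Tk()
root.withdraw()  # no window needed; we only use the toolkit's clipboard API

# "Copy": place selected data on the clipboard, replacing its previous contents.
root.clipboard_clear()
root.clipboard_append("Selected text to transfer")
root.update()  # hand the data to the window system so other programs can paste it

# "Paste": any application supporting the feature can now read the same data.
pasted = root.clipboard_get()
print(pasted)

root.destroy()
```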

The command names are a (skeuomorphic) interface metaphor based on the physical procedure used in manuscript print editing to create a page layout with paper. The commands were pioneered in computing by Xerox PARC in 1974, popularized by Apple Computer in the 1983 Lisa workstation and the 1984 Macintosh computer, and appeared in a few home computer applications such as the 1984 word processor Cut & Paste.

View the full Wikipedia page for Copy and paste
↑ Return to Menu

Human–computer interaction in the context of Interaction design

Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." While interaction design has an interest in form (similar to other design fields), its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field, as opposed to a science or engineering field.

Interaction design borrows from a wide range of fields like psychology, human-computer interaction, information architecture, and user research to create designs that are tailored to the needs and preferences of users. This involves understanding the context in which the product will be used, identifying user goals and behaviors, and developing design solutions that are responsive to user needs and expectations.

View the full Wikipedia page for Interaction design
↑ Return to Menu

Human–computer interaction in the context of Mobile computing

Mobile computing is human–computer interaction in which a computer is expected to be transported during normal usage and allow for transmission of data, which can include voice and video transmissions. Mobile computing involves mobile communication, mobile hardware, and mobile software. Communication issues include ad hoc networks and infrastructure networks as well as communication properties, protocols, data formats, and concrete technologies. Hardware includes mobile devices or device components. Mobile software deals with the characteristics and requirements of mobile applications.

View the full Wikipedia page for Mobile computing
↑ Return to Menu

Human–computer interaction in the context of Text-based user interface

In computing, a text-based user interface (TUI) (alternately, terminal user interface, to reflect a dependence upon the properties of computer terminals and not just text) is a retronym describing a type of user interface (UI) common as an early form of human–computer interaction, before the advent of bitmapped displays and modern conventional graphical user interfaces (GUIs). Like modern GUIs, TUIs can use the entire screen area and may accept mouse and other inputs. They may also use color and often structure the display using box-drawing characters such as ┌ and ╣. The modern context of use is usually a terminal emulator.
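
As a rough illustration, the following minimal sketch uses Python's standard curses module (assuming a Unix-like terminal or terminal emulator) to address the whole screen and frame it with box-drawing characters, in the style of a simple TUI.

```python
# A minimal text-based user interface sketch using Python's standard curses
# module (assumes a Unix-like terminal or terminal emulator).
import curses

def main(stdscr):
    curses.curs_set(0)            # hide the text cursor
    stdscr.clear()
    stdscr.border()               # frame the whole screen with line-drawing characters
    stdscr.addstr(0, 2, " Sample TUI ")
    stdscr.addstr(2, 2, "Text-based interfaces address the entire screen area,")
    stdscr.addstr(3, 2, "not just a scrolling line of output.")
    stdscr.addstr(5, 2, "Press any key to exit.")
    stdscr.refresh()
    stdscr.getch()                # block until the user presses a key

curses.wrapper(main)              # sets up and restores the terminal state
```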

View the full Wikipedia page for Text-based user interface
↑ Return to Menu

Human–computer interaction in the context of Direct manipulation interface

In computer science, human–computer interaction, and interaction design, direct manipulation is an approach to interfaces which involves continuous representation of objects of interest together with rapid, reversible, and incremental actions and feedback. As opposed to other interaction styles, for example, the command language, the intention of direct manipulation is to allow a user to manipulate objects presented to them, using actions that correspond at least loosely to manipulation of physical objects. An example of direct manipulation is resizing a graphical shape, such as a rectangle, by dragging its corners or edges with a mouse.

Having real-world metaphors for objects and actions can make it easier for a user to learn and use an interface (some might say that the interface is more natural or intuitive), and rapid, incremental feedback allows a user to make fewer errors and complete tasks in less time, because they can see the results of an action before completing the action, thus evaluating the output and compensating for mistakes.
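
The following is a minimal sketch of this idea using Python's standard tkinter toolkit (assuming a graphical environment): a rectangle the user drags with the mouse, where each motion event immediately updates the display, giving the rapid, incremental feedback described above. Resizing via corner handles is omitted for brevity.

```python
# A minimal direct-manipulation sketch: drag a rectangle with the mouse and see
# the result continuously while the gesture is still in progress.
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, background="white")
canvas.pack()

rect = canvas.create_rectangle(50, 50, 150, 120, fill="lightblue")
last = {"x": 0, "y": 0}

def on_press(event):
    # Remember where the drag started so each motion event is an incremental step.
    last["x"], last["y"] = event.x, event.y

def on_drag(event):
    # Move the object by the small delta since the last event: rapid, reversible,
    # incremental actions whose effect is visible before the gesture is finished.
    canvas.move(rect, event.x - last["x"], event.y - last["y"])
    last["x"], last["y"] = event.x, event.y

canvas.tag_bind(rect, "<ButtonPress-1>", on_press)
canvas.tag_bind(rect, "<B1-Motion>", on_drag)

root.mainloop()
```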

View the full Wikipedia page for Direct manipulation interface
↑ Return to Menu

Human–computer interaction in the context of 3D user interaction

3D human–computer interaction is a form of human–computer interaction where users are able to move and perform interaction in 3D space. Both the user and the computer process information where the physical position of elements in 3D space is relevant. It largely encompasses virtual reality and augmented reality.

The 3D space used for interaction can be the real physical space, a virtual space representation simulated on the computer, or a combination of both. When the real physical space is used for data input, the human interacts with the machine performing actions using an input device that detects the 3D position of the human interaction, among other things. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through one output device.
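
As a purely illustrative sketch (hypothetical values, no real tracking hardware assumed), mapping a 3D position reported by an input device in physical coordinates into the coordinate frame of a simulated virtual scene can be expressed as a rigid transform plus a scale:

```python
# Illustrative only: transform a physical/tracker-space 3D point into
# virtual-scene coordinates using a rotation, translation, and scale.
import numpy as np

def tracker_to_scene(p_tracker, rotation, translation, scale=1.0):
    """Map a physical 3D point into the virtual scene's coordinate frame."""
    return scale * (rotation @ np.asarray(p_tracker)) + translation

# Example: a 90-degree rotation about the vertical (y) axis plus a scene offset.
theta = np.pi / 2
rotation = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                     [0.0,           1.0, 0.0          ],
                     [-np.sin(theta), 0.0, np.cos(theta)]])
translation = np.array([0.0, 1.5, -2.0])

print(tracker_to_scene([0.2, 1.1, 0.4], rotation, translation))
```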

View the full Wikipedia page for 3D user interaction
↑ Return to Menu

Human–computer interaction in the context of Modality (human–computer interaction)

In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory) or other significant differences in processing (e.g., text vs. image). A system is designated unimodal if it has only one modality implemented, and multimodal if it has more than one. When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities; if multiple modalities are available for a task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively. Modalities are generally defined in two forms: computer-to-human and human-to-computer modalities.
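
To make the classification concrete, here is a small illustrative sketch (a hypothetical data model, not a standard API) that labels a system unimodal or multimodal and flags tasks with redundant modalities, given a mapping from tasks to the channels that can accomplish them.

```python
# Hypothetical data model for illustration: tasks mapped to the input/output
# channels (modalities) that can accomplish them.
from typing import Dict, Set

def classify(task_modalities: Dict[str, Set[str]]) -> None:
    all_modalities = set().union(*task_modalities.values())
    kind = "multimodal" if len(all_modalities) > 1 else "unimodal"
    print(f"System is {kind}: {sorted(all_modalities)}")
    for task, modalities in task_modalities.items():
        if len(modalities) > 1:
            # More than one channel can complete this task: redundant modalities.
            print(f"  task '{task}' has redundant modalities: {sorted(modalities)}")

classify({
    "dictate message": {"speech", "keyboard"},   # either channel works
    "receive alert":   {"sound", "vibration"},   # redundant output channels
    "read document":   {"screen"},
})
```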

View the full Wikipedia page for Modality (human–computer interaction)
↑ Return to Menu

Human–computer interaction in the context of Facial recognition system

A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.
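
As a hedged illustration of the detection step that precedes recognition, the following sketch uses OpenCV's Haar-cascade face detector (it assumes the opencv-python package is installed, and the image path "photo.jpg" is a placeholder). Matching the detected face against a database of known faces would be a separate, later step.

```python
# Locate faces in an image with OpenCV's bundled Haar-cascade detector.
import cv2

image = cv2.imread("photo.jpg")                      # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is a bounding box (x, y, width, height) locating a face whose
# features a full system would then measure and compare against stored templates.
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), size {w}x{h}")
```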

Development of similar systems began in the 1960s as a form of computer application. Since their inception, facial recognition systems have seen increasingly wide use, most recently on smartphones and in other forms of technology, such as robotics. Because computerized facial recognition involves the measurement of a human's physiological characteristics, facial recognition systems are categorized as biometrics. Although the accuracy of facial recognition as a biometric technology is lower than that of iris recognition, fingerprint image acquisition, palm recognition, or voice recognition, it is widely adopted due to its contactless process. Facial recognition systems have been deployed in advanced human–computer interaction, video surveillance, law enforcement, passenger screening, decisions on employment and housing, and automatic indexing of images.

View the full Wikipedia page for Facial recognition system
↑ Return to Menu

Human–computer interaction in the context of Paper prototyping

In human–computer interaction, paper prototyping is a widely used method in the user-centered design process, a process that helps developers to create software that meets the user's expectations and needs – in this case, especially for designing and testing user interfaces. It is throwaway prototyping and involves creating rough, even hand-sketched, drawings of an interface to use as prototypes, or models, of a design. While paper prototyping seems simple, this method of usability testing can provide useful feedback to aid the design of easier-to-use products. This is supported by many usability professionals.

View the full Wikipedia page for Paper prototyping
↑ Return to Menu

Human–computer interaction in the context of Jef Raskin

Jef Raskin (born Jeff Raskin; March 9, 1943 – February 26, 2005) was an American human–computer interface expert who conceived and began leading the Macintosh project at Apple in the late 1970s.

View the full Wikipedia page for Jef Raskin
↑ Return to Menu