User interface in the context of Text user interface


⭐ Core Definition: User interface

In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.

Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.


In this Dossier

User interface in the context of Video game

A video game, computer game, or simply game, is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations). Some video games also allow microphone and webcam inputs for in-game chatting and livestreaming.

Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer games (which includes LAN games, online games, and browser games). More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.

View the full Wikipedia page for Video game

User interface in the context of Graphical user interface

A graphical user interface, or GUI, is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to other, lower-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

View the full Wikipedia page for Graphical user interface

User interface in the context of Interactivity

Across the many fields concerned with interactivity, including information science, computer science, human–computer interaction, communication, and industrial design, there is little agreement over the meaning of the term "interactivity", but most definitions are related to interaction between users and computers and other machines through a user interface. Interactivity can, however, also refer to interaction between people. It nevertheless usually refers to interaction between people and computers – and sometimes to interaction between computers – through software, hardware, and networks.

Multiple views on interactivity exist. The "contingency view" of interactivity, for example, distinguishes three levels of interactivity.

View the full Wikipedia page for Interactivity

User interface in the context of User interface design

User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). User-centered design is typically accomplished through the execution of modern design thinking which involves empathizing with the target audience, defining a problem statement, ideating potential solutions, prototyping wireframes, and testing prototypes in order to refine final interface mockups.

User interfaces are the points of interaction between users and designs.

View the full Wikipedia page for User interface design

User interface in the context of Browser extension

A browser extension is a software module for customizing a web browser. Browsers typically allow users to install a variety of extensions, including user interface modifications, cookie management, ad blocking, and the custom scripting and styling of web pages.

Browser plug-ins are a different type of module and are no longer supported by the major browsers. One difference is that extensions are distributed as source code, while plug-ins are executables (i.e. object code). The most popular browser, Google Chrome, has over 100,000 extensions available but stopped supporting plug-ins in 2020.

View the full Wikipedia page for Browser extension

User interface in the context of Firefox

Mozilla Firefox, or simply Firefox, is a free and open-source web browser developed by the Mozilla Foundation and its subsidiary, the Mozilla Corporation. It uses the Gecko rendering engine to display web pages, which implements current and anticipated web standards. Firefox is available for Windows 10 and later, macOS, and Linux. Unofficial ports are available for various Unix and Unix-like operating systems, including FreeBSD, OpenBSD, and NetBSD, as well as other operating systems such as ReactOS. It is the default, pre-installed browser on Debian, Ubuntu, and other Linux distributions. Firefox is also available for Android and iOS, although, as with all other iOS web browsers, the iOS version uses the WebKit layout engine instead of Gecko due to platform requirements. An optimized version was also available on the Amazon Fire TV, as one of the two main browsers alongside Amazon's Silk Browser, until April 30, 2021, when Firefox was discontinued on that platform.

Firefox is the spiritual successor of Netscape Navigator, as the Mozilla community was created by Netscape in 1998, before its acquisition by AOL. Firefox was created in 2002 under the codename "Phoenix" by members of the Mozilla community who desired a standalone browser rather than the Mozilla Application Suite bundle. During its beta phase it proved popular with testers and was praised for its speed, security, and add-ons compared to Microsoft's then-dominant Internet Explorer 6. It was released on November 9, 2004, and challenged Internet Explorer's dominance with 60 million downloads within nine months. In November 2017, Firefox began incorporating new technology under the code name "Quantum" to promote parallelism and a more intuitive user interface.

Firefox usage share grew to a peak of 32.21% in November 2009, with Firefox 3.5 overtaking Internet Explorer 7, although not all versions of Internet Explorer as a whole; its usage then declined in competition with Google Chrome. As of February 2025, according to StatCounter, it had a 6.36% usage share on traditional PCs (i.e. as a desktop browser), making it the fourth-most popular PC web browser after Google Chrome (65%), Microsoft Edge (14%), and Safari (8.65%).

View the full Wikipedia page for Firefox

User interface in the context of Screen reader

A screen reader is a form of assistive technology (AT) that renders text and image content as speech or braille output. Screen readers are essential to blind people, and are also useful to people who are visually impaired, illiterate or learning-disabled. Screen readers are software applications that attempt to convey what people with normal eyesight see on a display to their users via non-visual means, like text-to-speech, sound icons, or a braille device. They do this by applying a wide variety of techniques that include, for example, interacting with dedicated accessibility APIs, using various operating system features (like inter-process communication and querying user interface properties), and employing hooking techniques.

Microsoft Windows operating systems have included the Microsoft Narrator screen reader since Windows 2000, though separate products such as Freedom Scientific's commercially available JAWS screen reader and ZoomText screen magnifier and the free and open source screen reader NVDA by NV Access are more popular for that operating system. Apple Inc.'s macOS, iOS, and tvOS include VoiceOver as a built-in screen reader, while Google's Android provides the TalkBack screen reader and its ChromeOS can use ChromeVox. Similarly, Android-based devices from Amazon provide the VoiceView screen reader. There are also free and open source screen readers for Linux and Unix-like systems, such as Speakup and Orca.

View the full Wikipedia page for Screen reader

User interface in the context of Typing

Typing is the process of entering or inputting text by pressing keys on a typewriter, computer keyboard, mobile phone, or calculator. It can be distinguished from other means of text input, such as handwriting and speech recognition; text can be in the form of letters, numbers and other symbols. The world's first typist was Lillian Sholes from Wisconsin in the United States, the daughter of Christopher Latham Sholes, who invented the first practical typewriter.

User interface features such as spell checker and autocomplete serve to facilitate and speed up typing and to prevent or correct errors the typist may make.
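Features like autocomplete typically work by matching what the typist has entered so far against a vocabulary of known words. A minimal prefix-matching sketch in Python (the word list here is invented for illustration):

```python
def autocomplete(prefix, vocabulary):
    """Return vocabulary words that start with the given prefix, alphabetically."""
    prefix = prefix.lower()
    return sorted(w for w in vocabulary if w.lower().startswith(prefix))

words = ["interface", "interaction", "interactive", "keyboard"]
print(autocomplete("inter", words))  # → ['interaction', 'interactive', 'interface']
```

Real input methods refine this with frequency ranking and fuzzy matching, but the prefix filter above is the core of the feature.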

View the full Wikipedia page for Typing

User interface in the context of Application programming interface

An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.

In contrast to a user interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other. It is not intended to be used directly by a person (the end user) other than a computer programmer who is incorporating it into software. An API is often made up of different parts which act as tools or services that are available to the programmer. A program or a programmer that uses one of these parts is said to call that portion of the API. The calls that make up the API are also known as subroutines, methods, requests, or endpoints. An API specification defines these calls, meaning that it explains how to use or implement them.
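The distinction can be made concrete with a toy example: one piece of software exposes (implements) an API, and another piece of software calls it without knowing the implementation details. All names below (TemperatureService, to_fahrenheit) are invented for this sketch:

```python
class TemperatureService:
    """Implements (exposes) a small temperature-conversion API."""

    def to_fahrenheit(self, celsius: float) -> float:
        # The "specification" for this call: accept Celsius, return Fahrenheit.
        return celsius * 9 / 5 + 32

# A client program calls the API; only the call's interface matters to it.
service = TemperatureService()
print(service.to_fahrenheit(100.0))  # → 212.0
```

Here `to_fahrenheit` plays the role of one call in the API, and the docstring stands in for the API specification that defines it.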

View the full Wikipedia page for Application programming interface

User interface in the context of Web browsing

Web navigation is the process of navigating a network of information resources in the World Wide Web, which is organized as hypertext or hypermedia. The user interface that is used to do so is called a web browser.

A central theme in web design is the development of a web navigation interface that maximizes usability.

View the full Wikipedia page for Web browsing

User interface in the context of Electronic musical instrument

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument produces its sound by outputting an electrical, electronic, or digital audio signal that is ultimately fed into a power amplifier, which drives a loudspeaker, creating the sound heard by the performer and listener.

An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano, where each key is linked mechanically to a swinging string hammer; on an electronic keyboard, the keyboard interface is instead linked to a synth module, computer, or other electronic or digital sound generator, which then creates the sound. However, it is increasingly common to separate the user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid-state nature of electronic keyboards also gives them a different "feel" and "response", offering a novel playing experience relative to a mechanically linked piano keyboard.
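The controller/synthesizer split means the keyboard sends compact performance messages rather than audio. A MIDI Note On message, for instance, is three bytes: a status byte (0x90 plus the channel number), the note number, and the velocity. A minimal sketch in Python:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message (status 0x90 | channel)."""
    if not (0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128):
        raise ValueError("value out of range for MIDI")
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
print(msg.hex())  # → '903c40'
```

A synthesizer receiving these three bytes decides for itself what sound to produce, which is exactly the separation of interface and sound generation described above.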

View the full Wikipedia page for Electronic musical instrument

User interface in the context of Copy and paste

Cut, copy, and paste are essential commands of modern human–computer interaction and user interface design. They offer an interprocess communication technique for transferring data through a computer's user interface. The cut command removes the selected data from its original position, and the copy command creates a duplicate; in both cases the selected data is kept in temporary storage called the clipboard. Clipboard data is later inserted wherever a paste command is issued. The data remains available to any application supporting the feature, thus allowing easy data transfer between applications.
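The mechanism described above can be modeled in a few lines: cut and copy both place the selection in shared temporary storage, and paste inserts that storage elsewhere. A minimal sketch in Python (the class and function names are invented for illustration):

```python
class Clipboard:
    """Temporary storage shared by cut, copy, and paste."""
    def __init__(self):
        self.data = ""

def cut(text, start, end, clipboard):
    """Remove the selection from the text and place it on the clipboard."""
    clipboard.data = text[start:end]
    return text[:start] + text[end:]

def copy(text, start, end, clipboard):
    """Duplicate the selection onto the clipboard; the text is unchanged."""
    clipboard.data = text[start:end]
    return text

def paste(text, position, clipboard):
    """Insert the clipboard contents at the given position."""
    return text[:position] + clipboard.data + text[position:]

cb = Clipboard()
text = cut("hello world", 0, 6, cb)  # text is now "world"; clipboard holds "hello "
print(paste(text, 5, cb))            # → 'worldhello '
```

Because the clipboard object outlives any single edit, the same data can be pasted into a different document, which is the interprocess-communication aspect the excerpt mentions.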

The command names are a (skeuomorphic) interface metaphor based on the physical procedure used in manuscript print editing to create a page layout, like with paper.The commands were pioneered into computing by Xerox PARC in 1974, popularized by Apple Computer in the 1983 Lisa workstation and the 1984 Macintosh computer, and in a few home computer applications such as the 1984 word processor Cut & Paste.

View the full Wikipedia page for Copy and paste

User interface in the context of Windows CE

Windows CE, later known as Windows Embedded CE and Windows Embedded Compact, is a discontinued operating system developed by Microsoft for mobile and embedded devices. It was part of the Windows Embedded family and served as the software foundation of several products including the Handheld PC, Pocket PC, Auto PC, Windows Mobile, Windows Phone 7 and others.

Unlike Windows Embedded Standard, Windows For Embedded Systems, Windows Embedded Industry and Windows IoT, which are based on Windows NT, Windows CE uses a different kernel. Microsoft licensed it to original equipment manufacturers (OEMs), who could modify and create their own user interfaces and experiences, with Windows Embedded Compact providing the technical foundation to do so.

View the full Wikipedia page for Windows CE

User interface in the context of Clamshell design

Clamshell design is a form factor commonly used in the design of electronic devices and other manufactured objects. It is inspired by the morphology of the clam. The form factor has been applied to handheld game consoles, mobile phones (where it is often called a "flip phone"), and especially laptop computers. Clamshell devices are usually made of two sections connected by a hinge, each section containing either a flat panel display or an alphanumeric keyboard/keypad, which can fold into contact together like a bivalve shell.

Generally speaking, the interface components such as keys and display are kept inside the closed clamshell, protecting them from damage and unintentional use while also making the device shorter or narrower so it is easier to carry around. In many cases, opening the clamshell offers more surface area than when the device is closed, allowing interface components to be larger and easier to use than on devices which do not flip open. A disadvantage of the clamshell design is the connecting hinge, which is prone to fatigue or failure.

View the full Wikipedia page for Clamshell design

User interface in the context of Point and click

Point and click is the sequence of a computer user moving a pointer to a certain location on a screen (pointing) and then pressing a button on a mouse or other pointing device (clicking). An example of point and click is in hypermedia, where users click on hyperlinks to navigate from document to document. User interfaces, for example graphical user interfaces, are sometimes described as "point-and-click interfaces", often to suggest that they are very easy to use, requiring the user simply to point to indicate their wishes. Describing software this way implies that the interface can be controlled solely through a pointing device, with little or no input from the keyboard, as with many graphical user interfaces.
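The pointing half of the interaction reduces to a hit test: checking whether the pointer's coordinates fall inside a target's on-screen rectangle. A minimal sketch in Python (the button geometry is invented for illustration):

```python
def hit_test(px, py, rect):
    """Return True if pointer (px, py) falls inside rect = (x, y, width, height)."""
    x, y, w, h = rect
    return x <= px < x + w and y <= py < y + h

button = (10, 10, 80, 24)  # a hypothetical on-screen button
print(hit_test(50, 20, button))  # → True
print(hit_test(5, 20, button))   # → False
```

A GUI toolkit performs a test like this against every visible control when a click arrives, dispatching the event to the topmost control that passes.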

In some systems, such as Internet Explorer, moving the pointer over a link (or other GUI control) and pausing for a split second will cause a tooltip to be displayed.

View the full Wikipedia page for Point and click

User interface in the context of Text-based user interface

In computing, a text-based user interface (TUI) (alternately, terminal user interface, reflecting a dependence upon the properties of computer terminals and not just text) is a retronym describing a type of user interface (UI) that was common as an early form of human–computer interaction, before the advent of bitmapped displays and modern conventional graphical user interfaces (GUIs). Like modern GUIs, TUIs can use the entire screen area and may accept mouse and other inputs. They may also use color and often structure the display using box-drawing characters such as ┌ and ╣. The modern context of use is usually a terminal emulator.
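The box-drawing characters mentioned above let a TUI draw window-like frames out of nothing but text cells. A minimal sketch in Python that renders a titled frame (the function name and layout are invented for illustration):

```python
def draw_box(title, width):
    """Render a titled frame using Unicode box-drawing characters."""
    top = "┌" + title.center(width - 2, "─") + "┐"
    side = "│" + " " * (width - 2) + "│"
    bottom = "└" + "─" * (width - 2) + "┘"
    return "\n".join([top, side, bottom])

print(draw_box(" Menu ", 20))
# ┌────── Menu ──────┐
# │                  │
# └──────────────────┘
```

Full TUI libraries (curses and its descendants) build on the same idea, adding cursor addressing and input handling on top of the character grid.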

View the full Wikipedia page for Text-based user interface

User interface in the context of Heads-up display (video games)

In video games, the HUD (heads-up display) is the method by which information is visually displayed to the player as part of a game's user interface. It takes its name from the head-up displays used in modern aircraft.

The HUD is frequently used to simultaneously display several pieces of information including the player character's health points, items, and an indication of game progression (such as score or level). A HUD may also include elements to aid a player's navigation in the virtual space, such as a mini-map.
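A text-only approximation of such a HUD can be produced with simple string formatting: a bar for the player's health points plus the current score. A minimal sketch in Python (the layout and names are invented for illustration):

```python
def hud_line(health, max_health, score, width=10):
    """Format a one-line text HUD: a health bar plus the current score."""
    filled = round(width * health / max_health)
    bar = "#" * filled + "-" * (width - filled)
    return f"HP [{bar}] {health}/{max_health}  Score: {score}"

print(hud_line(70, 100, 1250))  # → 'HP [#######---] 70/100  Score: 1250'
```

A real game redraws an equivalent overlay every frame, layered on top of the rendered scene rather than printed as text.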

View the full Wikipedia page for Heads-up display (video games)

User interface in the context of ChromeOS

ChromeOS (sometimes styled as chromeOS and formerly styled as Chrome OS) is an operating system designed and developed by Google. It is derived from the open-source ChromiumOS operating system (which itself is derived from Gentoo Linux), and uses the Google Chrome web browser as its principal user interface.

Google announced the project in July 2009, initially describing it as an operating system where applications and user data would reside in the cloud. ChromeOS was used primarily to run web applications.

View the full Wikipedia page for ChromeOS