Graphical user interface in the context of Integrated development environment



⭐ Core Definition: Graphical user interface

A graphical user interface, or GUI, is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, as well as in smaller household, office, and industrial controls. The term GUI is generally not applied to lower-display-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), or to non-flat displays such as volumetric displays, because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

Graphical user interface in the context of Computer mouse

A computer mouse (plural mice; also mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer (called a cursor) on a display, which allows a smooth control of the graphical user interface of a computer.

The first public demonstration of a mouse controlling a computer system was done by Douglas Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the x-dimension and one in the y-dimension. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system.
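The translation from relative motion reports to an on-screen pointer position described above can be sketched as follows (a simplified illustration; real drivers add acceleration curves, scaling, and sub-pixel precision):

```python
def move_pointer(pos, delta, screen=(1920, 1080)):
    """Apply a relative (dx, dy) motion report to the pointer position,
    clamping the result to the screen bounds."""
    x = min(max(pos[0] + delta[0], 0), screen[0] - 1)
    y = min(max(pos[1] + delta[1], 0), screen[1] - 1)
    return (x, y)

# A stream of motion reports, as a mouse might emit them.
pos = (100, 100)
for delta in [(5, 0), (0, -3), (2000, 0)]:  # the last report runs off-screen
    pos = move_pointer(pos, delta)
```

The clamping step is why the pointer stops at the screen edge no matter how far the physical device travels.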

View the full Wikipedia page for Computer mouse

Graphical user interface in the context of Pointing device

A pointing device is a human interface device that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. Graphical user interfaces (GUI) and CAD systems allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes. Common gestures are point and click and drag and drop.

While the most common pointing device by far is the mouse, many more devices have been developed. However, the term mouse is commonly used as a metaphor for devices that move a computer cursor.

View the full Wikipedia page for Pointing device

Graphical user interface in the context of Douglas Engelbart

Douglas Carl Engelbart (January 30, 1925 – July 2, 2013) was an American engineer, inventor, and a pioneer in many aspects of computer science. He is best known for his work on founding the field of human–computer interaction, particularly while at his Augmentation Research Center Lab in SRI International, which resulted in the creation of the computer mouse and the development of hypertext, networked computers, and precursors to graphical user interfaces. These were demonstrated at The Mother of All Demos in 1968. Engelbart's law, the observation that the intrinsic rate of human performance is exponential, is named after him.

The "oN-Line System" (NLS), developed by the Augmentation Research Center under Engelbart's guidance with funding mostly from the Advanced Research Projects Agency (ARPA, later renamed the Defense Advanced Research Projects Agency, DARPA), demonstrated many technologies now in widespread use, including the computer mouse, bitmapped screens, word processing, and hypertext, all of which were displayed at "The Mother of All Demos" in 1968. The lab was transferred from SRI to Tymshare in the late 1970s, which was acquired by McDonnell Douglas in 1984, and NLS was renamed Augment (now the Doug Engelbart Institute). At both Tymshare and McDonnell Douglas, Engelbart was limited by a lack of interest in his ideas and funding to pursue them, and he retired in 1986.

View the full Wikipedia page for Douglas Engelbart

Graphical user interface in the context of User interface

In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.

Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.

View the full Wikipedia page for User interface

Graphical user interface in the context of Autocomplete

Autocomplete, or word completion, is a feature in which an application predicts the rest of a word a user is typing. In Android and iOS smartphones, this is called predictive text. In graphical user interfaces, users can typically press the tab key to accept a suggestion or the down arrow key to accept one of several.

Autocomplete speeds up human-computer interactions when it correctly predicts the word a user intends to enter after only a few characters have been typed into a text input field. It works best in domains with a limited number of possible words (such as in command line interpreters), when some words are much more common (such as when addressing an e-mail), or writing structured and predictable text (as in source code editors).
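The behavior described above can be sketched as a simple prefix match ranked by usage frequency (the vocabulary and counts below are invented for illustration):

```python
def complete(prefix, vocabulary):
    """Return candidate completions for `prefix`, most frequent first.
    `vocabulary` maps each word to a usage count."""
    matches = [w for w in vocabulary if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -vocabulary[w])

# Hypothetical command vocabulary with usage counts.
vocab = {"git": 50, "grep": 30, "groups": 5, "gzip": 8}
complete("gr", vocab)  # -> ["grep", "groups"]
```

Ranking by frequency is what lets a single typed character surface the most likely word, which is why autocomplete works best when some words are much more common than others.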

View the full Wikipedia page for Autocomplete

Graphical user interface in the context of Apple Inc.

Apple Inc. is an American multinational technology company headquartered in Cupertino, California, in Silicon Valley, best known for its consumer electronics, software and online services. Founded in 1976 as Apple Computer Company by Steve Jobs, Steve Wozniak and Ronald Wayne, the company was incorporated by Jobs and Wozniak as Apple Computer, Inc. the following year. It was renamed to its current name in 2007 as the company had expanded its focus from computers to consumer electronics. Apple has been described as a Big Tech company.

The company was founded to market Wozniak's Apple I personal computer. Its successor, the Apple II, became one of the first successful mass-produced microcomputers. Apple introduced the Lisa in 1983 and the Macintosh in 1984 as some of the first computers to use a graphical user interface and a mouse. By 1985, internal conflicts led to Jobs leaving the company to form NeXT and Wozniak withdrawing to other ventures; John Sculley served as CEO for over a decade. In the 1990s, Apple lost considerable market share in the personal computer industry to the lower-priced Wintel duopoly of Intel-powered PC clones running Microsoft Windows, and neared bankruptcy by 1997. To overhaul its market strategy, it acquired NeXT, bringing Jobs back to the company. Under his leadership, Apple returned to profitability by introducing the iMac, iPod, iPhone, and iPad devices; creating the iTunes Store; launching the "Think different" advertising campaign; and opening the Apple Store retail chain. Jobs resigned in 2011 for health reasons, and died two months later; he was succeeded as CEO by Tim Cook.

View the full Wikipedia page for Apple Inc.

Graphical user interface in the context of Feature phone

A feature phone (also spelled featurephone), brick phone, or dumbphone is a type of mobile phone with basic functionality, as opposed to more advanced, modern smartphones. The term has been used both for newly made mobile phones that are not classed as smartphones and for older mobile phones from eras before smartphones became ubiquitous.

The functions of feature phones are limited compared to smartphones: they tend to use an embedded operating system with a small and simple graphical user interface (unlike large and complex mobile operating systems on a smartphone) and cover general communication basics, such as calling and texting by SMS, although some may include limited smartphone-like features as well. Additionally, they may also evoke the form factor of earlier generations of mobile phones, typically from the 1990s and 2000s, with press-button based inputs and a small non-touch display.

View the full Wikipedia page for Feature phone

Graphical user interface in the context of J. C. R. Licklider

Joseph Carl Robnett Licklider (/ˈlɪklaɪdər/; March 11, 1915 – June 26, 1990), known simply as J. C. R. or "Lick", was an American psychologist and computer scientist who is considered to be among the most prominent figures in computer science development and general computing history.

He is particularly remembered for being one of the first to foresee modern-style interactive computing and its application to all manner of activities, and as an Internet pioneer with an early vision of a worldwide computer network long before it was built. He did much to initiate this by funding research that led to significant advances in computing technology, including today's canonical graphical user interface and the ARPANET, the direct predecessor of the Internet.

View the full Wikipedia page for J. C. R. Licklider

Graphical user interface in the context of Real-time computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Computers have been capable of generating 2D graphics such as simple lines, images, and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics.
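The sprite technique mentioned above amounts to copying a small 2D image into the frame buffer each frame. A minimal sketch, using characters as stand-ins for pixels and `None` for transparent cells:

```python
def blit(frame, sprite, x, y):
    """Copy a 2D sprite (a list of rows) into the frame buffer at (x, y),
    skipping transparent cells (None) so the background shows through."""
    for row, line in enumerate(sprite):
        for col, pixel in enumerate(line):
            if pixel is not None:
                frame[y + row][x + col] = pixel
    return frame

# A 6x4 background and a 2x2 sprite with one transparent corner.
frame = [["." for _ in range(6)] for _ in range(4)]
sprite = [["#", "#"],
          [None, "#"]]
blit(frame, sprite, 2, 1)
```

Redrawing the background and re-blitting sprites at new coordinates every frame is what produced the illusion of motion on early 2D hardware.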

View the full Wikipedia page for Real-time computer graphics

Graphical user interface in the context of Microsoft Windows

Microsoft Windows, commonly known as Windows, is a proprietary graphical operating system developed and marketed by Microsoft.

It is grouped into families that cater to particular sectors of the computing industry – Windows for personal computers, Windows Server for servers, and Windows IoT for embedded systems. Windows itself is further grouped into editions that cater to different users – Home for home users, Professional for advanced users, Education for schools, and Enterprise for corporations. Windows is sold both as a consumer retail product and to computer manufacturers, who bundle and distribute it with their systems.

View the full Wikipedia page for Microsoft Windows

Graphical user interface in the context of Palm OS

Palm OS (later versions of which were also known as Garnet OS) is a discontinued mobile operating system initially developed by Palm, Inc., for personal digital assistants (PDAs) in 1996. Palm OS was designed for ease of use with a touchscreen-based graphical user interface. It was provided with a suite of basic applications for personal information management. Later versions of the OS were extended to support smartphones. The software appeared on the company's line of Palm devices while several other licensees have manufactured devices powered by Palm OS.

Following Palm's purchase of the Palm trademark, the operating system was renamed Garnet OS. In 2007, ACCESS introduced the successor to Garnet OS, called Access Linux Platform; additionally, in 2009, the main licensee of Palm OS, Palm, Inc., switched from Palm OS to webOS for their forthcoming devices.

View the full Wikipedia page for Palm OS

Graphical user interface in the context of Zooming user interface

In computing, a zooming user interface or zoomable user interface (ZUI, pronounced zoo-ee) is a type of graphical user interface (GUI) on which users can change the scale of the viewed area in order to see more detail or less, and browse through different documents. Information elements appear directly on an infinite virtual desktop (usually created using vector graphics), instead of in windows. Users can pan across the virtual surface in two dimensions and zoom into objects of interest. For example, as a user zooms into a text object, it may be represented first as a small dot, then as a thumbnail of a page of text, then as a full-sized page, and finally as a magnified view of the page.

ZUIs use zooming as the main metaphor for browsing through hyperlinked or multivariate information. Objects present inside a zoomed page can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom.
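The dot-to-thumbnail-to-page progression described above can be modeled as picking a level of detail from the current zoom scale (the threshold values below are arbitrary, chosen only for illustration):

```python
def representation(scale):
    """Pick a level of detail for a text object from the zoom scale
    (1.0 means the object is shown at its natural size)."""
    if scale < 0.05:
        return "dot"
    elif scale < 0.5:
        return "thumbnail"
    elif scale < 2.0:
        return "full page"
    return "magnified page"

levels = [representation(s) for s in (0.01, 0.2, 1.0, 4.0)]
# -> ["dot", "thumbnail", "full page", "magnified page"]
```

Because the same rule applies to objects nested inside a page, zooming further simply re-evaluates each object's scale, which is what makes the recursion arbitrary-depth.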

View the full Wikipedia page for Zooming user interface

Graphical user interface in the context of Touchpad

A touchpad or trackpad is a type of pointing device. Its largest component is a tactile sensor: an electronic device with a flat surface that detects the position and motion of a user's fingers and translates them into 2D motion to control a pointer in a graphical user interface. Touchpads are common on laptop computers, in contrast with desktop computers, where mice are more prevalent. Trackpads are sometimes used with desktop setups where desk space is scarce. Wireless touchpads are also available as detached accessories. Because trackpads can be made small, they were additionally used on personal digital assistants (PDAs) and some portable media players.

View the full Wikipedia page for Touchpad

Graphical user interface in the context of Point and click

Point and click is an action of a computer user: moving a pointer to a certain location on a screen (pointing) and then pressing a button on a mouse or other pointing device (clicking). An example of point and click is in hypermedia, where users click on hyperlinks to navigate from document to document. User interfaces, for example graphical user interfaces, are sometimes described as "point-and-click interfaces", often to suggest that they are very easy to use, requiring the user simply to point to indicate their wishes. Describing software this way implies that the interface can be controlled solely through a pointing device, with little or no input from the keyboard, as with many graphical user interfaces.

In some systems, such as Internet Explorer, moving the pointer over a link (or other GUI control) and waiting for a split second will cause a tooltip to be displayed.
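Pointing works by hit-testing the pointer position against the on-screen elements. A minimal sketch, assuming axis-aligned rectangular controls where later entries in the list are drawn on top (the control names and coordinates are illustrative):

```python
def hit_test(point, controls):
    """Return the name of the topmost control containing `point`.
    Each control is (name, x, y, width, height); later entries are on top,
    so the list is scanned in reverse draw order."""
    x, y = point
    for name, cx, cy, w, h in reversed(controls):
        if cx <= x < cx + w and cy <= y < cy + h:
            return name
    return None  # the click landed on empty space

controls = [("window", 0, 0, 800, 600),
            ("ok_button", 350, 500, 100, 40)]
hit_test((400, 520), controls)  # -> "ok_button"
hit_test((10, 10), controls)    # -> "window"
```

A click simply dispatches the event to whichever control the hit test returns, which is why no keyboard input is needed to indicate the user's choice.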

View the full Wikipedia page for Point and click

Graphical user interface in the context of Drag and drop

In computer graphical user interfaces, drag and drop is a pointing device gesture in which the user selects a virtual object by "grabbing" it and dragging it to a different location or onto another virtual object. In general, it can be used to invoke many kinds of actions, or create various types of associations between two abstract objects.

As a feature, drag-and-drop support is not found in all software, though it is sometimes a fast and easy-to-learn technique. However, it is not always clear to users that an item can be dragged and dropped, or what command is performed by the drag and drop, which can decrease usability.
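The grab-drag-drop sequence can be sketched as a small state machine in which a button press grabs an object and a release either creates an association with a drop target or cancels (the item and target names below are illustrative):

```python
class DragAndDrop:
    """Minimal drag-and-drop state machine: a press grabs an item and a
    release drops it onto a target, or cancels if nothing is under it."""
    def __init__(self):
        self.dragged = None

    def press(self, item):
        self.dragged = item  # "grab" the virtual object under the pointer

    def release(self, target):
        item, self.dragged = self.dragged, None
        if item is None or target is None:
            return None  # nothing was grabbed, or it was dropped on empty space
        return (item, target)  # the association the drop creates

dnd = DragAndDrop()
dnd.press("file.txt")
dnd.release("trash")  # -> ("file.txt", "trash")
```

The usability problem mentioned above falls out of this model: nothing in the state machine tells the user which items accept `press` or what a given `release` target will do with the dropped item.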

View the full Wikipedia page for Drag and drop

Graphical user interface in the context of Desktop environment

In computing, a desktop environment (DE) is an implementation of the desktop metaphor made of a bundle of programs running on top of a computer operating system that share a common graphical user interface (GUI), sometimes described as a graphical shell. The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to easily access and edit files, while they usually do not provide access to all of the features found in the underlying operating system. Instead, the traditional command-line interface (CLI) is still used when full control over the operating system is required.

A desktop environment typically consists of icons, windows, toolbars, folders, wallpapers and desktop widgets (see Elements of graphical user interfaces and WIMP). A GUI might also provide drag and drop functionality and other features that make the desktop metaphor more complete. A desktop environment aims to be an intuitive way for the user to interact with the computer using concepts which are similar to those used when interacting with the physical world, such as buttons and windows.

View the full Wikipedia page for Desktop environment

Graphical user interface in the context of Icon (computing)

In computing, an icon is a pictogram or ideogram displayed on a computer screen in order to help the user navigate a computer system. It can serve as an electronic hyperlink or file shortcut to access the program or data. The user can activate an icon using a mouse, pointer, finger, or voice commands. Their placement on the screen, also in relation to other icons, may provide further information to the user about their usage. In activating an icon, the user can move directly into and out of the identified function without knowing anything further about the location or requirements of the file or code.

Icons as parts of the graphical user interface of a computer system, in conjunction with windows, menus and a pointing device (mouse), belong to the much larger topic of the history of the graphical user interface that has largely supplanted the text-based interface for casual use.

View the full Wikipedia page for Icon (computing)