A virtual keyboard is a software component that allows a user to enter characters.[1] A virtual keyboard can usually be operated with multiple input devices, which may include a touchscreen, an actual computer keyboard and a computer mouse.
On a desktop computer, a virtual keyboard might provide an alternative input mechanism for users with disabilities who cannot use a conventional keyboard, or for bi- or multi-lingual users who switch frequently between different character sets or alphabets, which may be confusing over time. Although hardware keyboards are available with dual keyboard layouts (e.g. Cyrillic/Latin letters in various national layouts), the on-screen keyboard provides a handy substitute while working at different stations or on laptops, which seldom come with dual layouts.
Virtual keyboards can be categorized by the input device used to operate them and by the environment in which they are used, such as mobile devices, augmented and virtual reality, or secure password entry.
Virtual keyboards are commonly used as an on-screen input method on devices that have no physical keyboard or no room for one, such as pocket computers, personal digital assistants (PDAs), tablet computers and touchscreen-equipped mobile phones. Text is commonly entered either by tapping a virtual keyboard or by finger-tracing.[7] Virtual keyboards are also used as features of emulation software for systems that have fewer buttons than a computer keyboard.
The four main approaches for entering text on a PDA were virtual keyboards operated by a stylus, external USB keyboards, handwriting recognition and stroke recognition. Many early PDAs did not primarily focus on virtual keyboards. The illustration shows the keyboard of Microsoft's mobile operating system, whose approach was to simulate a complete, fully functional keyboard, resulting in a somewhat overloaded layout. The main problem was that these old PDAs did not support multi-touch technology, which caused usability problems. Multi-touch was in use as early as 1985; Apple popularized the term "multi-touch" in 2007 when it released the first iPhone. Multi-touch displays allow for more sophisticated interfaces using various gestures entered with one or more fingers.
When Apple presented the iPhone in 2007, the decision not to include a physical keyboard was widely seen as a detriment to the device. Steve Jobs, however, presented this decision as a unique selling point. The iPhone became a huge success, and its virtual keyboard design pattern has become the standard on mobile devices today.
The Android SDK provides a so-called InputMethodService. This service supplies a standard implementation of an InputMethod, from which concrete implementations can derive and which they can customize, enabling the Android developer community to implement its own keyboard layouts. The InputMethodService ships with its own KeyboardView. While the InputMethodService can be used to customize key and gesture input, the Keyboard class loads an XML description of a keyboard and stores the attributes of its keys. The SimpleIME class, the central class of such a keyboard application, typically holds three member variables: the KeyboardView, the Keyboard, and a boolean flag tracking the caps state.
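The XML description loaded by the Keyboard class defines rows of keys together with their codes and labels. A minimal sketch of such a layout file might look as follows (the resource name and the selection of keys are illustrative, not a complete layout):

```xml
<!-- res/xml/qwerty.xml (illustrative): a minimal layout as read by
     android.inputmethodservice.Keyboard; android:codes holds the
     character code committed when the key is pressed -->
<Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
    android:keyWidth="10%p"
    android:keyHeight="60dp">
    <Row>
        <Key android:codes="113" android:keyLabel="q" android:keyEdgeFlags="left"/>
        <Key android:codes="119" android:keyLabel="w"/>
        <Key android:codes="101" android:keyLabel="e"/>
    </Row>
    <Row>
        <Key android:codes="-5" android:keyLabel="DEL" android:isRepeatable="true"/>
        <Key android:codes="32" android:keyLabel="SPACE" android:keyWidth="40%p"/>
    </Row>
</Keyboard>
```

The negative code -5 is the framework constant Keyboard.KEYCODE_DELETE; positive codes are ordinary character codes.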
After declaring these variables and adding the methods of the OnKeyboardActionListener interface, the SimpleIME class (shown here only as a snippet) should look like this:
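Since the real classes (InputMethodService, KeyboardView, Keyboard, OnKeyboardActionListener) live in the Android SDK and cannot run outside it, the following self-contained sketch uses minimal local stand-ins for them to show the shape of the class; the names mirror the framework, but the stand-in bodies are simplified assumptions:

```java
// Simplified stand-in for android.inputmethodservice.KeyboardView.OnKeyboardActionListener;
// the real interface declares several more callbacks (onPress, onText, swipes, ...).
interface OnKeyboardActionListener {
    void onKey(int primaryCode);
}

// Stand-in for android.inputmethodservice.Keyboard: in Android the constructor
// loads the layout from an XML resource id rather than a path string.
class Keyboard {
    final String layoutResource;
    Keyboard(String layoutResource) { this.layoutResource = layoutResource; }
}

// Stand-in for android.inputmethodservice.KeyboardView, which draws the keys
// and dispatches touch events to the listener.
class KeyboardView {
    Keyboard keyboard;
    void setKeyboard(Keyboard k) { this.keyboard = k; }
}

class SimpleIME implements OnKeyboardActionListener {
    // The three member variables named above: the view that draws the keys,
    // the layout loaded from XML, and a flag tracking the caps state.
    private KeyboardView kv = new KeyboardView();
    private Keyboard keyboard = new Keyboard("res/xml/qwerty.xml"); // illustrative name
    private boolean caps = false;

    private final StringBuilder composed = new StringBuilder();

    SimpleIME() { kv.setKeyboard(keyboard); }

    @Override
    public void onKey(int primaryCode) {
        // In a real IME this would commit text via the InputConnection;
        // here the committed text is simply collected in a buffer.
        char c = (char) primaryCode;
        composed.append(caps ? Character.toUpperCase(c) : c);
    }

    void setCaps(boolean on) { caps = on; }
    String committedText() { return composed.toString(); }
}
```

In the real service, onKey would call getCurrentInputConnection().commitText(...) instead of appending to a local buffer.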
As a result, several keyboard versions can be installed on an Android device; since each keyboard is just an application, it can simply be activated via the Android settings menu.
Apple also gives the community the possibility to develop custom keyboards, but does not grant access to the dictionary or to the general keyboard settings. Furthermore, iOS automatically switches between the system keyboard and custom keyboards for certain kinds of text input, such as secure password fields.
The UIInputViewController is the primary view controller for a custom keyboard app extension. It provides various methods for implementing a custom keyboard, such as presenting the keyboard's user interface, obtaining a supplementary lexicon, or changing the keyboard's primary language.
Beyond the classic virtual keyboard implementation, Android, iOS and third-party keyboards such as SwiftKey provide various features to improve the usability and efficiency of their keyboards.
The Android platform offers a spelling checker framework that lets you implement and access spell checking in your application. The framework is one of the Text Service APIs offered by the Android platform. Based on text you provide, the session object returns spelling suggestions generated by the spelling checker.
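On Android this works through a SpellCheckerSession obtained from the Text Services API, which requires the Android runtime. The following self-contained sketch therefore only illustrates the underlying idea of a session object returning suggestions for provided text, here implemented naively with a toy dictionary and Levenshtein edit distance (class and method names are invented, not Android's API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a spell-checker session: given a word, return
// dictionary entries within a small edit distance as spelling suggestions.
class ToySpellCheckerSession {
    private final List<String> dictionary;

    ToySpellCheckerSession(List<String> dictionary) { this.dictionary = dictionary; }

    List<String> getSuggestions(String word, int maxDistance) {
        List<String> suggestions = new ArrayList<>();
        for (String candidate : dictionary) {
            if (editDistance(word.toLowerCase(), candidate) <= maxDistance) {
                suggestions.add(candidate);
            }
        }
        return suggestions;
    }

    // Classic Levenshtein distance via dynamic programming.
    private static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }
}
```

Production spell checkers additionally rank suggestions by word frequency and keyboard proximity rather than edit distance alone.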
iOS uses the class UITextChecker, an object for checking a string (usually the text of a document) for misspelled words; this is commonly known as Apple's autocorrection. UITextChecker spell-checks using a lexicon for a given language. It can be told to ignore specific words when spell-checking a particular document, and it can learn words, which adds those words to the lexicon.
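UITextChecker itself is only available through UIKit; the following plain-Java sketch merely models the ignore/learn behaviour described above with a hypothetical class (none of these names are Apple's API):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of a text checker with a lexicon, a per-document ignore list,
// and learned words that are added to the lexicon permanently.
class ToyTextChecker {
    private final Set<String> lexicon = new HashSet<>();
    private final Set<String> ignored = new HashSet<>();

    ToyTextChecker(Set<String> initialLexicon) { lexicon.addAll(initialLexicon); }

    void ignoreWord(String word) { ignored.add(word); }  // skipped for this document only
    void learnWord(String word) { lexicon.add(word); }   // added to the lexicon

    boolean isMisspelled(String word) {
        return !lexicon.contains(word) && !ignored.contains(word);
    }
}
```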
Several scientific papers from the early 2000s showed, even before the advent of smartphones, that predicting words based on what the user is typing significantly increases typing speed. At the beginning of the development of this keyboard feature, prediction was mainly based on static dictionaries. In 2014 Apple presented iOS 8, which includes a new predictive typing feature called QuickType that displays word predictions above the keyboard as the user types. Google had implemented word prediction in its Android keyboard in 2013. This development was largely driven by third-party keyboard providers such as SwiftKey and Swype, both of which provide powerful word-prediction engines with corresponding databases.
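A static-dictionary predictor of the kind described above can be sketched as a frequency-ranked prefix lookup; the word list and counts below are invented for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal word-prediction sketch: given the prefix typed so far, return
// dictionary words starting with it, ordered by a stored frequency count.
class ToyPredictor {
    private final Map<String, Integer> frequency = new LinkedHashMap<>();

    void addWord(String word, int count) { frequency.put(word, count); }

    List<String> predict(String prefix, int max) {
        List<String> hits = new ArrayList<>();
        frequency.entrySet().stream()
            .filter(e -> e.getKey().startsWith(prefix))          // prefix match
            .sorted((a, b) -> Integer.compare(b.getValue(), a.getValue())) // most frequent first
            .limit(max)                                          // top-N suggestions
            .forEach(e -> hits.add(e.getKey()));
        return hits;
    }
}
```

Modern predictors replace the static counts with n-gram or neural language models that also condition on the preceding words.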
iOS and Android allow developers to replace their keyboards with custom keyboard apps. This has led to experimentation and new features, like the gesture-typing feature that made its way into Android's official keyboard after proving itself in third-party keyboards. Research by Google confirmed that gesture typing increases the typing rate by 22% and reduces the error rate to nearly 0%. Google further showed that the gesture-typing method is also useful on smartwatches. This research builds primarily on work by I. Scott MacKenzie and on papers about modeling finger touch with Fitts's law.
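The core idea behind gesture typing can be sketched in a drastically simplified form: the finger trace is reduced to the sequence of keys passed over, and a dictionary word matches if its letters occur in order within that sequence, with the first and last letters anchored to the trace endpoints. Real decoders combine spatial models of the trace with language models; this toy version shows only the matching step, and the example trace is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Toy gesture-typing decoder: a swipe is represented as the string of keys
// the finger passed over; a word matches if it is a subsequence of the
// trace and shares the trace's first and last key.
class ToyGestureDecoder {
    private final List<String> dictionary;

    ToyGestureDecoder(List<String> dictionary) { this.dictionary = dictionary; }

    List<String> decode(String trace) {
        List<String> matches = new ArrayList<>();
        for (String word : dictionary) {
            if (!word.isEmpty()
                    && word.charAt(0) == trace.charAt(0)
                    && word.charAt(word.length() - 1) == trace.charAt(trace.length() - 1)
                    && isSubsequence(word, trace)) {
                matches.add(word);
            }
        }
        return matches;
    }

    // True if every letter of word appears in trace, in order.
    private static boolean isSubsequence(String word, String trace) {
        int i = 0;
        for (int j = 0; j < trace.length() && i < word.length(); j++) {
            if (trace.charAt(j) == word.charAt(i)) i++;
        }
        return i == word.length();
    }
}
```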
On the Internet, various JavaScript virtual keyboards have been created, allowing users to type their own languages on foreign keyboards, particularly in Internet cafes. Multitouch screens allow the possibility to create virtual chorded keyboards for tablet computers,[8] touchscreens, touchpads and wired gloves.[9][10]
An optical virtual keyboard was invented and patented by IBM engineers in 2008.[11] It optically detects and analyses human hand and finger motions and interprets them as operations on a physically non-existent input device, such as a surface with painted keys. In this way it can emulate unlimited types of manually operated input devices, such as a mouse or keyboard. All mechanical input units can be replaced by such virtual devices, optimized for the current application and for the user's physiology, while maintaining the speed, simplicity and unambiguity of manual data input.
The basic idea of a virtual keyboard in an augmented reality environment is to give the user a way to input text. The most common approach is to render a flat keyboard into the augmented scene, for example based on Unity's TouchScreenKeyboard. The Microsoft HoloLens lets the user point at letters on the keyboard by moving their head.
Another approach was researched by the Korean K-JIST U-VR Lab in 2003. Their suggestion was to use wearables to track finger motion and thereby replace physical keyboards with virtual ones. They also gave the user audiovisual feedback when a key was hit. The basic idea was to offer a more natural way to enter text, based on what users are already accustomed to.
The challenge here, as in augmented reality, is to give the user the possibility to enter text in a completely virtual environment. One big issue is that most systems on the market do not track the user's hands, so many available systems only let the user point at letters.
In September 2016 Google released a virtual keyboard app for its Daydream virtual reality headset. To enter text, the user points at specific letters with the Daydream controller.
In February 2017 Logitech presented an experimental approach to bringing its keyboards into the virtual environment. With a Vive Tracker mounted on a Logitech G gaming keyboard, the keyboard's exact position can be tracked and rendered in the virtual environment, allowing the user to type without wearing any kind of glove. Fifty such packages were sent to selected developers, enabling them, in combination with Logitech's BRIDGE developer kit, to test and experiment with the new technology.
Virtual keyboards may be used in some cases to reduce the risk of keystroke logging.[12] For example, Westpac's online banking service uses a virtual keyboard for password entry, as does TreasuryDirect (see picture). It is more difficult for malware to monitor the display and mouse to obtain the data entered via a virtual keyboard than it is to monitor real keystrokes. However, it is still possible, for example by recording screenshots at regular intervals or upon each mouse click.[13][14]
The use of an on-screen keyboard on which the user "types" with mouse clicks can increase the risk of password disclosure by shoulder surfing, because an onlooker can typically watch the screen more easily and less conspicuously than a physical keyboard, and because mouse-driven input is slow enough for each selected character to be observed.
Category:Assistive technology Category:Computer keyboard types Category:Pointing-device text input Category:Touch user interfaces