
Microsoft Makes Big Play for Brain Computer Interfaces with Flurry of Patents

Four Microsoft patents show the company is targeting brain computer interfaces for integration into apps and its HoloLens mixed reality headset.


Machine learning techniques and innovation in artificial intelligence (AI) have allowed brain computer interfaces (BCIs) to evolve in recent years. BCI systems are now more adept than ever at mapping, augmenting, and repairing human cognitive and sensory functions, for example by reconstructing images from activity in the visual cortex.

Microsoft appears to be making a major move in the brain computer interface field. The company is exploring ways in which BCIs could be used to control a computer. In an effort to further development, the company has patented several methods of note.

In one patent (below), Microsoft describes a brain reading capability. If integrated into an app, the technology could detect a user’s intended action and execute it automatically, effectively performing the action just by reading the user.

“Computer systems, methods, and storage media for changing the state of an application by detecting neurological user intent data associated with a particular operation of a particular application state, and changing the application state so as to enable execution of the particular operation as intended by the user. The application state is automatically changed to align with the intended operation, as determined by received neurological user intent data, so that the intended operation is performed. Some embodiments relate to a computer system creating or updating a state machine, through a training process, to change the state of an application according to detected neurological data.”
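The patent describes training a state machine that maps detected neurological intent to application-state transitions. As a rough illustration of that idea only, and not Microsoft’s actual implementation, the hypothetical Python sketch below switches an app’s state when a classifier decodes an intended operation from neural signal features; the classifier, feature layout, states, and operations are all assumptions.

# Hypothetical sketch: switch app state to enable an operation decoded from neural data.
# The classifier, states, and operations are illustrative assumptions, not Microsoft's design.

INTENT_TO_STATE = {
    "delete_object": "edit_mode",
    "rotate_object": "transform_mode",
    "save_document": "idle_mode",
}

class AppStateMachine:
    def __init__(self, classifier):
        self.classifier = classifier  # assumed: trained on labelled neural feature vectors
        self.state = "idle_mode"

    def on_neural_sample(self, features):
        # Decode the user's intended operation from a feature vector.
        intent = self.classifier.predict([features])[0]
        target_state = INTENT_TO_STATE.get(intent)
        if target_state and target_state != self.state:
            self.state = target_state  # align the app state with the intended operation
        self.execute(intent)

    def execute(self, intent):
        print(f"state={self.state}, executing '{intent}'")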

Neurological activity could be used for more basic commands. For example, another patent explains how users could match their neurological activity to an analogue control. This would allow users to control a PC without physically moving the mouse.

This feels like an implementation that could be more easily developed than app integration for automated execution:

CONTINUOUS MOTION CONTROLS OPERABLE USING NEUROLOGICAL DATA

“Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user’s physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.”
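Conceptually, this amounts to mapping a continuously varying neural signal onto an analog control value rather than a handful of discrete commands. The Python sketch below is a hypothetical illustration of that mapping; the signal range, smoothing factor, and the volume-slider example are assumptions, not taken from the patent.

# Hypothetical sketch: map a continuous neural signal onto an analog UI control.
def make_continuous_control(signal_min, signal_max, smoothing=0.2):
    value = 0.0
    def update(signal_sample):
        nonlocal value
        # Normalise the sample into [0, 1] and clamp to the valid range.
        raw = (signal_sample - signal_min) / (signal_max - signal_min)
        raw = max(0.0, min(1.0, raw))
        # Exponential smoothing gives fine-grained, continuous movement rather than jumps.
        value = (1 - smoothing) * value + smoothing * raw
        return value
    return update

# Usage: drive a volume slider (0-100) from successive neural samples.
control = make_continuous_control(signal_min=4.0, signal_max=30.0)
for sample in [5.0, 12.0, 27.0]:
    print(round(control(sample) * 100))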

More Brain Computer Interface Patents

Microsoft has also patented a method for changing the mode of a PC depending on brain activity:

MODIFYING THE MODALITY OF A COMPUTING DEVICE BASED UPON A USER’S BRAIN ACTIVITY

“Technologies are described herein for modifying the modality of a computing device based upon a user’s brain activity. A machine learning classifier is trained using data that identifies a modality for operating a computing device and data identifying brain activity of a user of the computing device. Once trained, the machine learning classifier can select a mode of operation for the computing device based upon a user’s current brain activity and, potentially, other biological data. The computing device can then be operated in accordance with the selected modality. An application programming interface can also expose an interface through which an operating system and application programs executing on the computing device can obtain data identifying the modality selected by the machine learning classifier. Through the use of this data, the operating system and application programs can modify their mode of operation to be most suitable for the user’s current mental state.”
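In outline, this is a supervised-learning loop: brain-activity features labelled with the modality the user was operating in train a classifier, which then picks a modality at runtime for the operating system and apps to query. Below is a minimal, hypothetical Python sketch using scikit-learn; the feature layout, modality labels, and accessor function are assumptions rather than anything described in the patent.

# Hypothetical sketch: pick a device modality from brain-activity features.
from sklearn.ensemble import RandomForestClassifier

# Training data: rows of extracted brain-activity features, labelled with the
# modality the user was operating the device in at the time (labels assumed).
X_train = [[0.8, 0.1, 0.3], [0.2, 0.7, 0.4], [0.1, 0.2, 0.9]]
y_train = ["focused_work", "relaxed_browsing", "do_not_disturb"]

classifier = RandomForestClassifier().fit(X_train, y_train)

def current_modality(brain_features):
    """API-style accessor the OS or an app could call to get the selected modality."""
    return classifier.predict([brain_features])[0]

print(current_modality([0.75, 0.15, 0.25]))  # likely "focused_work" (illustrative only)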

Finally, Microsoft believes brain activity, combined with gaze tracking, could be used to discern which objects in a user’s visual field they are focusing on. This ties directly into virtual and mixed reality and would work through head mounted displays. Microsoft already has the platform in place with Windows Mixed Reality and HoloLens:

MODIFYING A USER INTERFACE BASED UPON A USER’S BRAIN ACTIVITY AND GAZE

“Technologies are described herein for modifying a user interface (“UI”) provided by a computing device based upon a user’s brain activity and gaze. A machine learning classifier is trained using data that identifies the state of a UI provided by a computing device, data identifying brain activity of a user of the computing device, and data identifying the location of the user’s gaze. Once trained, the classifier can select a state for the UI provided by the computing device based upon brain activity and gaze of the user. The UI can then be configured based on the selected state. An API can also expose an interface through which an operating system and programs can obtain data identifying the UI state selected by the machine learning classifier. Through the use of this data, a UI can be configured for suitability with a user’s current mental state and gaze.”
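The same classifier pattern applies here, except the feature vector combines brain-activity features with gaze coordinates and the prediction is a UI state rather than a device modality. The Python sketch below is hypothetical; the UI state names, feature layout, and configuration step are all assumptions.

# Hypothetical sketch: select a UI state from combined brain-activity and gaze features.
def select_ui_state(classifier, brain_features, gaze_xy):
    # Concatenate brain-activity features with the gaze location into one vector.
    features = list(brain_features) + list(gaze_xy)
    return classifier.predict([features])[0]

def configure_ui(ui_state):
    # Placeholder: a real headset shell would reconfigure the interface here.
    layouts = {
        "minimal_overlay": "hide panels, keep the gaze target highlighted",
        "detail_panel": "show contextual info next to the gazed-at object",
    }
    print(layouts.get(ui_state, "default layout"))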

Source: MSPU
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
