CN110419018B - Automatic control of wearable display devices based on external conditions - Google Patents

Automatic control of wearable display devices based on external conditions

Info

Publication number
CN110419018B
Authority
CN
China
Prior art keywords
user
virtual content
wearable system
environment
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780087609.9A
Other languages
Chinese (zh)
Other versions
CN110419018A (en)
Inventor
J. M. Powderly
S. Niles
N. E. Samec
A. Amirhooshmand
N. U. Robaina
C. M. Harrises
M. Baerenrodt
C. A. R. Cintron
B. K. Smith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magic Leap Inc
Original Assignee
Magic Leap Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magic Leap Inc
Priority to CN202310968411.9A (published as CN117251053A)
Publication of CN110419018A
Application granted
Publication of CN110419018B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B6/00Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
    • G02B6/0001Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings specially adapted for lighting devices or systems
    • G02B6/0011Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings specially adapted for lighting devices or systems the light guides being planar or of plate-like form
    • G02B6/0033Means for improving the coupling-out of light from the light guide
    • G02B6/005Means for improving the coupling-out of light from the light guide provided by one optical element, or plurality thereof, placed on the light output side of the light guide
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B6/00Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
    • G02B6/0001Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings specially adapted for lighting devices or systems
    • G02B6/0011Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings specially adapted for lighting devices or systems the light guides being planar or of plate-like form
    • G02B6/0075Arrangements of multiple light guides
    • G02B6/0076Stacked arrangements of multiple light guides of the same or different cross-sectional area
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/20Linear translation of whole images or parts thereof, e.g. panning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0127Head-up displays characterised by optical features comprising devices increasing the depth of field
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • G02B2027/0174Head mounted characterised by optical features holographic
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0185Displaying image at variable distance
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0081Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. enlarging, the entrance or exit pupil
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04804Transparency, e.g. transparent or translucent windows
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Databases & Information Systems (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
  • Position Input By Displaying (AREA)

Abstract

Embodiments of a wearable device can include a head-mounted display (HMD) that can be configured to display virtual content. While the user interacts with visual or audible virtual content, the user of the wearable device may encounter a triggering event, such as an emergency or unsafe condition, the detection of one or more triggering objects in the environment, or a determination of the characteristics of the user's environment (e.g., home or office). Embodiments of the wearable device can automatically detect the triggering event and automatically control the HMD to de-emphasize, block, or stop displaying the virtual content. The HMD can include a button that can be actuated by the user to manually de-emphasize, block, or stop displaying the virtual content.

Description

Automatic Control of Wearable Display Devices Based on External Conditions

Cross-Reference to Related Applications

This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 62/440099, filed on December 29, 2016, entitled "MANUAL OR AUTOMATIC CONTROL OF WEARABLE DISPLAY DEVICE BASED ON EXTERNAL CONDITIONS," the disclosure of which is incorporated herein by reference in its entirety.

Technical Field

The present disclosure relates to mixed reality imaging and visualization systems, and more particularly to the automatic control of mixed reality imaging and visualization systems based on external conditions.

Background

Modern computing and display technologies have facilitated the development of systems for so-called "virtual reality," "augmented reality," or "mixed reality" experiences, wherein digitally reproduced images, or portions thereof, are presented to a user in a manner wherein they seem to be, or may be perceived as, real. A virtual reality, or "VR," scenario typically involves the presentation of digital or virtual image information without transparency to other actual real-world visual input; an augmented reality, or "AR," scenario typically involves the presentation of digital or virtual image information as an augmentation to visualization of the real world around the user; a mixed reality, or "MR," scenario involves merging the real and virtual worlds to produce new environments in which physical and virtual objects co-exist and interact in real time. As it turns out, the human visual perception system is very complex, and producing AR technology that facilitates a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world image elements is challenging. The systems and methods disclosed herein address various challenges related to AR and VR technology.

Summary

Embodiments of a wearable device can include a head-mounted display (HMD) that can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable device may encounter a triggering event, such as an emergency or unsafe condition, the detection of one or more triggering objects in the environment, or the detection that the user has entered a particular environment (e.g., a home or an office). Embodiments of the wearable device can automatically detect the triggering event and automatically control the HMD to de-emphasize, block, or stop displaying the virtual content. The HMD can include a button that can be actuated by the user to manually de-emphasize, block, or stop displaying the virtual content. In certain implementations, the wearable device can restart or resume the virtual content in response to detecting a termination condition.
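
The control flow just described can be pictured with a short sketch. The following Python is illustrative only: the class names, the mute/resume behavior, and the detector callables are assumptions made for this example, not details taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class VirtualContent:
    name: str
    visible: bool = True
    volume: float = 1.0  # 1.0 = normal loudness

@dataclass
class HeadMountedDisplay:
    contents: List[VirtualContent] = field(default_factory=list)

    def mute(self) -> None:
        # De-emphasize/stop virtual content so the user can attend to
        # physical reality (partial de-emphasis, e.g. dimming, is an
        # alternative the text also allows).
        for c in self.contents:
            c.visible, c.volume = False, 0.0

    def resume(self) -> None:
        for c in self.contents:
            c.visible, c.volume = True, 1.0

def control_step(hmd: HeadMountedDisplay,
                 trigger_detected: Callable[[], bool],
                 termination_detected: Callable[[], bool],
                 button_pressed: Callable[[], bool]) -> None:
    """One iteration of the automatic/manual control described above.
    The three callables stand in for the sensor-based classifiers and
    the manual button; their implementations are out of scope here."""
    if trigger_detected() or button_pressed():
        hmd.mute()       # automatic or manual muting
    elif termination_detected():
        hmd.resume()     # termination condition ends the mute
```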

The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Neither this summary nor the following detailed description purports to define or limit the scope of the inventive subject matter.

Brief Description of the Drawings

FIG. 1A depicts an illustration of a mixed reality scenario with certain virtual reality objects and certain physical objects viewed by a person.

FIG. 1B illustrates the field of view and the field of regard for a wearer of a wearable display system.

FIG. 2 schematically illustrates an example of a wearable display system.

FIG. 3 schematically illustrates aspects of an approach for simulating a three-dimensional image using multiple depth planes.

FIG. 4 schematically illustrates an example of a waveguide stack for outputting image information to a user.

FIG. 5 shows example exit beams that may be output by a waveguide.

FIG. 6 is a schematic diagram showing an optical system including a waveguide apparatus, an optical coupler subsystem for optically coupling light to or from the waveguide apparatus, and a control subsystem used in the generation of a multi-focal volumetric display, image, or light field.

FIG. 7 is a block diagram of an example of a wearable system.

FIG. 8 is a process flow diagram of an example of a method of rendering virtual content in relation to recognized objects.

FIG. 9 is a block diagram of another example of a wearable system.

FIG. 10 is a schematic diagram showing examples of various components of a wearable system comprising environmental sensors.

FIGS. 11A and 11B illustrate an example of muting a head-mounted display (HMD) in a surgical context.

FIG. 11C illustrates an example of muting an HMD in an industrial context.

FIG. 11D illustrates an example of muting an HMD in an educational context.

FIG. 11E illustrates an example of muting an HMD in a shopping context.

FIG. 11F illustrates an example of selectively blocking virtual content in a work environment.

FIG. 11G illustrates an example of selectively blocking virtual content in a lounge environment.

FIGS. 12A, 12B, and 12C illustrate examples of muting virtual content presented by an HMD based on a triggering event.

FIG. 12D illustrates an example of muting virtual content upon detecting a change in the user's environment.

FIGS. 13A and 13B illustrate example processes for muting an augmented reality display device based on a triggering event.

FIG. 13C illustrates an example flowchart for selectively blocking virtual content in an environment.

FIG. 14A illustrates a warning message that may be displayed by an HMD in response to manual actuation of a reality button.

FIG. 14B is a flowchart illustrating an example process for manually actuating a mute mode of operation of an HMD.

Throughout the drawings, reference numerals may be reused to indicate correspondence between referenced elements. The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the disclosure.

Detailed Description

Overview

A display system of a wearable device can be configured to present virtual content in an AR/VR/MR environment. The virtual content can include visual and/or audible content. While using a head-mounted display device (HMD), the user may encounter situations in which it may be desirable to de-emphasize some or all of the virtual content, or not to present it at all. For example, the user may encounter an emergency or unsafe condition, during which the user's full attention should be on the actual, physical reality without potential distraction from the virtual content. In such situations, presenting virtual content to the user may cause perceptual confusion as the user tries to process both the actual physical content of the real world and the virtual content provided by the HMD. Accordingly, as further described below, embodiments of the HMD can provide manual or automatic control of the HMD in situations where it may be desirable to de-emphasize or stop displaying virtual content.

Further, although a wearable device can present a great deal of information to a user, in some situations it may be difficult for the user to sift through the virtual content to identify the content the user is interested in interacting with. Advantageously, in some embodiments, the wearable device can automatically detect the user's location and selectively block (or selectively allow) virtual content based on that location, so that the wearable device presents virtual content that is more relevant to the user and appropriate to the user's environment (e.g., location, such as whether the user is at home or at work). For example, the wearable device may present various virtual content related to a video game, a scheduled conference call, or work emails. If the user is in the office, the user may want to view work-related virtual content, such as the conference call and the emails, while blocking the virtual content related to the video game so that the user can focus on work.

In certain implementations, the wearable device can automatically detect a change in the user's location based on image data acquired by an outward-facing imaging system (alone or in combination with a location sensor). The wearable device can automatically apply settings appropriate to the current location in response to detecting that the user has moved from one environment to another. In certain implementations, the wearable system can mute virtual content based on the user's environment (also referred to as a scene). For example, a living room in a home and a shopping mall may both be considered entertainment scenes, so similar virtual content can be blocked (or allowed) in both environments. Virtual content can also be blocked (or allowed) based on whether content with similar characteristics is blocked (or allowed). For example, a user may choose to block social networking applications in an office environment (or may choose to allow only work-related content). Based on such a user-provided configuration, the wearable system can automatically block video games in the office environment, because both video games and social networking applications have entertainment characteristics.
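
One simple way to picture this location-based filtering is a rule table keyed by scene labels, with blocking decided by content categories so that content sharing characteristics with blocked content is blocked too. The sketch below is a minimal illustration under that assumption; the scene names, categories, and the ContentFilter class are invented, and a real system would derive the scene label from the outward-facing imaging system and location sensors.

```python
# Hypothetical scene -> blocked-category rules. "entertainment" is blocked
# at the office, so a video game is blocked there even though the table
# never names video games explicitly: it shares the category of the
# blocked social-networking apps.
BLOCK_RULES = {
    "office": {"entertainment", "social_networking"},
    "living_room": {"work"},
    "shopping_mall": {"work"},
}

class ContentFilter:
    def __init__(self, rules):
        self.rules = rules

    def allowed(self, scene: str, content_categories: set) -> bool:
        """Return True if content with the given categories may be shown
        in the given scene; blocking wins over allowing."""
        blocked = self.rules.get(scene, set())
        return not (content_categories & blocked)

filter_ = ContentFilter(BLOCK_RULES)
# A conference-call app is allowed at the office...
assert filter_.allowed("office", {"work", "communication"})
# ...but a video game, which shares the "entertainment" characteristic
# with blocked social apps, is not.
assert not filter_.allowed("office", {"entertainment"})
```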

Although the examples herein are described with reference to muting virtual content, similar techniques can also be applied to mute one or more components of the wearable system. For example, the wearable system may mute an inward-facing imaging system in response to an emergency condition (e.g., a fire) in order to preserve the system's hardware resources. Further, although certain examples are described as selectively blocking certain virtual content in certain environments, this is for illustration only, and a mixed reality device can additionally or alternatively selectively allow different virtual content to achieve substantially the same result as blocking.

Examples of 3D Display of a Wearable System

A wearable system (also referred to herein as an augmented reality (AR) system) can be configured to present 2D or 3D virtual images to a user. The images may be still images, frames of a video, or a video, in combination or the like. At least a portion of the wearable system can be implemented on a wearable device that can present a VR, AR, or MR environment, alone or in combination, for user interaction. The wearable device can be used interchangeably as an AR device (ARD). Further, for the purpose of the present disclosure, the term "AR" is used interchangeably with the term "MR."

FIG. 1A depicts an example of a mixed reality scenario with certain virtual reality objects and certain physical objects viewed by a person. In FIG. 1A, an MR scene 100 is depicted wherein a user of MR technology sees a real-world park-like setting 110 featuring people, trees, buildings in the background, and a concrete platform 120. In addition to these items, the user of the MR technology also perceives that he "sees" a robot statue 130 standing upon the real-world platform, and a flying cartoon-like avatar character 140 that seems to be a personification of a bumblebee, even though these elements do not exist in the real world.

In order for a 3D display to produce a true sensation of depth, and more specifically, a simulated sensation of surface depth, it may be desirable for each point in the display's visual field to generate an accommodative response corresponding to its virtual depth. If the accommodative response to a display point does not correspond to the virtual depth of that point, as determined by the binocular depth cues of convergence and stereopsis, the human eye may experience an accommodation conflict, resulting in unstable imaging, harmful eye strain, headaches, and, in the absence of accommodation information, almost a complete lack of surface depth.

FIG. 1B illustrates a person's field of view (FOV) and field of regard (FOR). The FOV comprises the portion of the user's environment that is perceived by the user at a given time. This field of view can change as the person moves about, moves their head, or moves their eyes or gaze.

The FOR comprises the portion of the environment around the user that is capable of being perceived by the user via the wearable system. Accordingly, for a user wearing a head-mounted display device, the field of regard may include substantially all of the 4π steradian solid angle surrounding the wearer, because the wearer can move his or her body, head, or eyes to perceive substantially any direction in space. In other contexts, the user's movements may be more constricted, and accordingly the user's field of regard may subtend a smaller solid angle. FIG. 1B shows such a field of view 155 including a central region and a peripheral region. The central field of view will provide a person with a corresponding view of objects in a central region of the environmental view. Similarly, the peripheral field of view will provide a person with a corresponding view of objects in a peripheral region of the environmental view. In this case, what is considered central and what is considered peripheral is a function of the direction in which the person is looking, and hence of their field of view. The field of view 155 can include objects 121, 122. In this example, the central field of view 145 includes the object 121, while the other object 122 is in the peripheral field of view.

The field of view (FOV) 155 can contain multiple objects (e.g., objects 121, 122). The field of view 155 can depend on the size or optical characteristics of the AR system, for example, the clear aperture size of the transparent window or lens of the head-mounted display through which light passes from the real world in front of the user to the user's eyes. In some embodiments, as the pose of the user 210 (e.g., head pose, body pose, and/or eye pose) changes, the field of view 155 can correspondingly change, and the objects within the field of view 155 may also change. As described herein, the wearable system may include sensors, such as cameras, that monitor or image objects in the field of regard 165 as well as objects in the field of view 155. In some such embodiments, the wearable system may alert the user to unnoticed objects or events occurring within the user's field of view 155 and/or occurring outside the user's field of view but within the field of regard 165. In some embodiments, the wearable system can also distinguish what the user 210 is or is not directing attention to.

The objects in the FOV or the FOR may be virtual or physical objects. The virtual objects may include, for example, operating system objects such as a terminal for inputting commands, a file manager for accessing files or directories, icons, menus, applications for audio or video streaming, notifications from the operating system, and so on. The virtual objects may also include objects in an application (e.g., avatars), virtual objects in games, graphics or images, and the like. Some virtual objects can be both an operating system object and an object in an application. The wearable system can add virtual elements to existing physical objects viewed through the transparent optics of the head-mounted display, thereby permitting user interaction with the physical objects. For example, the wearable system may add a virtual menu associated with a medical monitor in the room, where the virtual menu may give the user the option to turn on or adjust medical imaging equipment or dosing controls. Accordingly, the head-mounted display may present additional virtual image content to the wearer in addition to the objects in the user's environment.

FIG. 1B also shows the field of regard (FOR) 165, which comprises a portion of the environment around the person 210 that is capable of being perceived by the person 210, for example, by turning their head or redirecting their gaze. The central portion of the field of view 155 of the eyes of the person 210 may be referred to as the central field of view 145. The region within the field of view 155 but outside the central field of view 145 may be referred to as the peripheral field of view. In FIG. 1B, the field of regard 165 can contain a group of objects (e.g., objects 121, 122, 127) that can be perceived by the user wearing the wearable system.

In some embodiments, an object 129 may be outside the user's visual FOR but may nonetheless potentially be perceived by sensors (e.g., cameras) on the wearable device (depending on their location and field of view), and information associated with the object 129 may be displayed for the user 210 or otherwise used by the wearable device. For example, the object 129 may be behind a wall in the user's environment so that it cannot be visually perceived by the user. However, the wearable device may include sensors (such as radio frequency, Bluetooth, wireless, or other types of sensors) that can communicate with the object 129.
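
Reading the FOV/FOR distinction geometrically: an object is in the FOV when its direction falls within the display's angular aperture for the current gaze, while the FOR covers every direction reachable by turning the body, head, or eyes. A minimal sketch of that membership test follows; the half-angle value and helper names are illustrative assumptions, not parameters from the patent.

```python
import math

def in_fov(obj_dir, gaze_dir, half_fov_deg):
    """True if obj_dir (a unit 3-vector) lies within half_fov_deg of the
    current gaze direction, i.e., the object is in the FOV."""
    cos_angle = sum(o * g for o, g in zip(obj_dir, gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= half_fov_deg

# An HMD wearer can turn body/head/eyes, so the FOR is (nearly) the full
# 4*pi steradian sphere: a direction can be in the FOR even when it is
# not currently in the FOV.
gaze = (0.0, 0.0, 1.0)            # looking straight ahead
object_behind = (0.0, 0.0, -1.0)  # directly behind the wearer
print(in_fov(object_behind, gaze, half_fov_deg=25.0))  # False: outside FOV
# ...but still inside the FOR, since the wearer could turn around.
```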

Examples of a Display System

VR, AR, and MR experiences can be provided by display systems having displays in which images corresponding to a plurality of depth planes are provided to a viewer. The images may be different for each depth plane (e.g., providing slightly different presentations of a scene or object) and may be separately focused by the viewer's eyes, thereby helping to provide the user with depth cues based on the accommodation of the eye required to bring into focus different image features of a scene located on different depth planes, or based on observing different image features on different depth planes being out of focus. As discussed elsewhere herein, such depth cues provide credible perceptions of depth.

FIG. 2 illustrates an example of a wearable system 200 that can be configured to provide an AR/VR/MR scene. The wearable system 200 can also be referred to as the AR system 200. The wearable system 200 includes a display 220, and various mechanical and electronic modules and systems to support the functioning of the display 220. The display 220 may be coupled to a frame 230, which is wearable by a user, wearer, or viewer 210. The display 220 can be positioned in front of the eyes of the user 210. The display 220 can present AR/VR/MR content to the user. The display 220 can comprise a head-mounted display (HMD) that is worn on the head of the user. In some embodiments, a speaker 240 is coupled to the frame 230 and positioned adjacent the ear canal of the user (in some embodiments, another speaker, not shown, is positioned adjacent the other ear canal of the user to provide stereo/shapeable sound control). The wearable system 200 can include an audio sensor 232 (e.g., a microphone) for detecting an audio stream from the environment and capturing ambient sound. In some embodiments, one or more other audio sensors (not shown) are positioned to provide stereo sound reception. Stereo sound reception can be used to determine the location of a sound source. The wearable system 200 can perform voice or speech recognition on the audio stream.

The wearable system 200 can include an outward-facing imaging system 464 (shown in FIG. 4) that observes the world in the environment around the user. The wearable system 200 can also include an inward-facing imaging system 462 (shown in FIG. 4) that can track the eye movements of the user. The inward-facing imaging system may track either one eye's movements or both eyes' movements. The inward-facing imaging system 462 may be attached to the frame 230 and may be in electrical communication with the processing module 260 or 270, which may process image information acquired by the inward-facing imaging system to determine, for example, the pupil diameters or orientations of the eyes, eye movements, or eye pose of the user 210.

As an example, the wearable system 200 can use the outward-facing imaging system 464 or the inward-facing imaging system 462 to acquire images of a pose of the user. The images may be still images, frames of a video, or a video, in combination or the like.

The wearable system 200 can include a user-selectable reality button 263, which can be used to attenuate the visual or audible content presented to the user by the wearable system 200. When the reality button 263 is actuated, the visual or audible virtual content is reduced (compared to normal display conditions) so that the user perceives more of the actual physical reality occurring in the user's environment. The reality button 263 may be touch- or pressure-sensitive and may be disposed on the frame 230 of the wearable system 200 or on a battery power pack (e.g., worn near the user's waist, for example, on a belt clip). The reality button 263 is further described below with reference to FIGS. 14A and 14B.
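
The reality button's attenuation can be pictured as scaling down opacity and volume rather than removing content outright. Below is a small illustrative handler; the attenuation level, the content attributes, and the toggle-on-second-press behavior are assumptions made for this sketch, not details from the patent.

```python
class RealityButtonHandler:
    """Illustrative only: actuating the reality button attenuates virtual
    content (dims visuals, silences audio) rather than removing it, and a
    second actuation restores it. DIM_OPACITY is an arbitrary example
    value, and the content attributes are assumed, not the patent's
    actual data model."""

    DIM_OPACITY = 0.1  # placeholder attenuation level

    def __init__(self, hmd):
        self.hmd = hmd  # expects hmd.contents items with .opacity/.volume
        self.attenuated = False

    def on_press(self):
        target_opacity = 1.0 if self.attenuated else self.DIM_OPACITY
        target_volume = 1.0 if self.attenuated else 0.0
        for content in self.hmd.contents:
            content.opacity = target_opacity
            content.volume = target_volume
        self.attenuated = not self.attenuated
```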

The display 220 can be operatively coupled 250, such as by a wired lead or wireless connectivity, to a local data processing module 260, which may be mounted in a variety of configurations, such as fixedly attached to the frame 230, fixedly attached to a helmet or hat worn by the user, embedded in headphones, or otherwise removably attached to the user 210 (e.g., in a backpack-style configuration or in a belt-coupling-style configuration).

The local processing and data module 260 may comprise a hardware processor, as well as digital memory such as non-volatile memory (e.g., flash memory), both of which may be utilized to assist in the processing, caching, and storage of data. The data may include: a) data captured from sensors (which may be, e.g., operatively coupled to the frame 230 or otherwise operatively attached to the user 210), such as image capture devices (e.g., cameras in the inward-facing imaging system or the outward-facing imaging system), audio sensors (e.g., microphones), inertial measurement units (IMUs), accelerometers, compasses, global positioning system (GPS) units, radio devices, or gyroscopes; or b) data acquired and/or processed using the remote processing module 270 or the remote data repository 280, which data may be passed to the display 220 after such processing or retrieval. The local processing and data module 260 may be operatively coupled by communication links 262 or 264, such as via wired or wireless communication links, to the remote processing module 270 or the remote data repository 280, such that these remote modules are available as resources to the local processing and data module 260. In addition, the remote processing module 270 and the remote data repository 280 may be operatively coupled to each other.

In some embodiments, the remote processing module 270 may comprise one or more hardware processors configured to analyze and process data or image information. In some embodiments, the remote data repository 280 may comprise a digital data storage facility, which may be available through the internet or other networking configuration in a "cloud" resource configuration. In some embodiments, all data is stored and all computations are performed in the local processing and data module, allowing fully autonomous use from a remote module.

Example Environmental Sensors

The environmental sensors 267 may be configured to detect objects, stimuli, people, animals, locations, or other aspects of the world around the user. As further described with reference to FIGS. 11A-11C, the information acquired by the environmental sensors 267 can be used to determine one or more triggering events that can cause the wearable device to mute audio or visual perceptions. The environmental sensors may include image capture devices (e.g., cameras, the inward-facing imaging system, the outward-facing imaging system, etc.), microphones, inertial measurement units (IMUs), accelerometers, compasses, global positioning system (GPS) units, radio devices, gyroscopes, altimeters, barometers, chemical sensors, humidity sensors, temperature sensors, external microphones, light sensors (e.g., light meters), timing devices (e.g., clocks or calendars), or any combination or subcombination thereof. In some embodiments, the environmental sensors may also include a variety of physiological sensors. These sensors can measure or estimate the user's physiological parameters such as heart rate, respiratory rate, galvanic skin response, blood pressure, electroencephalographic state, and so on. The environmental sensors may further include emissions devices configured to receive signals such as laser, visible light, invisible wavelengths of light, or sound (e.g., audible sound, ultrasound, or other frequencies). In some embodiments, one or more environmental sensors (e.g., cameras or light sensors) may be configured to measure the ambient light (e.g., luminance) of the environment (e.g., to capture the lighting conditions of the environment). Physical contact sensors, such as strain gauges, curb feelers, or the like, may also be included as environmental sensors. Additional details regarding the environmental sensors 267 are further described with reference to FIG. 10.
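
To make the sensors' role in trigger detection concrete, here is a hedged sketch that fuses a few of the listed sensor types into a triggering-event decision. The thresholds and per-sensor predicates are invented for illustration; the patent does not prescribe a particular fusion rule.

```python
from typing import Callable, Dict

# Hypothetical per-sensor predicates mapping a reading to "unsafe?".
# The numeric thresholds are placeholders, not values from the patent.
TRIGGER_PREDICATES: Dict[str, Callable[[float], bool]] = {
    "smoke_ppm": lambda v: v > 150.0,   # chemical sensor
    "ambient_db": lambda v: v > 100.0,  # external microphone
    "accel_g": lambda v: v > 3.0,       # accelerometer spike (fall/impact)
}

def triggering_event(readings: Dict[str, float]) -> bool:
    """Return True if any environmental reading indicates an emergency
    or unsafe condition that should mute the HMD's virtual content."""
    return any(
        predicate(readings[name])
        for name, predicate in TRIGGER_PREDICATES.items()
        if name in readings
    )

print(triggering_event({"smoke_ppm": 200.0, "ambient_db": 60.0}))  # True
```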

The local processing and data module 260 may be operatively coupled to the remote processing module 270 and/or the remote data repository 280 by communication links 262 and/or 264 (e.g., via wired or wireless communication links), such that these remote modules are available as resources to the local processing and data module 260. In addition, the remote processing module and the remote data repository may be operatively coupled to each other.

The wearable system 200 can also be configured to receive other environmental inputs, such as global positioning satellite (GPS) location data, weather data, date and time, or other available environmental data that may be received from the internet, satellite communication, or another suitable wired or wireless data communication method. The processing module 260 can be configured to access further information characterizing the user's location, such as pollen counts, demographics, air pollution, environmental toxins, information from smart thermostats, lifestyle statistics, or proximity to other users, buildings, or a healthcare provider. In some embodiments, information characterizing the location may be accessed using cloud-based or other remote databases. The processing modules 260, 270 may be configured to obtain such data and/or to further analyze data from any one or combinations of the environmental sensors.

Examples of a 3D Light Field Display

The human visual system is complicated, and providing a realistic perception of depth is challenging. Without being limited by theory, it is believed that viewers of an object may perceive the object as being three-dimensional due to a combination of vergence and accommodation. Vergence movements of the two eyes relative to each other (e.g., rotational movements of the pupils toward or away from each other to converge the lines of sight of the eyes to fixate upon an object) are closely associated with focusing (or "accommodation") of the lenses of the eyes. Under normal conditions, changing the focus of the lenses of the eyes, or accommodating the eyes, to change focus from one object to another object at a different distance will automatically cause a matching change in vergence to the same distance, under a relationship known as the "accommodation-vergence reflex." Likewise, under normal conditions, a change in vergence will trigger a matching change in accommodation. Display systems that provide a better match between accommodation and vergence may form more realistic or comfortable simulations of three-dimensional imagery.

FIG. 3 illustrates aspects of an approach for simulating a three-dimensional image using multiple depth planes. With reference to FIG. 3, objects at various distances from the eyes 302 and 304 on the z-axis are accommodated by the eyes 302 and 304 so that those objects are in focus. The eyes 302 and 304 assume particular accommodated states to bring into focus objects at different distances along the z-axis. Consequently, a particular accommodated state may be said to be associated with a particular one of the depth planes 306, with an associated focal distance, such that objects or parts of objects in a particular depth plane are in focus when the eye is in the accommodated state for that depth plane. In some embodiments, three-dimensional imagery may be simulated by providing different presentations of an image for each of the eyes 302 and 304, and also by providing different presentations of the image corresponding to each of the depth planes. While shown as being separate for clarity of illustration, it will be appreciated that the fields of view of the eyes 302 and 304 may overlap, for example, as distance along the z-axis increases. In addition, while shown as flat for ease of illustration, it will be appreciated that the contours of a depth plane may be curved in physical space, such that all features in a depth plane are in focus with the eye in a particular accommodated state. Without being limited by theory, it is believed that the human eye typically can interpret a finite number of depth planes to provide depth perception. Consequently, a highly believable simulation of perceived depth may be achieved by providing, to the eye, different presentations of an image corresponding to each of these limited numbers of depth planes.
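
As a worked illustration of how a depth plane's distance relates to the accommodation and vergence cues discussed above (standard geometrical optics, not equations taken from the patent): for a depth plane at distance $d$, the accommodation demand in diopters and the binocular vergence angle for an interpupillary distance $\mathrm{IPD}$ are

$$
A = \frac{1}{d} \ [\mathrm{diopters}], \qquad
\theta = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right).
$$

For example, a depth plane at $d = 1\,\mathrm{m}$ with $\mathrm{IPD} = 64\,\mathrm{mm}$ calls for $A = 1\,\mathrm{D}$ of accommodation and a vergence angle $\theta \approx 3.7^{\circ}$; rendering that plane with matching accommodation and vergence cues avoids the accommodation-vergence conflict described above.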

Waveguide Stack Assembly

FIG. 4 illustrates an example of a waveguide stack for outputting image information to a user. The wearable system 400 includes a stack of waveguides, or stacked waveguide assembly 480, that may be utilized to provide three-dimensional perception to the eye/brain using a plurality of waveguides 432b, 434b, 436b, 438b, 440b. In some embodiments, the wearable system 400 may correspond to the wearable system 200 of FIG. 2, with FIG. 4 schematically showing some parts of that wearable system 200 in greater detail. For example, in some embodiments, the waveguide assembly 480 may be integrated into the display 220 of FIG. 2.

With continued reference to FIG. 4, the waveguide assembly 480 may also include a plurality of features 458, 456, 454, 452 between the waveguides. In some embodiments, the features 458, 456, 454, 452 may be lenses. In other embodiments, the features 458, 456, 454, 452 may not be lenses; rather, they may simply be spacers (e.g., cladding layers or structures for forming air gaps).

The waveguides 440b, 438b, 436b, 434b, 432b or the plurality of lenses 458, 456, 454, 452 may be configured to send image information to the eye with various levels of wavefront curvature or light ray divergence. Each waveguide level may be associated with a particular depth plane and may be configured to output image information corresponding to that depth plane. Image injection devices 420, 422, 424, 426, 428 may be utilized to inject image information into the waveguides 440b, 438b, 436b, 434b, 432b, each of which may be configured to distribute incoming light across each respective waveguide for output toward the eye 410. Light exits an output surface of the image injection devices 420, 422, 424, 426, 428 and is injected into a corresponding input edge of the waveguides 440b, 438b, 436b, 434b, 432b. In some embodiments, a single beam of light (e.g., a collimated beam) may be injected into each waveguide to output an entire field of cloned collimated beams directed toward the eye 410 at particular angles (and amounts of divergence) corresponding to the depth plane associated with the particular waveguide.

In some embodiments, the image injection devices 420, 422, 424, 426, 428 are discrete displays that each produce image information for injection into a corresponding waveguide 440b, 438b, 436b, 434b, 432b, respectively. In some other embodiments, the image injection devices 420, 422, 424, 426, 428 are the output ends of a single multiplexed display which may, for example, pipe image information to each of the image injection devices 420, 422, 424, 426, 428 via one or more optical conduits (such as fiber optic cables).

A controller 460 controls the operation of the stacked waveguide assembly 480 and the image injection devices 420, 422, 424, 426, 428. In some embodiments, the controller 460 includes programming (e.g., instructions in a non-transitory computer-readable medium) that regulates the timing and provision of image information to the waveguides 440b, 438b, 436b, 434b, 432b. In some embodiments, the controller 460 may be a single integral device, or a distributed system connected by wired or wireless communication channels. In some embodiments, the controller 460 may be part of the processing module 260 or 270 (shown in FIG. 2).

The waveguides 440b, 438b, 436b, 434b, 432b may be configured to propagate light within each respective waveguide by total internal reflection (TIR). The waveguides 440b, 438b, 436b, 434b, 432b may each be planar or have another shape (e.g., curved), with top and bottom major surfaces and edges extending between those top and bottom major surfaces. In the illustrated configuration, the waveguides 440b, 438b, 436b, 434b, 432b may each include light extraction optical elements 440a, 438a, 436a, 434a, 432a configured to extract light out of a waveguide by redirecting the light propagating within each respective waveguide out of the waveguide, to output image information to the eye 410. Extracted light may also be referred to as outcoupled light, and the light extraction optical elements may also be referred to as outcoupling optical elements. An extracted beam of light is output by the waveguide at locations at which the light propagating in the waveguide strikes a light redirecting element. The light extraction optical elements (440a, 438a, 436a, 434a, 432a) may, for example, be reflective or diffractive optical features. While illustrated as disposed at the bottom major surfaces of the waveguides 440b, 438b, 436b, 434b, 432b for ease of description and drawing clarity, in some embodiments the light extraction optical elements 440a, 438a, 436a, 434a, 432a may be disposed at the top and/or bottom major surfaces, and/or may be disposed directly in the volume of the waveguides 440b, 438b, 436b, 434b, 432b. In some embodiments, the light extraction optical elements 440a, 438a, 436a, 434a, 432a may be formed in a layer of material that is attached to a transparent substrate to form the waveguides 440b, 438b, 436b, 434b, 432b. In some other embodiments, the waveguides 440b, 438b, 436b, 434b, 432b may be a monolithic piece of material, and the light extraction optical elements 440a, 438a, 436a, 434a, 432a may be formed on a surface or in the interior of that piece of material.

With continued reference to FIG. 4, as discussed herein, each waveguide 440b, 438b, 436b, 434b, 432b is configured to output light to form an image corresponding to a particular depth plane. For example, the waveguide 432b nearest the eye may be configured to deliver collimated light, as injected into such waveguide 432b, to the eye 410. The collimated light may be representative of the optical infinity focal plane. The next waveguide up 434b may be configured to send out collimated light that passes through a first lens 452 (e.g., a negative lens) before it can reach the eye 410. The first lens 452 may be configured to create a slight convex wavefront curvature so that the eye/brain interprets light coming from that next waveguide up 434b as coming from a first focal plane closer inward toward the eye 410 from optical infinity. Similarly, the third waveguide up 436b passes its output light through both the first lens 452 and the second lens 454 before reaching the eye 410; the combined optical power of the first and second lenses 452 and 454 may be configured to create another incremental amount of wavefront curvature, so that the eye/brain interprets light coming from the third waveguide 436b as coming from a second focal plane that is even closer inward toward the person from optical infinity than light from the next waveguide up 434b.

The other waveguide layers (e.g., waveguides 438b, 440b) and lenses (e.g., lenses 456, 458) are similarly configured, with the highest waveguide 440b in the stack sending its output through all of the lenses between it and the eye for an aggregate focal power representative of the focal plane closest to the person. To compensate for the stack of lenses 458, 456, 454, 452 when viewing/interpreting light coming from the world 470 on the other side of the stacked waveguide assembly 480, a compensating lens layer 430 may be disposed at the top of the stack to compensate for the aggregate power of the lens stack 458, 456, 454, 452 below. Such a configuration provides as many perceived focal planes as there are available waveguide/lens pairings. Both the light extraction optical elements of the waveguides and the focusing aspects of the lenses may be static (e.g., not dynamic or electro-active). In some alternative embodiments, either or both may be dynamic using electro-active features.
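As a rough illustration of how aggregate focal power maps each waveguide in a stack such as assembly 480 to a perceived depth plane, consider the following minimal sketch. It is not from the patent: the -0.5 diopter lens values and the function name are assumptions chosen only to make the arithmetic concrete; a negative lens stack makes light appear to diverge from a nearer virtual depth plane.

```python
def perceived_depth_m(lens_powers_diopters):
    """Apparent depth plane (meters) for light traversing the given lenses;
    zero net power corresponds to collimated light from optical infinity."""
    aggregate = sum(lens_powers_diopters)  # aggregate focal power in diopters
    if aggregate == 0:
        return float("inf")
    return 1.0 / abs(aggregate)  # virtual image distance for a negative stack

# Lenses 452, 454, 456, 458 between the waveguides and the eye (assumed -0.5 D each).
lens_stack = [-0.5, -0.5, -0.5, -0.5]
for i in range(5):  # waveguide 0 is nearest the eye (432b), 4 is highest (440b)
    depth = perceived_depth_m(lens_stack[:i])
    print(f"waveguide {i}: depth plane at {depth} m")
```

With these assumed values the nearest waveguide represents optical infinity, and each successive waveguide's light appears to come from 2.0 m, 1.0 m, 0.67 m, and 0.5 m, consistent with the incremental wavefront curvature described above.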

With continued reference to FIG. 4, the light extraction optical elements 440a, 438a, 436a, 434a, 432a may be configured to both redirect light out of their respective waveguides and to output this light with the appropriate amount of divergence or collimation for the particular depth plane associated with the waveguide. As a result, waveguides having different associated depth planes may have different configurations of light extraction optical elements, which output light with a different amount of divergence depending on the associated depth plane. In some embodiments, as discussed herein, the light extraction optical elements 440a, 438a, 436a, 434a, 432a may be volumetric or surface features, which may be configured to output light at specific angles. For example, the light extraction optical elements 440a, 438a, 436a, 434a, 432a may be volume holograms, surface holograms, or diffraction gratings. Light extraction optical elements such as diffraction gratings are described in U.S. Patent Publication No. 2015/0178939, published June 25, 2015, which is incorporated by reference herein in its entirety.

In some embodiments, the light extraction optical elements 440a, 438a, 436a, 434a, 432a are diffractive features that form a diffraction pattern, or "diffractive optical elements" (also referred to herein as "DOEs"). Preferably, the DOEs have a relatively low diffraction efficiency, so that only a portion of the light of the beam is deflected toward the eye 410 at each intersection with the DOE, while the remainder continues to move through the waveguide via total internal reflection. The light carrying the image information can thus be divided into a number of related exit beams that exit the waveguide at a multiplicity of locations, and the result is a fairly uniform pattern of exit emission toward the eye 304 for this particular collimated beam bouncing around within the waveguide.
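A short numeric sketch (with assumed efficiency values, not taken from the patent) shows why a low diffraction efficiency yields this fairly uniform exit pattern: at each DOE intersection a fraction eta of the remaining light is outcoupled, so the n-th exit beam carries I0 * eta * (1 - eta)**(n - 1) of the injected intensity.

```python
def exit_beam_intensities(i0, eta, n_intersections):
    remaining = i0
    beams = []
    for _ in range(n_intersections):
        beams.append(remaining * eta)   # fraction deflected toward the eye
        remaining *= (1.0 - eta)        # remainder continues via TIR
    return beams

for eta in (0.10, 0.50):  # low vs. high diffraction efficiency
    beams = exit_beam_intensities(1.0, eta, 8)
    print(f"eta={eta:.2f}: last/first exit beam ratio = {beams[-1] / beams[0]:.3f}")
```

At 10% efficiency the eighth exit beam still carries about 48% of the first beam's intensity, while at 50% efficiency it carries under 1%, which is why a low-efficiency DOE spreads the energy fairly evenly across the exit pupil.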

In some embodiments, one or more DOEs may be switchable between "on" states in which they actively diffract and "off" states in which they do not significantly diffract. For instance, a switchable DOE may comprise a layer of polymer dispersed liquid crystal in which microdroplets comprise a diffraction pattern in a host medium, and the refractive index of the microdroplets may be switched to substantially match the refractive index of the host material (in which case the pattern does not appreciably diffract incident light), or the microdroplets may be switched to an index that does not match that of the host medium (in which case the pattern actively diffracts incident light).

In some embodiments, the number and distribution of depth planes or depths of field may be varied dynamically based on the pupil sizes or orientations of the eyes of the viewer. The depth of field may change inversely with the viewer's pupil size. As a result, as the sizes of the pupils of the viewer's eyes decrease, the depth of field increases, such that a plane that is not discernible because its location is beyond the depth of focus of the eye may become discernible, and appear more in focus, with the reduction of pupil size and the commensurate increase in depth of field. Likewise, the number of spaced-apart depth planes used to present different images to the viewer may be decreased with decreased pupil size. For example, a viewer may not be able to clearly perceive the details of both a first depth plane and a second depth plane at one pupil size without adjusting the accommodation of the eye away from one depth plane and to the other depth plane. These two depth planes may, however, be sufficiently in focus at the same time for the user at another pupil size without changing accommodation.

In some embodiments, the display system may vary the number of waveguides receiving image information based upon determinations of pupil size or orientation, or upon receiving electrical signals indicative of a particular pupil size or orientation. For example, if the user's eyes are unable to distinguish between two depth planes associated with two waveguides, then the controller 460 (which may be an embodiment of the local processing and data module 260) may be configured or programmed to cease providing image information to one of these waveguides. Advantageously, this may reduce the processing burden on the system, thereby increasing the responsiveness of the system. In embodiments in which the DOEs for a waveguide are switchable between on and off states, the DOEs may be switched to the off state when the waveguide is not receiving image information.
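The gating logic described above might look something like the following hypothetical sketch. The function name, depth-plane table, and the 3 mm pupil threshold are all assumptions invented for illustration, not the patent's API or values: the idea is only that as the pupil shrinks and depth of field grows, the controller can stop feeding some waveguides and switch their DOEs off.

```python
DEPTH_PLANES_M = [float("inf"), 2.0, 1.0, 0.667, 0.5]  # one plane per waveguide

def active_waveguides(pupil_diameter_mm, min_planes=2):
    """Return indices of waveguides to keep active for this pupil size."""
    # Illustrative assumption: small pupils (large depth of field) need fewer planes.
    step = 2 if pupil_diameter_mm < 3.0 else 1
    active = list(range(0, len(DEPTH_PLANES_M), step))
    return active if len(active) >= min_planes else list(range(min_planes))

for pupil in (2.0, 5.0):
    print(f"pupil {pupil} mm -> active waveguides {active_waveguides(pupil)}")
```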

In some embodiments, it may be desirable to have the exit beam meet the condition of having a diameter that is less than the diameter of the eye of the viewer. However, meeting this condition may be challenging in view of the variability in the size of the viewer's pupils. In some embodiments, this condition is met over a wide range of pupil sizes by varying the size of the exit beam in response to determinations of the size of the viewer's pupil. For example, as the pupil size decreases, the size of the exit beam may also decrease. In some embodiments, the exit beam size may be varied using a variable aperture.

The wearable system 400 may include an outward-facing imaging system 464 (e.g., a digital camera) that images a portion of the world 470. This portion of the world 470 may be referred to as the field of view (FOV) of a world camera, and the imaging system 464 is sometimes referred to as an FOV camera. The FOV of the world camera may or may not be the same as the FOV of the viewer 210, which encompasses the portion of the world 470 the viewer 210 perceives at a given time. For example, in some situations, the FOV of the world camera may be larger than the FOV of the viewer 210 of the wearable system 400. The entire region available for viewing or imaging by the viewer may be referred to as the field of regard (FOR). The FOR may include 4π steradians of solid angle surrounding the wearable system 400, because the wearer can move his body, head, or eyes to perceive substantially any direction in space. In other contexts, the wearer's movements may be more constricted, and accordingly the wearer's FOR may subtend a smaller solid angle. As described with reference to FIG. 1B, the user 210 may also have an FOV associated with the user's eyes when the user is using the HMD. In some embodiments, the FOV associated with the user's eyes may be the same as the FOV of the imaging system 464. In other embodiments, the FOV associated with the user's eyes is different from the FOV of the imaging system 464. Images obtained from the outward-facing imaging system 464 may be used to track gestures made by the user (e.g., hand or finger gestures), detect objects in the world 470 in front of the user, and so forth.

The wearable system 400 may include an audio sensor 232, e.g., a microphone, to capture ambient sound. As described above, in some embodiments, one or more other audio sensors may be positioned to provide stereo sound reception useful for determining the location of a speech source. As another example, the audio sensor 232 may comprise a directional microphone, which may also provide useful directional information as to where the audio source is located.

The wearable system 400 may also include an inward-facing imaging system 462 (e.g., a digital camera) that observes the movements of the user, such as eye movements and facial movements. The inward-facing imaging system 462 may be used to capture images of the eye 410 to determine the size and/or orientation of the pupil of the eye 410. The inward-facing imaging system 462 may be used to obtain images for use in determining the direction in which the user is looking (e.g., eye pose) or for biometric identification of the user (e.g., via iris identification). In some embodiments, at least one camera may be utilized for each eye, to separately and independently determine the pupil size and eye pose of each eye, thereby allowing the presentation of image information to each eye to be dynamically tailored to that eye. In some other embodiments, the pupil diameter or orientation of only a single eye 410 is determined (e.g., using only a single camera per pair of eyes) and the user's two eyes are assumed to be similar. The images obtained by the inward-facing imaging system 462 may be analyzed to determine the user's eye pose or mood, which may be used by the wearable system 400 to decide which audio or visual content should be presented to the user. The wearable system 400 may also determine head pose (e.g., head position or head orientation) using sensors such as IMUs, accelerometers, gyroscopes, etc.

The wearable system 400 may include a user input device 466 by which the user can input commands to the controller 460 to interact with the system 400. For example, the user input device 466 may include a trackpad, a touchscreen, a joystick, a multiple degree-of-freedom (DOF) controller, a capacitive sensing device, a game controller, a keyboard, a mouse, a directional pad (D-pad), a wand, a haptic device, a totem (e.g., functioning as a virtual user input device), and so forth. A multi-DOF controller may sense user input in some or all possible translations (e.g., left/right, forward/backward, or up/down) or rotations (e.g., yaw, pitch, or roll) of the controller. A multi-DOF controller that supports translational movements may be referred to as a 3DOF controller, while a multi-DOF controller that supports translations and rotations may be referred to as a 6DOF controller. In some cases, the user may use a finger (e.g., a thumb) to press or swipe on a touch-sensitive input device to provide input to the wearable system 400 (e.g., to provide user input to a user interface provided by the wearable system 400). The user input device 466 may be held by the user's hand during use of the wearable system 400. The user input device 466 may be in wired or wireless communication with the wearable system 400.
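For concreteness, the state reported by a 3DOF versus a 6DOF controller can be sketched as two small data structures. These types are hypothetical illustrations, not the system's actual interface: the 3DOF case carries three translational axes, and the 6DOF case adds yaw, pitch, and roll.

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    x: float  # left/right (m)
    y: float  # up/down (m)
    z: float  # forward/backward (m)

@dataclass
class Pose6DOF(Pose3DOF):
    yaw: float    # rotation about the vertical axis (rad)
    pitch: float  # rotation about the left/right axis (rad)
    roll: float   # rotation about the forward axis (rad)

# A single pose sample from a hypothetical totem-style controller:
sample = Pose6DOF(x=0.02, y=-0.01, z=0.30, yaw=0.1, pitch=0.0, roll=0.05)
print(sample)
```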

FIG. 5 shows an example of exit beams output by a waveguide. One waveguide is illustrated, but it will be appreciated that other waveguides in the waveguide assembly 480 may function similarly, where the waveguide assembly 480 includes multiple waveguides. Light 520 is injected into the waveguide 432b at the input edge 432c of the waveguide 432b and propagates within the waveguide 432b by TIR. At points where the light 520 impinges on the DOE 432a, a portion of the light exits the waveguide as exit beams 510. The exit beams 510 are illustrated as substantially parallel, but, depending on the depth plane associated with the waveguide 432b, they may also be redirected to propagate to the eye 410 at an angle (e.g., forming divergent exit beams). It will be appreciated that substantially parallel exit beams may be indicative of a waveguide with light extraction optical elements that outcouple light to form images that appear to be set on a depth plane at a large distance (e.g., optical infinity) from the eye 410. Other waveguides or other sets of light extraction optical elements may output an exit beam pattern that is more divergent, which would require the eye 410 to accommodate to a closer distance to bring it into focus on the retina and would be interpreted by the brain as light from a distance closer to the eye 410 than optical infinity.

FIG. 6 is a schematic diagram showing an optical system including a waveguide apparatus, an optical coupler subsystem to optically couple light to or from the waveguide apparatus, and a control subsystem, used in the generation of a multi-focal volumetric display, image, or light field. The optical system may include a waveguide apparatus, an optical coupler subsystem to optically couple light to or from the waveguide apparatus, and a control subsystem. The optical system may be used to generate a multi-focal volumetric, image, or light field. The optical system may include one or more primary planar waveguides 632b (only one is shown in FIG. 6) and one or more DOEs 632a associated with each of at least some of the primary waveguides 632b. The planar waveguides 632b may be similar to the waveguides 432b, 434b, 436b, 438b, 440b discussed with reference to FIG. 4. The optical system may employ a distribution waveguide apparatus to relay light along a first axis (the vertical or Y-axis in the view of FIG. 6), and expand the light's effective exit pupil along that first axis (e.g., the Y-axis). The distribution waveguide apparatus may, for example, include a distribution planar waveguide 622b and at least one DOE 622a (illustrated by double dash-dot lines) associated with the distribution planar waveguide 622b. The distribution planar waveguide 622b may be similar or identical in at least some respects to the primary planar waveguide 632b, having a different orientation therefrom. Likewise, the at least one DOE 622a may be similar or identical in at least some respects to the DOE 632a. For example, the distribution planar waveguide 622b or DOE 622a may be composed of the same materials as the primary planar waveguide 632b or DOE 632a, respectively. The embodiment of the optical display system 600 shown in FIG. 6 may be integrated into the wearable system 200 shown in FIG. 2.

The relayed and exit-pupil expanded light may be optically coupled from the distribution waveguide apparatus into the one or more primary planar waveguides 632b. The primary planar waveguide 632b may relay light along a second axis, preferably orthogonal to the first axis (e.g., consider the horizontal or X-axis in the view of FIG. 6). Notably, the second axis may be a non-orthogonal axis to the first axis. The primary planar waveguide 632b expands the light's effective exit pupil along that second axis (e.g., the X-axis). For example, the distribution planar waveguide 622b can relay and expand light along the vertical or Y-axis, and pass that light to the primary planar waveguide 632b, which can relay and expand light along the horizontal or X-axis.

The optical system may include one or more sources of colored light (e.g., red, green, and blue laser light) 610, which may be optically coupled into a proximal end of a single-mode optical fiber 640. A distal end of the optical fiber 640 may be threaded or received through a hollow tube 642 of piezoelectric material. The distal end protrudes from the tube 642 as a fixed-free flexible cantilever 644. The piezoelectric tube 642 may be associated with four quadrant electrodes (not illustrated). The electrodes may, for example, be plated on the outside, outer surface, or outer periphery or diameter of the tube 642. A core electrode (not illustrated) may also be located in a core, center, inner periphery, or inner diameter of the tube 642.

Drive electronics 650, for example electrically coupled via wires 660, drive opposing pairs of electrodes to bend the piezoelectric tube 642 in two axes independently. The protruding distal tip of the optical fiber 640 has mechanical modes of resonance. The frequencies of resonance may depend upon the diameter, length, and material properties of the optical fiber 640. By vibrating the piezoelectric tube 642 near a first mode of mechanical resonance of the fiber cantilever 644, the fiber cantilever 644 can be caused to vibrate, and can sweep through large deflections.

By stimulating resonant vibration in two axes, the tip of the fiber cantilever 644 is scanned biaxially in an area-filling two-dimensional (2D) scan. By modulating the intensity of the light source(s) 610 in synchrony with the scan of the fiber cantilever 644, light emerging from the fiber cantilever 644 can form an image. Descriptions of such a setup are provided in U.S. Patent Publication No. 2014/0003762, which is incorporated by reference herein in its entirety.
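One common way such a biaxial resonant scan fills an area is a spiral: both axes are driven near the cantilever resonance with a slowly ramped amplitude. The sketch below generates tip coordinates for one such frame; the 20 kHz resonance, 1 ms frame time, and linear amplitude ramp are assumptions for illustration, not parameters from the referenced publication.

```python
import math

def spiral_scan(f_resonance_hz, frame_time_s, samples):
    """Yield (t, x, y) normalized tip positions for one area-filling spiral frame."""
    for i in range(samples):
        t = i * frame_time_s / samples
        amplitude = t / frame_time_s          # slow radial ramp, 0 -> 1
        phase = 2.0 * math.pi * f_resonance_hz * t
        yield t, amplitude * math.cos(phase), amplitude * math.sin(phase)

# Example: an assumed 20 kHz resonance scanned over a 1 ms frame.
for t, x, y in spiral_scan(20_000.0, 1e-3, 5):
    print(f"t={t * 1e6:6.1f} us  x={x:+.3f}  y={y:+.3f}")
```

In a full system, the source intensity would be modulated along this path, sample by sample, so that the swept tip paints the image described above.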

A component of the optical coupler subsystem may collimate the light emerging from the scanning fiber cantilever 644. The collimated light may be reflected by a mirrored surface 648 into the narrow distribution planar waveguide 622b, which contains the at least one diffractive optical element (DOE) 622a. The collimated light may propagate vertically (relative to the view of FIG. 6) along the distribution planar waveguide 622b by TIR, and in doing so repeatedly intersects with the DOE 622a. The DOE 622a preferably has a low diffraction efficiency. This causes a fraction (e.g., 10%) of the light to be diffracted toward an edge of the larger primary planar waveguide 632b at each point of intersection with the DOE 622a, while the remaining fraction of the light continues on its original trajectory down the length of the distribution planar waveguide 622b via TIR.

At each point of intersection with the DOE 622a, additional light may be diffracted toward the entrance of the primary waveguide 632b. By dividing the incoming light into multiple outcoupled sets, the exit pupil of the light may be expanded vertically by the DOE 622a in the distribution planar waveguide 622b. This vertically expanded light coupled out of the distribution planar waveguide 622b may enter the edge of the primary planar waveguide 632b.

Light entering the primary waveguide 632b may propagate horizontally (relative to the view of FIG. 6) along the primary waveguide 632b via TIR. As the light intersects with the DOE 632a at multiple points, it propagates horizontally along at least part of the length of the primary waveguide 632b via TIR. The DOE 632a may advantageously be designed or configured to have a phase profile that is a summation of a linear diffraction pattern and a radially symmetric diffractive pattern, to produce both deflection and focusing of the light. The DOE 632a may advantageously have a low diffraction efficiency (e.g., 10%), so that only a portion of the light of the beam is deflected toward the eye of the viewer at each intersection with the DOE 632a, while the rest of the light continues to propagate through the primary waveguide 632b via TIR.

At each point of intersection between the propagating light and the DOE 632a, a fraction of the light is diffracted toward the adjacent face of the primary waveguide 632b, allowing the light to escape TIR and emerge from the face of the primary waveguide 632b. In some embodiments, the radially symmetric diffraction pattern of the DOE 632a additionally imparts a focus level to the diffracted light, both shaping the light wavefront of the individual beam (e.g., imparting a curvature) and steering the beam at an angle that matches the designed focus level.

Accordingly, these different pathways can cause the light to be coupled out of the primary planar waveguide 632b by a multiplicity of DOEs 632a at different angles, focus levels, or yielding different fill patterns at the exit pupil. Different fill patterns at the exit pupil can be beneficially used to create a light field display with multiple depth planes. Each layer in the waveguide assembly, or a set of layers in the stack (e.g., three layers), may be employed to generate a respective color (e.g., red, blue, green). Thus, for example, a first set of three adjacent layers may be employed to respectively produce red, blue, and green light at a first focal depth. A second set of three adjacent layers may be employed to respectively produce red, blue, and green light at a second focal depth. Multiple sets may be employed to generate a full 3D or 4D color image light field with various focal depths.

Other components of the wearable system

In many implementations, the wearable system may include other components in addition to, or as alternatives to, the components of the wearable system described above. The wearable system may, for example, include one or more haptic devices or components. The haptic devices or components may be operable to provide a tactile sensation to the user. For example, the haptic devices or components may provide a tactile sensation of pressure or texture when touching virtual content (e.g., virtual objects, virtual tools, other virtual constructs). The tactile sensation may replicate the feel of a physical object that a virtual object represents, or may replicate the feel of an imagined object or character (e.g., a dragon) that the virtual content represents. In some implementations, haptic devices or components may be worn by the user (e.g., a user-wearable glove). In some implementations, haptic devices or components may be held by the user.

The wearable system may, for example, include one or more physical objects that are manipulable by the user to allow input or interaction with the wearable system. These physical objects may be referred to herein as totems. Some totems may take the form of inanimate objects, such as, for example, a piece of metal or plastic, a wall, or a surface of a table. In certain implementations, the totems may not actually have any physical input structures (e.g., keys, triggers, a joystick, a trackball, a rocker switch). Instead, the totem may simply provide a physical surface, and the wearable system may render a user interface so as to appear to the user to be on one or more surfaces of the totem. For example, the wearable system may render an image of a computer keyboard and trackpad to appear to reside on one or more surfaces of the totem. For instance, the wearable system may render a virtual computer keyboard and virtual trackpad to appear on a surface of a thin rectangular plate of aluminum that serves as a totem. The rectangular plate does not itself have any physical keys or trackpad or sensors. However, the wearable system may detect user manipulation or interaction or touches with the rectangular plate as selections or inputs made via the virtual keyboard or virtual trackpad. The user input device 466 (shown in FIG. 4) may be an embodiment of a totem, which may include a trackpad, a touchpad, a trigger, a joystick, a trackball, a rocker or virtual switch, a mouse, a keyboard, a multi-degree-of-freedom controller, or another physical input device. The user may use the totem, alone or in combination with poses, to interact with the wearable system or other users.

Examples of haptic devices and totems usable with the wearable devices, HMD, and display systems of the present disclosure are described in U.S. Patent Publication No. 2015/0016777, which is incorporated by reference herein in its entirety.

Example wearable systems, environments and interfaces

The wearable system may employ various mapping-related techniques in order to achieve high depth of field in the rendered light fields. In mapping out the virtual world, it is advantageous to know all the features and points in the real world to accurately portray virtual objects in relation to the real world. To this end, FOV images captured from users of the wearable system can be added to a world model by including new pictures that convey information about various points and features of the real world. For example, the wearable system can collect sets of map points (such as 2D points or 3D points) and find new map points to render a more accurate version of the world model. The world model of a first user can be communicated (e.g., over a network such as a cloud network) to a second user so that the second user can experience the world surrounding the first user.

FIG. 7 is a block diagram of an example of an MR environment 700. The MR environment 700 may be configured to receive input (e.g., visual input 702 from the user's wearable system, stationary input 704 such as room cameras, sensory input 706 from various sensors, gestures, totems, eye tracking, user input from the user input device 466, etc.) from one or more user wearable systems (e.g., wearable system 200 or display system 220) or stationary room systems (e.g., room cameras, etc.). The wearable systems can use various sensors (e.g., accelerometers, gyroscopes, temperature sensors, movement sensors, depth sensors, GPS sensors, inward-facing imaging systems, outward-facing imaging systems, etc.) to determine the location and various other attributes of the environment of the user. This information may further be supplemented with information from stationary cameras in the room, which may provide images or various cues from a different point of view. The image data acquired by the cameras (such as the room cameras and/or the cameras of the outward-facing imaging system) may be reduced to sets of map points.

One or more object recognizers 708 can crawl through the received data (e.g., the collection of points) and recognize or map points, tag images, and attach semantic information to objects with the help of a map database 710. The map database 710 may comprise various points collected over time and their corresponding objects. The various devices and the map database can be connected to each other through a network (e.g., LAN, WAN, etc.) to access the cloud.

Based on this information and the collection of points in the map database, the object recognizers 708a to 708n may recognize objects in an environment. For example, the object recognizers can recognize faces, persons, windows, walls, user input devices, televisions, documents (e.g., travel tickets, driver's licenses, passports, as described in the security examples herein), other objects in the user's environment, etc. One or more object recognizers may be specialized for objects with certain characteristics. For example, the object recognizer 708a may be used to recognize faces, while another object recognizer may be used to recognize documents.

The object recognitions may be performed using a variety of computer vision techniques. For example, the wearable system can analyze the images acquired by the outward-facing imaging system 464 (shown in FIG. 4) to perform scene reconstruction, event detection, video tracking, object recognition (e.g., of persons or documents), object pose estimation, facial recognition (e.g., from a person in the environment or an image on a document), learning, indexing, motion estimation, or image analysis (e.g., identifying indicia within documents, such as photos, signatures, identification information, travel information, etc.), and so forth. One or more computer vision algorithms may be used to perform these tasks. Non-limiting examples of computer vision algorithms include: Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), Fast Retina Keypoint (FREAK), Viola-Jones algorithm, Eigenfaces approach, Lucas-Kanade algorithm, Horn-Schunck algorithm, Mean-shift algorithm, visual simultaneous location and mapping (vSLAM) techniques, sequential Bayesian estimators (e.g., Kalman filter, extended Kalman filter, etc.), bundle adjustment, adaptive thresholding (and other thresholding techniques), Iterative Closest Point (ICP), Semi-Global Matching (SGM), Semi-Global Block Matching (SGBM), feature point histograms, various machine learning algorithms (such as support vector machine, k-nearest neighbors algorithm, Naive Bayes, neural networks (including convolutional or deep neural networks), or other supervised/unsupervised models, etc.), and so forth.
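To make one of the listed techniques concrete, the sketch below pairs ORB keypoint detection with brute-force descriptor matching using OpenCV. This is only an illustration of the class of algorithms named above, not the wearable system's actual pipeline; "frame_a.png" and "frame_b.png" are hypothetical FOV-camera frames assumed to exist on disk.

```python
import cv2

img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)          # Oriented FAST + Rotated BRIEF
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Hamming distance suits ORB's binary descriptors; cross-check improves robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance = {matches[0].distance}")
```

Matches like these are the raw material for several of the tasks above, such as video tracking, motion estimation, and vSLAM-style pose estimation.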

One or more object recognizers 708 may also implement various text recognition algorithms to identify and extract text from the images. Some example text recognition algorithms include: optical character recognition (OCR) algorithms, deep learning algorithms (such as deep neural networks), pattern matching algorithms, algorithms for pre-processing, etc.
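As a minimal illustration of this class of algorithms, the sketch below runs the open-source Tesseract OCR engine through the pytesseract wrapper. It is offered only as an example of OCR in general, not as the system's own implementation; "document.png" is a hypothetical image of a document captured by the outward-facing camera.

```python
import pytesseract
from PIL import Image

image = Image.open("document.png").convert("L")  # simple grayscale pre-processing
text = pytesseract.image_to_string(image)        # run the OCR engine
print(text)
```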

Additionally or alternatively, the object recognitions may be performed by a variety of machine learning algorithms. Once trained, the machine learning algorithm can be stored by the HMD. Some examples of machine learning algorithms can include supervised or unsupervised machine learning algorithms, including regression algorithms (e.g., Ordinary Least Squares Regression), instance-based algorithms (e.g., Learning Vector Quantization), decision tree algorithms (e.g., classification and regression trees), Bayesian algorithms (e.g., Naive Bayes), clustering algorithms (e.g., k-means clustering), association rule learning algorithms (e.g., a-priori algorithms), artificial neural network algorithms (e.g., Perceptron), deep learning algorithms (e.g., Deep Boltzmann Machine or deep neural networks), dimensionality reduction algorithms (e.g., Principal Component Analysis), ensemble algorithms (e.g., Stacked Generalization), or other machine learning algorithms. In some embodiments, individual models can be customized for individual data sets. For example, the wearable device can generate or store a base model. The base model may be used as a starting point to generate additional models specific to a data type (e.g., a particular user in a telepresence session), a data set (e.g., a set of additional images obtained of the user in a telepresence session), conditional situations, or other variations. In some embodiments, the wearable HMD can be configured to utilize a plurality of techniques to generate models for analysis of the aggregated data. Other techniques may include using pre-defined thresholds or data values.
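The "base model plus per-user customization" idea can be sketched with scikit-learn's incremental-learning API. The data shapes and random features below are stand-ins invented for illustration: a classifier is first fit on pooled data (the base model), then refined on one user's additional samples.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_base, y_base = rng.normal(size=(1000, 16)), rng.integers(0, 2, size=1000)
X_user, y_user = rng.normal(size=(50, 16)), rng.integers(0, 2, size=50)

base_model = SGDClassifier(loss="log_loss")
base_model.partial_fit(X_base, y_base, classes=np.array([0, 1]))  # base model

# Customize the stored base model with a specific user's data.
for _ in range(5):
    base_model.partial_fit(X_user, y_user)
print("user-adapted accuracy:", base_model.score(X_user, y_user))
```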

Based on this information and the collection of points in the map database, the object recognizers 708a to 708n may recognize objects and supplement the objects with semantic information to give life to the objects. For example, if the object recognizer recognizes a set of points to be a door, the system may attach some semantic information (e.g., the door has a hinge and has a 90-degree movement about the hinge). If the object recognizer recognizes a set of points to be a mirror, the system may attach semantic information that the mirror has a reflective surface that can reflect images of objects in the room. The semantic information can include affordances of the objects as described herein. For example, the semantic information may include a normal of the object. The system can assign a vector whose direction indicates the normal of the object. In certain implementations, once the object recognizers 708 recognize an environment (e.g., a leisure or work environment, a public or private environment, a home environment, etc.) based on the objects recognized from the images around the user, the wearable system can associate the recognized environment with certain coordinates in a world map or with GPS coordinates. For example, once the wearable system recognizes (e.g., through the object recognizers 708 or the user's response) that the environment is the living room in the user's home, the wearable system can automatically associate the location of the environment with GPS coordinates or with a position in the world map. As a result, when the user enters the same location in the future, the wearable system can present or block virtual content based on the living room environment. The wearable system can also create, as part of the semantic information of the environment, a setting for muting the wearable device or for presenting customized content for the recognized environment. Accordingly, when the user enters the same location in the future, the wearable system can automatically present virtual content or mute the wearable device based on the environment without having to re-recognize the type of the environment, which can improve efficiency and reduce latency.
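In the spirit of the door/mirror examples above, attaching semantic information to a recognized object might be structured as follows. All names here are hypothetical stand-ins, not the system's actual data model: a recognizer yields a label, and the system augments it with affordances, a surface normal, and environment-level presentation settings.

```python
from dataclasses import dataclass, field

@dataclass
class RecognizedObject:
    label: str
    normal: tuple          # unit vector indicating the object's surface normal
    affordances: dict = field(default_factory=dict)

door = RecognizedObject(
    label="door",
    normal=(0.0, 0.0, 1.0),
    affordances={"hinged": True, "swing_degrees": 90},
)

environment = {
    "type": "living_room",
    "gps": (37.77, -122.42),  # illustrative coordinates
    "settings": {"mute_device": False, "present_virtual_content": True},
    "objects": [door],
}
print(environment["objects"][0])
```

Caching the environment record keyed by location is what would let the system skip re-recognition on a later visit, as the paragraph above describes.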

Over time, the map database grows as the system (which may reside locally or may be accessed through a wireless network) accumulates more data from the world. Once objects are recognized, the information may be transmitted to one or more wearable systems. For example, the MR environment 700 may include information about a scene happening in California. The environment 700 may be transmitted to one or more users in New York. Based on data received from an FOV camera and other inputs, the object recognizers and other software components can map the points collected from the various images, recognize objects, etc., such that the scene may be accurately "passed over" to a second user, who may be in a different part of the world. The environment 700 may also use a topological map for localization purposes.

FIG. 8 is a process flow diagram of an example of a method 800 of rendering virtual content in relation to recognized objects. The method 800 describes how a virtual scene may be presented to a user of the wearable system. The user may be geographically remote from the scene. For example, the user may be in New York, but may want to view a scene that is presently going on in California, or may want to go on a walk with a friend who resides in California.

At block 810, the wearable system may receive input from the user and other users regarding the environment of the user. This may be achieved through various input devices and knowledge already possessed in the map database. At block 810, the user's FOV camera, sensors, GPS, eye tracking, etc., convey information to the system. At block 820, the system may determine sparse points based on this information. The sparse points may be used in determining pose data (e.g., head pose, eye pose, body pose, or hand gestures) that can be used in displaying and understanding the orientation and position of various objects in the user's surroundings. At block 830, the object recognizers 708a-708n may crawl through these collected points and recognize one or more objects using the map database. This information may then be conveyed to the user's individual wearable system at block 840, and the desired virtual scene may accordingly be displayed to the user at block 850. For example, the desired virtual scene (e.g., a user in CA) may be displayed in the appropriate orientation, position, etc., in relation to the various objects and other surroundings of the user in New York.
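Purely to make the data flow of method 800 explicit, the following schematic sketch strings blocks 810 through 850 together. Every function body is a hypothetical stand-in for the stage the text describes; none of this is the patent's actual code.

```python
def receive_inputs():                    # block 810: FOV camera, sensors, GPS, eye tracking
    return {"images": [], "gps": None, "eye_tracking": None}

def determine_sparse_points(inputs):     # block 820: sparse points -> pose data
    return [{"xyz": (0, 0, 0), "descriptor": None}]

def recognize_objects(points, map_db):   # block 830: object recognizers 708a-708n
    return [{"label": "wall", "pose": None}]

def render_scene(objects, user_display):  # blocks 840-850: convey and display
    print(f"rendering virtual scene against {len(objects)} recognized objects")

map_database = {}
inputs = receive_inputs()
points = determine_sparse_points(inputs)
objects = recognize_objects(points, map_database)
render_scene(objects, user_display=None)
```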

FIG. 9 is a block diagram of another example of a wearable system. In this example, the wearable system 900 comprises a map 920, which may include the map database 710 containing map data for the world. The map may partly reside locally on the wearable system, and may partly reside at networked storage locations accessible by wired or wireless networks (e.g., in a cloud system). A pose process 910 may be executed on the wearable computing architecture (e.g., the processing module 260 or the controller 460) and utilize data from the map 920 to determine the position and orientation of the wearable computing hardware or the user. Pose data may be computed from data collected on the fly as the user is experiencing the system and operating in the world. The data may comprise images, data from sensors (such as inertial measurement units, which generally comprise accelerometer and gyroscope components), and surface information pertinent to objects in the real or virtual environment.

A sparse point representation may be the output of a simultaneous localization and mapping (e.g., SLAM or vSLAM, referring to a configuration in which the input is images/visual only) process. The system can be configured to not only find out where in the world the various components are, but also what the world is made of. Pose may be a building block that achieves many goals, including populating the map and using the data from the map.

In one embodiment, sparse point positions may not be completely adequate on their own, and further information may be needed to produce a multifocal AR, VR, or MR experience. Dense representations, generally referring to depth map information, may be utilized to fill this gap at least in part. Such information may be computed from a process referred to as Stereo 940, wherein depth information is determined using a technique such as triangulation or time-of-flight sensing. Image information and active patterns (such as infrared patterns created using active projectors), images acquired from image cameras, or hand gestures/totems 950 may serve as input to the Stereo process 940. A significant amount of depth map information may be fused together, and some of this may be summarized with a surface representation. For example, mathematically definable surfaces may be efficient (e.g., relative to a large point cloud) and digestible inputs to other processing devices such as game engines. Thus, the output of the Stereo process 940 (e.g., a depth map) may be combined in a Fusion process 930. Pose 910 may be an input to this Fusion process 930 as well, and the output of Fusion 930 becomes an input to the populating-the-map process 920. Sub-surfaces may connect with each other, such as in topographical mapping, to form larger surfaces, and the map becomes a large hybrid of points and surfaces.
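A worked numeric example of the triangulation a process like Stereo 940 could perform: for a rectified stereo camera pair, depth is Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity in pixels. The 700-pixel focal length and 64 mm baseline below are assumed values for illustration only.

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

f_px, baseline = 700.0, 0.064  # assumed: 700 px focal length, 64 mm baseline
for d in (64.0, 32.0, 8.0):
    print(f"disparity {d:5.1f} px -> depth {stereo_depth_m(f_px, baseline, d):.2f} m")
```

With these assumptions, disparities of 64, 32, and 8 pixels correspond to depths of roughly 0.7 m, 1.4 m, and 5.6 m, showing how nearby surfaces produce large disparities and distant ones vanishingly small ones.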

To resolve various aspects in a mixed reality process 960, various inputs may be utilized. For example, in the embodiment depicted in FIG. 9, game parameters may be inputs to determine that the user of the system is playing a monster battling game, with one or more monsters at various locations, monsters dying or running away under various conditions (such as if the user shoots a monster), walls or other objects at various locations, and the like. The world map may include information regarding the locations of objects, or semantic information about the objects, and the world map can be another valuable input to mixed reality. Pose relative to the world becomes an input as well and plays a key role in almost any interactive system.

Controls or inputs from the user are another input to the wearable system 900. As described herein, user inputs can include visual input, gestures, totems, audio input, sensory input, etc. In order to move around or play a game, for example, the user may need to instruct the wearable system 900 regarding what he or she wants to do. Beyond just moving oneself in space, there are various forms of user controls that may be utilized. In one embodiment, a totem (e.g., a user input device) or an object such as a toy gun may be held by the user and tracked by the system. The system will preferably be configured to know that the user is holding the item and understand what kind of interaction the user is having with the item (e.g., if the totem or object is a gun, the system may be configured to understand location and orientation, as well as whether the user is clicking a trigger or other sensed button or element that may be equipped with a sensor, such as an IMU, which may assist in determining what is going on, even when such activity is not within the field of view of any of the cameras).

Hand gesture tracking or recognition may also provide input information. The wearable system 900 may be configured to track and interpret hand gestures for button presses, for gesturing left or right, stop, grab, hold, etc. For example, in one configuration, the user may want to flip through emails or a calendar in a non-gaming environment, or do a "fist bump" with another person or player. The wearable system 900 may be configured to leverage a minimum amount of hand gestures, which may or may not be dynamic. For example, the gestures may be simple static gestures, such as an open hand for stop, thumbs up for ok, thumbs down for not ok, or a hand flip right or left or up/down for directional commands.

Eye tracking is another input (e.g., tracking where the user is looking to control the display technology to render at a specific depth or range). In one embodiment, the vergence of the eyes may be determined using triangulation, and then, using a vergence/accommodation model developed for that particular person, accommodation may be determined. Eye tracking can be performed by the eye camera(s) to determine eye gaze (e.g., the direction or orientation of one or both eyes). Other techniques can be used for eye tracking, such as, for example, measurement of electrical potentials by electrodes placed near the eye(s) (e.g., electrooculography).
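The triangulation step can be made concrete with a worked example. For eyes separated by an interpupillary distance (IPD) fixating a point straight ahead, the two gaze rays converge with a vergence angle theta, so the fixation depth is approximately Z = (IPD / 2) / tan(theta / 2). The 63 mm IPD below is an assumed value for illustration.

```python
import math

def fixation_depth_m(ipd_m, vergence_rad):
    if vergence_rad <= 0:
        return float("inf")  # parallel gaze rays: looking at optical infinity
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

ipd = 0.063  # assumed 63 mm interpupillary distance
for theta_deg in (0.5, 2.0, 7.2):
    z = fixation_depth_m(ipd, math.radians(theta_deg))
    print(f"vergence {theta_deg:4.1f} deg -> fixation depth {z:5.2f} m")
```

Under these assumptions, vergence angles of 0.5, 2.0, and 7.2 degrees correspond to fixation depths of roughly 7.2 m, 1.8 m, and 0.5 m; a per-person vergence/accommodation model would then map such depths to an expected accommodative state.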

Voice tracking can be another input, used alone or in combination with other inputs (e.g., totem tracking, eye tracking, gesture tracking, etc.). Voice tracking may include speech recognition, voice recognition, or a combination thereof. The system 900 can include an audio sensor (e.g., a microphone) that receives an audio stream from the environment. The system 900 can incorporate speaker recognition technology to determine who is speaking (e.g., whether the speech is from the wearer of the device, or from another person, or from a voice such as a recorded voice transmitted by a loudspeaker in the environment), as well as speech recognition technology to determine what is being said. The local data and processing module or the remote processing module 270 can process the audio data from the microphone (or audio data in another stream, such as, e.g., a video stream being watched by the user) to identify the content of the speech by applying various speech recognition algorithms, such as, for example, hidden Markov models, dynamic time warping (DTW)-based speech recognition, neural networks, deep learning algorithms such as deep feedforward and recurrent neural networks, end-to-end automatic speech recognition, machine learning algorithms (described with reference to FIG. 7), or other algorithms that use acoustic modeling or language modeling, etc.

Another input to the mixed reality process 960 may include event tracking. Data acquired from the outward-facing imaging system 464 can be used for event tracking, and the wearable system can analyze such imaging information (using computer vision techniques) to determine whether a triggering event is occurring, which can beneficially cause the system to automatically silence the visual or audible content being presented to the user.

The local data and processing module or the remote processing module 270 can also apply voice recognition algorithms that can identify the identity of the speaker, for example, whether the speaker is the user 210 of the wearable system 900 or another person with whom the user is conversing. Some example voice recognition algorithms can include frequency estimation, hidden Markov models, Gaussian mixture models, pattern matching algorithms, neural networks, matrix representation, vector quantization, speaker diarisation, decision trees, and dynamic time warping (DTW) techniques. Voice recognition techniques can also include anti-speaker techniques, such as cohort models and world models. Spectral features can be used in representing speaker characteristics. The local data and processing module or the remote data processing module 270 can use the various machine learning algorithms described with reference to FIG. 7 to perform the voice recognition.
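A common textbook form of spectral-feature speaker identification fits one Gaussian mixture model per enrolled speaker and picks the best-scoring model for an utterance. The sketch below uses scikit-learn and assumes precomputed feature frames (e.g., MFCC vectors); it is an illustration of the general technique, not the implementation of this disclosure:

import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=8):
    """Fit one GMM per enrolled speaker on spectral feature frames
    (arrays of shape [n_frames, n_coefficients], e.g., MFCCs)."""
    models = {}
    for speaker, frames in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        models[speaker] = gmm.fit(frames)
    return models

def identify_speaker(models, frames):
    """Return the speaker whose model gives the highest average
    per-frame log-likelihood for the utterance frames."""
    return max(models, key=lambda speaker: models[speaker].score(frames))

# Toy demonstration with synthetic "spectral" features:
rng = np.random.default_rng(0)
training = {"wearer": rng.normal(0.0, 1.0, (200, 13)),
            "other": rng.normal(3.0, 1.0, (200, 13))}
models = train_speaker_models(training)
print(identify_speaker(models, rng.normal(3.0, 1.0, (50, 13))))  # -> other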

Regarding the camera systems, the example wearable system 900 shown in FIG. 9 includes three pairs of cameras: a relatively wide FOV or passive SLAM pair of cameras arranged to the sides of the user's face, and a different pair of cameras oriented in front of the user to handle the stereo imaging process 940 and also to capture hand gestures and totem/object tracking in front of the user's face. The FOV cameras or the pair of cameras for the stereo process 940 may also be referred to as the cameras 16. The FOV cameras and the pair of cameras for the stereo process 940 may be part of the outward-facing imaging system 464 (shown in FIG. 4). The wearable system 900 can include eye tracking cameras (which may be shown as the eye cameras 24 and which may be part of the inward-facing imaging system 462 shown in FIG. 4) oriented toward the eyes of the user in order to triangulate eye vectors and other information. The wearable system 900 may also comprise one or more textured light projectors (such as infrared (IR) projectors) to inject texture into a scene.

Example of a wearable system including environmental sensors

FIG. 10 shows a schematic diagram illustrating an example of various components of a wearable system comprising environmental sensors. In some embodiments, the augmented reality display system 1010 may be an embodiment of the display system shown in FIG. 2. The AR display system 1010 may be a mixed reality display system in some implementations. The environmental sensors may include the sensors 24, 28, 30, 32, and 34. An environmental sensor may be configured to detect data regarding the user of the AR system (also referred to as a user sensor) or be configured to collect data regarding the user's environment (also referred to as an external sensor). For example, a physiological sensor may be an embodiment of a user sensor, while a barometer may be an external sensor. In some situations, a sensor may be both a user sensor and an external sensor. For example, when the user is in front of a reflective surface (e.g., a mirror), the outward-facing imaging system may acquire an image of the user's environment as well as an image of the user. As another example, a microphone may serve as both a user sensor and an external sensor because the microphone can acquire sound from the user and from the environment. In the example illustrated in FIG. 10, the sensors 24, 28, 30, and 32 may be user sensors, while the sensor 34 may be an external sensor.

As illustrated, the augmented reality display system 1010 may include various user sensors. The augmented reality display system 1010 may include a viewer imaging system 22. The viewer imaging system 22 may be an embodiment of the inward-facing imaging system 462 described in FIG. 4. The viewer imaging system 22 may include cameras 24 (e.g., infrared, UV, other non-visible light, and/or visible light cameras) paired with light sources 26 (e.g., infrared light sources) directed at and configured to monitor the user (e.g., the eyes 1001, 1002 and/or surrounding tissues of the user). The cameras 24 and light sources 26 may be operatively coupled to the local processing module 270. Such cameras 24 may be configured to monitor one or more of the orientations, shapes, and symmetry of the pupils (including pupil sizes), irises, and/or tissues surrounding the eyes, such as eyelids or eyebrows, of the respective eyes to conduct the various analyses disclosed herein. In some embodiments, imaging of the iris and/or retina of an eye may be used for secure identification of the user. With continued reference to FIG. 10, the cameras 24 may further be configured to image the retinas of the respective eyes, such as for diagnostic purposes and/or for orientation tracking based on the locations of retinal features, such as the fovea or features of the fundus. Iris and retina imaging or scanning may be performed for secure identification of users for, e.g., correctly associating user data with a particular user and/or presenting private information to the appropriate user. In some embodiments, in addition to or as an alternative to the cameras 24, one or more cameras 28 may be configured to detect and/or monitor various other aspects of the status of the user. For example, one or more cameras 28 may be inward-facing and configured to monitor the shape, position, movement, color, and/or other properties of features other than the eyes of the user, e.g., one or more facial features (e.g., facial expressions, voluntary movements, involuntary tics). In another example, one or more cameras 28 may be downward-facing or outward-facing and configured to monitor the position, movement, and/or other features or properties of the arms, hands, legs, feet, and/or torso of the user, of another person in the user's FOV, of objects in the FOV, etc. The cameras 28 may be used to image the environment, and the wearable device can analyze such images to determine whether a triggering event is occurring such that the visual or audible content being presented to the user by the wearable device should be silenced.

In some embodiments, as disclosed herein, the display system 1010 may include a spatial light modulator that variably projects, through a fiber scanner (e.g., the image injection devices 420, 422, 424, 426, 428 in FIG. 4), light beams across the retina of the user to form an image. In some embodiments, the fiber scanner may be used in conjunction with, or in place of, the cameras 24 or 28 to, e.g., track or image the user's eyes. For example, as an alternative to or in addition to the scanning fiber being configured to output light, the health system may have a separate light-receiving device to receive light reflected from the user's eyes and to collect data associated with that reflected light.

With continued reference to FIG. 10, the cameras 24, 28 and light sources 26 may be mounted on the frame 230, which may also hold the waveguide stacks 1005, 1006. In some embodiments, sensors and/or other electronic devices (e.g., the cameras 24, 28 and light sources 26) of the display system 1010 may be configured to communicate with the local processing and data module 270 through the communication links 262, 264.

In some embodiments, in addition to providing data regarding the user, one or both of the cameras 24 and 28 may be utilized to track the eyes to provide user input. For example, the viewer imaging system 22 may be utilized to select items on virtual menus and/or to provide other input to the display system 2010, e.g., for providing user responses in the various tests and analyses disclosed herein.

In some embodiments, the display system 1010 may include motion sensors 32, such as one or more accelerometers, gyroscopes, posture sensors, gait sensors, balance sensors, and/or IMU sensors. The sensors 30 may include one or more inwardly directed (user directed) microphones configured to detect sounds, and various properties of those sounds, including the intensity and type of the sounds detected, the presence of multiple signals, and/or signal location.

The sensors 30 are schematically illustrated as being connected to the frame 230. It will be appreciated that this connection may take the form of a physical attachment to the frame 230 and may be anywhere on the frame 230, including the ends of the temples of the frame 230 that extend over the user's ears. For example, the sensors 30 may be mounted at the ends of the temples of the frame 230, at a point of contact between the frame 230 and the user. In some other embodiments, the sensors 30 may extend away from the frame 230 to contact the user 210. In yet other embodiments, the sensors 30 may not be physically attached to the frame 230; rather, the sensors 30 may be spaced apart from the frame 230.

In some embodiments, the display system 1010 may further include one or more environmental sensors 34 configured to detect objects, stimuli, people, animals, locations, or other aspects of the world around the user. For example, the environmental sensors 34 may include one or more cameras, altimeters, barometers, chemical sensors, humidity sensors, temperature sensors, external microphones, light sensors (e.g., light meters), timing devices (e.g., clocks or calendars), or any combination or subcombination thereof. In some embodiments, multiple (e.g., two) microphones may be spaced apart to facilitate sound source location determinations. In various embodiments including environment-sensing cameras, the cameras may be located, for example, facing outward so as to capture images similar to at least a portion of the user's ordinary field of view. The environmental sensors may further include emissions devices configured to receive signals such as laser, visible light, invisible wavelengths of light, or sound (e.g., audible sound, ultrasound, or other frequencies). In some embodiments, one or more environmental sensors (e.g., cameras or light sensors) may be configured to measure the ambient light (e.g., luminance) of the environment (e.g., to capture the lighting conditions of the environment). Physical contact sensors, such as strain gauges, curb feelers, or the like, may also be included as environmental sensors.

In some embodiments, the display system 1010 may further be configured to receive other environmental inputs, such as GPS location data, weather data, date and time, or other available environmental data that may be received from the Internet, satellite communication, or other suitable wired or wireless data communication methods. The processing module 260 may be configured to access further information characterizing the user's location, such as pollen counts, demographics, air pollution, environmental toxins, information from smart thermostats, lifestyle statistics, or proximity to other users, buildings, or a healthcare provider. In some embodiments, the information characterizing the location may be accessed using cloud-based or other remote databases. The processing module 260 may be configured to obtain such data and/or to further analyze data from any one or combination of the environmental sensors.

The display system 1010 may be configured to collect and store data obtained through any of the sensors and/or inputs described above for extended periods of time. Data received at the device may be processed and/or stored at the local processing module and/or remotely (e.g., as shown in FIG. 2, at the remote processing module 270 or the remote data repository 280). In some embodiments, additional data, such as date and time, GPS location, or other global data, may be received directly at the local processing module. Data regarding content being delivered to the user by the system, such as images, other visual content, or auditory content, may be received at the local processing module as well.

Automatic control of wearable display systems

As described above, situations may arise in which it is desirable or even necessary to de-emphasize or block virtual content, or even to turn off the wearable device's display of virtual content. Such situations may occur in response to a triggering event, e.g., an emergency condition, an unsafe condition, or a condition in which it may be desirable to present less virtual content to the user of the wearable device so that the user can focus more attention on the physical world outside the user. The triggering event can also be based on the environment in which the user is using the system. The wearable system can block virtual content or present customized virtual content based on the user's environment. For example, if the wearable system detects that the user is at work, the wearable system can block video games.

Embodiments of the wearable devices disclosed herein can include components and functionality capable of determining whether such a situation is occurring and taking appropriate action to silence the wearable system, for example, by silencing the virtual content (e.g., de-emphasizing, blocking, or turning off the display of the virtual content) or by silencing one or more components of the wearable system (e.g., turning off one or more components, attenuating one or more components, putting one or more components into a sleep mode). As used herein, silencing virtual content can generally include de-emphasizing, attenuating, or reducing the amount or impact of the visual or auditory content presented to the user by the wearable device, up to and including turning the content off. Silencing can include visible silencing (e.g., turning off or dimming the display 220) or audible silencing (e.g., reducing sounds emitted by the speaker 240 or turning the speaker off entirely). Silencing can include increasing the transparency of visible virtual content, which permits the user to more easily see through such virtual content to perceive the outside physical world. Silencing can also include reducing the size of virtual content or changing its placement so that it is less prominent in the user's field of view. Silencing can further include blocking content from the wearable device's display or selectively permitting some content but not other content. Thus, silencing may be implemented via a blacklist (which identifies content to be blocked) or via a whitelist (which identifies content to be permitted). In some implementations, a combination of blacklisting and whitelisting can be used to effectuate the content silencing. Additionally or alternatively, a graylist can be used to indicate content that should temporarily be blocked (or allowed) until another condition or event occurs. For example, in an office environment, certain virtual content may be graylisted and temporarily blocked from display to the user until the user's administrator overrides the block and moves the content to the whitelist, or permanently blocks the content by moving it to the blacklist. Various embodiments of the wearable devices described herein can use some or all of the foregoing techniques to silence the virtual content presented to the user.
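For illustration, the blacklist/whitelist/graylist logic described above can be sketched as a small policy object in Python; the content identifiers and the default-allow fallback are assumptions made for the example, not part of this disclosure:

from dataclasses import dataclass, field

@dataclass
class MutePolicy:
    """Blacklisted content is blocked, whitelisted content is allowed,
    and graylisted content is held back until an overriding decision
    (e.g., by an administrator) moves it to one of the other lists."""
    blacklist: set = field(default_factory=set)
    whitelist: set = field(default_factory=set)
    graylist: set = field(default_factory=set)
    default_allow: bool = True

    def is_allowed(self, content_id: str) -> bool:
        if content_id in self.blacklist:
            return False
        if content_id in self.whitelist:
            return True
        if content_id in self.graylist:
            return False  # temporarily blocked pending review
        return self.default_allow

policy = MutePolicy(blacklist={"video_game"}, graylist={"social_feed"})
print(policy.is_allowed("social_feed"))  # False until moved off the graylist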

In the following, various non-limiting, illustrative examples of user experiences in which it may be desirable to silence virtual content will be described. Following these examples, techniques and apparatus for determining that an event is occurring that triggers the wearable device to silence the virtual content will be described.

Example of silencing a wearable device in a surgical context

FIGS. 11A and 11B illustrate an example of silencing an HMD in a surgical context. In FIG. 11A, a surgeon is performing an operation on a heart 1147. The surgeon may be wearing an HMD as described herein. The surgeon can perceive the heart 1147 in his FOV. The surgeon can also perceive virtual objects 1141, 1142, and 1145 in his FOV. The virtual objects 1141, 1142, and 1145 may relate to various metrics associated with the heart (e.g., heart rate, ECG, etc.) as well as diagnoses (e.g., arrhythmia, cardiac arrest, etc.). The HMD may present the virtual objects 1141, 1142, and 1145 based on information acquired by the environmental sensors of the wearable system or by communicating with another device or the remote processing module of the wearable system.

During the operation, however, an accident or an emergency condition may occur. For example, there may be a sudden, undesired flow of blood at the surgical site (shown as the spurt of blood 1149 from the heart 1147 in FIG. 11B). The wearable system can detect this situation using computer vision techniques, for example, by detecting (in images acquired by the outward-facing camera) rapidly occurring changes in keypoints or features in or near the surgical site. The wearable system can also make the detection based on data received from other devices or the remote processing module.

The wearable system can determine that this situation meets the criteria of a triggering event in which the display of visual or auditory virtual content should be silenced so that the surgeon can focus on the accident or emergency condition. Accordingly, the wearable system can automatically silence the virtual content in response to the automatic detection of the triggering event (in this example, the spurt of blood 1149). As a result, in FIG. 11B, the HMD does not present the virtual objects 1141, 1142, and 1145 to the surgeon, and the surgeon can direct full attention to stopping the spurt of blood.

The HMD may resume normal operations and resume presenting virtual content to the surgeon in response to a termination event. The termination event may be detected when the triggering event ends (e.g., the blood stops spurting) or when the user enters another environment in which the triggering event is not present (e.g., when the user walks out of the emergency room). The termination event may also be based on a threshold period of time. For example, the HMD may resume normal operations after a period of time has passed since the triggering event was detected (e.g., 5 minutes, 15 minutes, 1 hour, etc.) or after detecting that the triggering event has been over for a period of time. In the former case, the wearable system may resume the display (or other components of the wearable system) before the triggering event is over.
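The timing logic for such a termination event might look like the following sketch, in which content resumes either when the trigger is observed to have ended or when a threshold period has elapsed since detection; the five-minute default is an illustrative value:

import time

class MuteController:
    """Minimal sketch of resuming content after a termination event."""
    def __init__(self, resume_after_s=300):
        self.resume_after_s = resume_after_s  # illustrative 5-minute threshold
        self.trigger_started = None

    def on_trigger(self):
        """Record the time at which a triggering event was detected."""
        self.trigger_started = time.monotonic()

    def should_resume(self, trigger_still_active: bool) -> bool:
        """Resume if the trigger ended, or if the threshold period elapsed."""
        if self.trigger_started is None:
            return True  # nothing is muted
        elapsed = time.monotonic() - self.trigger_started
        if not trigger_still_active or elapsed > self.resume_after_s:
            self.trigger_started = None
            return True
        return False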

Example of silencing a wearable device in an industrial context

Similar techniques for silencing an HMD can also be applied in other contexts. For example, these techniques can be used in an industrial context. As an example, a worker may be welding metal workpieces in a factory while wearing an HMD. The worker can perceive, via the HMD, the metal he is working on as well as virtual content associated with the welding process. For example, the HMD may display virtual content including instructions on how to weld the pieces.

However, an accident or emergency condition may occur while the worker is using the HMD. For example, the worker's clothing might accidentally catch fire, or the welding torch might overheat or set fire to the workpiece or nearby materials. Other emergency conditions might occur, such as a spill of industrial chemicals in the worker's environment. The wearable system can detect these situations as events that trigger the HMD to silence the virtual content. As further described with reference to FIGS. 12A-12C, the wearable system can detect a triggering event by analyzing images of the worker's environment using computer vision algorithms (or machine learning algorithms). For example, to detect fire or overheating, the wearable system may analyze infrared (IR) images taken by the outward-facing camera, because heat from the fire or overheating will be particularly evident in the IR images. The wearable system may automatically silence the display of the virtual content in response to detecting the triggering event. In some cases, the wearable system may provide an alert indicating that the HMD will be automatically turned off unless the user instructs otherwise.
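As an illustration of the IR-based detection, a trigger can be declared when more than a small fraction of IR pixels exceeds a radiometric threshold; both thresholds below are made-up values that would be calibrated per sensor:

import numpy as np

def detect_hotspot(ir_frame, temp_threshold=200.0, area_fraction=0.01):
    """Flag a fire/overheat trigger from an outward-facing IR camera frame,
    assumed to be a 2-D array of per-pixel radiometric values."""
    hot = ir_frame > temp_threshold
    return hot.mean() >= area_fraction  # True -> triggering event: silence content

frame = np.random.uniform(20.0, 40.0, (240, 320))
frame[100:140, 150:200] = 400.0  # simulated flame region
print(detect_hotspot(frame))  # -> True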

In certain embodiments, the worker can manually actuate the reality button 263, which can cause the HMD to silence the virtual content. For example, the worker may perceive an emergency or unsafe condition (e.g., by smelling overheated materials) and actuate the reality button so that the worker can more easily focus on the actual reality. To avoid accidentally silencing the virtual content while the worker is still interested in it, the HMD may provide an alert to the worker before performing the silencing operation. For example, upon detecting actuation of the reality button, the HMD may provide a message to the worker indicating that the virtual content will be silenced shortly (e.g., within a few seconds) unless the worker indicates otherwise (e.g., by actuating the reality button again or by changing his pose). Further details on such alerts are described below with reference to FIGS. 14A and 14B.

FIG. 11C illustrates a landscaping worker operating a machine (e.g., a lawn mower). Like many repetitive jobs, mowing can be tedious. After a period of time, the worker may lose interest, which increases the likelihood of an accident. Further, it may be difficult to attract qualified workers or to ensure that workers perform adequately.

The worker shown in FIG. 11C wears an HMD that presents virtual content 1130 in the user's field of view to enhance work performance. For example, as shown in scene 1100c, the HMD can present a virtual game in which the goal is to follow a virtual mapped pattern. Points are received for accurately following the pattern and hitting certain point bonuses before they disappear. Points are deducted for deviating from the pattern or for deviating so as to come too close to certain physical objects (e.g., trees, sprinkler heads, roads).

However, the worker may encounter an incoming vehicle, which might be traveling at a very fast speed, or a pedestrian, who might walk in front of the machine. The worker may need to react to the incoming vehicle or pedestrian (e.g., by slowing down or changing direction). The wearable system can use its outward-facing imaging system to acquire images of the worker's surroundings and use computer vision algorithms to detect the incoming vehicle or pedestrian.

The wearable system can calculate a speed of, or a distance from, the worker based on the acquired images (or based on location-based data acquired from other environmental sensors, such as GPS). If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast, or the vehicle or pedestrian is very close to the worker), the HMD can automatically silence the virtual content (e.g., by pausing the game or moving the virtual game outside of the FOV) to reduce distractions and allow the worker to focus on maneuvering the mower to avoid the incoming vehicle or pedestrian. For example, as shown in scene 1132c, when the HMD silences the virtual content, the user does not perceive the virtual game piece 1130.
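The threshold test on an approaching vehicle or pedestrian can be sketched as follows, assuming a short history of distance estimates derived from the imagery or GPS; the numeric thresholds are illustrative:

def approach_trigger(distances_m, dt_s, min_distance_m=3.0, max_speed_mps=6.0):
    """Return True if the tracked object is too close or closing too fast.
    `distances_m` holds successive distance estimates to the object,
    sampled every `dt_s` seconds."""
    if distances_m[-1] < min_distance_m:
        return True  # object is very close to the worker
    closing_speed = (distances_m[0] - distances_m[-1]) / (dt_s * (len(distances_m) - 1))
    return closing_speed > max_speed_mps  # object is approaching very fast

print(approach_trigger([25.0, 22.0, 19.0], dt_s=0.2))  # 15 m/s closing -> True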

When the wearable system detects a termination condition, e.g., when the triggering event has ended, the HMD can resume normal operations and resume presenting virtual content to the worker. In some implementations, the HMD can silence the virtual content while the rest of the HMD continues to operate. For example, the wearable system can continuously image the user's location using one or more environmental sensors (such as GPS or the outward-facing camera). When the wearable system determines that the incoming vehicle or the pedestrian has passed the worker, the wearable system can turn the virtual content back on.

In some implementations, the wearable system can present an alert before resuming normal operations or resuming presentation of the virtual content. This can prevent the virtual content from being turned on while the triggering event is still in progress (e.g., while the user is still in the emergency condition), in case the user needs time to recover after the emergency or for any other reason. In response to the alert, the user can actuate the reality button 263 if the user wishes the virtual content to remain silenced. In some implementations, the user can resume the virtual content during the triggering event, either through manual user input or automatically. This allows for situations in which virtual content can help the user during the triggering event. For example, the system may automatically detect that a child is choking, thereby silencing a parent's virtual content. Where the system has an emergency response application installed, the system may automatically and selectively turn on only the virtual content related to the emergency response application if the parent does not respond within a threshold period of time or if the parent does not take the correct actions.

Example of silencing a wearable device in an educational context

FIG. 11D illustrates an example of silencing an HMD in an educational context. FIG. 11D shows a classroom 1100d in which two students 1122 and 1124 are physically sitting (in this example, the class is a yoga class). While the students 1122 and 1124 wear their HMDs, they can perceive a virtual avatar 1126 for a student and a virtual avatar 1110 for the teacher, neither of whom is physically present in the room. The student may attend the class at his home (rather than in the classroom 1100d).

In one situation, the student 1122 may want to discuss a class-related question (e.g., how to perform a particular yoga pose) with the other student 1124 during the class. The student 1122 can walk toward the student 1124. The wearable system of the student 1124 can detect that the other student 1122 is in front of her and automatically silence the audio and virtual content presented by the HMD to allow the students 1124 and 1122 to interact in person, with less (or no) virtual content being presented. For example, the wearable system can use a facial recognition algorithm to detect the presence of a physical person in front of the HMD (which may be an example of a triggering event for the HMD to automatically silence the virtual content). In response to this detection, the HMD can turn off (or attenuate) the audio and virtual content from the HMD. In the example shown in FIG. 11D, once the HMD of the student 1124 is silenced, the student 1124 will not perceive the virtual avatars 1126 and 1110. However, the student 1124 can still see and interact with the student 1122, who is also in the physical classroom.

As another example, the teacher 1110 may instruct the students to engage in group discussions, and the students 1122 and 1124 may be sorted into the same group. In this example, the HMDs can silence the virtual content and allow the students 1112 and 1124 to engage in a face-to-face discussion. The HMDs may also reduce the size of the virtual avatars 1110 and 1126 to reduce perceptual confusion during the group discussion.

Example of silencing a wearable device in an entertainment context

The wearable system can also detect a triggering event and silence audio/visual content in an entertainment context. For example, the wearable system can monitor the user's physiological data while the user is playing a game. If the physiological data indicate that the user is experiencing an agitated emotional state (e.g., being very angry due to a loss in the game, or being very scared during the game), the wearable system can detect the presence of a triggering event and can thereby cause the HMD to automatically silence the virtual content. The wearable system can compare the physiological data with one or more thresholds to detect the triggering event. As examples, the wearable system can monitor the user's heart rate, respiratory rate, pupil dilation, etc. The threshold condition may depend on the type of game the user is playing. For example, if the user is playing a relatively relaxing game (e.g., a life simulation game), the threshold condition (e.g., a threshold heart rate, respiratory rate, etc.) may be lower than if the user were playing a competitive game (which may require a high degree of concentration and may cause the user's heart rate to rise). If the user's physiological state passes the threshold, the wearable system is triggered to silence the virtual content provided by the HMD.
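A sketch of the game-dependent physiological thresholds described above follows; the heart-rate and respiratory-rate numbers are illustrative, not clinically derived:

# Illustrative per-game thresholds: a calm game warrants a lower alert
# threshold than a competitive game that naturally raises heart rate.
HEART_RATE_THRESHOLDS_BPM = {"life_sim": 100, "racing": 140}

def agitation_trigger(game_type, heart_rate_bpm, respiration_rate_bpm,
                      max_respiration_bpm=30):
    """Return True (triggering event) if either physiological signal
    passes its threshold for the current game type."""
    hr_limit = HEART_RATE_THRESHOLDS_BPM.get(game_type, 120)  # fallback limit
    return heart_rate_bpm > hr_limit or respiration_rate_bpm > max_respiration_bpm

print(agitation_trigger("life_sim", heart_rate_bpm=112, respiration_rate_bpm=18))  # True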

As another example, the virtual content may be associated with unpleasant music. The unpleasant music can be a triggering event for silencing the audio/visual content of the HMD. The wearable system can detect the user's reaction using the inward-facing imaging system (e.g., to determine the user's facial expression or pupil dilation) or other environmental sensors (e.g., to detect the user's respiratory rate or heart rate). For example, the wearable system can detect that the user frowns when the user hears certain music.

The wearable system can generate an alert message indicating that the user is experiencing an agitated emotional state. The HMD can display a virtual graphic suggesting that the user manually actuate the reality button 263 to silence the display of the virtual content. In some embodiments, if the HMD does not receive the user's confirmation within a certain period of time, the HMD can automatically turn off the virtual content. The HMD can also automatically turn off virtual content in response to detecting the triggering event. For example, when unpleasant music is being played, the HMD can automatically silence the sound or turn down its volume. Meanwhile, the HMD may still play the virtual images associated with the sound.

Example of silencing a wearable device in a shopping context

FIG. 11E illustrates an example of silencing an HMD in a shopping context. In this example, the user 210 may wear the HMD in a shopping mall 1100e. The user 210 can perceive, using the HMD, virtual content such as her shopping list, price tags, recommended merchandise (and their locations in the store), and so on. The user can also perceive a physical booth 1150 with a chef 1152 selling various spices and cooking utensils.

The wearable system can detect the location of the user 210 using environmental sensors (such as GPS or the outward-facing imaging system). If the wearable system determines that the user 210 is within a threshold distance of the booth 1150, the HMD can automatically silence the display of the virtual content so that the user can interact with the chef 1152 in person. This can advantageously reduce perceptual confusion while the user 210 is conversing with the chef 1152. Further, the user may be able to tell, for example, which items in the booth are physical items (rather than virtual items). The wearable system can detect a termination condition, e.g., when the user 210 leaves the booth 1150, and the HMD can un-silence the display of the virtual content in response to detecting the termination condition.

Example of silencing virtual content based on an environment

In addition to or as an alternative to silencing virtual content based on an event in the environment (e.g., an emergency condition) or an object in the environment (e.g., the presence of another user's face), the wearable system can silence virtual content based on characteristics of the user's environment. For example, the wearable system can identify such characteristics of the user's environment based on objects observed by the outward-facing imaging system 464. Based on the type of the user's environment (e.g., home, office, break or play area, outdoors, retail store, mall, theater or concert venue, restaurant, museum, vehicle (e.g., car, airplane, bus, train), etc.), the wearable system can customize the virtual content or silence certain virtual content.

In addition to or as an alternative to using the wearable system's outward-facing imaging system 464, as will be further described herein, the wearable system can use a location sensor (e.g., a GPS sensor) to determine the user's location, from which the nature of the user's environment can be inferred. For example, the wearable system may store locations of interest to the user (e.g., a home location, an office location, etc.). The location sensor can determine the location, a comparison with the known locations of interest can be made, and the wearable system can infer the user's environment (e.g., if the GPS coordinates of the system are sufficiently close to the user's home location, the wearable system can determine that the user is in a home environment and apply appropriate content blocking (or allowance) based on the home environment).
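A minimal sketch of this GPS-based inference, assuming a stored table of places of interest and a haversine great-circle distance; the coordinates and the 75-meter radius are illustrative assumptions:

import math

PLACES = {  # illustrative stored places of interest as (lat, lon)
    "home": (40.0150, -105.2705),
    "office": (40.0176, -105.2797),
}

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def infer_environment(gps_fix, radius_m=75.0):
    """Return the label of a stored place within radius_m of the GPS fix,
    or None if the fix matches no known place of interest."""
    for label, coords in PLACES.items():
        if haversine_m(gps_fix, coords) <= radius_m:
            return label
    return None

print(infer_environment((40.0151, -105.2706)))  # -> home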

As an example, the wearable system may include various virtual content, e.g., virtual content related to social media, game invitations, audiovisual content, office content, and navigation applications. The outward-facing imaging system 464 may detect that the user is in an office (e.g., by using an object recognizer to recognize the presence of a computer monitor, a business phone, or work documents on a desk). The wearable system can accordingly allow office applications and block social media feeds and game invitations so that the user can focus on work. However, the wearable system may be configured not to silence the navigation applications, because they can help direct the user to a client's destination. Yet when the wearable system detects that the user is sitting in a chair in the office away from the user's desk (e.g., via analysis of images acquired by the outward-facing imaging system), the wearable system may be configured to allow the social media feeds, alone or in combination with the office (or navigation) applications, because the user may be taking a short break. Additionally or alternatively, the wearable system can tag an environment and designate, based on user input, the content to be blocked or allowed. For example, the wearable system can receive an indication from the user that a scene is the user's bedroom, and the user can select an option to allow entertainment content or block work content at that scene. Accordingly, when the user later re-enters the bedroom, the system can determine that the user is in the bedroom and automatically block or allow content based on the user input.

In some situations, the wearable system can silence or present virtual content based on a combination of the environment and the user's role relative to the environment. For example, when the wearable system detects that the user is in an office (e.g., by recognizing office furniture using the object recognizers 708), the wearable system can present a set of office tools for an employee and block access to the Internet (or other applications). However, if a supervisor enters the same office environment, the wearable system may allow the supervisor to access the Internet, because the supervisor may have greater access to virtual content.

As another example, the wearable system may recognize that the user is in a house, e.g., by recognizing the presence of furniture (e.g., a couch, a television, a dining table, etc.) in the environment or via manual tagging, e.g., by the user. Accordingly, the wearable system may allow certain virtual content, such as social media feeds, video games, or telepresence invitations from/to friends. In certain implementations, the virtual content perceivable by two users may differ even though the users are in the same environment. For example, a child and a parent may both be in the living environment, but the wearable system may block virtual content that is not age-appropriate for the child while allowing the parent to view such virtual content. Additional examples of silencing virtual content based on location are described further below with reference to FIGS. 11F and 11G.

Although the examples are described with reference to blocking virtual content, the wearable system can also silence virtual content based on the location, e.g., by de-emphasizing some or all of the virtual content or by turning off the display based on the location.

Example of selective content silencing in a work environment

FIG. 11F illustrates an example of selectively blocking content in a work environment. FIG. 11F shows two scenes 1160a and 1160b, in which some virtual content is blocked in the scene 1160b. The scenes 1160a and 1160b show an office 1100f in which the user 210 is physically standing. The user 210 may wear an HMD 1166 (which may be an embodiment of the HMD described with reference to FIG. 2). The user can perceive, via the HMD, physical objects in the office, such as a desk 1164a, a chair 1164b, and a mirror 1164c. The HMD can also be configured to present virtual objects, such as a virtual menu 1168 and a virtual avatar 1164 for a game.

In some situations, the wearable system can be configured to selectively silence virtual content in the user's environment such that not all of the virtual content is presented to the user by the HMD 1166. As an example, the wearable system can receive data regarding the environment obtained from one or more environmental sensors of the wearable system. The environmental data may include images of the office, alone or in combination with GPS data. The environmental data can be used to recognize objects in the user's environment or to determine the user's location based on the recognized objects. With reference to FIG. 11F, the wearable system can analyze the environmental data to detect the physical presence of the desk, the chair 1164b, and the mirror 1164c. Based at least partly on the received data detecting the desk 1514, the chair 1512, and the mirror 1164c, the wearable system 200 can recognize the environment as an office environment. For example, the wearable system can make this determination based on contextual information associated with the objects, such as the characteristics of the objects and the layout of the objects. The collection of objects in the user's environment can also be used to determine the likelihood that the user is at a particular location. As an example, the wearable system may determine that the presence of an L-shaped desk and a rolling chair indicates a high likelihood that the environment is an office. The wearable system can train and apply a machine learning model (e.g., a neural network) to determine the environment. Various machine learning algorithms (such as neural networks or supervised learning) can be trained and used for recognizing the environment. In various embodiments, one or more of the object recognizers 708 can be used for such recognition. Alternatively, the user may have previously tagged this location as "work" via user input.
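In place of a trained model, the object-set reasoning can be sketched with hand-written environment signatures scored by overlap; the object labels are illustrative, and a deployed system would use the machine-learned recognizers described above:

# Illustrative object signatures per candidate environment.
ENVIRONMENT_SIGNATURES = {
    "office": {"l_shaped_desk", "rolling_chair", "monitor", "business_phone"},
    "lounge": {"sofa", "television", "coffee_table"},
}

def classify_environment(detected_objects):
    """Score each candidate environment by the fraction of its signature
    objects observed; return the best label and its score (a crude
    stand-in for the likelihood described above)."""
    def score(label):
        sig = ENVIRONMENT_SIGNATURES[label]
        return len(sig & detected_objects) / len(sig)
    best = max(ENVIRONMENT_SIGNATURES, key=score)
    return best, score(best)

print(classify_environment({"l_shaped_desk", "rolling_chair", "mirror"}))
# -> ('office', 0.5)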

Based on the environment, the wearable system can automatically block/unblock (or allow/disallow) certain virtual content. The wearable system can access one or more settings associated with the environment for blocking virtual content. With reference to FIG. 11F, the settings associated with the office environment may include silencing video games. Accordingly, as shown in the scene 1160b, the wearable system can automatically block the virtual avatar 1524 from being presented by the HMD 1166 to allow the user 210 to focus on his work. As another example, the wearable system can be configured to present the image of the virtual avatar 1164 but nevertheless block one or more user interface operations associated with the virtual avatar 1164. In this example, the user 210 would still be able to see the virtual avatar 1164, but the user 210 cannot interact with the virtual avatar 1164 while the wearable system applies the settings associated with the work environment.

In certain implementations, the settings for silencing virtual content at a location may be user-configurable. For example, the user can select which virtual content to block for an environment and which tag to apply to that location and/or virtual content selection. With reference to FIG. 11F, the user 210 can elect to block the virtual avatar 1164 from appearing in the HMD while the user 210 is in the office 1100f. The wearable system can then store the setting associated with the office 1100f and apply the setting to selectively block the virtual avatar 1164. Accordingly, as shown in the scene 1160b, the virtual avatar 1164 is blocked from the user's view.

Although the examples are described with reference to determining an environment (e.g., an office) and silencing virtual content based on the environment, the wearable system can also silence virtual content (or components of the wearable system) based on environmental factors or on the similarity of the content to other blocked content, such that the wearable system does not have to determine the user's specific location. This may be advantageous where the wearable system does not include a location sensor, where the location sensor is obstructed (e.g., the path to the GPS satellites is blocked), or where the location accuracy is insufficient to determine the characteristics of the environment. The wearable system can recognize objects in the environment and determine the character of the environment generally (e.g., a casual environment, a public environment, or a work environment) and silence virtual content based on the character of the environment. For example, the wearable system may recognize that the user's environment includes a couch and a television. The wearable system can accordingly determine that the user is in a casual environment, without needing to know whether the casual environment is actually the user's home or a break room at the user's workplace. In some implementations, the system will determine the type of the environment and provide a notification to the user to accept or reject the environment tag.

Example of selective content blocking in a lounge environment

FIG. 11G illustrates an example of selectively blocking content in a lounge environment. FIG. 11G shows two scenes 1170a and 1170b. The lounge 1100g shown in FIG. 11G shows the user 210 wearing the HMD 1166 and physically standing in the lounge 1100g. The lounge 1100g includes physical objects such as a table 1172c, a couch 1172b, and a television 1172a. The HMD 1166 can also be configured to present virtual content, such as a virtual avatar 1176 for a game and a virtual menu 1174, neither of which is physically present in the room. In this example, the virtual menu 1174 presents the user 210 with options 1178a, 1178b, 1178c to play a crossword puzzle, start a conference call, or access work emails, respectively.

The wearable system can be configured to silence some virtual content based on the user's environment. For example, the outward-facing imaging system 464 can acquire images of the user's environment. The wearable system can analyze the images and detect the physical presence of the coffee table, the couch 1172b, and the television 1172a. Based at least partly on the presence of the coffee table, the couch 1172b, and the television 1172a, the wearable system 200 can then recognize that the user 210 is in a lounge environment.

The wearable system can present or silence virtual content based on one or more settings associated with the user's environment. The settings may include silencing some of the virtual content in the environment or silencing a portion of a piece of virtual content. As an example of silencing some of the virtual content, the wearable system can block the virtual avatar 1176 from display while keeping the virtual menu 1174. As an example of blocking a portion of a piece of virtual content, the scene 1170b shows an example of blocking work-related content while the user is in the lounge. As shown in the scene 1170b, rather than blocking the entire virtual menu 1174, the wearable system can selectively block the conference option 1178b and the work email option 1178c but keep the crossword puzzle option 1178a available for interaction, because the crossword puzzle option 1178a is entertainment-related while the options 1178b and 1178c are work-related, and the settings associated with the lounge environment enable blocking of work-related content. In certain implementations, the options 1178b and 1178c may still be visible to the user, but the wearable system can block the user from interacting with the options 1178b and 1178c while the user 210 is in the lounge 1100g.

In some implementations, the user can configure the silencing settings associated with an environment, and the wearable system can automatically block similar virtual content even though specific pieces of the virtual content may not be part of the silencing settings. For example, the user can configure a work setting to silence a social networking application. The wearable system can automatically silence game invitations as well, because both the game invitations and the social networking application are considered entertainment activities. As another example, the wearable system can be customized to present work emails and office tools in the office environment. Based on this setting, the wearable system can also present work-related contacts for a telepresence tool to tailor the virtual content to the office environment. The wearable system can use one or more of the machine learning algorithms described with reference to the object recognizers 708 in FIG. 7 to determine whether virtual content is similar to the virtual content that has been blocked (or tailored).

Although the examples in scenes 1170a and 1170b are described with reference to blocking content based on the user's environment, in some implementations the settings associated with an environment may instead specify which virtual content to allow. For example, the settings associated with a lounge environment may enable interaction with entertainment-related virtual content.

Example Trigger Events

FIGS. 12A, 12B, and 12C illustrate examples of silencing virtual content presented by an HMD based at least in part on the occurrence of a trigger event. In FIG. 12A, a user of an HMD can perceive physical objects in his FOV 1200a. The physical objects may include a television (TV) 1210, a remote control 1212, a TV stand 1214, and a window 1216. The HMD here may be an embodiment of the display 220 described with reference to FIGS. 2 and 4. The HMD can display virtual objects over the user's physical environment in an AR or MR experience. For example, in FIG. 12A, the user may perceive virtual objects such as a virtual building 1222 and an avatar 1224 in the user's environment.

The user can interact with objects in the user's FOV. For example, the avatar 1224 may represent a virtual image of the user's friend. While the user is having a telepresence session with his friend, the avatar 1224 can animate the friend's movements and emotions to create a tangible sense of the friend's presence in the user's environment. As another example, the user may interact with the TV 1210 using the remote control 1212 or using a virtual remote control presented by the HMD. For example, the user may use the remote control 1212 or the virtual remote control to change the channel, the volume, the sound settings, and so forth. As yet another example, the user may interact with the virtual building 1222, for instance by selecting it using a gesture (e.g., a hand gesture or another body gesture) or by actuating a user input device (e.g., the user input device 504 in FIG. 4). Upon selection of the virtual building, the HMD may display a virtual environment inside the virtual building 1222. For example, the virtual building 1222 may contain a virtual classroom, and in the AR/MR/VR environment the user can simulate walking into the virtual classroom and participating in a class.

While the user is in the AR/MR/VR environment, the environmental sensors (including the user sensors and the external sensors) can acquire data about the user and the user's environment. The wearable system can analyze the data acquired by the environmental sensors to determine one or more trigger events. Upon the occurrence of a trigger event (which may have a magnitude or significance above a threshold), the wearable system can automatically silence the virtual content, for example by silencing the display of some or all of the visible virtual content or by silencing the audible virtual content.

A trigger event may be based on a physical event occurring in the user's environment. For example, trigger events may include emergency or unsafe situations such as a fire, a ruptured artery (during surgery), an approaching police car, a chemical spill (during an experiment or industrial process), and so on. A trigger event may also be associated with the user's actions, for example when the user is walking on a crowded street or sitting in a car (where presenting too much virtual content to the user could make driving unsafe). A trigger event may also be based on the user's location (e.g., at home or in a park) or on the scene around the user (e.g., a work scene or a leisure scene). A trigger event may further be based on objects, including other people, in the user's environment. For example, a trigger event may be based on the density of people within a certain distance of the user, or on computer facial recognition of a particular person (e.g., a teacher, police officer, supervisor, etc.) who has approached the user.

Additionally or alternatively, a trigger event may be based on virtual content. For example, a trigger event may include a sudden loud noise in the AR/VR/MR environment. Trigger events may also include unpleasant or disturbing experiences in the AR/VR/MR environment. As yet another example, the wearable system can silence virtual content that is similar to virtual content the wearable system previously blocked at a particular location.

A trigger event may also include a change in the user's location. FIG. 12D illustrates an example of silencing virtual content upon detecting a change in the user's environment. In FIG. 12D, the user 210 is initially in a lounge 1240b. Through the HMD, the user can perceive virtual content customized for the lounge 1240b, such as the example virtual content 1178a and 1176 shown in scene 1170b of FIG. 11G. The user 210 may walk out of the lounge 1240b and into an office 1240a. As the user 210 transitions from the lounge 1240b to the office 1240a, the wearable system 200 can acquire data from one or more environmental sensors. The acquired data may include images acquired by the outward-facing imaging system 464. The wearable system can analyze the acquired images to detect the presence of a desk 1242, a chair 1244, and a computer monitor 1246. The wearable system 200 can recognize that the user has entered an office environment based at least in part on the presence of one or more physical objects in the environment.

Because the wearable system 200 detects that the environment has changed (e.g., because the user walked from the lounge 1240b to the office 1240a), the wearable system 200 determines the settings associated with silencing content for the new environment. For example, the wearable system 200 can check whether content blocking settings associated with the office 1240a were previously enabled. If content blocking settings associated with the office 1240a were previously enabled, the wearable system 200 can automatically apply the relevant content blocking settings. As an example, the content blocking settings for the office 1240a may include blocking entertainment content. Accordingly, as shown in FIG. 12D, the user can no longer perceive the virtual game application. The wearable system can also remove the crossword application 1178a (which the user could perceive in the lounge 1240b) and instead show an office tools application 1252. As another example, the wearable system can update the contact list 1254 for telepresence sessions to present work-related contacts (rather than the user's friends outside of work). The wearable system can also sort the contact list so that work-related contacts are easier for the user to perceive while the user is in the office 1240a (e.g., by moving work-related contacts to the top of the contact list).

Although in this example the user walks from the lounge 1240b to the office 1240a, similar techniques can be applied if the user walks from the office 1240a to the lounge 1240b. In some implementations, even though the user moves from one location to another, the wearable system can still apply the same settings for silencing virtual content because the type of scene has not changed. For example, the user may move from a park to a subway station. The wearable system can apply the same settings for silencing virtual content because both the park and the subway station can be considered public scenes.

Detection of Trigger Events Based on Computer Vision and Sensors

Various techniques can be used to detect a trigger event. A trigger event can be determined based on the user's reactions. For example, the wearable system can analyze data acquired by the inward-facing imaging system or by physiological sensors. The wearable system can use the data to determine the user's emotional state, and can detect the presence of a trigger event by determining whether the user is in a certain emotional state (e.g., angry, scared, uncomfortable, etc.). As an example, the wearable system can analyze the user's pupil dilation, heart rate, respiration rate, or perspiration rate to determine the user's emotional state.
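
As a rough illustration of this idea, the following sketch combines physiological readings into a distress score and flags a trigger event when the score is high. The weights and thresholds are illustrative placeholders, not values from the disclosure:

```python
# Hypothetical sketch: inferring a coarse emotional state from physiological
# readings and flagging a trigger event when distress appears high.
from dataclasses import dataclass

@dataclass
class PhysiologicalSample:
    pupil_dilation_mm: float
    heart_rate_bpm: float
    respiration_rate_bpm: float
    sweat_rate: float  # normalized to 0..1

def estimate_distress(sample: PhysiologicalSample) -> float:
    """Combine normalized readings into a distress score in [0, 1]."""
    score = 0.0
    score += 0.3 * min(sample.pupil_dilation_mm / 8.0, 1.0)
    score += 0.3 * min(max(sample.heart_rate_bpm - 60, 0) / 80.0, 1.0)
    score += 0.2 * min(max(sample.respiration_rate_bpm - 12, 0) / 20.0, 1.0)
    score += 0.2 * sample.sweat_rate
    return score

def is_trigger_event(sample: PhysiologicalSample, threshold=0.6) -> bool:
    return estimate_distress(sample) >= threshold

sample = PhysiologicalSample(7.5, 135, 28, 0.9)
print(is_trigger_event(sample))  # True: the readings suggest distress
```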

Computer vision techniques can also be used to detect a trigger event. For example, the display system can analyze images acquired by the outward-facing imaging system to perform scene reconstruction, event detection, video tracking, object recognition, object pose estimation, learning, indexing, motion estimation, or image restoration, among others. One or more computer vision algorithms can be used to perform these tasks. Non-limiting examples of computer vision algorithms include: scale-invariant feature transform (SIFT), speeded up robust features (SURF), oriented FAST and rotated BRIEF (ORB), binary robust invariant scalable keypoints (BRISK), fast retina keypoint (FREAK), the Viola-Jones algorithm, the Eigenfaces approach, the Lucas-Kanade algorithm, the Horn-Schunck algorithm, the mean-shift algorithm, visual simultaneous localization and mapping (vSLAM) techniques, sequential Bayesian estimators (e.g., Kalman filters, extended Kalman filters, etc.), bundle adjustment, adaptive thresholding (and other thresholding techniques), iterative closest point (ICP), semi-global matching (SGM), semi-global block matching (SGBM), feature point histograms, various machine learning algorithms (such as support vector machines, k-nearest neighbors, naive Bayes, neural networks (including convolutional or deep neural networks), or other supervised/unsupervised models, etc.), and so forth. As described with reference to FIG. 7, one or more of these computer vision algorithms can be implemented by the object recognizers 708 to recognize objects, events, or environments.

One or more of these computer vision techniques can also be used together with data acquired from other environmental sensors (such as, e.g., a microphone) to detect the presence of a trigger event.

A trigger event can be detected based on one or more criteria. These criteria may be defined by the user. For example, the user may set the trigger event to be a fire in the user's environment. Thus, when the wearable system detects the fire, using a computer vision algorithm or using data received from a smoke detector (which may or may not be part of the wearable system), the wearable system can signal the presence of the trigger event and automatically silence the virtual content being displayed. The criteria may also be set by another person. For example, a programmer of the wearable system may set the trigger event to be overheating of the wearable system.
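
One way such user- or programmer-defined criteria could be organized is as a registry of predicates evaluated against each sensor snapshot, as in the hypothetical sketch below (the sensor field names and thresholds are assumptions, not part of the disclosure):

```python
# Hypothetical sketch: a registry of user- or programmer-defined trigger
# criteria evaluated against incoming sensor readings.
TRIGGER_CRITERIA = []

def criterion(fn):
    """Register a predicate over a sensor snapshot as a trigger criterion."""
    TRIGGER_CRITERIA.append(fn)
    return fn

@criterion
def fire_detected(snapshot):
    # Could come from a smoke detector or a computer-vision classifier.
    return snapshot.get("smoke_level", 0.0) > 0.7 or snapshot.get("flame_seen", False)

@criterion
def device_overheating(snapshot):
    return snapshot.get("device_temp_c", 0.0) > 70.0

def check_triggers(snapshot):
    """Return the names of all criteria that fire for this snapshot."""
    return [fn.__name__ for fn in TRIGGER_CRITERIA if fn(snapshot)]

print(check_triggers({"smoke_level": 0.9, "device_temp_c": 45.0}))
# ['fire_detected'] -> silence the displayed virtual content
```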

The presence of a trigger event may also be indicated by a user interaction. For example, the user may make a particular gesture (e.g., a hand gesture or body gesture) or actuate a user input device to indicate the presence of a trigger event.

Additionally or alternatively, the criteria can be learned from the behavior of the user (or the behaviors of a group of users). For example, the wearable system can monitor when the user turns off the HMD. The wearable system may observe that the user often turns off the wearable system in response to a certain type of virtual content (e.g., certain types of scenes in a movie). The wearable system can learn the user's behavior accordingly and predict trigger events based on it. As another example, the wearable system can associate the user's emotional state with the user's previous interactions with virtual content. The wearable system can use this association to predict whether a trigger event is present when the user interacts with a virtual object.

A trigger event may also be based on known objects. For example, the wearable system may block certain virtual content from the display at a given location, and can then automatically block other virtual content with similar characteristics at that location. For example, the user may configure blocking of a video-watching application in the car. Based on this configuration, the wearable system can automatically block movie and music applications as well, even though the user has not specifically configured them to be blocked, because movie and music applications have characteristics similar to those of the video-watching application (e.g., all of them are audiovisual entertainment content).

Machine Learning of Trigger Events

Various machine learning algorithms can be used to learn trigger events. Once trained, the machine learning model can be stored by the wearable system for subsequent use. As described with reference to FIG. 7, one or more of these machine learning algorithms or models can be implemented by the object recognizers 708.

Some examples of machine learning algorithms include supervised or unsupervised machine learning algorithms, including regression algorithms (e.g., ordinary least squares regression), instance-based algorithms (e.g., learning vector quantization), decision tree algorithms (e.g., classification and regression trees), Bayesian algorithms (e.g., naive Bayes), clustering algorithms (e.g., k-means clustering), association rule learning algorithms (e.g., a-priori algorithms), artificial neural network algorithms (e.g., perceptrons), deep learning algorithms (e.g., deep Boltzmann machines or deep neural networks), dimensionality reduction algorithms (e.g., principal component analysis), ensemble algorithms (e.g., stacked generalization), and/or other machine learning algorithms. In some embodiments, individual models can be customized for individual data sets. For example, the wearable device can generate or store a base model. The base model can be used as a starting point to generate additional models specific to a data type (e.g., a particular user), a data set (e.g., a set of additional images obtained), a conditional situation, or other variations. In some embodiments, the wearable system can be configured to use a variety of techniques to generate models for analyzing the aggregated data. Other techniques may include using predefined thresholds or data values.

The criteria may include a threshold condition. If the analysis of the data acquired by the environmental sensors indicates that the threshold condition is passed, the wearable system can detect the presence of the trigger event. The threshold condition may involve quantitative and/or qualitative measures. For example, the threshold condition may include a score or percentage associated with the likelihood that the trigger event is occurring. The wearable system can compare a score calculated from the environmental sensor data with a threshold score. If the score is higher than the threshold level, the wearable system can detect the presence of the trigger event. In some embodiments, the wearable system can instead signal the presence of the trigger event if the score is lower than the threshold.

The threshold condition may also include letter grades, such as "A", "B", "C", "D", and so on. Each grade may represent the severity of the situation. For example, "A" may be the most severe while "D" may be the least severe. When the wearable system determines that an event in the user's environment is sufficiently severe (as compared against the threshold condition), the wearable system can indicate the presence of a trigger event and take action (e.g., silence the virtual content).
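
The following sketch illustrates evaluating a threshold condition expressed either as a numeric score or as a letter grade; the grade ordering and example values are illustrative assumptions:

```python
# Hypothetical sketch: threshold conditions as numeric scores or letter grades.
GRADE_SEVERITY = {"A": 4, "B": 3, "C": 2, "D": 1}  # 'A' is the most severe

def passes_numeric_threshold(score: float, threshold: float) -> bool:
    """Trigger when the computed likelihood score exceeds the threshold."""
    return score > threshold

def passes_grade_threshold(grade: str, minimum_grade: str) -> bool:
    """Trigger when the event is at least as severe as the minimum grade."""
    return GRADE_SEVERITY[grade] >= GRADE_SEVERITY[minimum_grade]

# A score computed from environmental sensor data (e.g., by a classifier).
print(passes_numeric_threshold(0.82, threshold=0.75))   # True -> trigger event
# An event graded 'B' checked against a minimum severity of 'C'.
print(passes_grade_threshold("B", minimum_grade="C"))   # True -> take action
```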

The threshold condition can be determined based on objects (or people) in the user's physical environment. For example, the threshold condition may be based on the user's heart rate. If the user's heart rate exceeds a threshold amount (e.g., a certain number of beats per minute), the wearable system can signal the presence of a trigger event. As another example, described above with reference to FIGS. 11A and 11B, the user of the wearable system may be a surgeon operating on a patient. The threshold condition may be based on the patient's blood loss, the patient's heart rate, or other physiological parameters. As described with reference to FIGS. 2 and 10, the wearable system can acquire the patient's data from an environmental sensor (e.g., an outward-facing camera imaging the surgical site) or from an external source (e.g., ECG data monitored by an electrocardiograph). As yet another example, the threshold condition can be determined based on the presence of certain objects in the user's environment, such as the presence of fire or smoke.

The threshold condition can also be determined based on the virtual objects being displayed to the user. As one example, the threshold condition may be based on the presence of a certain number of virtual objects (e.g., multiple missed virtual telepresence calls from a person). As another example, the threshold condition may be based on the user's interaction with a virtual object. For example, the threshold condition may be the duration for which the user watches a piece of virtual content.

In some embodiments, the threshold conditions, machine learning algorithms, or computer vision algorithms can be specialized for a specific context. For example, in a surgical context, the computer vision algorithm can be specialized to detect surgical events. As another example, the wearable system can run a facial recognition algorithm (rather than an event tracking algorithm) in an educational context to detect whether a person is near the user.

Example Warnings

The wearable system can provide the user with an indication of the presence of a trigger event. The indication may take the form of a focus indicator. The focus indicator may comprise a halo, a color, a perceived size or depth change (e.g., causing a virtual object to appear closer and/or larger when selected), a change in a user interface element (e.g., changing the shape of a cursor from a circle to an escalation mark), a message (with text or graphics), or other audible, tactile, or visual effects that draw the user's attention. The wearable system can present the focus indicator near the cause of the trigger event. For example, a user of the wearable system may be cooking on a stove while using the wearable system to watch a virtual television program. However, the user may forget about the food he is cooking while watching the TV program. As a result, the food may burn, producing smoke or flames. The wearable system can detect the smoke or flames using the environmental sensors or by analyzing images of the stove. The wearable system can further determine that the source of the smoke or flames is the food on the stove. Accordingly, the wearable system can present a halo around the food on the stove indicating that it is burning. This implementation may be beneficial because the user may be able to remedy the source of the trigger event (e.g., by turning off the stove) before the event escalates (e.g., into a house fire). When the trigger event occurs, the wearable system can automatically silence the display of virtual content that is unrelated to the trigger event (e.g., the virtual TV program) so that the user can focus on the trigger event. Continuing the burning-food example, the wearable system can silence virtual content unrelated to the food or the stove while emphasizing the source of the trigger event (e.g., by continuing to display the halo around the burning food).
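
A simplified sketch of this behavior, silencing unrelated content while attaching a halo to the event source, might look as follows; the scene and content representations are hypothetical:

```python
# Hypothetical sketch: on a trigger event, silence unrelated virtual content
# and attach a focus indicator (here, a halo) to the event source.
def handle_trigger_event(scene_objects, virtual_items, event_source_id):
    """Emphasize the trigger source; silence everything unrelated to it."""
    for item in virtual_items:
        if item["related_to"] != event_source_id:
            item["visible"] = False  # silence unrelated content (e.g., the TV show)
    for obj in scene_objects:
        if obj["id"] == event_source_id:
            obj["focus_indicator"] = "halo"  # draw attention to the source
    return scene_objects, virtual_items

scene = [{"id": "stove_food"}, {"id": "sofa"}]
virtual = [{"name": "virtual TV show", "related_to": "tv", "visible": True},
           {"name": "stove warning", "related_to": "stove_food", "visible": True}]
scene, virtual = handle_trigger_event(scene, virtual, "stove_food")
print(scene[0], virtual)  # halo on the burning food; the TV show is hidden
```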

As another example, the focus indicator may be a warning message. For example, the warning message may include a brief description of the trigger event (e.g., a fire on the second floor, the patient's blood loss exceeding a certain amount, etc.). In some embodiments, the warning message may also include one or more recommendations for remedying the trigger event, for example, to call the fire department, to transfuse a certain type of blood, and so on.

In some implementations, the wearable system can use the user's response to a warning message to update how the wearable system recognizes trigger events. For example, the wearable system may determine, based on images acquired by the outward-facing imaging system, that the user has arrived at home, and may accordingly present virtual content customized for the user's home. But the user may actually be at a friend's house. The user can provide an indication, for example by actuating the reality button, using a hand gesture, or actuating a user input device, to dismiss the virtual content or change the settings. The wearable system can remember the user's response to that environment and, the next time the user is in the same house, not present the virtual content customized for the user's home.

As another example, the wearable system may recognize an emergency condition and present a message that the display will be turned off automatically. The user can also provide an indication to prevent the wearable system from turning off the display. The wearable system can remember the user's response and use it to update the models used by the object recognizers 708 to determine the presence of an emergency condition.

Examples of Silencing Components of the Wearable System or Virtual Content in Response to a Trigger Event

In response to a trigger event, the wearable system can silence visual or audible virtual content. For example, the wearable system can automatically silence the audio from the HMD, turn off the virtual content displayed by the HMD, put the HMD into a sleep mode, dim the HMD's light field, or reduce the amount of virtual content (e.g., by hiding virtual content, moving virtual content out of the FOV, or reducing the size of virtual objects). In embodiments where the wearable system provides tactile virtual content (e.g., vibrations), the wearable system can additionally or alternatively silence the tactile virtual content. In addition to or as an alternative to silencing audio or visual content, the wearable system can silence one or more other components of the wearable system. For example, the wearable system can selectively suspend the outward-facing imaging system, the inward-facing imaging system, the microphone, or other sensitive sensors of the wearable system. For example, the wearable system may include two eye cameras configured to image the user's eyes; the wearable system can silence one or both of the eye cameras in response to a trigger event. As another example, the wearable system can turn off one or more cameras in the outward-facing imaging system that are configured to image the user's surroundings. In some embodiments, the wearable system can switch one or more cameras of the inward-facing or outward-facing imaging systems into a low-resolution mode, such that the acquired images lack fine detail. These implementations can reduce the battery consumption of the wearable system while the user is not viewing virtual content.
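
The range of silencing actions described above could be modeled as a set of operations applied to the device's components, as in the hypothetical sketch below (the component fields and the action set are assumptions, not the actual system API):

```python
# Hypothetical sketch: a mute routine that applies one or more silencing
# actions while leaving the rest of the system running.
from enum import Enum, auto

class MuteAction(Enum):
    MUTE_AUDIO = auto()
    HIDE_VIRTUAL_CONTENT = auto()
    DIM_LIGHT_FIELD = auto()
    SLEEP_DISPLAY = auto()
    SUSPEND_EYE_CAMERAS = auto()
    LOW_RES_CAMERAS = auto()

def silence(hmd, actions):
    """Apply the requested silencing actions to the HMD's components."""
    for action in actions:
        if action is MuteAction.MUTE_AUDIO:
            hmd["speaker_volume"] = 0.0
        elif action is MuteAction.HIDE_VIRTUAL_CONTENT:
            hmd["virtual_content_visible"] = False
        elif action is MuteAction.DIM_LIGHT_FIELD:
            hmd["light_field_brightness"] *= 0.2
        elif action is MuteAction.SLEEP_DISPLAY:
            hmd["display_awake"] = False
        elif action is MuteAction.SUSPEND_EYE_CAMERAS:
            hmd["eye_cameras_active"] = False
        elif action is MuteAction.LOW_RES_CAMERAS:
            hmd["camera_resolution"] = "low"
    return hmd

hmd = {"speaker_volume": 0.8, "virtual_content_visible": True,
       "light_field_brightness": 1.0, "display_awake": True,
       "eye_cameras_active": True, "camera_resolution": "high"}
print(silence(hmd, [MuteAction.MUTE_AUDIO, MuteAction.DIM_LIGHT_FIELD]))
```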

Continuing with the example user environment shown in FIGS. 12A-12C, FIG. 12B shows an example FOV in which the virtual display of the wearable system has been turned off. In this figure, the user can perceive only the physical objects 1210, 1212, 1214, and 1216 in his FOV 1200b, because the virtual display of the wearable system has been turned off. This figure contrasts with FIG. 12A, in which the wearable system is turned on: in FIG. 12A, the user can perceive the virtual objects 1222 and 1224 in the FOV 1200a, whereas in FIG. 12B the user cannot.

Advantageously, in some embodiments, by silencing the presentation of virtual content in response to a trigger event while keeping the remaining components of the wearable system operating, the wearable system can allow a faster restart or recovery after the trigger event. For example, the wearable system can silence (or completely turn off) the speaker or the display while keeping the remaining components of the wearable system running. Thus, after the trigger event ceases, the wearable system may not need to restart all of its components, in contrast to a full restart when the wearable system is completely shut down. As one example, the wearable system can silence the display of virtual images but keep the audio on. In this example, the wearable system can reduce visual clutter in response to the trigger event while still allowing the user to hear warnings through the speaker of the wearable system. As another example, a trigger event may occur while the user is in a telepresence session. The wearable system can silence the virtual content and the sounds associated with the telepresence session but allow the telepresence application to keep running in the background of the wearable system. As yet another example, the wearable system can silence the virtual content (and the audio) while keeping one or more environmental sensors operating. In response to the trigger event, the wearable system can turn off the display while continuing to acquire data using, for example, a GPS sensor. In this example, the wearable system can allow rescuers to locate the user's position more accurately in an emergency.

FIG. 12C shows an example FOV in which the wearable system has reduced the amount of virtual content. Compared with FIG. 12A, the size of the virtual avatar 1224 in the FOV 1200c has been reduced. In addition, the wearable system has moved the virtual avatar 1224 from near the center of the FOV to the lower-right corner. As a result, the virtual avatar 1224 is de-emphasized and may create less perceptual confusion for the user. The wearable system has also moved the virtual building 1222 outside the FOV 1200c; as a result, the virtual building 1222 does not appear in the FOV 1200c.

In addition to or as an alternative to automatically silencing virtual content based on a trigger event, the wearable system can also silence virtual content when the user manually actuates a reality button (e.g., the reality button 263 in FIG. 2). For example, the user can press the reality button to turn off audio or visual content, or tap the reality button to move virtual content out of the FOV. Further details regarding the reality button are described below with reference to FIGS. 14A and 14B.

In some embodiments, upon detecting a trigger event, the wearable system can present the user with an audible, tactile, or visual indication of the trigger event. If the user does not respond to the indication, the wearable system can automatically silence itself to reduce perceptual confusion. In other embodiments, the wearable system silences itself if the user does respond to the indication of the trigger event. For example, the user may respond by actuating the reality button or a user input device, or by providing a certain gesture (e.g., waving his hand in front of the outward-facing imaging system).

Example Processes for Silencing a Wearable Device

FIGS. 13A and 13B illustrate example processes for silencing a wearable system based on a trigger event. The processes 1310 and 1320 in FIGS. 13A and 13B, respectively, can be performed by the wearable systems described herein. In both processes, one or more blocks may be optional or may be part of another block. Furthermore, the processes need not be performed in the order indicated by the arrows in the figures.

At block 1312 of the process 1310, the wearable system can receive data from environmental sensors. The environmental sensors may include user sensors as well as external sensors. Accordingly, the data acquired by the environmental sensors may include data associated with the user and with the user's physical environment. In some embodiments, the wearable system can communicate with another data source to acquire additional data. For example, the wearable system can communicate with a medical device to obtain a patient's data (e.g., heart rate, respiration rate, medical history, etc.). As another example, the wearable system can communicate with a remote data repository to determine information about the virtual objects the user is currently interacting with (e.g., the type of movie the user is watching, previous interactions with the virtual object, etc.). In some implementations, the wearable system can receive data from an external imaging system in communication with the wearable system or from an internal imaging system networked with the external imaging system.

At block 1314, the wearable system analyzes the data to detect a trigger event. The wearable system can analyze the data in view of a threshold condition; if the data indicates that the threshold condition is exceeded, the wearable system can detect the presence of the trigger event. The trigger event can be detected in real time using computer vision algorithms. The trigger event can also be detected based on one or more predictive models. For example, the wearable system can indicate the presence of a trigger event if the likelihood of the trigger event occurring exceeds a threshold condition.

At block 1316, the display system can automatically silence itself in response to the trigger event. For example, the wearable system can automatically turn off the display of virtual content or silence a portion of the virtual content presented by the display. As a result, the user can see the physical environment through the wearable system without being distracted by virtual content or having trouble distinguishing real physical objects from virtual objects, or can perceive only the virtual content relevant to the particular environment. As another example, the wearable system can turn off the sounds associated with virtual content, or lower their volume, to reduce perceptual confusion.

At optional block 1318a, the wearable system can determine that the trigger event has terminated. For example, the wearable system can determine whether the situation that caused the trigger event has ended (e.g., the fire has been extinguished) or whether the user is no longer in the same environment (e.g., the user walked from home to a park). If the trigger event no longer exists, the process 1310 can proceed to optional block 1318b to restore the display system or the silenced virtual content.

In some cases, the wearable system can determine the presence of a second trigger event at optional block 1318b. The second trigger event may cause the wearable system to restore the display system or a portion of the silenced virtual content, or may cause the wearable system to silence other virtual content, the display system, or other components of the wearable system (if they were not previously silenced).
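
Putting blocks 1312-1318b together, a schematic version of process 1310 could look like the following loop; the sensor, detector, and display interfaces are placeholders rather than the actual system APIs:

```python
# Hypothetical sketch of process 1310: monitor sensors, silence on a trigger
# event, and restore once the event terminates.
import time

def run_process_1310(sensors, detector, display, poll_seconds=0.5):
    silenced = False
    while True:
        data = sensors.read()              # block 1312: receive sensor data
        triggered = detector.detect(data)  # block 1314: analyze for a trigger
        if triggered and not silenced:
            display.silence()              # block 1316: automatically silence
            silenced = True
        elif silenced and not triggered:
            display.restore()              # block 1318b: restore after the event ends
            silenced = False
        time.sleep(poll_seconds)
```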

The process 1320 in FIG. 13B illustrates another example process for silencing virtual content based on a trigger event. Blocks 1312 and 1314 in the processes 1310 and 1320 follow the same description.

At block 1322, the wearable system can determine whether a trigger event exists based on the analysis of the data at block 1314. If there is no trigger event, the process 1320 returns to block 1312, where the wearable system continues to monitor the data acquired from the environmental sensors.

If a trigger event is detected, then at block 1324 the wearable system can provide an indication of the trigger event. As described with reference to FIG. 12A, the indication may be a focus indicator. For example, the indication may be a warning message. The warning message may state that a trigger event has been detected and that the wearable system will automatically silence the virtual content if no response is received from the user within a period of time (e.g., 5 seconds, 30 seconds, 1 minute, etc.).

At block 1326, the wearable system can determine whether a response to the indication has been received. The user can respond to the indication by actuating a user input device or the reality button. The user can also respond by changing his pose. The wearable system can determine whether the user has provided a response by monitoring inputs from the user input device or the reality button. The wearable system can also analyze images acquired by the outward-facing imaging system, or data acquired by an IMU, to determine whether the user has changed his pose to provide a response.

If the wearable system does not receive a response, the wearable system can automatically silence the virtual content (or the sound) at block 1328. If the wearable system does receive a response, the process 1320 ends. In some embodiments, if the wearable system receives a response, the wearable system can continue to monitor the environmental sensors and may later detect another trigger event. In some embodiments, the response received from the user directs the wearable system to perform an action other than the one stated in the indication. As an example, the wearable system may present a warning message indicating that the virtual display will be turned off if the user does not respond within a threshold duration. The user does respond within the duration, for example by tapping twice on the reality button, but this response is associated with dimming the light field (rather than turning it off). Accordingly, the wearable system can dim the light field instead of turning it off as stated in the warning message.

The process 1330 in FIG. 13C illustrates an example of selectively blocking virtual content based on the environment. The process 1330 can be performed by the wearable system 200 described herein.

The process 1330 starts at block 1332 and moves to block 1334. At block 1334, the wearable system can receive data acquired from the environmental sensors of the wearable device. For example, the wearable system can receive images acquired by the outward-facing imaging system 464 of the wearable device. In some implementations, the wearable system can receive data from an external imaging system in communication with the wearable system or from an internal imaging system networked with the external imaging system.

At block 1336, the wearable system analyzes the data collected and received by the environmental sensors. Based at least in part on the data received from the environmental sensors, the wearable system recognizes the environment in which the user of the wearable system is currently located. As described with reference to FIG. 11F, the wearable system can recognize the environment based on the presence of physical objects in the environment, the arrangement of physical objects in the environment, or the user's position relative to the physical objects in the environment.

At block 1338, the wearable system checks the content blocking settings for the environment. For example, the wearable system can determine whether the user has entered a new environment (e.g., whether the user has moved from a work environment to a leisure environment). If the wearable system determines that the user has not entered a new environment, the wearable system can apply the same settings as in the previous environment, and blocks 1340-1352 therefore become optional.

At block 1340, the wearable system determines whether it has received an indication to enable or edit the content blocking settings. Such an indication may come from the user (e.g., based on the user's pose or an input from a user input device). The indication may also be automatic; for example, the wearable system can automatically apply environment-specific settings in response to a trigger event.

If the wearable system does not receive an indication, the process 1330 moves to block 1350, where the wearable system determines whether content blocking settings were previously enabled. If not, then at block 1352 the virtual content is presented without blocking. Otherwise, at block 1344, the wearable system can selectively block virtual content based on the content blocking settings.

If the wearable system receives an indication, the wearable system can edit the content blocking settings or create new content blocking settings. Where settings need to be configured for a new environment, the wearable system can initiate storage of the content blocking settings at block 1342. Thus, when the user enters the same or a similar environment again, the wearable system can automatically apply the content blocking settings. Likewise, the user can reconfigure existing content blocking settings, which will be stored and later applied to the same or a similar environment.

The content blocking settings associated with an environment may reside locally on the wearable device (e.g., at the local processing and data module 260) or remotely at a network storage location accessible over a wired or wireless network (e.g., at the remote data repository 280). In some embodiments, the content blocking settings may reside partly on the wearable system and partly at a network storage location accessible over a wired or wireless network.

At block 1344, the wearable system implements the stored content blocking settings associated with the new environment. By applying the content blocking settings associated with the new environment, some or all of the virtual content is blocked according to those settings. The process then loops back to block 1332.

At block 1350, the wearable system can check whether content blocking settings were previously enabled. If not, the wearable system can present the virtual content without blocking at block 1352. Otherwise, the wearable system can selectively block virtual content based on the content blocking settings at block 1344. Blocks 1350-1352 and blocks 1340-1344 can run in parallel or sequentially. For example, the wearable system can check for previous content blocking settings while determining whether an indication to modify the content blocking settings for the environment has been received.
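
A compact sketch of the settings-handling portion of process 1330 is shown below; the settings store and its keys are illustrative assumptions, not the disclosed data model:

```python
# Hypothetical sketch of process 1330: apply stored per-environment content
# blocking settings, creating or editing them when the user so indicates.
SETTINGS_STORE = {}  # environment name -> set of blocked content categories

def process_1330(environment, content_items, user_indication=None):
    # Blocks 1340/1342: create or edit settings when an indication is received.
    if user_indication is not None:
        SETTINGS_STORE[environment] = set(user_indication)
    # Blocks 1350/1352/1344: apply settings if present, else show everything.
    blocked = SETTINGS_STORE.get(environment)
    if not blocked:
        return content_items  # block 1352: present without blocking
    return [c for c in content_items if c["category"] not in blocked]  # block 1344

content = [{"name": "crossword", "category": "entertainment"},
           {"name": "work email", "category": "work"}]
# First visit: the user configures the lounge to block work content.
print(process_1330("lounge", content, user_indication={"work"}))
# Later visit: the stored settings are applied automatically.
print(process_1330("lounge", content))
```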

Manual Control of the Wearable Display System

As described herein, embodiments of the wearable display system can automatically control the visual or audible display of virtual content based on the occurrence of a trigger event in the user's environment. Additionally or alternatively, the user may wish to have the ability to manually silence the visual or audible virtual content.

Accordingly, as described with reference to FIG. 2, the display system can include a user-selectable reality button 263. The reality button 263 can silence the visual display 220 or the audio system (e.g., the speaker 240) of the wearable device in response to certain conditions, such as a sudden loud noise, an unpleasant or unsafe experience or condition in the physical or virtual environment, an emergency in the real world, or simply because the user prefers to experience more "actual" reality than augmented or mixed reality (e.g., to talk with a friend without the display of virtual content).

The reality button 263, once actuated, can cause the display system to turn off or dim the brightness of the display 220 or to audibly silence the audio from the speaker 240. As a result, the user 210 will be able to perceive physical objects in the environment more easily, because the perceptual confusion caused by the display of virtual objects or sounds to the user will be reduced or eliminated. In some embodiments, when the reality button 263 is actuated, the display system can turn off the VR or AR display 220 and the speaker 240 while the remaining parts of the display system (e.g., the environmental sensors, user input devices, etc.) continue to operate normally (which can provide a faster restart after the wearable device is un-silenced).

The reality button 263 can cause the display system to reduce the amount of virtual content. For example, the display system can reduce the size of virtual objects in the FOV (e.g., reducing the size of a virtual avatar or another virtual object), make virtual objects more transparent, or reduce the brightness with which virtual objects are displayed. The reality button 263 can additionally or alternatively cause the display system to move virtual content from one location to another, for example by moving a virtual object from inside the FOV to outside the FOV, or by moving a virtual object from a central region to a peripheral region. Additionally or alternatively, the reality button 263 can dim the light field generated by the display system, thereby reducing the likelihood of perceptual confusion. In some implementations, the display system can silence only a portion of the virtual content when the reality button 263 is actuated. For example, while a user of the wearable device is shopping in a store, the wearable device may display virtual content such as the prices of clothes in the store and a map of the department store. In response to a loud noise in the department store, after the reality button 263 is actuated, the wearable device can hide or move the virtual content related to the prices of the clothes (e.g., move it outside the FOV) while leaving the map in place, in case the user needs to leave the store quickly.

The reality button 263 may be a touch-sensitive sensor mounted on the frame 230 of the display system or on a battery pack that supplies power to the display system. The user may, for example, wear the battery pack at his waist. The reality button 263 may be a touch-sensitive region that the user can actuate, for example, by a touch gesture or by sliding along a trajectory. For example, by sliding downward on the touch-sensitive portion, the user can silence the wearable device, and by sliding upward, the user can restore the wearable device to its normal functions.

In some embodiments, the wearable device may (additionally or alternatively) include a virtual reality button, which is not a physical button but a function actuated by a user gesture. For example, the outward-facing camera of the wearable device can image the user's gestures, and if a particular "silence" gesture is recognized (e.g., the user raising his hand and forming a fist), the wearable device will silence the visual or audible content being presented to the user. In some embodiments, after the user actuates the reality button 263, the display system can display a warning message 1430 (shown in FIG. 14A) informing the user that the display will be silenced. In some embodiments, the display system will be silenced after a period of time has elapsed (e.g., 5 seconds, as shown in FIG. 14A), unless the user actuates the reality button 263 a second time or actuates the virtual warning message 1430 (or a virtual button associated with the message 1430) to cancel the silencing. In other embodiments, the reality button 263 must be actuated a second time, or the virtual warning message 1430 (or a virtual button associated with the message 1430) must be actuated, before the display system silences the visual or audible display. Such functionality may be beneficial in situations where the user inadvertently actuates the reality button 263 but does not want the display system to enter the silent mode.

After entering the silent mode, the user can return to normal operation by actuating the reality button 263, accessing a user interface to resume normal operation, speaking a command, or allowing a period of time to elapse.

FIG. 14B is a flowchart illustrating an example process 1400 for manually actuating a silent mode of operation of the display system. The process 1400 can be performed by the display system. At block 1404, the process receives an indication that the reality button has been actuated. At optional block 1408, the process causes the display system to display a warning message indicating to the user that the display system will enter the silent mode of operation. In the silent mode of operation, the visual or audible display of virtual content can be muted. At optional decision block 1410, the process determines whether the user has provided an indication that the silent mode of operation should be canceled (e.g., by the user actuating the reality button a second time or actuating the warning message). If a cancellation is received, the process ends. If no cancellation is received, then in some implementations the display system is visually or audibly silenced after a period of time (e.g., 3 seconds, 5 seconds, 10 seconds, etc.). Although the example process 1400 is described as receiving a cancellation request at block 1410, in other embodiments the process 1400 can determine at block 1410 whether a confirmation is received. If a confirmation is received, the process 1400 moves to block 1412 and silences the display system; if no confirmation is received, the process 1400 ends.
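
A schematic version of process 1400, with a cancelable countdown before silencing, might look like the following; the display and input-polling interfaces are placeholders rather than the actual system APIs:

```python
# Hypothetical sketch of process 1400: after the reality button is pressed,
# show a warning and silence the display unless the user cancels in time.
import time

def process_1400(display, poll_cancel, countdown_seconds=5.0, poll_seconds=0.1):
    display.show_warning(f"Display will be silenced in {countdown_seconds:.0f}s")
    deadline = time.monotonic() + countdown_seconds   # optional block 1408
    while time.monotonic() < deadline:
        if poll_cancel():              # decision block 1410: user canceled?
            display.dismiss_warning()
            return False               # silent mode canceled; process ends
        time.sleep(poll_seconds)
    display.dismiss_warning()
    display.silence()                  # block 1412: enter the silent mode
    return True
```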

Additional Aspects

In a 1st aspect, a head-mounted device (HMD) configured to display augmented reality image content, the HMD comprising: a display configured to present virtual content, at least a portion of the display being transparent and disposed at a location in front of the user's eyes when the user wears the HMD, such that the transparent portion transmits light from a portion of the environment in front of the user to the user's eyes to provide a view of that portion of the environment in front of the user, the display further configured to display virtual content to the user at a plurality of depth planes; an environmental sensor configured to acquire data associated with at least one of: (1) the user's environment, or (2) the user; and a hardware processor programmed to: receive the data from the environmental sensor; analyze the data to detect a trigger event; in response to detecting the trigger event, provide an indication to the user that the trigger event has occurred; and silence the display of the HMD.

In a 2nd aspect, the HMD of aspect 1, wherein, to silence the display of the HMD, the hardware processor is programmed to at least: dim the light output by the display; turn off the display of the virtual content; reduce a size of the virtual content; increase a transparency of the virtual content; or change a position of the virtual content presented by the display.
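For illustration only, here is a minimal sketch of what these five muting strategies could look like against an invented render state; the field names and numeric factors are arbitrary choices for the example, not details drawn from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualContent:
    """Illustrative render state for one virtual content item."""
    brightness: float = 1.0             # 0.0 = fully dimmed
    visible: bool = True
    scale: float = 1.0
    opacity: float = 1.0                # lower = more transparent
    position: tuple = (0.0, 0.0, -1.0)  # viewer-relative, in metres

def silence(content: VirtualContent, strategy: str) -> VirtualContent:
    """Apply one of the five muting strategies named in the aspect."""
    if strategy == "dim":
        content.brightness = 0.2
    elif strategy == "hide":
        content.visible = False
    elif strategy == "shrink":
        content.scale *= 0.25
    elif strategy == "fade":
        content.opacity = 0.1
    elif strategy == "move":  # push the content toward the periphery
        x, y, z = content.position
        content.position = (x + 0.5, y + 0.3, z)
    return content
```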

In a 3rd aspect, the HMD of any one of aspects 1-2, wherein the HMD further comprises a speaker, and wherein, to silence the display of the HMD, the hardware processor is programmed to silence the speaker.

In a 4th aspect, the HMD of any one of aspects 1-3, wherein, to analyze the data to detect the trigger event, the hardware processor is programmed to: analyze the data in view of a threshold condition associated with a presence of the trigger event; and detect the presence of the trigger event if the threshold condition is exceeded.
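A threshold condition of this kind can be sketched in a couple of lines; the `score` reducer and the sample values below are placeholders.

```python
def detect_trigger_event(samples, threshold, score=max):
    """Reduce the sensor data to one scalar and report a trigger event
    when the threshold condition is exceeded (illustrative sketch)."""
    return score(samples) > threshold

# Example with made-up microphone loudness samples:
assert detect_trigger_event([0.2, 0.4, 0.95], threshold=0.9)
```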

In a 5th aspect, the HMD of any one of aspects 1-4, wherein the hardware processor is programmed with at least one of a machine learning algorithm or a computer vision algorithm to detect the trigger event.

In a 6th aspect, the HMD of any one of aspects 1-5, wherein the indication of the presence of the trigger event comprises a focus indicator associated with an element in the environment that is at least partially responsible for the trigger event.

In a 7th aspect, the HMD of any one of aspects 1-6, wherein the indication of the presence of the trigger event comprises a warning message, the warning message indicating to the user at least one of: (1) the HMD will automatically be silenced after a period of time unless the user performs a cancellation action, or (2) the HMD will not be silenced unless the user performs a confirmation action.

In an 8th aspect, the HMD of aspect 7, wherein the cancellation action or the confirmation action comprises at least one of: actuating a reality button, actuating a virtual user interface element presented by the display, actuating a user input device, or detecting a cancellation or confirmation gesture of the user.

In a 9th aspect, the HMD of any one of aspects 7-8, wherein, in response to the user performing the cancellation action, the hardware processor is programmed to unsilence the display or to continue displaying the virtual content.

In a 10th aspect, the HMD of any one of aspects 7-9, wherein, in response to the user performing the confirmation action, the hardware processor is programmed to silence the display or to stop displaying the virtual content.

In an 11th aspect, the HMD of any one of aspects 1-10, wherein the environmental sensor comprises at least one of: a user sensor configured to measure data associated with the user of the HMD, or an external sensor configured to measure data associated with the environment of the user.

In a 12th aspect, the HMD of any one of aspects 1-11, wherein the trigger event comprises an emergency or unsafe condition in the environment of the user.

In a 13th aspect, the HMD of any one of aspects 1-12, wherein the display comprises a light field display.

In a 14th aspect, the HMD of any one of aspects 1-13, wherein the display comprises: a plurality of waveguides; and one or more light sources configured to direct light into the plurality of waveguides.

In a 15th aspect, the HMD of aspect 14, wherein the one or more light sources comprise a fiber scanning projector.

In a 16th aspect, the HMD of any one of aspects 1-15, wherein: the environmental sensor comprises an outward-facing imaging system that images the environment of the user; the data comprises images of the environment acquired by the outward-facing imaging system; and to analyze the data to detect the trigger event, the hardware processor is programmed to analyze the images of the environment with one or more of a neural network or a computer vision algorithm.

In a 17th aspect, the HMD of aspect 16, wherein the neural network comprises a deep neural network or a convolutional neural network.

In an 18th aspect, the HMD of any one of aspects 16-17, wherein the computer vision algorithm comprises one or more of: the scale-invariant feature transform (SIFT), speeded up robust features (SURF), oriented FAST and rotated BRIEF (ORB), the binary robust invariant scalable keypoints (BRISK) algorithm, the fast retina keypoint (FREAK) algorithm, the Viola-Jones algorithm, the Eigenfaces algorithm, the Lucas-Kanade algorithm, the Horn-Schunck algorithm, the mean-shift algorithm, a visual simultaneous localization and mapping (vSLAM) algorithm, a sequential Bayesian estimator, a Kalman filter, a bundle adjustment algorithm, an adaptive thresholding algorithm, an iterative closest point (ICP) algorithm, a semi-global matching (SGM) algorithm, a semi-global block matching (SGBM) algorithm, a feature point histogram algorithm, a support vector machine, a k-nearest neighbors algorithm, or a Bayesian model.
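Of the listed algorithms, ORB is a convenient one to demonstrate, since feature matching between frames is a simple scene-change cue. The OpenCV sketch below is illustrative only; the frames and the interpretation of the match count are assumptions.

```python
import cv2

def orb_scene_similarity(frame_a, frame_b, n_features=500):
    """Match ORB features between two camera frames; a low match count can
    signal that the scene has changed (one cue a recognizer might use)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    _, desc_a = orb.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    _, desc_b = orb.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
    if desc_a is None or desc_b is None:
        return 0  # no features detected in at least one frame
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(desc_a, desc_b))
```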

In a 19th aspect, the HMD of any one of aspects 1-18, wherein: the environmental sensor comprises an outward-facing imaging system that images the environment of the user; the data comprises images of the environment acquired by the outward-facing imaging system; and to analyze the data to detect the trigger event, the hardware processor is programmed to: access a first image of the environment; access a second image of the environment, the second image acquired by the outward-facing imaging system after the first image; and compare the second image with the first image to determine an occurrence of the trigger event.
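A minimal sketch of such an image-to-image comparison, using plain NumPy frame differencing; the per-pixel delta and changed-fraction thresholds are invented for the example.

```python
import numpy as np

def scene_changed(first_image, second_image, pixel_delta=30, changed_frac=0.2):
    """Compare a later frame against an earlier one and flag a trigger event
    when enough pixels changed (thresholds are illustrative assumptions)."""
    diff = np.abs(second_image.astype(np.int16) - first_image.astype(np.int16))
    # Collapse color channels (if any) to the strongest per-pixel change.
    per_pixel = diff.max(axis=-1) if diff.ndim == 3 else diff
    return (per_pixel > pixel_delta).mean() > changed_frac
```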

In a 20th aspect, the HMD of any one of aspects 1-19, wherein: the environmental sensor comprises an outward-facing imaging system that images the environment of the user, the environment comprising a surgical site; the data comprises images of the surgical site acquired by the outward-facing imaging system; and to analyze the data to detect the trigger event, the hardware processor is programmed to: monitor a medical condition occurring at the surgical site; detect a change in the medical condition; and determine that the change in the medical condition exceeds a threshold.

In a 21st aspect, an HMD configured to display augmented reality image content, the HMD comprising: a display configured to present virtual content, at least a portion of the display being transparent and disposed at a location in front of the user's eyes when the user wears the HMD, such that the transparent portion transmits light from a portion of the environment in front of the user to the user's eyes to provide a view of that portion of the environment in front of the user, the display further configured to display virtual content to the user at multiple depth planes; a user-actuatable button; and a hardware processor programmed to: receive an indication that the user-actuatable button has been actuated; and in response to the indication, silence the display of the HMD.

In a 22nd aspect, the HMD of aspect 21, wherein, to silence the display of the HMD, the hardware processor is programmed to at least: dim the light output by the display; turn off the display of the virtual content; reduce a size of the virtual content; increase a transparency of the virtual content; or change a position of the virtual content presented by the display.

In a 23rd aspect, the HMD of aspect 21 or aspect 22, wherein the HMD further comprises a speaker, and wherein, to silence the display of the HMD, the hardware processor is programmed to silence the speaker.

In a 24th aspect, the HMD of any one of aspects 21-23, wherein, in response to the indication, the hardware processor is programmed to provide a warning to the user.

In a 25th aspect, the HMD of aspect 24, wherein the warning comprises a visual warning presented by the display or an audible warning provided by a speaker.

In a 26th aspect, the HMD of any one of aspects 24-25, wherein the warning indicates to the user at least one of: (1) the HMD will automatically be silenced after a period of time unless the user performs a cancellation action, or (2) the HMD will not be silenced unless the user performs a confirmation action.

In a 27th aspect, the HMD of aspect 26, wherein the cancellation action or the confirmation action comprises at least one of: actuating the user-actuatable button, actuating a virtual user interface element presented by the display, actuating a user input device, or detecting a cancellation or confirmation gesture of the user.

In a 28th aspect, the HMD of any one of aspects 26-27, wherein, in response to the user performing the cancellation action, the hardware processor is programmed to unsilence the display or to continue displaying the virtual content.

In a 29th aspect, the HMD of any one of aspects 26-28, wherein, in response to the user performing the confirmation action, the hardware processor is programmed to silence the display or to stop displaying the virtual content.

In a 30th aspect, the HMD of any one of aspects 21-29, wherein the hardware processor is further programmed to: receive a second indication that the user-actuatable button has been actuated; and in response to the second indication, unsilence the display of the HMD.

In a 31st aspect, a wearable system configured to display virtual content in a mixed reality or virtual reality environment, the wearable system comprising: a display configured to present virtual content in a mixed reality, augmented reality, or virtual reality environment; and a hardware processor programmed to: receive images of the environment of the user; analyze the images using one or more object recognizers configured to recognize objects in the environment with a machine learning algorithm; detect a trigger event based at least in part on the analysis of the images; and in response to detecting the trigger event and in response to determining that a threshold condition associated with the trigger event is satisfied, silence the display.
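As an illustrative sketch of this recognize-then-threshold pipeline: each recognizer is assumed to map an image to (label, confidence) pairs, and the "unsafe" label set and scoring rule below are placeholders, not the patent's method.

```python
def should_silence(image, object_recognizers, threshold=0.8):
    """Run the recognizers over a frame, score the evidence for a trigger
    event, and silence the display when the threshold condition is met."""
    detections = [d for recognize in object_recognizers for d in recognize(image)]
    UNSAFE_LABELS = {"fire", "oncoming_vehicle", "stairwell"}  # illustrative
    score = max((conf for label, conf in detections if label in UNSAFE_LABELS),
                default=0.0)
    return score >= threshold
```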

In a 32nd aspect, the wearable system of aspect 31, wherein, to silence the display, the hardware processor is programmed to at least: dim the light output by the display; turn off the display of the virtual content; reduce a size of the virtual content; increase a transparency of the virtual content; or change a position of the virtual content presented by the display.

In a 33rd aspect, the wearable system of any one of aspects 31-32, wherein the hardware processor is further programmed to: detect a termination condition of the trigger event; and resume the display in response to detecting the termination condition.

In a 34th aspect, the wearable system of aspect 33, wherein, to detect the termination condition, the wearable system is programmed to: determine whether the trigger event has terminated; or determine whether the user has left the environment in which the trigger event occurred.

In a 35th aspect, the wearable system of any one of aspects 31-34, wherein the hardware processor is further programmed to silence a speaker of the wearable system in response to detecting the trigger event.

In a 36th aspect, the wearable system of any one of aspects 31-35, wherein, in response to the trigger event, the hardware processor is further programmed to provide an indication of a presence of the trigger event, wherein the indication comprises at least one of: a focus indicator associated with an element in the environment that is at least partially responsible for the trigger event; or a warning message, wherein the warning message indicates to the user at least one of: (1) the HMD will automatically be silenced after a period of time unless the user performs a cancellation action, or (2) the HMD will not be silenced unless the user performs a confirmation action.

In a 37th aspect, the wearable system of aspect 36, wherein the threshold condition associated with the trigger event comprises a duration during which no cancellation action is detected.

In a 38th aspect, the wearable system of aspect 36 or 37, wherein the cancellation action or the confirmation action comprises at least one of: actuating a reality button, actuating a virtual user interface element presented by the display, actuating a user input device, or detecting a cancellation or confirmation gesture of the user.

In a 39th aspect, the wearable system of any one of aspects 31-38, wherein the trigger event comprises an emergency or unsafe condition in the environment of the user.

In a 40th aspect, the wearable system of any one of aspects 31-39, wherein the machine learning algorithm comprises a deep neural network or a convolutional neural network.

In a 41st aspect, a method for displaying virtual content in a mixed reality or virtual reality environment, the method comprising: receiving images of the environment of a user; analyzing the images using one or more object recognizers configured to recognize objects in the environment; detecting a trigger event based at least in part on the analysis of the images; and in response to detecting the trigger event and in response to determining that a threshold condition associated with the trigger event is satisfied, silencing the virtual content. The method may be performed under control of a hardware processor. The hardware processor may be disposed in an augmented reality display device.

In a 42nd aspect, the method of aspect 41, wherein silencing the virtual content comprises at least one of: preventing the virtual content from being presented; disabling interaction with the virtual content; turning off the display of the virtual content; reducing a size of the virtual content; increasing a transparency of the virtual content; or changing a position of the virtual content presented by the display.

In a 43rd aspect, the method of any one of aspects 41-42, further comprising: detecting a termination condition of the trigger event; and resuming the display in response to detecting the termination condition.

In a 44th aspect, the method of aspect 43, wherein, to detect the termination condition, the wearable system is programmed to: determine whether the trigger event has terminated; or determine whether the user has left the environment in which the trigger event occurred.

In a 45th aspect, the method of any one of aspects 41-44, wherein analyzing the images comprises recognizing objects in the environment of the user, and determining the trigger event comprises determining a location of the user based at least in part on the recognized objects.

In a 46th aspect, the method of aspect 45, wherein the trigger event comprises a change in the location of the user or a change in the scene around the user.

In a 47th aspect, the method of aspect 45 or 46, wherein, in response to detecting the trigger event, the method further comprises: accessing a setting for silencing the virtual content at the location, and silencing the virtual content in accordance with the setting.
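A sketch of what such location-keyed silencing settings might look like; every location name and content label below is hypothetical.

```python
# Hypothetical per-location content blocking settings (aspect 47; see also
# claim 1 below). The keys and labels are invented for illustration.
CONTENT_BLOCKING_SETTINGS = {
    "operating_room": {"email", "games", "video"},
    "car":            {"video", "games"},
}

def silence_for_location(location, active_items, silence_item):
    """Silence whichever active virtual content items the settings for this
    location mark as blockable; unknown locations leave everything shown."""
    blocked = CONTENT_BLOCKING_SETTINGS.get(location, set())
    for item in active_items:
        if item in blocked:
            silence_item(item)
```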

In a 48th aspect, the method of any one of aspects 45-47, wherein recognizing the objects in the environment of the user is performed by a neural network.

In a 49th aspect, the method of any one of aspects 41-48, wherein the threshold condition associated with the trigger event comprises a duration during which no cancellation action is detected.

In a 50th aspect, the method of any one of aspects 41-49, wherein the cancellation action comprises at least one of: actuating a reality button, actuating a virtual user interface element presented by the display, actuating a user input device, or detecting a cancellation or confirmation gesture of the user.

Other Considerations

Each of the processes, methods, and algorithms described herein and/or depicted in the accompanying figures may be embodied in, and fully or partially automated by, code modules executed by one or more physical computing systems, hardware computer processors, application-specific circuitry, and/or electronic hardware configured to execute specific and particular computer instructions. For example, a computing system can include a general-purpose computer (e.g., a server) programmed with specific computer instructions, or special-purpose computers, special-purpose circuitry, and so forth. A code module may be compiled and linked into an executable program, installed in a dynamic link library, or written in an interpreted programming language. In some implementations, particular operations and methods may be performed by circuitry that is specific to a given function.

Further, certain implementations of the functionality of the present disclosure are sufficiently mathematically, computationally, or technically complex that application-specific hardware, or one or more physical computing devices (utilizing appropriate specialized executable instructions), may be necessary to perform the functionality, for example, due to the volume or complexity of the calculations involved or to provide results substantially in real time. For example, an animation or video may include many frames, with each frame having millions of pixels, and specifically programmed computer hardware is necessary to process the video data to provide a desired image processing task or application in a commercially reasonable amount of time.

Code modules or any type of data may be stored on any type of non-transitory computer-readable medium, such as physical computer storage, including hard drives, solid state memory, random access memory (RAM), read only memory (ROM), optical discs, volatile or non-volatile storage, and combinations of the same and/or the like. The methods and modules (or data) may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). The results of the disclosed processes or process steps may be stored, persistently or otherwise, in any type of non-transitory, tangible computer storage, or may be communicated via a computer-readable transmission medium.

Any processes, blocks, states, steps, or functionalities in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing code modules, segments, or portions of code that include one or more executable instructions for implementing specific functions (e.g., logical or arithmetical functions) or steps in the process. The various processes, blocks, states, steps, or functionalities can be combined, rearranged, added to, deleted from, modified, or otherwise changed from the illustrative examples provided herein. In some embodiments, additional or different computing systems or code modules may perform some or all of the functionalities described herein. The methods and processes described herein are also not limited to any particular sequence, and the blocks, steps, or states relating thereto can be performed in other sequences that are appropriate, for example, in serial, in parallel, or in some other manner. Tasks or events may be added to, or removed from, the disclosed example embodiments. Moreover, the separation of various system components in the implementations described herein is for illustrative purposes and should not be understood as requiring such separation in all implementations. It should be understood that the described program components, methods, and systems can generally be integrated together in a single computer product or packaged into multiple computer products. Many implementation variations are possible.

The processes, methods, and systems may be implemented in a network (or distributed) computing environment. Network environments include enterprise-wide computer networks, intranets, local area networks (LAN), wide area networks (WAN), personal area networks (PAN), cloud computing networks, crowd-sourced computing networks, the Internet, and the World Wide Web. The network may be a wired or a wireless network or any other type of communication network.

The systems and methods of the disclosure each have several innovative aspects, no single one of which is solely responsible for, or required for, the desirable attributes disclosed herein. The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles, and the novel features disclosed herein.

Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination. No single feature or group of features is necessary or indispensable to each and every embodiment.

Conditional language used herein, such as, among others, "can," "could," "might," "may," "e.g.," and the like, unless specifically stated otherwise or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding, with or without programmer input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense), so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list. In addition, the articles "a," "an," and "the" as used in this application and the appended claims are to be construed to mean "one or more" or "at least one," unless specified otherwise.

As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of A, B, or C" is intended to cover: A, B, C, A and B, A and C, B and C, and A, B, and C. Conjunctive language such as the phrase "at least one of X, Y, and Z," unless specifically stated otherwise, is understood with the context as used in general to convey that an item, term, etc. may be at least one of X, Y, or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.

Similarly, while operations may be depicted in the drawings in a particular order, it is to be recognized that such operations need not be performed in the particular order shown or in sequential order, and that all illustrated operations need not be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flowchart. However, other operations that are not depicted can be incorporated in the example methods and processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously with, or during any of the illustrated operations. Additionally, the operations may be rearranged or reordered in other implementations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.

Claims (14)

1. A wearable system configured to display virtual content in a mixed reality or virtual reality environment, the wearable system comprising:
a display configured to present virtual content in a mixed reality, augmented reality, or virtual reality environment; and
a hardware processor programmed to:
receive an image of a physical environment of a user of the wearable system;
analyze the image using one or more object recognizers configured to recognize physical objects in the physical environment using a machine learning algorithm;
detect a trigger event based at least in part on the analysis of the image, the trigger event comprising movement of the user between a first physical environment and a second physical environment, wherein the movement is determined based at least in part on detecting, from the analysis by the one or more object recognizers, at least one physical object present in the second physical environment but not in the first physical environment; and
in response to detecting the trigger event:
access content blocking settings associated with the second physical environment;
determine, based on the content blocking settings associated with the second physical environment, one or more virtual content items available for silencing in the second physical environment; and
silence the determined one or more virtual content items.
2. The wearable system of claim 1, wherein, to silence the determined one or more virtual content items, the hardware processor is programmed to at least:
dim the light output by the display;
turn off the display of the virtual content;
reduce a size of the virtual content;
increase a transparency of the virtual content; or
change a position of the virtual content presented by the display.
3. The wearable system of claim 1, wherein the hardware processor is further programmed to:
detect a termination condition of the trigger event; and
in response to detecting the termination condition, cease silencing the determined one or more virtual content items.
4. The wearable system of claim 3, wherein, to detect the termination condition, the wearable system is programmed to:
determine whether the user has left the second physical environment in which the determined one or more virtual content items were silenced.
5. The wearable system of claim 1, wherein the hardware processor is further programmed to silence a speaker of the wearable system in response to detecting the trigger event.
6. The wearable system of claim 1, wherein, in response to the detected trigger event, the hardware processor is further programmed to provide an indication of the presence of the trigger event, wherein the indication comprises at least one of:
a focus indicator associated with an element in the environment that is at least partially responsible for the trigger event; or
a warning message, wherein the warning message indicates to the user at least one of: (1) the determined one or more virtual content items will automatically be silenced after a period of time unless the user performs a cancellation action, or (2) the determined one or more virtual content items will not be silenced unless the user performs a confirmation action.
7. The wearable system of claim 6, wherein the cancellation action or the confirmation action comprises at least one of: actuating a reality button, actuating a virtual user interface element presented by the display, actuating a user input device, or detecting a cancellation or confirmation gesture of the user.
8. The wearable system of claim 1, wherein the machine learning algorithm comprises a deep neural network or a convolutional neural network.
9. A method for displaying virtual content in a mixed reality or virtual reality environment, the method comprising:
under control of a hardware processor:
receiving an image of a physical environment of a user of a wearable system;
analyzing the image using one or more object recognizers configured to recognize physical objects in the physical environment;
detecting a trigger event based at least in part on the analysis of the image, the trigger event comprising movement of the user between a first physical environment and a second physical environment, wherein the movement is determined based at least in part on detecting, from the analysis by the one or more object recognizers, at least one physical object present in the second physical environment but not in the first physical environment; and
in response to detecting the trigger event:
accessing content blocking settings associated with the second physical environment;
determining, based on the content blocking settings associated with the second physical environment, one or more virtual content items available for silencing in the second physical environment; and
silencing the determined one or more virtual content items.
10. The method of claim 9, wherein silencing the determined one or more virtual content items comprises at least one of:
preventing the virtual content from being presented;
disabling interaction with the virtual content;
turning off the display of the virtual content;
reducing a size of the virtual content;
increasing a transparency of the virtual content; or
changing a position of the virtual content presented by the display.
11. The method of claim 9, further comprising:
detecting a termination condition of the trigger event; and
resuming the display in response to detecting the termination condition.
12. The method of claim 11, wherein, to detect the termination condition, the hardware processor is programmed to:
determine whether the user has left the second physical environment.
13. The method of claim 9, wherein analyzing the image comprises recognizing objects in the first physical environment, and detecting the trigger event comprises determining a location of the user based at least in part on the recognized objects.
14. The method of claim 13, wherein recognizing the objects in the first physical environment is performed by a neural network.
CN201780087609.9A 2016-12-29 2017-11-17 Automatic control of wearable display devices based on external conditions Active CN110419018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310968411.9A CN117251053A (en) 2016-12-29 2017-11-17 Automatic control of wearable display device based on external conditions

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201662440099P 2016-12-29 2016-12-29
US62/440,099 2016-12-29
PCT/US2017/062365 WO2018125428A1 (en) 2016-12-29 2017-11-17 Automatic control of wearable display device based on external conditions

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310968411.9A Division CN117251053A (en) 2016-12-29 2017-11-17 Automatic control of wearable display device based on external conditions

Publications (2)

Publication Number Publication Date
CN110419018A CN110419018A (en) 2019-11-05
CN110419018B true CN110419018B (en) 2023-08-04

Family

ID=62710743

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201780087609.9A Active CN110419018B (en) 2016-12-29 2017-11-17 Automatic control of wearable display devices based on external conditions
CN202310968411.9A Pending CN117251053A (en) 2016-12-29 2017-11-17 Automatic control of wearable display device based on external conditions

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310968411.9A Pending CN117251053A (en) 2016-12-29 2017-11-17 Automatic control of wearable display device based on external conditions

Country Status (9)

Country Link
US (2) US11138436B2 (en)
EP (1) EP3563215A4 (en)
JP (2) JP7190434B2 (en)
KR (2) KR102553190B1 (en)
CN (2) CN110419018B (en)
AU (1) AU2017387781B2 (en)
CA (1) CA3051060A1 (en)
IL (2) IL290002B2 (en)
WO (1) WO2018125428A1 (en)

Families Citing this family (183)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3062142B1 (en) 2015-02-26 2018-10-03 Nokia Technologies OY Apparatus for a near-eye display
WO2017068926A1 (en) * 2015-10-21 2017-04-27 ソニー株式会社 Information processing device, control method therefor, and computer program
EP4613232A3 (en) * 2016-09-16 2025-12-03 Zimmer, Inc. Augmented reality surgical technique guidance
US10867445B1 (en) * 2016-11-16 2020-12-15 Amazon Technologies, Inc. Content segmentation and navigation
US10650552B2 (en) 2016-12-29 2020-05-12 Magic Leap, Inc. Systems and methods for augmented reality
CN110419018B (en) 2016-12-29 2023-08-04 奇跃公司 Automatic control of wearable display devices based on external conditions
EP4300160A3 (en) 2016-12-30 2024-05-29 Magic Leap, Inc. Polychromatic light out-coupling apparatus, near-eye displays comprising the same, and method of out-coupling polychromatic light
JP6298558B1 (en) * 2017-05-11 2018-03-20 株式会社コロプラ Method for providing virtual space, program for causing computer to execute the method, and information processing apparatus for executing the program
US20190005699A1 (en) * 2017-06-30 2019-01-03 Intel Corporation Technologies for generating a motion model for a virtual character
US10578870B2 (en) 2017-07-26 2020-03-03 Magic Leap, Inc. Exit pupil expander
US11432877B2 (en) 2017-08-02 2022-09-06 Medtech S.A. Surgical field camera system that only uses images from cameras with an unobstructed sight line for tracking
US11270161B2 (en) * 2017-11-27 2022-03-08 Nvidia Corporation Deep-learning method for separating reflection and transmission images visible at a semi-reflective surface in a computer image of a real-world scene
US10762620B2 (en) * 2017-11-27 2020-09-01 Nvidia Corporation Deep-learning method for separating reflection and transmission images visible at a semi-reflective surface in a computer image of a real-world scene
JP7282090B2 (en) 2017-12-10 2023-05-26 マジック リープ, インコーポレイテッド Antireflection coating on optical waveguide
CN111712751B (en) 2017-12-20 2022-11-01 奇跃公司 Insert for augmented reality viewing apparatus
CN107977586B (en) * 2017-12-22 2021-04-13 联想(北京)有限公司 Display content processing method, first electronic device and second electronic device
US10339508B1 (en) * 2018-02-12 2019-07-02 Capital One Services, Llc Methods for determining user experience (UX) effectiveness of ATMs
EP4415355A3 (en) * 2018-03-15 2024-09-04 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
JP7319303B2 (en) 2018-05-31 2023-08-01 マジック リープ, インコーポレイテッド Radar head pose localization
EP3804306B1 (en) 2018-06-05 2023-12-27 Magic Leap, Inc. Homography transformation matrices based temperature calibration of a viewing system
US11816886B1 (en) * 2018-06-28 2023-11-14 Meta Platforms Technologies, Llc Apparatus, system, and method for machine perception
US11579441B2 (en) 2018-07-02 2023-02-14 Magic Leap, Inc. Pixel intensity modulation using modifying gain values
US11510027B2 (en) 2018-07-03 2022-11-22 Magic Leap, Inc. Systems and methods for virtual and augmented reality
US11856479B2 (en) 2018-07-03 2023-12-26 Magic Leap, Inc. Systems and methods for virtual and augmented reality along a route with markers
EP3821340B1 (en) 2018-07-10 2025-07-09 Magic Leap, Inc. Method and computer-readable medium for cross-instruction set architecture procedure calls
CA3107356A1 (en) 2018-07-23 2020-01-30 Mvi Health Inc. Systems and methods for physical therapy
WO2020023543A1 (en) 2018-07-24 2020-01-30 Magic Leap, Inc. Viewing device with dust seal integration
WO2020023404A1 (en) 2018-07-24 2020-01-30 Magic Leap, Inc. Flicker mitigation when toggling eyepiece display illumination in augmented reality systems
EP3827224B1 (en) 2018-07-24 2023-09-06 Magic Leap, Inc. Temperature dependent calibration of movement detection devices
US11112862B2 (en) 2018-08-02 2021-09-07 Magic Leap, Inc. Viewing system with interpupillary distance compensation based on head motion
JP7438188B2 (en) 2018-08-03 2024-02-26 マジック リープ, インコーポレイテッド Unfused pose-based drift correction of fused poses of totems in user interaction systems
US12016719B2 (en) 2018-08-22 2024-06-25 Magic Leap, Inc. Patient viewing system
US11347056B2 (en) * 2018-08-22 2022-05-31 Microsoft Technology Licensing, Llc Foveated color correction to improve color uniformity of head-mounted displays
US10706347B2 (en) 2018-09-17 2020-07-07 Intel Corporation Apparatus and methods for generating context-aware artificial intelligence characters
US20200097707A1 (en) * 2018-09-20 2020-03-26 XRSpace CO., LTD. Camera Module and Extended Reality System Using the Same
US11232635B2 (en) * 2018-10-05 2022-01-25 Magic Leap, Inc. Rendering location specific virtual content in any location
US12333065B1 (en) 2018-10-08 2025-06-17 Floreo, Inc. Customizing virtual and augmented reality experiences for neurodevelopmental therapies and education
US10712819B2 (en) * 2018-10-30 2020-07-14 Dish Network L.L.C. System and methods for recreational sport heads-up display control
WO2020102412A1 (en) 2018-11-16 2020-05-22 Magic Leap, Inc. Image size triggered clarification to maintain image sharpness
US11127282B2 (en) * 2018-11-29 2021-09-21 Titan Health & Security Technologies, Inc. Contextualized augmented reality display system
CN113631986A (en) * 2018-12-10 2021-11-09 脸谱科技有限责任公司 Adaptive Viewport for Hyperfocal Viewport (HVP) Displays
WO2020132484A1 (en) 2018-12-21 2020-06-25 Magic Leap, Inc. Air pocket structures for promoting total internal reflection in a waveguide
WO2020139754A1 (en) * 2018-12-28 2020-07-02 Magic Leap, Inc. Augmented and virtual reality display systems with shared display for left and right eyes
US11531516B2 (en) * 2019-01-18 2022-12-20 Samsung Electronics Co., Ltd. Intelligent volume control
FR3092416B1 (en) * 2019-01-31 2022-02-25 Univ Grenoble Alpes SYSTEM AND METHOD FOR INTERACTING WITH ROBOTS IN MIXED REALITY APPLICATIONS
JP2022523852A (en) 2019-03-12 2022-04-26 マジック リープ, インコーポレイテッド Aligning local content between first and second augmented reality viewers
CN120812326A (en) 2019-05-01 2025-10-17 奇跃公司 Content providing system and method
JP7372061B2 (en) * 2019-07-01 2023-10-31 株式会社日立製作所 Remote work support system
US11017231B2 (en) * 2019-07-10 2021-05-25 Microsoft Technology Licensing, Llc Semantically tagged virtual and physical objects
CN114174895B (en) 2019-07-26 2025-07-08 奇跃公司 System and method for augmented reality
US11470017B2 (en) * 2019-07-30 2022-10-11 At&T Intellectual Property I, L.P. Immersive reality component management via a reduced competition core network component
US11307647B2 (en) * 2019-09-11 2022-04-19 Facebook Technologies, Llc Artificial reality triggered by physical object
DE102019125348A1 (en) * 2019-09-20 2021-03-25 365FarmNet Group GmbH & Co. KG Method for supporting a user in an agricultural activity
US11762457B1 (en) * 2019-09-27 2023-09-19 Apple Inc. User comfort monitoring and notification
JP7713936B2 (en) 2019-10-15 2025-07-28 マジック リープ, インコーポレイテッド Device, method, and computer-readable medium for a cross-reality system with location services
JP7653982B2 (en) 2019-10-15 2025-03-31 マジック リープ, インコーポレイテッド Cross reality system using wireless fingerprinting
JP7604475B2 (en) 2019-10-15 2024-12-23 マジック リープ, インコーポレイテッド Cross-reality system supporting multiple device types
JP7604478B2 (en) 2019-10-31 2024-12-23 マジック リープ, インコーポレイテッド Cross reality systems with quality information about persistent coordinate frames.
US11386627B2 (en) 2019-11-12 2022-07-12 Magic Leap, Inc. Cross reality system with localization service and shared location-based content
WO2021097318A1 (en) 2019-11-14 2021-05-20 Magic Leap, Inc. Systems and methods for virtual and augmented reality
EP4058979A4 (en) 2019-11-15 2023-01-11 Magic Leap, Inc. A viewing system for use in a surgical environment
CN114746796B (en) * 2019-12-06 2025-05-27 奇跃公司 Dynamic browser stage
CN114762008A (en) 2019-12-09 2022-07-15 奇跃公司 Simplified virtual content programmed cross reality system
TWI712011B (en) 2019-12-18 2020-12-01 仁寶電腦工業股份有限公司 Voice prompting method of safety warning
KR102731224B1 (en) * 2019-12-31 2024-11-18 엘지전자 주식회사 A method for providing xr contents and xr device for providing xr contents
US11687778B2 (en) 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
US11003308B1 (en) * 2020-02-03 2021-05-11 Apple Inc. Systems, methods, and graphical user interfaces for annotating, measuring, and modeling environments
WO2021163300A1 (en) 2020-02-13 2021-08-19 Magic Leap, Inc. Cross reality system with map processing using multi-resolution frame descriptors
US11410395B2 (en) 2020-02-13 2022-08-09 Magic Leap, Inc. Cross reality system with accurate shared maps
WO2021163295A1 (en) 2020-02-13 2021-08-19 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization
KR102797008B1 (en) * 2020-02-14 2025-04-18 엘지전자 주식회사 Method for providing content and device
CN115516364B (en) * 2020-02-14 2024-04-26 奇跃公司 Tool bridge
EP4111442A4 (en) * 2020-02-26 2023-08-16 Magic Leap, Inc. Hand and totem input fusion for wearable systems
WO2021173779A1 (en) 2020-02-26 2021-09-02 Magic Leap, Inc. Cross reality system with fast localization
US11686940B2 (en) * 2020-03-19 2023-06-27 Snap Inc. Context-based image state selection
EP4127878A4 (en) * 2020-04-03 2024-07-17 Magic Leap, Inc. AVATAR ADJUSTMENT FOR OPTIMAL GAZE DISTINCTION
US11321928B2 (en) * 2020-05-14 2022-05-03 Qualcomm Incorporated Methods and apparatus for atlas management of augmented reality content
WO2021241431A1 (en) * 2020-05-29 2021-12-02 ソニーグループ株式会社 Information processing device, information processing method, and computer-readable recording medium
DE102020207314A1 (en) * 2020-06-11 2021-12-16 Volkswagen Aktiengesellschaft Control of a display of an augmented reality head-up display device for a means of locomotion
JP7492869B2 (en) * 2020-06-30 2024-05-30 株式会社メルカリ Terminal device, screen display system, display method and program
JP7096295B2 (en) * 2020-07-27 2022-07-05 ソフトバンク株式会社 Display control system, program, and display control method
US11320896B2 (en) * 2020-08-03 2022-05-03 Facebook Technologies, Llc. Systems and methods for object tracking using fused data
US11176755B1 (en) 2020-08-31 2021-11-16 Facebook Technologies, Llc Artificial reality augments and surfaces
WO2022067302A1 (en) 2020-09-25 2022-03-31 Apple Inc. Methods for navigating user interfaces
CN119440253A (en) 2020-09-25 2025-02-14 苹果公司 Methods for manipulating objects in the environment
CN116719452A (en) 2020-09-25 2023-09-08 苹果公司 Method for interacting with virtual controls and/or affordances for moving virtual objects in a virtual environment
EP4697149A2 (en) 2020-09-25 2026-02-18 Apple Inc. Methods for adjusting and/or controlling immersion associated with user interfaces
US20220101002A1 (en) * 2020-09-30 2022-03-31 Kyndryl, Inc. Real-world object inclusion in a virtual reality experience
US12437743B2 (en) 2020-10-16 2025-10-07 Hewlett-Packard Development Company, L.P. Event detections for noise cancelling headphones
TWI818305B (en) * 2020-10-29 2023-10-11 宏達國際電子股份有限公司 Head mounted display device and power management method thereof
US20240048934A1 (en) * 2020-12-14 2024-02-08 Moea Technologies, Inc. Interactive mixed reality audio technology
CN112255921B (en) * 2020-12-21 2021-09-07 宁波圻亿科技有限公司 AR glasses intelligent control system and method
WO2022146936A1 (en) 2020-12-31 2022-07-07 Sterling Labs Llc Method of grouping user interfaces in an environment
US11622002B2 (en) * 2021-01-14 2023-04-04 International Business Machines Corporation Synchronizing virtual reality notifications
US11461986B2 (en) * 2021-01-27 2022-10-04 Qualcomm Incorporated Context-aware extended reality systems
CN112783660B (en) * 2021-02-08 2024-05-07 腾讯科技(深圳)有限公司 Resource processing method and device in virtual scene and electronic equipment
EP4288950A4 (en) 2021-02-08 2024-12-25 Sightful Computers Ltd User interactions in extended reality
JP7713189B2 (en) 2021-02-08 2025-07-25 サイトフル コンピューターズ リミテッド Content Sharing in Extended Reality
KR20250103813A (en) 2021-02-08 2025-07-07 사이트풀 컴퓨터스 리미티드 Extended reality for productivity
US11995230B2 (en) 2021-02-11 2024-05-28 Apple Inc. Methods for presenting and sharing content in an environment
US20220329971A1 (en) * 2021-03-31 2022-10-13 Here Global B.V. Determining context categorizations based on audio samples
KR20230169331A (en) 2021-04-13 2023-12-15 애플 인크. How to provide an immersive experience in your environment
US11676348B2 (en) 2021-06-02 2023-06-13 Meta Platforms Technologies, Llc Dynamic mixed reality content in virtual reality
US11798276B2 (en) * 2021-06-03 2023-10-24 At&T Intellectual Property I, L.P. Providing information about members of a group using an augmented reality display
US12138361B2 (en) * 2021-06-22 2024-11-12 International Business Machines Corporation Activating emitting modules on a wearable device
US11521361B1 (en) 2021-07-01 2022-12-06 Meta Platforms Technologies, Llc Environment model with surfaces and per-surface volumes
CN113344909B (en) * 2021-07-01 2023-12-08 中国石油大学(北京) A method and device for identifying and displaying coking of flame-transparent high-temperature filters of thermal power boilers
WO2023009580A2 (en) 2021-07-28 2023-02-02 Multinarity Ltd Using an extended reality appliance for productivity
CN113593342A (en) * 2021-08-03 2021-11-02 福州大学 Immersive intelligent interactive indoor fire live-action simulation system
US11620797B2 (en) 2021-08-05 2023-04-04 Bank Of America Corporation Electronic user interface with augmented detail display for resource location
US12056268B2 (en) 2021-08-17 2024-08-06 Meta Platforms Technologies, Llc Platformization of mixed reality objects in virtual reality environments
US12201375B2 (en) * 2021-09-16 2025-01-21 Globus Medical Inc. Extended reality systems for visualizing and controlling operating room equipment
WO2023049705A1 (en) 2021-09-23 2023-03-30 Apple Inc. Devices, methods, and graphical user interfaces for content applications
JP7767836B2 (en) * 2021-11-01 2025-11-12 株式会社Jvcケンウッド Virtual rest room providing system, virtual rest room providing device, and virtual rest room providing method
JP7797814B2 (en) * 2021-09-24 2026-01-14 株式会社Jvcケンウッド Virtual rest room providing system and virtual rest room providing method
US11985176B2 (en) * 2021-09-24 2024-05-14 Jvckenwood Corporation Virtual-break-room providing system, virtual-break-room providing device, and virtual-break-room providing method
CN118215903A (en) 2021-09-25 2024-06-18 苹果公司 Device, method and graphical user interface for presenting virtual objects in a virtual environment
US11900550B2 (en) 2021-09-30 2024-02-13 Snap Inc. AR odometry using sensor data from a personal vehicle
US12030577B2 (en) 2021-09-30 2024-07-09 Snap Inc. AR based performance modulation of a personal mobility system
US11933621B2 (en) * 2021-10-06 2024-03-19 Qualcomm Incorporated Providing a location of an object of interest
JP7775620B2 (en) * 2021-10-07 2025-11-26 トヨタ自動車株式会社 Virtual space control system, control method, and control program
US11500476B1 (en) 2021-10-14 2022-11-15 Hewlett-Packard Development Company, L.P. Dual-transceiver based input devices
US11748944B2 (en) 2021-10-27 2023-09-05 Meta Platforms Technologies, Llc Virtual object structures and interrelationships
US11813528B2 (en) 2021-11-01 2023-11-14 Snap Inc. AR enhanced gameplay with a personal mobility system
US12456271B1 (en) 2021-11-19 2025-10-28 Apple Inc. System and method of three-dimensional object cleanup and text annotation
US12444217B2 (en) * 2021-12-02 2025-10-14 Citrix Systems, Inc. Notifications in extended reality environments
US20230185360A1 (en) * 2021-12-10 2023-06-15 Logitech Europe S.A. Data processing platform for individual use
US12158690B2 (en) 2021-12-14 2024-12-03 Dell Products L.P. Camera with video stream disablement responsive to movement
US12363259B2 (en) 2021-12-14 2025-07-15 Dell Products L.P. Camera with magnet attachment to display panel and lightguide housing
US12388953B2 (en) 2021-12-14 2025-08-12 Dell Products L.P. Camera front touch sensor to control video stream
US12289509B2 (en) 2021-12-14 2025-04-29 Dell Products L.P. Reversible chargeable camera and dock with rear wall privacy
US12069356B2 (en) 2021-12-14 2024-08-20 Dell Products L.P. Display backplate to facilitate camera magnet attachment to a display panel
US12108147B2 (en) * 2021-12-14 2024-10-01 Dell Products L.P. Camera with microphone mute responsive to movement
US12200328B2 (en) 2021-12-14 2025-01-14 Dell Products L.P. Camera with dock having automated alignment
US12192631B2 (en) 2021-12-14 2025-01-07 Dell Products L.P. Camera automated orientation with magnetic attachment to display panel
US11985448B2 (en) 2021-12-14 2024-05-14 Dell Products L.P. Camera with magnet attachment to display panel
US12056827B2 (en) 2021-12-30 2024-08-06 Snap Inc. AR-enhanced detection and localization of a personal mobility device
CN114241396A (en) * 2021-12-31 2022-03-25 Shenzhen Zhiyan Technology Co., Ltd. Lamp effect control method, system, device, electronic equipment and storage medium
US12014465B2 (en) * 2022-01-06 2024-06-18 Htc Corporation Tracking system and method
US12524977B2 (en) 2022-01-12 2026-01-13 Apple Inc. Methods for displaying, selecting and moving objects and containers in an environment
US12093447B2 (en) 2022-01-13 2024-09-17 Meta Platforms Technologies, Llc Ephemeral artificial reality experiences
CN119473001A (en) 2022-01-19 2025-02-18 Apple Inc. Methods for displaying and repositioning objects in the environment
US11948263B1 (en) 2023-03-14 2024-04-02 Sightful Computers Ltd Recording the complete physical and extended reality environments of a user
US12380238B2 (en) 2022-01-25 2025-08-05 Sightful Computers Ltd Dual mode presentation of user interface elements
US12175614B2 (en) 2022-01-25 2024-12-24 Sightful Computers Ltd Recording the complete physical and extended reality environments of a user
US12272005B2 (en) 2022-02-28 2025-04-08 Apple Inc. System and method of three-dimensional immersive applications in multi-user communication sessions
US12541280B2 (en) 2022-02-28 2026-02-03 Apple Inc. System and method of three-dimensional placement and refinement in multi-user communication sessions
US12175605B2 (en) 2022-03-22 2024-12-24 Snap Inc. Situational-risk-based AR display
CN118984984A (en) * 2022-03-31 2024-11-19 Samsung Electronics Co., Ltd. Method for providing information and electronic device for supporting the method
WO2023196258A1 (en) 2022-04-04 2023-10-12 Apple Inc. Methods for quick message response and dictation in a three-dimensional environment
WO2023205457A1 (en) 2022-04-21 2023-10-26 Apple Inc. Representations of messages in a three-dimensional environment
KR102559938B1 (en) * 2022-04-24 2023-07-27 P&C Solution Co., Ltd. Augmented reality texture display method using writing tool dedicated to augmented reality
US12026527B2 (en) 2022-05-10 2024-07-02 Meta Platforms Technologies, Llc World-controlled and application-controlled augments in an artificial-reality environment
JP7757529B2 (en) * 2022-05-24 2025-10-21 NTT Docomo, Inc. Display control device
WO2023244267A1 (en) * 2022-06-13 2023-12-21 Magic Leap, Inc. Systems and methods for human gait analysis, real-time feedback and rehabilitation using an extended-reality device
US12106580B2 (en) 2022-06-14 2024-10-01 Snap Inc. AR assisted safe cycling
CN114783002B (en) * 2022-06-22 2022-09-13 Shenzhen Research Institute of Sun Yat-sen University Intelligent object matching method applied to the science and technology service field
CN114795181B (en) * 2022-06-23 2023-02-10 Shenzhen Yiwei Medical Technology Co., Ltd. Method and device for assisting children in adapting to nuclear magnetic resonance examination
US12394167B1 (en) 2022-06-30 2025-08-19 Apple Inc. Window resizing and virtual object rearrangement in 3D environments
US12112011B2 (en) 2022-09-16 2024-10-08 Apple Inc. System and method of application-based three-dimensional refinement in multi-user communication sessions
US12148078B2 (en) 2022-09-16 2024-11-19 Apple Inc. System and method of spatial groups in multi-user communication sessions
US12099653B2 (en) 2022-09-22 2024-09-24 Apple Inc. User interface response based on gaze-holding event assessment
US12405704B1 (en) 2022-09-23 2025-09-02 Apple Inc. Interpreting user movement as direct touch user interface interactions
US12393273B1 (en) 2022-09-23 2025-08-19 Apple Inc. Dynamic recording of an experience based on an emotional state and a scene understanding
EP4591145A1 (en) 2022-09-24 2025-07-30 Apple Inc. Methods for time of day adjustments for environments and environment presentation during communication sessions
EP4591133A1 (en) 2022-09-24 2025-07-30 Apple Inc. Methods for controlling and interacting with a three-dimensional environment
EP4595015A1 (en) 2022-09-30 2025-08-06 Sightful Computers Ltd Adaptive extended reality content presentation in multiple physical environments
US20240221301A1 (en) * 2022-12-29 2024-07-04 Apple Inc. Extended reality assistance based on user understanding
EP4659088A1 (en) 2023-01-30 2025-12-10 Apple Inc. Devices, methods, and graphical user interfaces for displaying sets of controls in response to gaze and/or gesture inputs
US12108012B2 (en) 2023-02-27 2024-10-01 Apple Inc. System and method of managing spatial states and display modes in multi-user communication sessions
EP4690149A1 (en) * 2023-03-27 2026-02-11 Unlikely Artificial Intelligence Limited Computer implemented methods for the automated analysis or use of data, and related systems
US12548245B2 (en) 2023-03-31 2026-02-10 Meta Platforms Technologies, Llc Rendering an artificial reality environment based on a defined hierarchy of multiple states including multiple artificial reality experiences with augments
US20240386667A1 (en) * 2023-05-17 2024-11-21 Maris Jacob Ensing Handheld personal device for providing customized content by a virtual docent
US20240386065A1 (en) * 2023-05-17 2024-11-21 Maris Jacob Ensing System for providing customized content
WO2024249662A2 (en) * 2023-05-31 2024-12-05 SimX, Inc. Automated interactive simulations through fusion of interaction tracking and artificial intelligence
US12443286B2 (en) 2023-06-02 2025-10-14 Apple Inc. Input recognition based on distinguishing direct and indirect user interactions
US12118200B1 (en) 2023-06-02 2024-10-15 Apple Inc. Fuzzy hit testing
WO2024254096A1 (en) 2023-06-04 2024-12-12 Apple Inc. Methods for managing overlapping windows and applying visual effects
US12099695B1 (en) 2023-06-04 2024-09-24 Apple Inc. Systems and methods of managing spatial groups in multi-user communication sessions
US20250005785A1 (en) * 2023-06-30 2025-01-02 Rockwell Collins, Inc. Online correction for context-aware image analysis for object classification
WO2025023704A1 (en) * 2023-07-25 2025-01-30 Samsung Electronics Co., Ltd. Wearable electronic device for recognizing object, and control method thereof
JP2025087377A (en) * 2023-11-29 2025-06-10 Canon Inc. Image display device, image display device driving method, and program thereof
WO2025154911A1 (en) * 2024-01-17 2025-07-24 Samsung Electronics Co., Ltd. Method and device for acquiring content related to target object

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559713A (en) * 2013-11-10 2014-02-05 Shenzhen Huanshi Technology Co., Ltd. Method and terminal for providing augmented reality
WO2015161307A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Systems and methods for augmented and virtual reality

Family Cites Families (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6222525B1 (en) 1992-03-05 2001-04-24 Brad A. Armstrong Image controllers with sheet connected sensors
US5670988A (en) 1995-09-05 1997-09-23 Interlink Electronics, Inc. Trigger operated electronic device
US20030210228A1 (en) 2000-02-25 2003-11-13 Ebersole John Franklin Augmented reality situational awareness system and method
US7503006B2 (en) * 2003-09-25 2009-03-10 Microsoft Corporation Visual indication of current voice speaker
US8778022B2 (en) 2004-11-02 2014-07-15 E-Vision Smart Optics Inc. Electro-active intraocular lenses
US20070081123A1 (en) 2005-10-07 2007-04-12 Lewis Scott W Digital eyewear
US8696113B2 (en) 2005-10-07 2014-04-15 Percept Technologies Inc. Enhanced optical and perceptual digital eyewear
US11428937B2 (en) 2005-10-07 2022-08-30 Percept Technologies Enhanced optical and perceptual digital eyewear
JP5036177B2 (en) 2005-12-12 2012-09-26 Olympus Corporation Information display device
JP5228307B2 (en) 2006-10-16 2013-07-03 Sony Corporation Display device and display method
JP2009081529A (en) 2007-09-25 2009-04-16 Nikon Corp Head mounted display device
US9304319B2 (en) 2010-11-18 2016-04-05 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
US10156722B2 (en) 2010-12-24 2018-12-18 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US9348143B2 (en) 2010-12-24 2016-05-24 Magic Leap, Inc. Ergonomic head mounted display device and optical system
BR112013034009A2 (en) 2011-05-06 2017-02-07 Magic Leap Inc Massive simultaneous remote digital presence world
US8223088B1 (en) 2011-06-09 2012-07-17 Google Inc. Multimode input field for a head-mounted display
US10795448B2 (en) 2011-09-29 2020-10-06 Magic Leap, Inc. Tactile glove for human-computer interaction
US9081177B2 (en) 2011-10-07 2015-07-14 Google Inc. Wearable computer with nearby object response
US9255813B2 (en) * 2011-10-14 2016-02-09 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display
CA3164530C (en) 2011-10-28 2023-09-19 Magic Leap, Inc. System and method for augmented and virtual reality
BR112014012615A2 (en) 2011-11-23 2017-06-13 Magic Leap Inc Three-dimensional augmented reality and virtual reality display system
US20130178257A1 (en) * 2012-01-06 2013-07-11 Augaroo, Inc. System and method for interacting with virtual objects in augmented realities
JP6070691B2 (en) 2012-03-15 2017-02-01 Sanyo Electric Co., Ltd. Nonaqueous electrolyte secondary battery
KR102095330B1 (en) 2012-04-05 2020-03-31 Magic Leap, Inc. Wide-field of view (FOV) imaging devices with active foveation capability
CN104603865A (en) * 2012-05-16 2015-05-06 丹尼尔·格瑞贝格 A system worn by a user on the move for substantially augmenting reality by anchoring virtual objects
US9671566B2 (en) 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
EP4130820B1 (en) 2012-06-11 2024-10-16 Magic Leap, Inc. Multiple depth plane three-dimensional display using a wave guide reflector array projector
JP5580855B2 (en) 2012-06-12 2014-08-27 Sony Computer Entertainment Inc. Obstacle avoidance device and obstacle avoidance method
US9219901B2 (en) * 2012-06-19 2015-12-22 Qualcomm Incorporated Reactive user interface for head-mounted display
JP5351311B1 (en) 2012-06-29 2013-11-27 Sony Computer Entertainment Inc. Stereoscopic image observation device and transmittance control method
KR101957943B1 (en) * 2012-08-31 2019-07-04 Samsung Electronics Co., Ltd. Method and vehicle for providing information
CA2884663A1 (en) 2012-09-11 2014-03-20 Magic Leap, Inc. Ergonomic head mounted display device and optical system
US9077647B2 (en) * 2012-10-05 2015-07-07 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
US9812046B2 (en) * 2013-01-10 2017-11-07 Microsoft Technology Licensing, Llc Mixed reality display accommodation
US9395543B2 (en) 2013-01-12 2016-07-19 Microsoft Technology Licensing, Llc Wearable behavior-based vision system
KR102507206B1 (en) 2013-01-15 2023-03-06 Magic Leap, Inc. Ultra-high resolution scanning fiber display
US10685487B2 (en) * 2013-03-06 2020-06-16 Qualcomm Incorporated Disabling augmented reality (AR) devices at speed
CN105188516B (en) 2013-03-11 2017-12-22 Magic Leap, Inc. System and method for augmented and virtual reality
NZ735754A (en) 2013-03-15 2019-04-26 Magic Leap Inc Display system and method
JP5813030B2 (en) * 2013-03-22 2015-11-17 Canon Inc. Mixed reality presentation system, virtual reality presentation system
US9874749B2 (en) 2013-11-27 2018-01-23 Magic Leap, Inc. Virtual and augmented reality systems and methods
US10262462B2 (en) 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
GB2517143A (en) * 2013-08-07 2015-02-18 Nokia Corp Apparatus, method, computer program and system for a near eye display
US9542626B2 (en) * 2013-09-06 2017-01-10 Toyota Jidosha Kabushiki Kaisha Augmenting layer-based object detection with deep convolutional neural networks
JP6479785B2 (en) 2013-10-16 2019-03-06 Magic Leap, Inc. Virtual or augmented reality headset with adjustable interpupillary distance
US9857591B2 (en) 2014-05-30 2018-01-02 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
CN107219628B (en) 2013-11-27 2020-05-01 Magic Leap, Inc. Virtual and augmented reality systems and methods
CN103761085B * 2013-12-18 2018-01-19 Microsoft Technology Licensing, Llc Mixed reality holographic object development
JP6079614B2 (en) 2013-12-19 2017-02-15 Sony Corporation Image display device and image display method
KR102177133B1 (en) 2014-01-31 2020-11-10 Magic Leap, Inc. Multi-focal display system and method
NZ722903A (en) 2014-01-31 2020-05-29 Magic Leap Inc Multi-focal display system and method
JP6440115B2 (en) * 2014-03-06 2018-12-19 Panasonic Intellectual Property Management Co., Ltd. Display control apparatus, display control method, and display control program
US10203762B2 (en) 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20150262425A1 (en) 2014-03-13 2015-09-17 Ryan Hastings Assessing augmented reality usage and productivity
EP3140780B1 (en) 2014-05-09 2020-11-04 Google LLC Systems and methods for discerning eye signals and continuous biometric identification
NZ764905A (en) 2014-05-30 2022-05-27 Magic Leap Inc Methods and systems for generating virtual content display with a virtual or augmented reality apparatus
US10311638B2 (en) * 2014-07-25 2019-06-04 Microsoft Technology Licensing, Llc Anti-trip when immersed in a virtual reality environment
US10108256B2 (en) * 2014-10-30 2018-10-23 Mediatek Inc. Systems and methods for processing incoming events while performing a virtual reality session
US20160131905A1 (en) * 2014-11-07 2016-05-12 Kabushiki Kaisha Toshiba Electronic apparatus, method and storage medium
US20160131904A1 (en) * 2014-11-07 2016-05-12 Osterhout Group, Inc. Power management for head worn computing
US20160140553A1 (en) * 2014-11-17 2016-05-19 Visa International Service Association Authentication and transactions in a three-dimensional image enhancing display device
US9858676B2 (en) * 2015-01-08 2018-01-02 International Business Machines Corporation Displaying location-based rules on augmented reality glasses
US10725297B2 (en) * 2015-01-28 2020-07-28 CCP hf. Method and system for implementing a virtual representation of a physical environment using a virtual reality environment
CN107645921B (en) 2015-03-16 2021-06-22 Magic Leap, Inc. Methods and systems for diagnosing and treating health disorders
US10296086B2 (en) * 2015-03-20 2019-05-21 Sony Interactive Entertainment Inc. Dynamic gloves to convey sense of touch and movement for virtual objects in HMD rendered environments
USD758367S1 (en) 2015-05-14 2016-06-07 Magic Leap, Inc. Virtual reality headset
EP3113106A1 (en) * 2015-07-02 2017-01-04 Nokia Technologies Oy Determination of environmental augmentation allocation data
US9836845B2 (en) * 2015-08-25 2017-12-05 Nextvr Inc. Methods and apparatus for detecting objects in proximity to a viewer and presenting visual representations of objects in a simulated environment
US10482681B2 (en) * 2016-02-09 2019-11-19 Intel Corporation Recognition-based object segmentation of a 3-dimensional image
US9726896B2 (en) * 2016-04-21 2017-08-08 Maximilian Ralph Peter von und zu Liechtenstein Virtual monitor display technique for augmented reality environments
CN110419018B (en) 2016-12-29 2023-08-04 Magic Leap, Inc. Automatic control of wearable display devices based on external conditions

Also Published As

Publication number Publication date
IL290002A (en) 2022-03-01
US20220044021A1 (en) 2022-02-10
IL290002B2 (en) 2023-10-01
JP2022140621A (en) 2022-09-26
AU2017387781A1 (en) 2019-07-25
IL267683B (en) 2022-02-01
CA3051060A1 (en) 2018-07-05
KR20230107399A (en) 2023-07-14
EP3563215A1 (en) 2019-11-06
KR102630774B1 (en) 2024-01-30
AU2017387781B2 (en) 2022-04-28
WO2018125428A1 (en) 2018-07-05
KR102553190B1 (en) 2023-07-07
KR20190100957A (en) 2019-08-29
IL290002B1 (en) 2023-06-01
IL267683A (en) 2019-08-29
US11138436B2 (en) 2021-10-05
EP3563215A4 (en) 2020-08-05
CN117251053A (en) 2023-12-19
JP7487265B2 (en) 2024-05-20
JP2020507797A (en) 2020-03-12
US11568643B2 (en) 2023-01-31
JP7190434B2 (en) 2022-12-15
CN110419018A (en) 2019-11-05
US20180189568A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
JP7487265B2 (en) Automatic control of a wearable display device based on external conditions
JP7778127B2 (en) Context-aware user interface menus
JP7578711B2 (en) Avatar customization for optimal gaze discrimination
JP7253017B2 Augmented reality system and method using reflection
CN116883628A (en) Wearable system and method for providing virtual remote control in mixed reality environment
NZ794186A (en) Automatic control of wearable display device based on external conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant