US20160314562A1 - Image processing method and recording medium - Google Patents
- Publication number: US20160314562A1 (application US 15/135,639)
- Authority: United States (US)
- Prior art keywords: model, sight, models, user, line
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/013—Eye tracking input arrangements
- G09G5/363—Graphics controllers
- G06T5/003
- G06F1/163—Wearable computers, e.g. on a belt
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/147—Digital output to display device using display panels
- G06T1/0007—Image acquisition
- G06T15/02—Non-photorealistic rendering
- G06T15/80—Shading
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G09G5/391—Resolution modifying circuits, e.g. variable screen formats
- G06T2210/36—Level of detail
- G09G2320/0613—The adjustment depending on the type of the information to be displayed
- G09G2320/08—Arrangements within a display terminal for setting, manually or automatically, display parameters of the display terminal
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
- G09G2340/14—Solving problems related to the presentation of information to be displayed
- G09G2354/00—Aspects of interface with display user
- G09G2370/16—Use of wireless transmission of display information
Definitions
- the present invention relates to an image processing method and a recording medium storing a program related to a head mounted display having a line of sight detection function.
- a head mounted display system is conventionally known that optically compresses a peripheral region of an image more than a center region.
- a user's line of sight is not always directed to the center of a display part and may be directed to a periphery of the display part. In this case, with the prior art, the user perceives degradation in image quality due to the reduced resolution in the peripheral part.
- the present invention was conceived in view of such a problem and it is therefore an object of the present invention to provide an image processing method and a recording medium in which a program is recorded that is capable of improving image quality recognized by a user regardless of a gaze direction without increasing a processing load in a head mounted display having a line of sight detection function.
- an image processing method of the present invention is characterized in that the image processing method is executed by an information processing device communicably connected to a head mounted display including a display device configured to display an image and a sight line detector configured to detect a user's line of sight.
- the image processing method includes controlling drawing of the image such that the drawing is simplified in a peripheral region of a gaze point of the user as compared to a vicinity region that is nearer to the gaze point than the peripheral region, based on information of the user's line of sight detected by the sight line detector.
- a program recorded in a recording medium of the present invention is characterized in that an information processing device communicably connected to a head mounted display including a display device configured to display an image and a sight line detector configured to detect a user's line of sight is caused to function as a drawing control part.
- the drawing control part controls drawing of the image such that the drawing is simplified in a peripheral region of a gaze point of the user as compared to a vicinity region that is nearer to the gaze point than the peripheral region, based on information of the user's line of sight detected by the sight line detector.
- since the optical characteristics of the eye generally degrade as the distance from the center of the retina increases, degradation in image quality may not be recognized by a user and may therefore be acceptable in a peripheral region at a predetermined distance from the gaze point.
- the program recorded in the recording medium of the present invention causes the information processing device communicably connected to the head mounted display to function as the drawing control part.
- This drawing control part controls drawing of an image such that the drawing is simplified in the peripheral region of the user's gaze point as compared to the vicinity region of the gaze point based on the information on the user's line of sight detected by the sight line detector. Since the drawing is controlled in accordance with the direction of the user's line of sight, the simplified drawing in the peripheral region is hardly recognized by the user regardless of the direction of the line of sight. Because of the simplified drawing in the peripheral region, the vicinity region of the gaze point can accordingly be drawn with higher image quality without increasing a processing load. Therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
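The split between the vicinity region and the peripheral region described above can be sketched as a simple screen-space distance test around the gaze point. This is a minimal illustration, not the patent's implementation; the normalized coordinates and the `vicinity_radius` value are assumptions chosen for the example.

```python
import math

def classify_region(gaze, point, vicinity_radius=0.2):
    """Classify a screen-space point relative to the user's gaze point.

    Coordinates are assumed to be normalized to [0, 1] in the visual
    field; `vicinity_radius` is an illustrative threshold, since the
    patent only speaks of a vicinity region "nearer to the gaze point
    than the peripheral region" without fixing a value.
    """
    dist = math.hypot(point[0] - gaze[0], point[1] - gaze[1])
    return "vicinity" if dist <= vicinity_radius else "peripheral"
```

A model centered close to the gaze point would then be drawn at full quality, while one far from it gets the simplified drawing.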
- the image processing method of the present invention further includes storing a plurality of models drawn at different levels of image quality for each type of the model.
- the controlling drawing of the image includes switching the model located in the peripheral region to the model with lower image quality as compared to the model located in the vicinity region.
- the program recorded in the recording medium of the present invention further causes the information processing device to function as a storage part that stores a plurality of models drawn at different levels of image quality for each type of the model.
- the drawing control part switches the model located in the peripheral region to the model with lower image quality as compared to the model located in the vicinity region.
- the program recorded in the recording medium of the present invention further causes the information processing device to function as the storage part.
- This storage part stores a plurality of models drawn at different levels of image quality for each type of the model.
- the drawing control part switches the model located in the peripheral region of the gaze point to the model with the lower image quality as compared to the model located in the vicinity region of the gaze point.
- the controlling drawing of the image includes switching the model located in the vicinity region to the model with the highest image quality out of the corresponding type of the models stored.
- the drawing control part switches the model located in the vicinity region to the model with the highest image quality out of the corresponding type of the models stored in the storage part.
- the storing a plurality of models includes storing a plurality of models having different levels of polygon count for each type of the model.
- the controlling drawing of the image includes switching the model located in the peripheral region to the model having smaller polygon count as compared to the model located in the vicinity region.
- the storage part stores a plurality of models having different levels of polygon count for each type of the model.
- the drawing control part switches the model located in the peripheral region to the model having smaller polygon count as compared to the model located in the vicinity region.
- an image can be drawn with higher quality with a relatively larger polygon count in the easily recognized vicinity region while simplifying the drawing with a relatively smaller polygon count in the peripheral region hardly recognized by the user and, therefore, the image quality recognized by the user can further be improved regardless of the gaze direction.
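The polygon-count switching above can be sketched as a lookup into a per-type table of variants, mirroring the storage part of FIG. 11. The table contents and polygon counts here are hypothetical examples, not values from the patent.

```python
# Hypothetical storage: for each model type, variants sorted from
# highest to lowest polygon count (standing in for the storage part).
MODEL_TABLE = {
    "person": [{"polygons": 20000}, {"polygons": 5000}, {"polygons": 1000}],
    "background": [{"polygons": 8000}, {"polygons": 2000}],
}

def select_model(model_type, region):
    """Pick the highest-polygon variant near the gaze point and the
    lowest-polygon variant in the peripheral region."""
    variants = MODEL_TABLE[model_type]
    return variants[0] if region == "vicinity" else variants[-1]
```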
- the storing a plurality of models includes storing a plurality of models having different levels of texture resolution for each type of the model.
- the controlling drawing of the image includes switching the model located in the peripheral region to the model having lower texture resolution as compared to the model located in the vicinity region.
- the storage part stores a plurality of models having different levels of texture resolution for each type of the model.
- the drawing control part switches the model located in the peripheral region to the model having lower texture resolution as compared to the model located in the vicinity region.
- an image can be drawn with higher quality with a relatively higher texture resolution in the easily recognized vicinity region while simplifying the drawing with a relatively lower texture resolution in the peripheral region hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
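Texture-resolution switching can likewise be sketched as halving the texture edge length in the peripheral region, similar to selecting a coarser mip level. The halving count is an illustrative assumption; the patent says only that stored variants differ in texture resolution.

```python
def texture_resolution(base_res, region, reduction_steps=2):
    """Return the texture edge length for a model: full resolution in
    the vicinity region, halved `reduction_steps` times (a hypothetical
    choice) in the peripheral region."""
    return base_res if region == "vicinity" else base_res >> reduction_steps
```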
- the storing a plurality of models includes storing a plurality of models having different degrees of shader effect for each type of the model.
- the controlling drawing of the image includes switching the model located in the peripheral region to the model having smaller degree of shader effect as compared to the model located in the vicinity region.
- the storage part stores a plurality of models having different degrees of shader effect for each type of the model.
- the drawing control part switches the model located in the peripheral region to the model having smaller degree of shader effect as compared to the model located in the vicinity region.
- an image can be drawn with higher quality at a relatively larger degree of shader effect in the easily recognized vicinity region while simplifying the drawing at a relatively smaller degree of shader effect in the peripheral region hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
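One way to read "different degrees of shader effect" is as tiers that enable fewer effects in the peripheral region. The concrete effect names below are illustrative assumptions; the patent does not enumerate any.

```python
# Hypothetical shader-effect tiers per region; the patent only says
# the degree of shader effect is smaller in the peripheral region.
SHADER_TIERS = {
    "vicinity": ["diffuse", "specular", "normal_map", "shadows"],
    "peripheral": ["diffuse"],
}

def shader_effects(region):
    """Return the list of shader effects applied in the given region."""
    return SHADER_TIERS[region]
```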
- the image processing method and the program recorded in the recording medium of the present invention enable a head mounted display having a line of sight detection function to improve the image quality recognized by a user regardless of a gaze direction without increasing a processing load.
- FIG. 1 is an explanatory view of an example of an overall configuration of a head mounted display system related to an embodiment.
- FIG. 2 is a block diagram of an example of configurations of a head mounted display and an information processing device related to the embodiment.
- FIG. 3 is a flowchart of an example of process procedures related to adjustment of line of sight detection performed by a CPU of the information processing device.
- FIG. 4 is an explanatory view for explaining an example of a marker display form.
- FIG. 5 is an explanatory view for explaining another example of the marker display form.
- FIG. 6 is an explanatory view for explaining another example of the marker display form.
- FIG. 7 is an explanatory view for explaining another example of the marker display form.
- FIG. 8 is an explanatory view for explaining another example of the marker display form.
- FIG. 9 is a flowchart of an example of process procedures related to drawing control based on a line of sight performed by the CPU of the information processing device.
- FIG. 10 is an explanatory view for explaining an example of a form of the drawing control based on the line of sight.
- FIG. 11 is an explanatory table for explaining an example of storage contents of a storage part.
- FIG. 12 is an explanatory view for explaining another example of a form of the drawing control based on the line of sight.
- FIG. 13 is an explanatory table for explaining another example of storage contents of the storage part.
- FIG. 14 is a block diagram of an example of a hardware configuration of the information processing device.
- An example of an overall configuration of a head mounted display system 1 related to this embodiment will first be described with reference to FIG. 1 .
- the head mounted display system 1 has an information processing device 3 and a head mounted display 5 .
- the information processing device 3 and the head mounted display 5 are communicably connected.
- Although FIG. 1 shows the case of wired connection, wireless connection may be used.
- the information processing device 3 is a so-called computer.
- Examples of the computer in this case include not only those manufactured and sold as computers such as server computers, desktop computers, notebook computers, and tablet computers, but also those manufactured and sold as telephones such as portable telephones, smartphones, and phablets, and those manufactured and sold as game machines or multimedia terminals such as portable game terminals, game consoles, and entertainment devices.
- the head mounted display 5 is a display device that can be mounted on the head or face of a user.
- the head mounted display 5 displays images (including still images and moving images) generated by the information processing device 3 .
- Although FIG. 1 shows a goggle type head mounted display as an example, the head mounted display is not limited to this shape.
- the head mounted display 5 may be of either a transmission type or a non-transmission type.
- the head mounted display 5 has a display device 7 displaying an image, a sight line detector 9 detecting a line of sight of a user, and various sensors 11 .
- the display device 7 includes, for example, a liquid crystal display or an organic EL display.
- the display device 7 has a left-eye display device 7 L and a right-eye display device 7 R and can display an independent image for each of the left and right eyes of the user to display a 3D image.
- the display device 7 may not necessarily have two display devices and may be a single display device common to the left and right eyes or may be a single display device corresponding to only one eye, for example.
- the sight line detector 9 includes, for example, a visible light camera and a line of sight information calculation part not shown.
- An image of a user's eye taken by the visible light camera is sent to the line of sight information calculation part.
- the line of sight information calculation part defines, for example, the inner corner of the eye as a reference point and the iris (so-called colored portion of the eye) as a moving point to calculate user's line of sight information based on the position of the iris relative to the inner corner of the eye.
- the line of sight information is information indicative of the direction of the user's line of sight and is, for example, vector information of the line of sight.
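The reference-point/moving-point calculation described above can be sketched as a simple vector difference in the camera image: the inner corner of the eye is the fixed reference and the iris center the moving point. This is only an illustration of the geometry; a real line of sight information calculation part would also calibrate for head pose and camera placement.

```python
def gaze_vector(inner_corner, iris_center):
    """Compute a 2-D line-of-sight vector from eye-image landmarks.

    `inner_corner` is the fixed reference point and `iris_center` the
    moving point, both as (x, y) pixel positions in the camera image,
    as in the visible-light-camera method described above.
    """
    return (iris_center[0] - inner_corner[0],
            iris_center[1] - inner_corner[1])
```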
- the line of sight information calculation part may be implemented in the information processing device 3 .
- the line of sight information calculated by the line of sight information calculation part (in other words, detected by the sight line detector 9 ) is transmitted from the head mounted display 5 to the information processing device 3 .
- When the line of sight information calculation part is implemented in the information processing device 3 , an image taken by the visible light camera is transmitted to the information processing device 3 and the line of sight information is calculated in the information processing device 3 .
- the sight line detector 9 has a sight line detector 9 L for photographing the left eye and a sight line detector 9 R for photographing the right eye and can independently detect the line of sight information of each of the left and right eyes of the user.
- the sight line detector 9 may not necessarily include two devices and the single sight line detector 9 may photograph only one eye.
- the sight line detector 9 may include an infrared LED and an infrared camera.
- the infrared camera photographs the eye irradiated by the infrared LED and, based on the photographed image, the line of sight information calculation part defines, for example, a position of reflection light on the cornea (corneal reflex) generated by the irradiation of the infrared LED as the reference point and the pupil as the moving point to calculate the user's line of sight information (line of sight vector) based on the position of the pupil relative to the position of the corneal reflex.
- the detection technique may include detecting a change in surface electromyography of the user's face or in weak myogenic potential (ocular potential) generated when the eyeball is moved.
- the various sensors 11 include an acceleration sensor and a gyro sensor, for example. These sensors detect the movement and position of the user's head. Based on the detection results of the various sensors 11 , the information processing device 3 changes the images displayed on the display device 7 in accordance with the movement and position of the user's head so as to achieve realistic virtual reality.
- a communication control part 13 controls communications with the information processing device 3 .
- the communication control part 13 receives an image to be displayed on the display device 7 from the information processing device 3 and transmits the line of sight information detected by the sight line detector 9 and the detection information detected by the various sensors 11 to the information processing device 3 .
- the communication control part 13 may be implemented by a program executed by a CPU (not depicted) mounted on the head mounted display 5 or may partially or entirely be implemented by an actual device such as an ASIC, an FPGA, or other electric circuits.
- the configuration form of the head mounted display 5 is not limited to the above description.
- the head mounted display 5 may be equipped with an earphone or a headphone.
- An example of a functional configuration of the information processing device 3 will be described with reference to FIG. 2 . It is noted that although the functional configuration of the information processing device 3 will be described in terms of a line of sight detection adjustment function and a drawing control function based on the line of sight, a functional configuration related to the other normal functions of the information processing device 3 (e.g., display and reproduction of contents and activation of a game) will not be described.
- the arrows shown in FIG. 2 indicate an example of signal flow and are not intended to limit the signal flow directions.
- the information processing device 3 has a display control part 15 , a generation part 17 , an identification part 19 , a recording control part 21 , a determination part 23 , a drawing control part 25 , a storage part 27 , and a communication control part 29 .
- the display control part 15 displays a marker 31 (see FIGS. 4 to 8 ) guiding a user's line of sight on the display device 7 of the head mounted display 5 .
- the marker 31 has a plurality of display positions (at least two positions) each set at a predetermined position in a two-dimensional coordinate system in a visual field 33 (see FIGS. 4 to 8 ) of the display device 7 .
- the display control part 15 displays the marker 31 at the predetermined position to guide the user's line of sight to the predetermined position and causes the user to gaze at the position.
- the display control part 15 displays the marker 31 such that the marker 31 stands still at preset positions in the visual field 33 of the display device 7 , or such that the marker 31 moves through a preset route in the visual field 33 , regardless of the movement and position of the user's head.
- the generation part 17 generates correlation information between the user's line of sight information guided by the marker 31 and the display position of the marker 31 . Specifically, when the user gazes at the marker 31 displayed on the display device 7 , the line of sight information is detected by the sight line detector 9 and transmitted to the information processing device 3 .
- the display positions of the marker 31 are set in advance as described above, and information of the display positions (e.g., coordinates in the two-dimensional coordinate system of the visual field 33 ) is recorded in, for example, a recording device 117 (see FIG. 14 ) of the information processing device 3 .
- the generation part 17 acquires the information of the user's line of sight guided by the marker 31 and detected by the sight line detector 9 and reads the display position information of the marker 31 from the recording device 117 etc. The generation part 17 performs this operation for a plurality of display positions of the marker 31 to generate the correlation information between the line of sight information and the display positions.
- the format of the correlation information is not particularly limited and may be, for example, a table correlating the coordinates of the visual field 33 with the line of sight information or may be, for example, an arithmetic expression for correcting the line of sight information to corresponding coordinates.
- the format may be two tables correlating the line of sight information of one eye with the coordinates and the line of sight information of the other eye with the coordinates, or may be a plurality of arithmetic expressions for correcting the pieces of the line of sight information to corresponding coordinates.
- the pieces of the line of sight information of the left and right eyes may be integrated into a piece of the line of sight information by a predetermined arithmetic expression etc., before generating the correlation information.
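As a minimal sketch of the arithmetic-expression form of the correlation information, one screen axis can be fitted from two calibration samples collected while the user gazed at markers with known display positions. A real generation part 17 would use more marker positions and a least-squares or higher-order fit; the two-point linear fit and the function names here are assumptions for illustration.

```python
def fit_axis(samples):
    """Fit coord = a * v + b for one screen axis from two calibration
    samples [(gaze_component, marker_coord), ...]."""
    (v0, c0), (v1, c1) = samples
    a = (c1 - c0) / (v1 - v0)
    b = c0 - a * v0
    return a, b

def build_correlation(samples_x, samples_y):
    """Generate correlation information from per-axis calibration data."""
    return {"x": fit_axis(samples_x), "y": fit_axis(samples_y)}

def gaze_point(corr, vx, vy):
    """Map a detected line-of-sight vector (vx, vy) to visual-field
    coordinates using the recorded correlation information."""
    ax, bx = corr["x"]
    ay, by = corr["y"]
    return (ax * vx + bx, ay * vy + by)
```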
- the generated correlation information is written and recorded by the recording control part 21 into the recording device 117 including a hard disk, for example.
- the identification part 19 identifies a user's gaze position (also referred to as a gaze point) based on the line of sight information by using the correlation information. Specifically, after completion of the adjustment of the line of sight detection described above (from the display of the marker 31 to the recording of the correlation information), when the user gazes at a predetermined position in the visual field 33 of the display device 7 during, for example, display or reproduction of contents or activation of a game, the line of sight information in this case is detected by the sight line detector 9 and transmitted to the information processing device 3 .
- the identification part 19 refers to the correlation information read from the recording device 117 to identify a position (coordinates) corresponding to the acquired line of sight information, i.e., a user's gaze position.
- the recording control part 21 writes and reads various pieces of information into and from the recording device 117 .
- the recording control part 21 correlates the correlation information generated by the generation part 17 with identification information of a user corresponding to the correlation information and writes and records the correlation information along with the identification information for each user in the recording device 117 .
- the determination part 23 determines whether a user using the head mounted display 5 is a user having the recorded correlation information. Specifically, when using the head mounted display 5 , a user inputs identification information (such as a login ID) by using an input device 113 (see FIG. 14 ) etc. of the information processing device 3 , and the determination part 23 determines whether the correlation information corresponding to the input identification information is recorded in the recording device 117 . In the case that the correlation information corresponding to the input identification information is recorded in the recording device 117 , the determination part 23 determines that the user is a user having the recorded correlation information. In this case, the recording control part 21 reads the corresponding correlation information from the recording device 117 and the identification part 19 uses the read correlation information to identify the gaze position of the user based on the line of sight information.
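The determination step above amounts to a keyed lookup of recorded correlation information by the user's identification information. The in-memory dictionary below is a hypothetical stand-in for the recording device 117; the names are illustrative.

```python
# Hypothetical store of per-user correlation information, keyed by
# identification information (e.g. a login ID), standing in for the
# recording device 117.
CORRELATION_STORE = {"alice": {"x": (0.05, 0.5), "y": (0.1, 0.5)}}

def lookup_correlation(user_id):
    """Return the recorded correlation information for this user, or
    None when adjustment of line of sight detection is still required."""
    return CORRELATION_STORE.get(user_id)
```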
- the drawing control part 25 controls drawing of an image displayed on the display device 7 such that the drawing is simplified in a peripheral region of a user's gaze point 39 (see FIGS. 10 and 12 ) as compared to a vicinity region of the gaze point 39 .
- the vicinity region is nearer to the gaze point 39 than the peripheral region.
- the storage part 27 stores, for each type of model M (see FIGS. 11 and 13 ), a plurality of models drawn at different levels of image quality, and the drawing control part 25 switches the model M located in the peripheral region of the gaze point 39 to a model with lower image quality as compared to the model M located in the vicinity region in the visual field 33 of the display device 7 .
- the drawing control part 25 switches the model M located in the vicinity region of the gaze point 39 to the model with highest image quality out of the corresponding models M stored in the storage part 27 .
- the models M are person models, character models, background models, etc., displayed in the visual field 33 by the display device 7 during execution of a game, for example, and may include icons displayed on a menu screen etc. during display or reproduction of contents, for example.
- While the drawing control part 25 switches the models M to control the drawing for each model in the description of this embodiment, the drawing may also be controlled in an image portion other than the models, such as a background image, a motion image, and an effect image, for example.
- the storage part 27 stores for each of the models M a plurality of models drawn at different levels of image quality as described above.
- the models drawn at different levels of image quality are, for example, models having different levels of polygon count, models having different levels of texture resolution, and models having different degrees of shader effect.
- These factors related to the image quality may be changed in stages independently of each other in the drawing, or two or more of these factors may compositely be changed in stages in the drawing.
- the factors related to the image quality are not limited to the above factors and may include other factors.
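As a purely illustrative sketch of how these factors might be staged together (every number below is hypothetical and not part of the description):

```python
# Hypothetical quality presets combining the three factors named above
# (polygon count, texture resolution, shader effect). None of these
# numbers appear in the original description.
QUALITY_LEVELS = {
    "high":   {"polygon_count": 20000, "texture_px": 2048, "shader": "full"},
    "middle": {"polygon_count": 5000,  "texture_px": 1024, "shader": "reduced"},
    "low":    {"polygon_count": 1000,  "texture_px": 256,  "shader": "flat"},
}

# Each factor decreases monotonically from "high" to "low".
assert QUALITY_LEVELS["high"]["polygon_count"] > QUALITY_LEVELS["low"]["polygon_count"]
```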
- the communication control part 29 controls communications with the head mounted display 5 .
- a mode of communications between the control parts 13 , 29 is not particularly limited as long as information can be transmitted and received. As described, either wired or wireless communication may be employed.
- the processes in the processing parts described above are not limited to this example in terms of assignment and may be executed by a smaller number of processing parts (e.g., one processing part) or by more finely divided processing parts.
- the functions of the processing parts described above are implemented by a program executed by a CPU 101 described later (see FIG. 14 ), or may partially or entirely be implemented by actual hardware such as an ASIC, an FPGA, or other electric circuits, for example.
- the information processing device 3 uses the determination part 23 to determine whether the correlation information of a user using the head mounted display 5 is recorded in the recording device 117 .
- the determination part 23 uses the identification information (such as a login ID) input by the user through the input device 113 etc., to determine whether the corresponding correlation information is recorded in the recording device 117 .
- In the case that the correlation information is recorded (step S 5 : YES), the information processing device 3 goes to step S 10 .
- At step S 10 , the information processing device 3 uses the recording control part 21 to read the correlation information corresponding to the input identification information from the recording device 117 . Subsequently, the information processing device 3 goes to step S 40 .
- In the case that the correlation information is not recorded (step S 5 : NO), the information processing device 3 goes to step S 15 .
- the information processing device 3 uses the display control part 15 to display the marker 31 guiding the user's line of sight on the display device 7 of the head mounted display 5 .
- a display form of the marker 31 in this case will be described later ( FIGS. 4 to 8 ).
- the information processing device 3 uses the generation part 17 to acquire the information of the user's line of sight guided by the marker 31 and detected by the sight line detector 9 .
- the information processing device 3 determines whether the display of the marker 31 is completed. As described in detail later, the marker 31 is statically displayed at a plurality of predetermined locations in a predetermined order in the visual field 33 of the display device 7 , or the marker 31 moves continuously through a route along the plurality of the locations. In the case that this display sequence of the marker 31 is not completed to the end (step S 25 :NO), the information processing device 3 returns to step S 15 to repeat the display of the marker 31 and the acquisition of the line of sight information. In the case that the display of the marker 31 is completed (step S 25 :YES), the information processing device 3 goes to step S 30 .
- the information processing device 3 uses the generation part 17 to generate the correlation information.
- the generation part 17 reads from the recording device 117 etc. the display position information of the marker 31 corresponding to each of the pieces of line of sight information acquired at step S 20 and generates the correlation information between the line of sight information and the display positions.
- the correlation information is, for example, a correlation table or a correction arithmetic expression as described above.
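One way such a correction arithmetic expression could be generated is a least-squares fit of a per-axis linear correction over the (line of sight information, marker display position) pairs; the sketch below is an assumption for illustration, not the disclosed method.

```python
# Hedged sketch: generating "correlation information" as a per-axis linear
# correction (display = a * raw + b), fitted by least squares over the
# marker display positions and the gaze samples acquired at step S20.
# The concrete form (table vs. arithmetic expression) is left open above.

def fit_axis(raw, disp):
    """Closed-form least squares for disp = a * raw + b."""
    n = len(raw)
    mr, md = sum(raw) / n, sum(disp) / n
    var = sum((r - mr) ** 2 for r in raw)
    cov = sum((r - mr) * (d - md) for r, d in zip(raw, disp))
    a = cov / var
    return a, md - a * mr

def generate_correlation(samples):
    """samples: list of ((raw_x, raw_y), (disp_x, disp_y)) pairs."""
    raws, disps = zip(*samples)
    ax, bx = fit_axis([r[0] for r in raws], [d[0] for d in disps])
    ay, by = fit_axis([r[1] for r in raws], [d[1] for d in disps])
    return {"x": (ax, bx), "y": (ay, by)}

# Synthetic example: raw gaze equals the marker position scaled by 2
# and shifted by 10, so the fit recovers a = 0.5, b = -5 on each axis.
samples = [((2 * x + 10, 2 * y + 10), (x, y))
           for x, y in [(0, 0), (100, 0), (0, 100), (100, 100), (50, 50)]]
corr = generate_correlation(samples)
```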
- the information processing device 3 uses the recording control part 21 to write and record the correlation information generated at step S 30 into the recording device 117 .
- With the process procedures from step S 15 to step S 35 , the adjustment of the line of sight detection is completed.
- the processes from step S 40 are executed during, for example, the display or reproduction of contents or the activation of a game by the information processing device 3 after completion of the adjustment of the line of sight detection.
- the information processing device 3 acquires the user's line of sight information detected by the sight line detector 9 .
- the information processing device 3 uses the identification part 19 to refer to the correlation information read from the recording device 117 so as to identify the user's gaze position corresponding to the line of sight information acquired at step S 40 .
- This identified gaze position is utilized for various applications using line of sight input (e.g., presentation of a game).
- the information processing device 3 determines whether the user inputs a termination instruction (e.g., a termination instruction for the display or reproduction of contents or a termination instruction for a game) by using the input device 113 etc. In the case that the termination instruction is not input (step S 50 :NO), the information processing device 3 returns to step S 40 to repeat the acquisition of the line of sight information, the identification of the gaze position, etc. In the case that the termination instruction is input (step S 50 :YES), this flow is terminated.
- the process procedures described above are an example and may at least partially be deleted or modified, or other procedures may be added.
- For example, the generated correlation information need not be recorded and reused; instead, the line of sight detection may be adjusted each time the head mounted display 5 is used. This eliminates the need for steps S 5 and S 10 .
- the information processing device 3 may only temporarily retain the correlation information generated at step S 30 in, for example, a RAM 105 (see FIG. 14 ), so as to use the correlation information without recording into the recording device 117 . This eliminates the need for step S 35 .
- the marker 31 is displayed in the user's visual field 33 in the display device 7 .
- While the shape of the visual field 33 is substantially rectangular in the examples shown in FIG. 4 etc., this is not a limitation and the shape may be an ellipse or a polygonal shape other than a rectangle, for example.
- the marker 31 has a plurality of display positions set in advance at predetermined coordinate positions in a two-dimensional coordinate system (an X-Y coordinate system shown in FIG. 4 ) in the visual field 33 .
- the display positions of the marker 31 are set at five positions, one at a center position and the others close to four corners of the visual field 33 .
- the center position is a position of an intersection of two diagonal lines of a rectangle, for example.
- For convenience of description, the X-axis positive side, the X-axis negative side, the Y-axis positive side, and the Y-axis negative side are on the right, left, upper, and lower sides, respectively, in the X-Y coordinate system shown in FIG. 4 .
- This order of display of the marker 31 is not a limitation and may be changed such that, for example, the marker 31 is displayed at the center position at the end or in the upper left corner at the start.
- the marker 31 is statically displayed at each position for a constant time (e.g., about 1 second). While the user gazes at the marker 31 for the constant time, the sight line detector 9 detects the line of sight information. In this situation, for example, the sight line detector 9 may detect the line of sight information a plurality of times within the constant time and obtain an average etc. as the line of sight information corresponding to the display position of the marker 31 . Alternatively, for example, the line of sight information detected at certain timing within the constant time may be defined as the line of sight information corresponding to the display position of the marker 31 .
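The averaging mentioned above can be sketched as follows (illustrative Python; the detector is assumed to deliver several (x, y) samples within the constant display time):

```python
# Sketch of obtaining the line of sight information corresponding to one
# marker display position by averaging the samples detected while the
# marker is statically displayed for the constant time.

def average_samples(samples):
    """samples: list of (x, y) detections within the constant time."""
    n = len(samples)
    return (sum(s[0] for s in samples) / n, sum(s[1] for s in samples) / n)

print(average_samples([(10.0, 20.0), (12.0, 18.0), (11.0, 22.0)]))  # (11.0, 20.0)
```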
- the marker 31 is drawn as a graphic of a black circle in this example.
- the marker 31 is not limited to this drawing form and may be, for example, a graphic of another shape such as a polygonal shape and an ellipse, a character, a sign, or an icon of a mascot etc. Additionally, the marker 31 may have a pattern, color, etc. Therefore, the marker 31 may be any marker easy for the user to gaze at and the drawing form thereof is not particularly limited.
- the display control part 15 displays the marker 31 such that the marker 31 stands still at the five positions, one at the center position and the others close to the four corners of the visual field 33 , regardless of the movement and position of the user's head. As a result, even in the case that the user moves the head during adjustment of the line of sight detection, the marker 31 is displayed at the set positions without being affected by the movement and the line of sight detection can properly be adjusted.
- the display positions of the marker 31 are set at three positions, one at the center position and the others close to two corners on one diagonal line of the visual field 33 .
- the marker 31 is sequentially statically displayed in the order of the center position, the position close to the upper left corner, and the position close to the lower right corner.
- the marker 31 may be displayed in two corners on the other diagonal line, i.e., the positions close to the upper right corner and the lower left corner. This order of display of the marker 31 is not a limitation and may be changed.
- In this example, by displaying the marker 31 at the center position and at two corners on one diagonal line out of the four corners of the visual field 33 , the number of display positions of the marker 31 can be reduced to shorten the adjustment time.
- the display positions of the marker 31 are set only at two positions close to two corners on one diagonal line of the visual field 33 .
- the marker 31 is sequentially statically displayed in the order of the position close to the upper left corner and the position close to the lower right corner.
- the marker 31 may be displayed at two corners on the other diagonal line, i.e., the positions close to the upper right corner and the lower left corner.
- the marker 31 may be displayed in reverse order.
- the line of sight information in the two corners on one diagonal line of the visual field 33 can be estimated as described above. Additionally, since the center position is substantially the midpoint between the two corners on the diagonal line, the line of sight information at the center position of the visual field 33 can also be estimated by obtaining an average of the line of sight information in the two corners, for example. Therefore, the display positions of the marker 31 can be minimized to further shorten the adjustment time in this example by displaying the marker 31 only at the two corners on one diagonal line.
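The estimation of the center position described above reduces to averaging the two corner measurements; a minimal sketch, assuming (x, y) line of sight samples:

```python
# Sketch: with markers only at two diagonal corners, the line of sight
# information at the center position can be estimated as the average of
# the two corner measurements, since the center is substantially their
# midpoint.

def estimate_center(corner_a, corner_b):
    """Estimate the center measurement from two diagonal-corner samples."""
    return ((corner_a[0] + corner_b[0]) / 2, (corner_a[1] + corner_b[1]) / 2)

print(estimate_center((5.0, 5.0), (195.0, 105.0)))  # (100.0, 55.0)
```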
- the displayed marker 31 continuously moves through a preset route instead of the static display as shown in FIGS. 4 to 6 .
- the displayed marker 31 continuously moves through a route 35 including the display positions shown in FIG. 4 , i.e., five positions at the center position and close to the four corners of the visual field 33 , in the order of the center position, the position close to the upper left corner, the position close to the lower left corner, the position close to the upper right corner, and the position close to the lower right corner.
- the sight line detector 9 detects the line of sight information at least when the marker 31 passes through reference positions, i.e., at least when the marker 31 passes through the five positions at the center position and the four corners in this example. In addition to these five positions, the line of sight information may also be detected at the midpoints thereof.
- the movement route of the marker 31 is not limited to the above example and may be any route at least including two corners on a diagonal line.
- the order of movement of the marker 31 is not limited to the above example and may be changed.
- a frame 37 surrounding a display area of the marker 31 is displayed along with the marker 31 in the visual field 33 . Since the marker 31 is displayed at the display positions shown in FIG. 4 , i.e., the five positions at the center position and close to the four corners of the visual field 33 in this example, the substantially rectangular frame 37 is displayed to surround these five display positions.
- the frame 37 is not limited to this shape and may have, for example, an elliptical shape or a polygonal shape other than a rectangle in accordance with the display positions of the marker 31 .
- the display control part 15 displays the frame 37 such that the frame 37 stands still at a preset position in the visual field 33 regardless of the movement and position of the user's head. As a result, even in the case that the user moves the head during adjustment of the line of sight detection, the frame 37 is displayed along with the marker 31 at the set position without being affected by the movement and the line of sight detection can properly be adjusted.
- Processes shown in FIG. 9 are executed during, for example, the display or reproduction of contents or the activation of a game by the information processing device 3 after completion of the adjustment of the line of sight detection described above.
- the information processing device 3 acquires the user's line of sight information detected by the sight line detector 9 .
- the information processing device 3 uses the identification part 19 to identify the gaze point based on the line of sight information acquired at step S 105 .
- the gaze point is identified with a technique as described above and the identification part 19 refers to the correlation information read from the recording device 117 to identify the user's gaze point corresponding to the line of sight information acquired at step S 105 .
- the information processing device 3 uses the drawing control part 25 to switch a model located in a vicinity region of the gaze point identified at step S 110 to the model with the highest image quality (a high model described later) out of the corresponding models stored in the storage part 27 .
- the information processing device 3 uses the drawing control part 25 to switch a model located in a peripheral region of the gaze point identified at step S 110 to a model with lower image quality (a low model or a middle model described later) as compared to the model located in the vicinity region.
- the information processing device 3 determines whether the user inputs a termination instruction (e.g., a termination instruction for the display or reproduction of contents or a termination instruction for a game) by using the input device 113 etc. In the case that the termination instruction is not input (step S 125 :NO), the information processing device 3 returns to step S 105 to repeat the acquisition of the line of sight information, the identification of the gaze position, the switching of models, etc. In the case that the termination instruction is input (step S 125 :YES), this flow is terminated.
- step S 115 and step S 120 may be performed in reverse order.
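Steps S 105 to S 120 can be sketched as a single per-frame routine (illustrative Python; `identify_gaze`, the model dictionary, and the vicinity radius are hypothetical stand-ins for the parts described above):

```python
# Hedged sketch of the loop from steps S105-S125: acquire the line of
# sight information, identify the gaze point, then switch each model to
# its high- or low-quality version depending on the region it falls in.

def run_frame(raw_gaze, identify_gaze, models, radius):
    """models: dict name -> {'pos': (x, y), 'level': str}."""
    gx, gy = identify_gaze(raw_gaze)
    for m in models.values():
        dx, dy = m["pos"][0] - gx, m["pos"][1] - gy
        in_vicinity = dx * dx + dy * dy <= radius * radius
        m["level"] = "high" if in_vicinity else "low"
    return models

models = {"M1": {"pos": (50, 50), "level": "low"},
          "M2": {"pos": (300, 40), "level": "high"}}
run_frame((48, 52), lambda r: r, models, radius=30)
# M1 falls in the vicinity region -> "high"; M2 is peripheral -> "low".
```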
- An example shown in FIGS. 10 and 11 is the case of switching between two types of models drawn at different levels of image quality. As shown in FIG. 10 , this example includes five person models M 1 -M 5 displayed in the visual field 33 . The type, the number, the positions, etc. of the models are not limited thereto. The images (such as the background) other than the models are not shown.
- the storage part 27 stores for each type of the models M 1 -M 5 a high model with relatively higher image quality and a low model with relatively lower image quality.
- the high model is set to have a relatively larger polygon count, a relatively higher texture resolution, and a relatively larger degree of shader effect.
- the low model is set to have a relatively smaller polygon count, a relatively lower texture resolution, and a relatively lower degree of shader effect.
- Polygons are polygonal elements used for representing a three-dimensional shape mainly in three-dimensional computer graphics, and a larger polygon count enables finer and more elaborate drawing.
- Texture means an image applied for representing a quality (e.g., gloss and unevenness) of a surface of an object mainly in three-dimensional computer graphics and a higher texture resolution enables finer and more elaborate drawing.
- a shader is a function of shadow processing etc. mainly in three-dimensional computer graphics and a larger degree of shader effect enables finer and more elaborate drawing.
- the shader function may be implemented by hardware or software (so-called programmable shader).
- While these factors related to image quality are compositely changed in stages in the models of this example, models may also be prepared such that each of the factors is independently changed in stages.
- the factors related to image quality are not limited to the above example and may include other factors such as a LOD (level of detail), for example.
- a vicinity region 41 is set in the vicinity of the gaze point 39 of the user.
- the vicinity region 41 is set as a circular region having a predetermined radius around the gaze point 39 in the two-dimensional coordinate system in the visual field 33 .
- Since the optical characteristics of the user's eyes degrade as the distance from the center position of the retina increases, a change in image quality is less easily recognized by the user in a region more distant from the gaze point 39 in the visual field 33 .
- the radius of the vicinity region 41 is set in consideration of this fact: a change in image quality is easily recognized by the user inside the vicinity region 41 , while a change in image quality is hardly recognized by the user in the region outside the vicinity region 41 , i.e., in a peripheral region 43 on the periphery of the gaze point 39 .
- FIG. 10 shows an example of the user's gaze point 39 moving from a substantially center position to the substantially lower left in the visual field 33 .
- a model M 1 located in the vicinity region 41 is displayed as the high model with the highest image quality out of the corresponding models M 1 stored in the storage part 27 .
- Models M 2 -M 5 located in the peripheral region 43 are displayed as the low models with lower image quality as compared to the model M 1 located in the vicinity region 41 .
- the drawing of the peripheral region 43 is simplified as compared to the drawing of the vicinity region 41 .
- the model M 3 consequently located in the vicinity region 41 is switched and displayed as the high model with the highest image quality out of the corresponding models M 3 stored in the storage part 27 .
- the model M 1 consequently located in the peripheral region 43 is switched and displayed as the low model with lower image quality as compared to the model M 3 located in the vicinity region 41 .
- the models M 2 , M 4 , M 5 are located in the peripheral region 43 without change before and after the movement of the gaze point 39 and are kept displayed as the low models.
- the drawing of the peripheral region 43 of the gaze point 39 is simplified as compared to the drawing of the vicinity region 41 .
- the models are not displayed in a blurred manner. Since the polygon count is decreased and the quality of object surfaces and the accuracy of the shadow processing are lowered, the models are drawn more roughly while outlines such as contours remain sharp, for example.
- the models located in the vicinity region 41 may be limited to those entirely located inside the vicinity region 41 or may include those partially located inside the vicinity region 41 . In the latter case, a proportion may be set such that, for example, a model is considered located inside when more than half of the model is located inside the vicinity region 41 .
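The "more than half located inside" rule can be approximated by sampling points on a model (e.g., bounding-box corners plus center) and counting how many fall within the vicinity region; this sampling approach is an illustrative assumption, not the disclosed method:

```python
# Sketch of the majority rule: a model is treated as located inside the
# vicinity region when more than half of its sample points fall within
# the circular region around the gaze point.

def majority_inside(points, gaze, radius):
    gx, gy = gaze
    inside = sum(1 for (x, y) in points
                 if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2)
    return inside * 2 > len(points)

pts = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]  # box corners + center
print(majority_inside(pts, gaze=(5, 5), radius=8))  # True
```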
- An example shown in FIGS. 12 and 13 is the case of switching among three types of models drawn at different levels of image quality. As shown in FIG. 12 , this example includes the five person models M 1 -M 5 displayed in the visual field 33 .
- the storage part 27 stores for each of the models M 1 -M 5 a high model with highest image quality and a low model with lowest image quality as well as a middle model with intermediate image quality.
- the middle model has the factors of the polygon count, the texture resolution, and the shader effect all set to middle levels between the high model and the low model.
- the vicinity region 41 is set in the vicinity of the gaze point 39 of the user. Additionally, in this example, the peripheral region set as the region outside the vicinity region 41 is divided into a first peripheral region 45 relatively close to the gaze point 39 and a second peripheral region 47 more distant from the gaze point 39 as compared to the first peripheral region 45 . Since a change in image quality is less easily recognized by the user in a region more distant from the gaze point 39 due to the optical characteristics of the eyes, a change in image quality is less easily recognized in the second peripheral region 47 than in the first peripheral region 45 .
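With the peripheral region divided in two, region membership becomes a three-way distance test; a minimal sketch with hypothetical threshold radii:

```python
import math

# Sketch of the three-tier classification by distance from the gaze point:
# vicinity region 41 -> high model, first peripheral region 45 -> middle
# model, second peripheral region 47 -> low model. Both radii are
# hypothetical values for illustration.

def classify(model_pos, gaze, r_vicinity=30.0, r_first=120.0):
    d = math.dist(model_pos, gaze)
    if d <= r_vicinity:
        return "high"    # vicinity region 41
    if d <= r_first:
        return "middle"  # first peripheral region 45
    return "low"         # second peripheral region 47

print(classify((60, 40), (50, 50)))  # "high" (distance ~14.1)
```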
- FIG. 12 shows an example of the user's gaze point 39 moving from the substantially center position to the substantially lower left in the visual field 33 .
- the model M 1 located in the vicinity region 41 is displayed as the high model with the highest image quality out of the corresponding models M 1 stored in the storage part 27 .
- the models M 3 , M 5 located in the first peripheral region 45 are displayed as the middle models with lower image quality as compared to the model M 1 located in the vicinity region 41 .
- the models M 2 , M 4 located in the second peripheral region 47 are displayed as the low models with further lower image quality even as compared to the models M 3 , M 5 located in the first peripheral region 45 .
- the model M 3 consequently located in the vicinity region 41 is switched and displayed as the high model with the highest image quality out of the corresponding models M 3 stored in the storage part 27 .
- the model M 1 consequently located in the first peripheral region 45 is switched and displayed as the middle model with lower image quality as compared to the model M 3 located in the vicinity region 41 .
- the model M 2 changed from the second peripheral region 47 to the first peripheral region 45 is switched from the low model to the middle model, while the model M 5 changed from the first peripheral region 45 to the second peripheral region 47 is switched from the middle model to the low model.
- the model M 4 is located in the second peripheral region 47 without change before and after the movement of the gaze point 39 and is kept displayed as the low model.
- the drawing control can be more finely performed.
- While the peripheral region is divided into two in the above description, the peripheral region may be divided more finely and each model may have models prepared at a larger number of different levels of image quality.
- a hardware configuration example will be described for the information processing device 3 achieving the processing parts implemented by a program executed by the CPU 101 described above, with reference to FIG. 14 .
- the information processing device 3 has, for example, a CPU 101 , a ROM 103 , a RAM 105 , a dedicated integrated circuit 107 constructed for specific use such as an ASIC or an FPGA, an input device 113 , an output device 115 , a recording device 117 , a drive 119 , a connection port 121 , and a communication device 123 .
- These constituent elements are mutually connected via a bus 109 and an input/output (I/O) interface 111 such that signals can be transferred.
- the program can be recorded in a recording device such as the ROM 103 , the RAM 105 , and the recording device 117 , for example.
- the program can also temporarily or permanently be recorded in a removable recording medium 125 such as various optical disks including CDs, MO disks, and DVDs, and semiconductor memories.
- the removable recording medium 125 as described above can be provided as so-called packaged software.
- the program recorded in the removable recording medium 125 may be read by the drive 119 and recorded in the recording device 117 through the I/O interface 111 , the bus 109 , etc.
- the program may be recorded in, for example, a download site, another computer, or another recording medium (not shown).
- the program is transferred through a network NW such as a LAN and the Internet and the communication device 123 receives this program.
- the program received by the communication device 123 may be recorded in the recording device 117 through the I/O interface 111 , the bus 109 , etc.
- the program may be recorded in appropriate external connection equipment 127 , for example.
- the program may be transferred through the appropriate connection port 121 and recorded in the recording device 117 through the I/O interface 111 , the bus 109 , etc.
- the CPU 101 executes various processes in accordance with the program recorded in the recording device 117 to implement the processes of the display control part 15 , the drawing control part 25 , etc.
- the CPU 101 may directly read and execute the program from the recording device 117 or may execute the program after loading it into the RAM 105 .
- When the CPU 101 receives the program through, for example, the communication device 123 , the drive 119 , or the connection port 121 , the CPU 101 may directly execute the received program without recording it in the recording device 117 .
- the CPU 101 may execute various processes based on a signal or information input from the input device 113 such as a mouse, a keyboard, a microphone, and a game operating device (not shown) as needed.
- the CPU 101 may output a result of execution of the process from the output device 115 such as a display device and a sound output device, for example, and the CPU 101 may transmit this process result to the communication device 123 or the connection port 121 as needed or may record the process result into the recording device 117 or the removable recording medium 125 .
- the program of this embodiment causes the information processing device 3 communicably connected to the head mounted display 5 to act as the drawing control part 25 .
- the drawing control part 25 controls drawing of an image such that the drawing is simplified in the peripheral region 43 etc. (including the peripheral regions 45 and 47 ; the same applies hereinafter) of the user's gaze point 39 as compared to the vicinity region 41 of the gaze point 39 . Since the drawing is controlled in accordance with the direction of the user's line of sight, the simplified drawing of the peripheral region 43 etc. is hardly recognized by the user regardless of the direction of the line of sight. Because of the simplified drawing of the peripheral region 43 etc., the vicinity region 41 of the gaze point 39 can accordingly be drawn with higher image quality without increasing a processing load of the CPU 101 etc. Therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
- the program of this embodiment further causes the information processing device 3 to act as the storage part 27 .
- the storage part 27 stores for each model M a plurality of models drawn at different levels of image quality.
- the drawing control part 25 switches the model M located in the peripheral region 43 etc. of the gaze point 39 to the model with the lower image quality as compared to the model M located in the vicinity region 41 of the gaze point 39 .
- the drawing control part 25 switches the model M located in the vicinity region 41 to the model with the highest image quality out of the corresponding models M stored in the storage part 27 .
- the image quality can be maximized in the easily recognized vicinity region 41 while lowering the image quality of the peripheral region 43 etc. hardly recognized by the user and, therefore, performance of the CPU 101 etc. related to the image can be optimized.
- the storage part 27 stores for each model M a plurality of models having different levels of polygon count.
- the drawing control part 25 switches the model M located in the peripheral region 43 etc. to the model having the smaller polygon count as compared to the model M located in the vicinity region 41 .
- an image can be drawn with higher quality with a relatively larger polygon count in the easily recognized vicinity region 41 while simplifying the drawing with a relatively smaller polygon count in the peripheral region 43 etc. hardly recognized by the user and, therefore, the image quality recognized by the user can further be improved regardless of the gaze direction.
- the storage part 27 stores for each model M a plurality of models having different levels of texture resolution.
- the drawing control part 25 switches the model M located in the peripheral region 43 etc. to the model having the lower texture resolution as compared to the model M located in the vicinity region 41 .
- an image can be drawn with higher quality with a relatively higher texture resolution in the easily recognized vicinity region 41 while simplifying the drawing with a relatively lower texture resolution in the peripheral region 43 etc. hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
- the storage part 27 stores for each model M a plurality of models having different degrees of shader effect.
- the drawing control part 25 switches the model M located in the peripheral region 43 etc. to the model having the smaller degree of shader effect as compared to the model M located in the vicinity region 41 .
- an image can be drawn with higher quality at a relatively larger degree of shader effect in the easily recognized vicinity region 41 while simplifying the drawing at a relatively smaller degree of shader effect in the peripheral region 43 hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
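By way of illustration only, the region structure described above (a vicinity region around the gaze point surrounded by successively farther peripheral regions) may be sketched as follows. The ring radii, the function name, and the use of concentric circular rings are assumptions of this sketch, not values or shapes specified herein.

```python
import math

# Illustrative radii (in pixels) separating the vicinity region from
# nested peripheral regions; these numbers are invented for this sketch.
REGION_RADII = [80.0, 160.0, 240.0]

def classify_region(point, gaze_point):
    """Return 0 for the vicinity region, and 1, 2, ... for successively
    farther peripheral regions, based on distance from the gaze point.
    Drawing is simplified more strongly in higher-numbered regions."""
    dist = math.hypot(point[0] - gaze_point[0], point[1] - gaze_point[1])
    for region, radius in enumerate(REGION_RADII):
        if dist <= radius:
            return region
    return len(REGION_RADII)  # outermost peripheral region
```

A model at distance 50 from the gaze point would fall in region 0 (the vicinity region), while one at distance 300 would fall in the outermost peripheral region.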
Description
- The present application is based upon and claims the benefit of priority to Japanese Patent Application No. 2015-089871, filed Apr. 24, 2015, the entire contents of which are incorporated herein by reference.
- The present invention relates to an image processing method and a recording medium storing a program related to a head mounted display having a line of sight detection function.
- As described in JP-T-2009-510540, a head mounted display system is conventionally known that optically compresses a peripheral region of an image more strongly than a center region.
- In a head mounted display, a user's line of sight is not always directed to the center of a display part and may be directed to a periphery of the display part. In this case, the user recognizes degradation in image quality due to a reduced resolution in a peripheral part in the prior art.
- The present invention was conceived in view of such a problem and it is therefore an object of the present invention to provide an image processing method and a recording medium in which a program is recorded that is capable of improving image quality recognized by a user regardless of a gaze direction without increasing a processing load in a head mounted display having a line of sight detection function.
- To achieve the object, an image processing method of the present invention is characterized in that the image processing method is executed by an information processing device communicably connected to a head mounted display including a display device configured to display an image and a sight line detector configured to detect a user's line of sight. The image processing method includes controlling drawing of the image such that the drawing is simplified in a peripheral region of a gaze point of the user as compared to a vicinity region that is nearer to the gaze point than the peripheral region, based on information of the user's line of sight detected by the sight line detector.
- To achieve the object, a program recorded in a recording medium of the present invention is characterized in that an information processing device communicably connected to a head mounted display including a display device configured to display an image and a sight line detector configured to detect a user's line of sight is caused to function as a drawing control part. The drawing control part controls drawing of the image such that the drawing is simplified in a peripheral region of a gaze point of the user as compared to a vicinity region that is nearer to the gaze point than the peripheral region, based on information of the user's line of sight detected by the sight line detector.
- Since optical characteristics of the eye generally degrade as a distance from the center position of the retina increases, degradation in image quality may not be recognized by a user and therefore may be acceptable in a peripheral region at a predetermined distance from the gaze point.
- The program recorded in the recording medium of the present invention causes the information processing device communicably connected to the head mounted display to function as the drawing control part. This drawing control part controls drawing of an image such that the drawing is simplified in the peripheral region of the user's gaze point as compared to the vicinity region of the gaze point based on the information on the user's line of sight detected by the sight line detector. Since the drawing is controlled in accordance with the direction of the user's line of sight, the simplified drawing in the peripheral region is hardly recognized by the user regardless of the direction of the line of sight. Because of the simplified drawing in the peripheral region, the vicinity region of the gaze point can accordingly be drawn with higher image quality without increasing a processing load. Therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
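By way of illustration only, the per-frame decision made by such a drawing control part may be sketched as follows. The function name, the model list format, and the vicinity radius are assumptions of this sketch; the present description specifies only that drawing is simplified outside the vicinity region.

```python
# Illustrative threshold (in pixels) for the vicinity region; this value
# is invented for this sketch and is not specified in the description.
VICINITY_RADIUS = 100.0

def control_drawing(models, gaze_point):
    """Map each (name, (x, y)) model to 'full' or 'simplified' drawing.

    Models whose on-screen position lies within VICINITY_RADIUS of the
    user's gaze point are drawn at full quality; the rest are simplified,
    reducing the processing load where the user hardly notices."""
    plan = {}
    for name, (x, y) in models:
        dist2 = (x - gaze_point[0]) ** 2 + (y - gaze_point[1]) ** 2
        plan[name] = "full" if dist2 <= VICINITY_RADIUS ** 2 else "simplified"
    return plan
```

Run once per frame with the latest gaze point from the sight line detector, this keeps full-quality drawing confined to the region the user is actually looking at.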
- Preferably, the image processing method of the present invention further includes storing a plurality of models drawn at different levels of image quality for each type of the model. The controlling drawing of the image includes switching the model located in the peripheral region to the model with lower image quality as compared to the model located in the vicinity region.
- Preferably, the program recorded in the recording medium of the present invention further causes the information processing device to function as a storage part that stores a plurality of models drawn at different levels of image quality for each type of the model. The drawing control part switches the model located in the peripheral region to the model with lower image quality as compared to the model located in the vicinity region.
- The program recorded in the recording medium of the present invention further causes the information processing device to function as the storage part. This storage part stores a plurality of models drawn at different levels of image quality for each type of the model. The drawing control part switches the model located in the peripheral region of the gaze point to the model with the lower image quality as compared to the model located in the vicinity region of the gaze point.
- Since the drawing is controlled for each model, an increase in the processing load can be suppressed as compared to the case of controlling the drawing of the entire region including a background and, since the model tending to attract user's attention is subjected to the drawing control, the apparent image quality recognized by the user can effectively be improved.
- In the image processing method of the present invention, preferably, the controlling drawing of the image includes switching the model located in the vicinity region to the model with highest image quality out of corresponding type of the models stored.
- In the program recorded in the recording medium of the present invention, preferably, the drawing control part switches the model located in the vicinity region to the model with highest image quality out of corresponding type of the models stored in the storage part.
- In the present invention, the drawing control part switches the model located in the vicinity region to the model with highest image quality out of the corresponding type of the models stored in the storage part. As a result, the image quality can be maximized in the easily recognized vicinity region while lowering the image quality of the peripheral region hardly recognized by the user and, therefore, processing performance can be optimized.
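By way of illustration only, the storage part and the switching rule described above may be sketched as follows, assuming that each model type stores its variants sorted from highest to lowest image quality. The model names and the number of quality levels are invented for this sketch.

```python
# Hypothetical model store: for each model type, variants sorted from
# highest to lowest image quality (names are illustrative only).
MODEL_STORE = {
    "character_A": ["character_A_hi", "character_A_mid", "character_A_lo"],
    "tree":        ["tree_hi", "tree_lo"],
}

def pick_variant(model_type, in_vicinity):
    """Vicinity region: the highest-quality variant stored for the type.
    Peripheral region: a lower-quality variant (here, the lowest stored)."""
    variants = MODEL_STORE[model_type]
    return variants[0] if in_vicinity else variants[-1]
```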
- In the image processing method of the present invention, preferably, the storing a plurality of models includes storing a plurality of models having different levels of polygon count for each type of the model. The controlling drawing of the image includes switching the model located in the peripheral region to the model having smaller polygon count as compared to the model located in the vicinity region.
- In the program recorded in the recording medium of the present invention, preferably, the storage part stores a plurality of models having different levels of polygon count for each type of the model. The drawing control part switches the model located in the peripheral region to the model having smaller polygon count as compared to the model located in the vicinity region.
- In the present invention, the storage part stores a plurality of models having different levels of polygon count for each type of the model. The drawing control part switches the model located in the peripheral region to the model having smaller polygon count as compared to the model located in the vicinity region.
- As a result, an image can be drawn with higher quality with a relatively larger polygon count in the easily recognized vicinity region while simplifying the drawing with a relatively smaller polygon count in the peripheral region hardly recognized by the user and, therefore, the image quality recognized by the user can further be improved regardless of the gaze direction.
- In the image processing method of the present invention, preferably, the storing a plurality of models includes storing a plurality of models having different levels of texture resolution for each type of the model. The controlling drawing of the image includes switching the model located in the peripheral region to the model having lower texture resolution as compared to the model located in the vicinity region.
- In the program recorded in the recording medium of the present invention, preferably, the storage part stores a plurality of models having different levels of texture resolution for each type of the model. The drawing control part switches the model located in the peripheral region to the model having lower texture resolution as compared to the model located in the vicinity region.
- In the present invention, the storage part stores a plurality of models having different levels of texture resolution for each type of the model. The drawing control part switches the model located in the peripheral region to the model having lower texture resolution as compared to the model located in the vicinity region.
- As a result, an image can be drawn with higher quality with a relatively higher texture resolution in the easily recognized vicinity region while simplifying the drawing with a relatively lower texture resolution in the peripheral region hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
- In the image processing method of the present invention, preferably, the storing a plurality of models includes storing a plurality of models having different degrees of shader effect for each type of the model. The controlling drawing of the image includes switching the model located in the peripheral region to the model having smaller degree of shader effect as compared to the model located in the vicinity region.
- In the program recorded in the recording medium of the present invention, preferably, the storage part stores a plurality of models having different degrees of shader effect for each type of the model. The drawing control part switches the model located in the peripheral region to the model having smaller degree of shader effect as compared to the model located in the vicinity region.
- In the present invention, the storage part stores a plurality of models having different degrees of shader effect for each type of the model. The drawing control part switches the model located in the peripheral region to the model having smaller degree of shader effect as compared to the model located in the vicinity region.
- As a result, an image can be drawn with higher quality at a relatively larger degree of shader effect in the easily recognized vicinity region while simplifying the drawing at a relatively smaller degree of shader effect in the peripheral region hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
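By way of illustration only, the three factors above (polygon count, texture resolution, and degree of shader effect) may be lowered together in stages, as in the following sketch. All numeric values are invented for this sketch; the description states only that each factor decreases in the peripheral region.

```python
# Illustrative LOD table: one row per region, from the vicinity region
# (full quality) to the outermost peripheral region (most simplified).
LOD_TABLE = [
    # (polygon count, texture resolution in px, shader effect degree)
    (20000, 2048, 3),  # vicinity region
    (5000,  1024, 2),  # inner peripheral region
    (1000,  256,  0),  # outer peripheral region
]

def lod_params(region_index):
    """Clamp the region index into the table and return its parameters."""
    region_index = min(region_index, len(LOD_TABLE) - 1)
    return LOD_TABLE[region_index]
```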
- The image processing method and the program recorded in the recording medium of the present invention enable a head mounted display having a line of sight detection function to improve the image quality recognized by a user regardless of a gaze direction without increasing a processing load.
- FIG. 1 is an explanatory view of an example of an overall configuration of a head mounted display system related to an embodiment.
- FIG. 2 is a block diagram of an example of configurations of a head mounted display and an information processing device related to the embodiment.
- FIG. 3 is a flowchart of an example of process procedures related to adjustment of line of sight detection performed by a CPU of the information processing device.
- FIG. 4 is an explanatory view for explaining an example of a marker display form.
- FIG. 5 is an explanatory view for explaining another example of the marker display form.
- FIG. 6 is an explanatory view for explaining another example of the marker display form.
- FIG. 7 is an explanatory view for explaining another example of the marker display form.
- FIG. 8 is an explanatory view for explaining another example of the marker display form.
- FIG. 9 is a flowchart of an example of process procedures related to drawing control based on a line of sight performed by the CPU of the information processing device.
- FIG. 10 is an explanatory view for explaining an example of a form of the drawing control based on the line of sight.
- FIG. 11 is an explanatory table for explaining an example of storage contents of a storage part.
- FIG. 12 is an explanatory view for explaining another example of a form of the drawing control based on the line of sight.
- FIG. 13 is an explanatory table for explaining another example of storage contents of the storage part.
- FIG. 14 is a block diagram of an example of a hardware configuration of the information processing device.
- An embodiment of the present invention will now be described with reference to the drawings.
- An example of an overall configuration of a head mounted display system 1 related to this embodiment will first be described with reference to FIG. 1. As shown in FIG. 1, the head mounted display system 1 has an information processing device 3 and a head mounted display 5. The information processing device 3 and the head mounted display 5 are communicably connected. Although FIG. 1 shows the case of wired connection, wireless connection may be used.
- The information processing device 3 is a so-called computer. Examples of the computer in this case include not only those manufactured and sold as computers such as server computers, desktop computers, notebook computers, and tablet computers, but also those manufactured and sold as telephones such as portable telephones, smartphones, and phablets, and those manufactured and sold as game machines or multimedia terminals such as portable game terminals, game consoles, and entertainment devices.
- The head mounted display 5 is a display device that can be mounted on the head or face of a user. The head mounted display 5 displays images (including still images and moving images) generated by the information processing device 3. Although FIG. 1 shows a goggle type head mounted display as an example, the head mounted display is not limited to this shape. The head mounted display 5 may be of either a transmission type or a non-transmission type.
- An example of a general configuration of the head mounted display 5 will be described with reference to FIG. 2. As shown in FIG. 2, the head mounted display 5 has a display device 7 displaying an image, a sight line detector 9 detecting a line of sight of a user, and various sensors 11. The display device 7 includes, for example, a liquid crystal display or an organic EL display. The display device 7 has a left-eye display device 7L and a right-eye display device 7R and can display an independent image for each of the left and right eyes of the user to display a 3D image. The display device 7 may not necessarily have two display devices and may be a single display device common to the left and right eyes or may be a single display device corresponding to only one eye, for example.
- The sight line detector 9 includes, for example, a visible light camera and a line of sight information calculation part not shown. An image of a user's eye taken by the visible light camera is sent to the line of sight information calculation part. Based on the image, the line of sight information calculation part defines, for example, the inner corner of the eye as a reference point and the iris (the so-called colored portion of the eye) as a moving point, and calculates the user's line of sight information based on the position of the iris relative to the inner corner of the eye. The line of sight information is information indicative of the direction of the user's line of sight and is, for example, vector information of the line of sight. Although the line of sight information calculation part is implemented in the head mounted display 5 in the description of this embodiment, it may instead be implemented in the information processing device 3. In the case that the line of sight information calculation part is implemented in the head mounted display 5, the line of sight information calculated by the line of sight information calculation part (in other words, detected by the sight line detector 9) is transmitted from the head mounted display 5 to the information processing device 3. In the case that the line of sight information calculation part is implemented in the information processing device 3, an image taken by the visible light camera is transmitted to the information processing device 3, and the line of sight information is calculated in the information processing device 3.
- The sight line detector 9 has a sight line detector 9L for photographing the left eye and a sight line detector 9R for photographing the right eye and can independently detect the line of sight information of each of the left and right eyes of the user. The sight line detector 9 may not necessarily include two devices, and a single sight line detector 9 may photograph only one eye.
- The line of sight detection technique of this embodiment is not limited to the above technique, and various other detection techniques are employable. For example, the
sight line detector 9 may include an infrared LED and an infrared camera. In this case, the infrared camera photographs the eye irradiated by the infrared LED and, based on the photographed image, the line of sight information calculation part defines, for example, a position of reflection light on the cornea (corneal reflex) generated by the irradiation of the infrared LED as the reference point and the pupil as the moving point to calculate the user's line of sight information (line of sight vector) based on the position of the pupil relative to the position of the corneal reflex. Alternatively, the detection technique may include detecting a change in surface electromyography of the user's face or in weak myogenic potential (ocular potential) generated when the eyeball is moved. - In the case that a visible light camera is used as in this embodiment, costs can be reduced as compared to using the infrared camera; however, a problem of reduced detection accuracy occurs. Therefore, it is extremely effective to improve the detection accuracy through adjustment of the line of sight detection as described later.
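By way of illustration only, both camera-based techniques above share the same reference-point/moving-point scheme, which may be sketched as follows. The function name and coordinates are assumptions of this sketch; an actual calculation part would typically apply further calibration and correction.

```python
def line_of_sight_vector(reference_point, moving_point):
    """Return the (dx, dy) offset of the moving point (iris or pupil
    center) from the reference point (inner eye corner or corneal
    reflex) in image coordinates. Downstream calibration then maps
    this vector to a position in the display's visual field."""
    return (moving_point[0] - reference_point[0],
            moving_point[1] - reference_point[1])
```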
- The
various sensors 11 include an acceleration sensor and a gyro sensor, for example. These sensors detect the movement and position of the user's head. Base on the detection results of thevarious sensors 11, theinformation processing device 3 changes the images displayed on thedisplay device 7 in accordance with the movement and position of the user's head so as to achieve realistic virtual reality. - A
communication control part 13 controls communications with theinformation processing device 3. For example, thecommunication control part 13 receives an image to be displayed on thedisplay device 7 from theinformation processing device 3 and transmits the line of sight information detected by thesight line detector 9 and the detection information detected by thevarious sensors 11 to theinformation processing device 3. Thecommunication control part 13 may be implemented by a program executed by a CPU not depicted mounted on the head mounteddisplay 5 or may partially or entirely be implemented by an actual device such as an ASIC, an FPGA, or other electric circuits. - The configuration form of the head mounted
display 5 is not limited to the above description. For example, although not described, the head mounteddisplay 5 may be equipped with an earphone or a headphone. - An example of a functional configuration of the
information processing device 3 will be described with reference toFIG. 2 . It is noted that although the functional configuration of theinformation processing device 3 will be described in terms of a line of sight detection adjustment function and a drawing control function based on the line of sight, a functional configuration related to the other normal functions of the information processing device 3 (e.g., display and reproduction of contents and activation of a game) will not be described. The arrows shown inFIG. 2 indicate an example of signal flow and are not intended to limit the signal flow directions. - As shown in
FIG. 2 , theinformation processing device 3 has adisplay control part 15, ageneration part 17, anidentification part 19, arecording control part 21, adetermination part 23, adrawing control part 25, astorage part 27, and acommunication control part 29. - The
display control part 15 displays a marker 31 (seeFIGS. 4 to 8 ) guiding a user's line of sight on thedisplay device 7 of the head mounteddisplay 5. As described in detail later, themarker 31 has a plurality of display positions (at least two positions) each set at a predetermined position in a two-dimensional coordinate system in a visual field 33 (seeFIGS. 4 to 8 ) of thedisplay device 7. Thedisplay control part 15 displays themarker 31 at the predetermined position to guide the user's line of sight to the predetermined position and causes the user gaze at the position. - In this case, the
display control part 15 displays themarker 31 such that themarker 31 stands still at preset positions in thevisual field 33 of thedisplay device 7, or such that themarker 31 moves through a preset route in thevisual field 33, regardless of the movement and position of the user's head. - The
generation part 17 generates correlation information between the user's line of sight information guided by themarker 31 and the display position of themarker 31. Specifically, when the user gazes at themarker 31 displayed on thedisplay device 7, the line of sight information is detected by thesight line detector 9 and transmitted to theinformation processing device 3. The display positions of themarker 31 are set in advance as described above, and information of the display positions (e.g., coordinates in the two-dimensional coordinate system of the visual field 33) is recorded in, for example, a recording device 117 (seeFIG. 14 ) of theinformation processing device 3. Thegeneration part 17 acquires the information of the user's line of sight guided by themarker 31 and detected by thesight line detector 9 and reads the display position information of themarker 31 from therecording device 117 etc. Thegeneration part 17 performs this operation for a plurality of display positions of themarker 31 to generate the correlation information between the line of sight information and the display positions. - The format of the correlation information is not particularly limited and may be, for example, a table correlating the coordinates of the
visual field 33 with the line of sight information or may be, for example, an arithmetic expression for correcting the line of sight information to corresponding coordinates. In the case that thesight line detector 9 independently detects the line of sight information of each of the left and right eyes of the user, the format may be two tables correlating the line of sight information of one eye with the coordinates and the line of sight information of the other eye with the coordinates, or may be a plurality of arithmetic expressions for correcting the pieces of the line of sight information to corresponding coordinates. Alternatively, the pieces of the line of sight information of the left and right eyes may be integrated into a piece of the line of sight information by a predetermined arithmetic expression etc., before generating the correlation information. The generated correlation information is written and recorded by therecording control part 21 into therecording device 117 including a hard disk, for example. - The
identification part 19 identifies a user's gaze position (also referred to as a gaze point) based on the line of sight information by using the correlation information. Specifically, after completion of the adjustment of the line of sight detection described above (from the display of themarker 31 to the recording of the correlation information), when the user gazes at a predetermined position in thevisual field 33 of thedisplay device 7 during, for example, display or reproduction of contents or activation of a game, the line of sight information in this case is detected by thesight line detector 9 and transmitted to theinformation processing device 3. Theidentification part 19 refers to the correlation information read from therecording device 117 to identify a position (coordinates) corresponding to the acquired line of sight information, i.e., a user's gaze position. - The
recording control part 21 writes and reads various pieces of information into and from therecording device 117. For example, therecording control part 21 correlates the correlation information generated by thegeneration part 17 with identification information of a user corresponding to the correlation information and writes and records the correlation information along with the identification information for each user in therecording device 117. - The
determination part 23 determines whether a user using the head mounteddisplay 5 is a user having the recorded correlation information. Specifically, when using the head mounteddisplay 5, a user inputs identification information (such as a login ID) by using an input device 113 (seeFIG. 14 ) etc. of theinformation processing device 3, and thedetermination part 23 determines whether the correlation information corresponding to the input identification information is recorded in therecording device 117. In the case that the correlation information corresponding to the input identification information is recorded in therecording device 117, thedetermination part 23 determines that the user is a user having the recorded correlation information. In this case, therecording control part 21 reads the corresponding correlation information from therecording device 117 and theidentification part 19 uses the read correlation information to identify the gaze position of the user based on the line of sight information. - Based on the user's line of sight information detected by the
sight line detector 9, thedrawing control part 25 controls drawing of an image displayed on thedisplay device 7 such that the drawing is simplified in a peripheral region of a user's gaze point 39 (seeFIGS. 10 and 12 ) as compared to a vicinity region of thegaze point 39. The vicinity region is nearer to thegaze point 39 than the peripheral region. Specifically, thestorage part 27 stores M (seeFIGS. 11 and 13 ) a plurality of models drawn at different levels of image quality for each type of the model, and thedrawing control part 25 switches the model M located in the peripheral region of thegaze point 39 to a model with lower image quality as compared to the model M located in the vicinity region in thevisual field 33 of thedisplay device 7. Thedrawing control part 25 switches the model M located in the vicinity region of thegaze point 39 to the model with highest image quality out of the corresponding models M stored in thestorage part 27. - The models M are person models, character models, background models, etc., displayed in the
visual field 33 by thedisplay device 7 during execution of a game, for example, and may include icons displayed on a menu screen etc. during display or reproduction of contents, for example. Although thedrawing control part 25 switches the models M to control the drawing for each model in the description of this embodiment, the drawing may be controlled also in an image portion other than the models, such as a background image, a motion image, and an effect image, for example. - As described in detail later, the
storage part 27 stores for each of the models M a plurality of models drawn at different levels of image quality as described above. The models drawn at different levels of image quality are, for example, models having different levels of polygon count, models having different levels of texture resolution, and models having different degrees of shader effect. These factors related to the image quality may be changed in stages independently of each other in the drawing, or two or more of these factors may compositely be changed in stages in the drawing. The factors related to the image quality are not limited to the above factors and may include other factors. - The
communication control part 29 controls communications with the head mounteddisplay 5. A mode of communications between the 13, 29 is not particularly limited as long as information can be transmitted and received. As described, either wired or wireless communication may be employed.control parts - The processes in the processing parts described above are not limited to the example in terms of assignment of these processes and may be executed by the fewer number of the processing parts (e.g., one processing part) or may be executed by more finely divided processing parts. The functions of the processing parts described above are implemented by a program executed by a
CPU 101 described later (seeFIG. 14 ) or may partially or entirely be implemented by an actual device such as an ASIC, an FPGA, or other electric circuits, for example. - An example of process procedures related to the adjustment of the line of sight detection performed by the
CPU 101 of theinformation processing device 3 will be described with reference toFIG. 3 . - At step S5, the
information processing device 3 uses the determination part 2 to determine whether the correlation information of a user using the head mounteddisplay 5 is recorded in therecording device 117. In particular, the determination part 2 uses the identification information (such as a login ID) input by the user through theinput device 113 etc., to determine whether the corresponding correlation information is recorded in therecording device 117. In the case that the correlation information is recorded in the recording device 117 (step S5:NO), theinformation processing device 3 goes to step S10. - At step S10, the
information processing device 3 uses therecording control part 21 to read the correlation information corresponding to the input identification information from therecording device 117. Subsequently, theinformation processing device 3 goes to step S40. - On the other hand, in the case that the correlation information is not recorded in the
recording device 117 at step S5 (step S5:YES), theinformation processing device 3 goes to step S15. - At step S15, the
information processing device 3 uses thedisplay control part 15 to display themarker 31 guiding the user's line of sight on thedisplay device 7 of the head mounteddisplay 5. A display form of themarker 31 in this case will be described later (FIGS. 4 to 8 ). - At step S20, the
information processing device 3 uses thegeneration part 17 to acquire the information of the user's line of sight guided by themarker 31 and detected by thesight line detector 9. - At step S25, the
information processing device 3 determines whether the display of themarker 31 is completed. As described in detail later, themarker 31 is statically displayed at a plurality of predetermined locations in a predetermined order in thevisual field 33 of thedisplay device 7, or themarker 31 moves continuously through a route along the plurality of the locations. In the case that this display sequence of themarker 31 is not completed to the end (step S25:NO), theinformation processing device 3 returns to step S15 to repeat the display of themarker 31 and the acquisition of the line of sight information. In the case that the display of themarker 31 is completed (step S25:YES), theinformation processing device 3 goes to step S30. - At step S30, the
information processing device 3 uses thegeneration part 17 to generate the correlation information. In particular, thegeneration part 17 reads from therecording device 117 etc. the display position information of themarker 31 corresponding to each of the pieces of line of sight information acquired at step S20 and generates the correlation information between the line of sight information and the display positions. The correlation information is, for example, a correlation table or a correction arithmetic expression as described above. - At step S35, the
information processing device 3 uses the recording control part 21 to write and record the correlation information generated at step S30 into the recording device 117.
- With the process procedures from step S15 to step S35, the adjustment of the line of sight detection is completed. The processes from step S40 onward are executed during, for example, the display or reproduction of contents or the activation of a game by the
information processing device 3 after completion of the adjustment of the line of sight detection. - At step S40, the
information processing device 3 acquires the user's line of sight information detected by thesight line detector 9. - At step S45, the
information processing device 3 uses theidentification part 19 to refer to the correlation information read from therecording device 117 so as to identify the user's gaze position corresponding to the line of sight information acquired at step S40. This identified gaze position is utilized for various applications using line of sight input (e.g., presentation of a game). - At step S50, the
information processing device 3 determines whether the user has input a termination instruction (e.g., a termination instruction for the display or reproduction of contents or a termination instruction for a game) by using the input device 113 etc. In the case that the termination instruction is not input (step S50: NO), the information processing device 3 returns to step S40 to repeat the acquisition of the line of sight information, the identification of the gaze position, etc. In the case that the termination instruction is input (step S50: YES), this flow is terminated.
- The process procedures described above are an example and may at least partially be deleted or modified, or other procedures may be added. For example, the generated correlation information need not be recorded and reused; the line of sight detection may instead be adjusted each time the head mounted
display 5 is used. This eliminates the need for steps S5 and S10. For example, theinformation processing device 3 may only temporarily retain the correlation information generated at step S30 in, for example, a RAM 105 (seeFIG. 14 ), so as to use the correlation information without recording into therecording device 117. This eliminates the need for step S35. - An example of the display form of the
marker 31 will be described with reference toFIGS. 4 to 8 . - As shown in
FIG. 4 , themarker 31 is displayed in the user'svisual field 33 in thedisplay device 7. Although the shape of thevisual field 33 is substantially rectangular in examples shown inFIG. 4 etc., this is not a limitation and the shape may be an ellipse or a polygonal shape other than a rectangle, for example. As described above, themarker 31 has a plurality of display positions set in advance at predetermined coordinate positions in a two-dimensional coordinate system (an X-Y coordinate system shown inFIG. 4 ) in thevisual field 33. - In the example shown in
FIG. 4 , the display positions of themarker 31 are set at five positions, one at a center position and the others close to four corners of thevisual field 33. The center position is a position of an intersection of two diagonal lines of a rectangle, for example. When it is assumed that the X-axis positive side, the X-axis negative side, the Y-axis positive side, and the Y-axis negative side are on the right, left, upper, and lower sides, respectively, in the X-Y coordinate system shown inFIG. 4 for convenience of description, themarker 31 in the example shown inFIG. 4 is sequentially statically displayed in the order of the center position, a position close to the upper left corner, a position close to the lower left corner, a position close to the upper right corner, and a position close to the lower right corner. This order of display of themarker 31 is not a limitation and may be changed such that, for example, themarker 31 is displayed at the center position at the end or in the upper left corner at the start. - The
marker 31 is statically displayed at each position for a constant time (e.g., about 1 second). While the user gazes at themarker 31 for the constant time, thesight line detector 9 detects the line of sight information. In this situation, for example, thesight line detector 9 may detect the line of sight information a plurality of times within the constant time and obtain an average etc. as the line of sight information corresponding to the display position of themarker 31. Alternatively, for example, the line of sight information detected at certain timing within the constant time may be defined as the line of sight information corresponding to the display position of themarker 31. - The
marker 31 is drawn as a graphic of a black circle in this example. Themarker 31 is not limited to this drawing form and may be, for example, a graphic of another shape such as a polygonal shape and an ellipse, a character, a sign, or an icon of a mascot etc. Additionally, themarker 31 may have a pattern, color, etc. Therefore, themarker 31 may be any marker easy for the user to gaze at and the drawing form thereof is not particularly limited. - The
display control part 15 displays themarker 31 such that themarker 31 stands still at the five positions, one at the center position and the others close to the four corners of thevisual field 33, regardless of the movement and position of the user's head. As a result, even in the case that the user moves the head during adjustment of the line of sight detection, themarker 31 is displayed at the set positions without being affected by the movement and the line of sight detection can properly be adjusted. - In the example shown in
FIG. 5 , the display positions of themarker 31 are set at three positions, one at the center position and the others close to two corners on one diagonal line of thevisual field 33. In this example, themarker 31 is sequentially statically displayed in the order of the center position, the position close to the upper left corner, and the position close to the lower right corner. Instead of the two corners described above, themarker 31 may be displayed in two corners on the other diagonal line, i.e., the positions close to the upper right corner and the lower left corner. This order of display of themarker 31 is not a limitation and may be changed. - By acquiring the line of sight information in the two corners on one diagonal line of the
visual field 33, the line of sight information in the two corners on the other diagonal line can be estimated. Therefore, the display positions of themarker 31 can be reduced to shorten the adjustment time in this example by displaying themarker 31 at the center position and two corners on a diagonal line out of the four corners of thevisual field 33. - In the example shown in
FIG. 6 , the display positions of themarker 31 are set only at two positions close to two corners on one diagonal line of thevisual field 33. In this example, themarker 31 is sequentially statically displayed in the order of the position close to the upper left corner and the position close to the lower right corner. Instead of the two corners described above, themarker 31 may be displayed at two corners on the other diagonal line, i.e., the positions close to the upper right corner and the lower left corner. Themarker 31 may be displayed in reverse order. - By acquiring the line of sight information in the two corners on one diagonal line of the
visual field 33, the line of sight information in the two corners on the other diagonal line can be estimated as described above. Additionally, since the center position is the substantially midpoint between the two corners on the diagonal line, the line of sight information at the center position of thevisual field 33 can also be estimated by obtaining an average of the line of sight information in the two corners, for example. Therefore, the display positions of themarker 31 can be minimized to further shorten the adjustment time in this example by displaying themarker 31 only at the two corners on one diagonal line. - In the example shown in
FIG. 7 , the displayed marker 31 continuously moves through a preset route instead of being statically displayed as shown in FIGS. 4 to 6 . In this example, the displayed marker 31 continuously moves through a route 35 including the display positions shown in FIG. 4 , i.e., five positions at the center position and close to the four corners of the visual field 33, in the order of the center position, the position close to the upper left corner, the position close to the lower left corner, the position close to the upper right corner, and the position close to the lower right corner.
- The user moves the line of sight to follow and gaze at the moving
marker 31. Thesight line detector 9 detects the line of sight information at least when themarker 31 passes through reference positions, i.e., at least when themarker 31 passes through the five positions at the center position and the four corners in this example. In addition to these five positions, the line of sight information may also be detected at the midpoints thereof. - The movement route of the
marker 31 is not limited to the above example and may be any route at least including two corners on a diagonal line. The order of movement of themarker 31 is not limited to the above example and may be changed. - In the example shown in
FIG. 8 , a frame 37 surrounding a display area of the marker 31 is displayed along with the marker 31 in the visual field 33. Since the marker 31 is displayed at the display positions shown in FIG. 4 , i.e., the five positions at the center position and close to the four corners of the visual field 33 in this example, the substantially rectangular frame 37 is displayed to surround these five display positions. The frame 37 is not limited to this shape and may have, for example, an elliptical shape or a polygonal shape other than a rectangle in accordance with the display positions of the marker 31. As is the case with the marker 31, the display control part 15 displays the frame 37 such that the frame 37 stands still at a preset position in the visual field 33 regardless of the movement and position of the user's head. As a result, even in the case that the user moves the head during adjustment of the line of sight detection, the frame 37 is displayed along with the marker 31 at the set position without being affected by the movement, and the line of sight detection can properly be adjusted.
- An example of process procedures related to drawing control based on the line of sight performed by the
CPU 101 of theinformation processing device 3 will be described with reference toFIG. 9 . Processes shown inFIG. 9 are executed during, for example, the display or reproduction of contents or the activation of a game by theinformation processing device 3 after completion of the adjustment of the line of sight detection described above. - At step S105, the
information processing device 3 acquires the user's line of sight information detected by thesight line detector 9. - At step S110, the
information processing device 3 uses theidentification part 19 to identify the gaze point based on the line of sight information acquired at step S105. The gaze point is identified with a technique as described above and theidentification part 19 refers to the correlation information read from therecording device 117 to identify the user's gaze point corresponding to the line of sight information acquired at step S105. - At step S115, the
information processing device 3 uses the drawing control part 25 to switch a model located in a vicinity region of the gaze point to the model with the highest image quality (a high model, described later) out of the corresponding models stored in the storage part 27, based on the gaze point identified at step S110.
- At step S120, the
information processing device 3 uses the drawing control part 25 to switch a model located in a peripheral region of the gaze point to a model with lower image quality (a low model or a middle model, described later) as compared to the model located in the vicinity region, based on the gaze point identified at step S110.
- At step S125, the
information processing device 3 determines whether the user has input a termination instruction (e.g., a termination instruction for the display or reproduction of contents or a termination instruction for a game) by using the input device 113 etc. In the case that the termination instruction is not input (step S125: NO), the information processing device 3 returns to step S105 to repeat the acquisition of the line of sight information, the identification of the gaze position, the switching of models, etc. In the case that the termination instruction is input (step S125: YES), this flow is terminated.
- The process procedures described above are an example and may at least partially be deleted or modified, or other procedures may be added. For example, step S115 and step S120 may be performed in reverse order.
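To make the role of the correlation information concrete, the sketch below pairs the calibration of steps S15 to S35 with the gaze identification of step S110. It is only an illustration: the patent leaves the form of the "correction arithmetic expression" open, so an independent least-squares linear fit per axis is assumed here, and all function names are hypothetical.

```python
def fit_axis(raw, actual):
    """Least-squares fit of actual = a * raw + b along one axis."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_a = sum(actual) / n
    cov = sum((r - mean_r) * (a - mean_a) for r, a in zip(raw, actual))
    var = sum((r - mean_r) ** 2 for r in raw)
    a = cov / var
    return a, mean_a - a * mean_r


def build_correlation(samples):
    """samples: list of ((raw_x, raw_y), (marker_x, marker_y)) pairs,
    one per marker display position (steps S15 to S30)."""
    raw, marker = zip(*samples)
    return (fit_axis([r[0] for r in raw], [m[0] for m in marker]),
            fit_axis([r[1] for r in raw], [m[1] for m in marker]))


def identify_gaze_point(correlation, raw_gaze):
    """Step S110: map a raw detector reading to a display position."""
    (ax, bx), (ay, by) = correlation
    return ax * raw_gaze[0] + bx, ay * raw_gaze[1] + by
```

Once built at step S30, such a correlation could be recorded (step S35) and then reapplied to every detector reading acquired at step S105.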
- A form of the drawing control based on the line of sight described above will be described with reference to
FIGS. 10 to 13 . - An example shown in
FIGS. 10 and 11 is the case of switching two types of models drawn at different levels of image quality. As shown inFIG. 10 , this example includes five person models M1-M5 displayed in thevisual field 33. The type, the number, the positions etc. of the models are not limited thereto. The images (such as background) other than the models are not shown. - As shown in
FIG. 11 , the storage part 27 stores for each type of the models M1-M5 a high model with relatively higher image quality and a low model with relatively lower image quality. In this example, the high model is set to have a relatively larger polygon count, a relatively higher texture resolution, and a relatively larger degree of shader effect. The low model is set to have a relatively smaller polygon count, a relatively lower texture resolution, and a relatively smaller degree of shader effect.
- Polygons are polygonal elements used for representing a three-dimensional shape, mainly in three-dimensional computer graphics, and a larger polygon count enables finer and more elaborate drawing. A texture is an image applied to represent a quality (e.g., gloss and unevenness) of a surface of an object, and a higher texture resolution enables finer and more elaborate drawing. A shader is a function for shadow processing etc., and a larger degree of shader effect enables finer and more elaborate drawing. The shader function may be implemented by hardware or software (a so-called programmable shader).
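The per-model storage of FIG. 11 can be pictured as a small table keyed by model type and quality level. The sketch below is illustrative only; the concrete polygon counts, texture sizes, and shader levels are hypothetical placeholders, not values from the patent.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelVariant:
    polygon_count: int        # more polygons -> finer shape
    texture_resolution: int   # texture edge length in texels
    shader_level: int         # 0 = minimal shadow processing


# storage part 27: for each model type, one variant per quality level
STORAGE = {
    name: {
        "high": ModelVariant(polygon_count=20000,
                             texture_resolution=2048,
                             shader_level=2),
        "low": ModelVariant(polygon_count=2000,
                            texture_resolution=256,
                            shader_level=0),
    }
    for name in ("M1", "M2", "M3", "M4", "M5")
}
```

Each quality factor here changes together with the others, matching the composite staging of FIG. 11; preparing variants where each factor changes independently would simply mean more entries per model.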
- In the example shown in
FIG. 11 , although these factors related to image quality are compositely changed in stages in the models, models may also be prepared such that each of the factors is independently changed in stages. The factors related to image quality are not limited to the above example and may include other factors such as an LOD (level of detail), for example.
- As shown in
FIG. 10 , a vicinity region 41 is set in the vicinity of the gaze point 39 of the user. In this example, the vicinity region 41 is set as a circular region having a predetermined radius around the gaze point 39 in the two-dimensional coordinate system in the visual field 33. In general, since the optical characteristics of the user's eyes degrade as the distance from the center of the retina increases, a change in image quality becomes harder for the user to recognize in regions more distant from the gaze point 39 in the visual field 33. The radius of the vicinity region 41 is set in consideration of this fact: a change in image quality is easily recognized by the user inside the vicinity region 41, while a change in image quality is hardly recognized by the user in the region outside the vicinity region 41, i.e., in a peripheral region 43 on the periphery of the gaze point 39.
-
FIG. 10 shows an example of the user'sgaze point 39 moving from a substantially center position to the substantially lower left in thevisual field 33. As shown in an upper portion ofFIG. 10 , when thegaze point 39 is at the substantially center position, a model M1 located in thevicinity region 41 is displayed as the high model with the highest image quality out of the corresponding models M1 stored in thestorage part 27. Models M2-M5 located in theperipheral region 43 are displayed as the low models with lower image quality as compared to the model M1 located in thevicinity region 41. As a result, the drawing of theperipheral region 43 is simplified as compared to the drawing of thevicinity region 41. - As shown in a lower portion of
FIG. 10 , when the gaze point 39 has moved, the model M3 consequently located in the vicinity region 41 is switched and displayed as the high model with the highest image quality out of the corresponding models M3 stored in the storage part 27. The model M1 consequently located in the peripheral region 43 is switched and displayed as the low model with lower image quality as compared to the model M3 located in the vicinity region 41. The models M2, M4, M5 remain in the peripheral region 43 before and after the movement of the gaze point 39 and are kept displayed as the low models. As a result, the drawing of the peripheral region 43 of the gaze point 39 is simplified as compared to the drawing of the vicinity region 41.
- Even when the drawing is simplified as described above, the models are not displayed as blurred. Since the polygon count is decreased and the quality of object surfaces and the accuracy of the shadow processing are lowered, the models are drawn more roughly while outlines such as contours remain sharp, for example.
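The two-level switching walked through above amounts to a per-frame distance test against the vicinity radius. A minimal sketch follows, with hypothetical names; for simplicity it tests a model's representative position rather than its full extent:

```python
import math


def select_variant(model_pos, gaze_point, vicinity_radius):
    """'high' inside the circular vicinity region 41, 'low' in the
    peripheral region 43 (two-level switching of FIGS. 10 and 11)."""
    d = math.hypot(model_pos[0] - gaze_point[0],
                   model_pos[1] - gaze_point[1])
    return "high" if d <= vicinity_radius else "low"


def update_frame(model_positions, gaze_point, vicinity_radius):
    """model_positions: {name: (x, y)}; returns {name: variant key}."""
    return {name: select_variant(pos, gaze_point, vicinity_radius)
            for name, pos in model_positions.items()}
```

With M1 at the center and M3 at the lower left, moving the gaze point from the center to M3's position swaps which model is drawn as the high model, mirroring the upper and lower panels of FIG. 10.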
- The models located in the
vicinity region 41 may be limited to those entirely located inside thevicinity region 41 or may include those partially located inside thevicinity region 41. In the latter case, a proportion may be set such that, for example, a model is considered located inside when more than half of the model is located inside thevicinity region 41. - Although two types of models with different image qualities are switched in the above description, three or more types of models may be switched. An example shown in
FIGS. 12 and 13 is the case of switching three types of models drawn at different levels of image quality. As shown inFIG. 12 , this example includes the five person models M1-M5 displayed in thevisual field 33. - As shown in
FIG. 13 , thestorage part 27 stores for each of the models M1-M5 a high model with highest image quality and a low model with lowest image quality as well as a middle model with intermediate image quality. The middle model has the factors of the polygon count, the texture resolution, and the shader effect all set to middle levels between the high model and the low model. - As shown in
FIG. 12 , the vicinity region 41 is set in the vicinity of the gaze point 39 of the user. Additionally, in this example, a peripheral region set as the region outside the vicinity region 41 is divided into a first peripheral region 45 relatively close to the gaze point 39 and a second peripheral region 47 more distant from the gaze point 39 as compared to the first peripheral region 45. Since a change in image quality becomes harder for the user to recognize in regions more distant from the gaze point 39 due to the optical characteristics of the eyes, a change in image quality is harder for the user to recognize in the second peripheral region 47 than in the first peripheral region 45.
-
FIG. 12 shows an example of the user'sgaze point 39 moving from the substantially center position to the substantially lower left in thevisual field 33. As shown in an upper portion ofFIG. 12 , when thegaze point 39 is at the substantially center position, the model M1 located in thevicinity region 41 is displayed as the high model with the highest image quality out of the corresponding models M1 stored in thestorage part 27. The models M3, M5 located in the firstperipheral region 45 are displayed as the middle models with lower image quality as compared to the model M1 located in thevicinity region 41. The models M2, M4 located in the secondperipheral region 47 are displayed as the low models with further lower image quality even as compared to the models M3, M5 located in the firstperipheral region 45. - As shown in a lower portion of
FIG. 12 , when thegaze point 39 has moved, the model M3 consequently located in thevicinity region 41 is switched and displayed as the high model with the highest image quality out of the corresponding models M3 stored in thestorage part 27. The model M1 consequently located in the firstperipheral region 45 is switched and displayed as the middle model with lower image quality as compared to the model M3 located in thevicinity region 41. The model M2 changed from the secondperipheral region 47 to the firstperipheral region 45 is switched from the low model to the middle model, while the model M5 changed from the firstperipheral region 45 to the secondperipheral region 47 is switched from the middle model to the low model. The model M4 is located in the secondperipheral region 47 without change before and after the movement of thegaze point 39 and is kept displayed as the low model. - As described above, since the peripheral region is further divided into two in accordance with a distance from the
gaze point 39 and each model has models prepared at different levels of image quality in accordance with the number of regions in this example, the drawing control can be more finely performed. Although the peripheral region is divided into two in the above description, the peripheral region may further finely be divided and each model may have models prepared at more different levels of image quality. - A hardware configuration example will be described for the
information processing device 3 achieving the processing parts implemented by a program executed by theCPU 101 described above, with reference toFIG. 14 . - As shown in
FIG. 14 , theinformation processing device 3 has, for example, aCPU 101, aROM 103, aRAM 105, a dedicatedintegrated circuit 107 constructed for specific use such as an ASIC or an FPGA, aninput device 113, anoutput device 115, arecording device 117, adrive 119, aconnection port 121, and acommunication device 123. These constituent elements are mutually connected via abus 109 and an input/output (I/O)interface 111 such that signals can be transferred. - The program can be recorded in a recording device such as the
ROM 103, theRAM 105, and therecording device 117, for example. - The program can also temporarily or permanently be recorded in a
removable recording medium 125 such as various optical disks including CDs, MO disks, and DVDs, and semiconductor memories. Theremovable recording medium 125 as described above can be provided as so-called packaged software. In this case, the program recorded in theremovable recording medium 125 may be read by thedrive 119 and recorded in therecording device 117 through the I/O interface 111, thebus 109, etc. - The program may be recorded in, for example, a download site, another computer, or another recording medium (not shown). In this case, the program is transferred through a network NW such as a LAN and the Internet and the
communication device 123 receives this program. The program received by thecommunication device 123 may be recorded in therecording device 117 through the I/O interface 111, thebus 109, etc. - The program may be recorded in appropriate
external connection equipment 127, for example. In this case, the program may be transferred through theappropriate connection port 121 and recorded in therecording device 117 through the I/O interface 111, thebus 109, etc. - The
CPU 101 executes various processes in accordance with the program recorded in the recording device 117 to implement the processes of the display control part 15, the drawing control part 25, etc. In this case, the CPU 101 may directly read and execute the program from the recording device 117, or may execute the program after loading it into the RAM 105. In the case that the CPU 101 receives the program through, for example, the communication device 123, the drive 119, or the connection port 121, the CPU 101 may directly execute the received program without recording it in the recording device 117.
- The
CPU 101 may execute various processes based on a signal or information input from theinput device 113 such as a mouse, a keyboard, a microphone, and a game operating device (not shown) as needed. - The
CPU 101 may output a result of execution of the process from theoutput device 115 such as a display device and a sound output device, for example, and theCPU 101 may transmit this process result to thecommunication device 123 or theconnection port 121 as needed or may record the process result into therecording device 117 or theremovable recording medium 125. - As described above, the program of this embodiment causes the
information processing device 3 communicably connected to the head mounted display 5 to act as the drawing control part 25. Based on the user's line of sight information detected by the sight line detector 9, the drawing control part 25 controls drawing of an image such that the drawing is simplified in the peripheral region 43 etc. (including the peripheral regions 45 and 47; the same applies hereinafter) of the user's gaze point 39 as compared to the vicinity region 41 of the gaze point 39. Since the drawing is controlled in accordance with the direction of the user's line of sight, the simplified drawing of the peripheral region 43 etc. is hardly recognized by the user regardless of the direction of the line of sight. Because of the simplified drawing of the peripheral region 43 etc., the vicinity region 41 of the gaze point 39 can accordingly be drawn with higher image quality without increasing the processing load of the CPU 101 etc. Therefore, the image quality recognized by the user can be improved regardless of the gaze direction.
- The program of this embodiment further causes the
information processing device 3 to act as thestorage part 27. Thestorage part 27 stores for each model M a plurality of models drawn at different levels of image quality. Thedrawing control part 25 switches the model M located in theperipheral region 43 etc. of thegaze point 39 to the model with the lower image quality as compared to the model M located in thevicinity region 41 of thegaze point 39. - Since the drawing is controlled for each model, an increase in the processing load of the
CPU 101 etc. can be suppressed as compared to the case of controlling the drawing of the entire region including a background and, since the model tending to attract user's attention in thevisual field 33 is subjected to the drawing control, the apparent image quality recognized by the user can effectively be improved. - In this embodiment, the
drawing control part 25 switches the model M located in thevicinity region 41 to the model with the highest image quality out of the corresponding models M stored in thestorage part 27. As a result, the image quality can be maximized in the easily recognizedvicinity region 41 while lowering the image quality of theperipheral region 43 etc. hardly recognized by the user and, therefore, performance of theCPU 101 etc. related to the image can be optimized. - In this embodiment, the
storage part 27 stores for each model M a plurality of models having different levels of polygon count. Thedrawing control part 25 switches the model M located in theperipheral region 43 etc. to the model having the smaller polygon count as compared to the model M located in thevicinity region 41. - As a result, an image can be drawn with higher quality with a relatively larger polygon count in the easily recognized
vicinity region 41 while simplifying the drawing with a relatively smaller polygon count in theperipheral region 43 etc. hardly recognized by the user and, therefore, the image quality recognized by the user can further be improved regardless of the gaze direction. - In this embodiment, the
storage part 27 stores for each model M a plurality of models having different levels of texture resolution. Thedrawing control part 25 switches the model M located in theperipheral region 43 etc. to the model having the lower texture resolution as compared to the model M located in thevicinity region 41. - As a result, an image can be drawn with higher quality with a relatively higher texture resolution in the easily recognized
vicinity region 41 while simplifying the drawing with a relatively lower texture resolution in theperipheral region 43 etc. hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction. - In this embodiment, the
storage part 27 stores for each model M a plurality of models having different degrees of shader effect. Thedrawing control part 25 switches the model M located in theperipheral region 43 etc. to the model having the smaller degree of shader effect as compared to the model M located in thevicinity region 41. - As a result, an image can be drawn with higher quality at a relatively larger degree of shader effect in the easily recognized
vicinity region 41 while simplifying the drawing at a relatively smaller degree of shader effect in theperipheral region 43 hardly recognized by the user and, therefore, the image quality recognized by the user can be improved regardless of the gaze direction. - The techniques of the embodiment and modification examples may appropriately be utilized in combination other than those described above.
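As one such combination, the region division of FIGS. 12 and 13 extends the two-level test with a second radius so that the per-factor variants above can be selected at three levels. A hedged sketch, with hypothetical names and radii:

```python
import math


def select_variant_three(model_pos, gaze_point,
                         vicinity_radius, first_peripheral_radius):
    """'high' in the vicinity region 41, 'middle' in the first peripheral
    region 45, 'low' in the second peripheral region 47 (FIGS. 12 and 13)."""
    d = math.hypot(model_pos[0] - gaze_point[0],
                   model_pos[1] - gaze_point[1])
    if d <= vicinity_radius:
        return "high"
    if d <= first_peripheral_radius:
        return "middle"
    return "low"
```

Dividing the peripheral region more finely would simply add further radii and correspondingly more quality levels per model.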
- Although not exemplarily illustrated one by one, the embodiment and modification examples are implemented with other various modifications without departing from the spirit thereof.
Claims (12)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2015-089871 | 2015-04-24 | ||
| JP2015089871A JP2016202716A (en) | 2015-04-24 | 2015-04-24 | Program and recording medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20160314562A1 true US20160314562A1 (en) | 2016-10-27 |
| US9978342B2 US9978342B2 (en) | 2018-05-22 |
Family
ID=57146873
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/135,639 Active US9978342B2 (en) | 2015-04-24 | 2016-04-22 | Image processing method controlling image display based on gaze point and recording medium therefor |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US9978342B2 (en) |
| JP (1) | JP2016202716A (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101888364B1 (en) * | 2017-06-02 | 2018-08-14 | 주식회사 비주얼캠프 | Method for displaying contents and apparatus for executing the method |
| KR101922343B1 (en) * | 2017-07-13 | 2018-11-26 | 광운대학교 산학협력단 | Method and system for testing diynamic visual acuity |
| EP3894998A4 (en) * | 2018-12-14 | 2023-01-04 | Valve Corporation | Player biofeedback for dynamically controlling a video game state |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5894308A (en) * | 1996-04-30 | 1999-04-13 | Silicon Graphics, Inc. | Interactively reducing polygon count in three-dimensional graphic objects |
| US20120154277A1 (en) * | 2010-12-17 | 2012-06-21 | Avi Bar-Zeev | Optimized focal area for augmented reality displays |
| US20140071163A1 (en) * | 2012-09-11 | 2014-03-13 | Peter Tobias Kinnebrew | Augmented reality information detail |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2990190B2 (en) * | 1994-09-16 | 1999-12-13 | 株式会社ナムコ | Three-dimensional simulator device and image synthesizing method |
| JP3263278B2 (en) * | 1995-06-19 | 2002-03-04 | 株式会社東芝 | Image compression communication device |
| JPH0981773A (en) * | 1995-09-19 | 1997-03-28 | Namco Ltd | Image composition method and image composition device |
| JP2009510540A (en) | 2006-10-31 | 2009-03-12 | ビ−エイイ− システムズ パブリック リミテッド カンパニ− | Image display system |
| JP5689637B2 (en) * | 2010-09-28 | 2015-03-25 | 任天堂株式会社 | Stereoscopic display control program, stereoscopic display control system, stereoscopic display control apparatus, and stereoscopic display control method |
| JP5627526B2 (en) * | 2011-03-31 | 2014-11-19 | 株式会社カプコン | GAME PROGRAM AND GAME SYSTEM |
| JP2016191845A (en) * | 2015-03-31 | 2016-11-10 | ソニー株式会社 | Information processor, information processing method and program |
2015
- 2015-04-24 JP JP2015089871A patent/JP2016202716A/en active Pending
2016
- 2016-04-22 US US15/135,639 patent/US9978342B2/en active Active
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10467812B2 (en) * | 2016-05-02 | 2019-11-05 | Artag Sarl | Managing the display of assets in augmented reality mode |
| US11231591B2 (en) * | 2017-02-24 | 2022-01-25 | Sony Corporation | Information processing apparatus, information processing method, and program |
| US12547005B2 (en) | 2017-03-22 | 2026-02-10 | Magic Leap, Inc. | Depth based foveated rendering for display systems |
| US20190066389A1 (en) * | 2017-08-29 | 2019-02-28 | Target Brands, Inc. | Photorealistic scene generation system and method |
| US10643399B2 (en) * | 2017-08-29 | 2020-05-05 | Target Brands, Inc. | Photorealistic scene generation system and method |
| US10395624B2 (en) | 2017-11-21 | 2019-08-27 | Nvidia Corporation | Adjusting an angular sampling rate during rendering utilizing gaze information |
| US11238834B2 (en) * | 2017-12-14 | 2022-02-01 | SZ DJI Technology Co., Ltd. | Method, device and system for adjusting image, and computer readable storage medium |
| CN108665435A (en) * | 2018-01-08 | 2018-10-16 | 西安电子科技大学 | Multispectral infrared image background suppression method based on topology-graph-cut fusion optimization |
| US10694170B2 (en) * | 2018-03-05 | 2020-06-23 | Valve Corporation | Controlling image display via real-time compression in peripheral image regions |
| US20230048185A1 (en) * | 2018-04-20 | 2023-02-16 | Pcms Holdings, Inc. | Method and system for gaze-based control of mixed reality content |
| US12455620B2 (en) * | 2018-04-20 | 2025-10-28 | Interdigital Vc Holdings, Inc. | Method and system for gaze-based control of mixed reality content |
| US20240111361A1 (en) * | 2022-09-27 | 2024-04-04 | Tobii Dynavox Ab | Method, System, and Computer Program Product for Drawing and Fine-Tuned Motor Controls |
| US12204689B2 (en) * | 2022-09-27 | 2025-01-21 | Tobii Dynavox Ab | Method, system, and computer program product for drawing and fine-tuned motor controls |
| US20250130637A1 (en) * | 2022-09-27 | 2025-04-24 | Tobii Dynavox Ab | Method, System, and Computer Program Product for Drawing and Fine-Tuned Motor Controls |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2016202716A (en) | 2016-12-08 |
| US9978342B2 (en) | 2018-05-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9978342B2 (en) | Image processing method controlling image display based on gaze point and recording medium therefor | |
| US12299251B2 (en) | Devices, methods, and graphical user interfaces for presenting virtual objects in virtual environments | |
| US20240420435A1 (en) | Methods for moving objects in a three-dimensional environment | |
| US11838494B2 (en) | Image processing method, VR device, terminal, display system, and non-transitory computer-readable storage medium | |
| US11217024B2 (en) | Artificial reality system with varifocal display of artificial reality content | |
| CN120255701A (en) | Method for improving user's environmental awareness | |
| US11294535B2 (en) | Virtual reality VR interface generation method and apparatus | |
| CN108136258A (en) | Picture frame is adjusted based on tracking eye motion | |
| CN110895433B (en) | Method and apparatus for user interaction in augmented reality | |
| US20210089121A1 (en) | Using spatial information for dynamic dominant eye shifts | |
| US11402965B2 (en) | Object display method and apparatus for simulating feeling of blind person and storage medium | |
| US20200292825A1 (en) | Attention direction on optical passthrough displays | |
| US20240404227A1 (en) | Devices, methods, and graphical user interfaces for real-time communication | |
| JPWO2019021601A1 (en) | Information processing apparatus, information processing method, and program | |
| WO2020184317A1 (en) | Information processing device, information processing method, and recording medium | |
| JP2016207042A (en) | Program and recording medium | |
| JP6592313B2 (en) | Information processing apparatus, display control method, and display control program | |
| US20250185910A1 (en) | Eye tracking for accessibility and visibility of critical elements as well as performance enhancements | |
| EP4468124A1 (en) | Method and system for guiding a user in calibrating an eye tracking device | |
| US20240385858A1 (en) | Methods for displaying mixed reality content in a three-dimensional environment | |
| WO2017173583A1 (en) | Terminal display anti-shake method and apparatus | |
| CN117121475A (en) | Information processing device, information processing method and program | |
| US20260045043A1 (en) | Devices, methods, and graphical user interfaces for displaying movement of virtual objects in a communication session | |
| US20240289927A1 (en) | Processor, image processing method, and image processing program | |
| KR20250175276A (en) | Gaze online learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: KOEI TECMO GAMES CO., LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SAKAMOTO, RYOTA;REEL/FRAME:038580/0036 Effective date: 20160427 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |