US10397649B2 - Method of zooming video images and mobile display terminal - Google Patents

Method of zooming video images and mobile display terminal Download PDF

Info

Publication number
US10397649B2
US10397649B2 US15/681,192 US201715681192A US10397649B2 US 10397649 B2 US10397649 B2 US 10397649B2 US 201715681192 A US201715681192 A US 201715681192A US 10397649 B2 US10397649 B2 US 10397649B2
Authority
US
United States
Prior art keywords
video frame
coordinate
threshold
zoom
current video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/681,192
Other versions
US20170347153A1 (en
Inventor
Hongtao Zuo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED reassignment TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZUO, Hongtao
Publication of US20170347153A1 publication Critical patent/US20170347153A1/en
Application granted granted Critical
Publication of US10397649B2 publication Critical patent/US10397649B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04806Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • the present disclosure relates to video play techniques, and particularly to a method of zooming video images and mobile terminal.
  • a solution may be as follows. While playing a video, a mobile terminal receives from the user a switch request for switching the display mode into full screen. Then the video is displayed at full screen after the switch request is received.
  • various embodiments of the present disclosure provide a method of zooming video images and a mobile terminal.
  • the technical schemes are as follows.
  • various embodiments provide a method of zooming video images which may include:
  • embodiments also provide a mobile terminal which may include:
  • the storage device which stores at least one program executable by the at least one processor, the at least one program includes instructions for:
  • the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem that the related art cannot satisfy user demands.
  • a user is enabled to selectively zoom video frames according to the needs, thus can clearly see details in the video.
  • FIGS. 1A and 1B are schematic diagrams illustrating a playing window in accordance with embodiments of the present disclosure
  • FIG. 1C is a schematic diagram illustrating a coordinate system of vertex coordinates in accordance with embodiments of the present disclosure
  • FIG. 1D is a schematic diagram illustrating a coordinate system of texture coordinates in accordance with embodiments of the present disclosure
  • FIG. 1E is a schematic diagram illustrating texture coordinates in a coordinate system in accordance with embodiments of the present disclosure
  • FIG. 1F is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure
  • FIG. 2A is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure
  • FIG. 2B is a schematic diagram illustrating a video player application when a user is zooming a current video frame in accordance with embodiments of the present disclosure
  • FIG. 2C is a schematic diagram illustrating zoom guidance information presented by a video player application in accordance with embodiments of the present disclosure
  • FIG. 2D is a schematic diagram illustrating a user drags image content in accordance with embodiments of the present disclosure
  • FIG. 2E is a schematic diagram illustrating a target image area after adjustment in accordance with embodiments of the present disclosure
  • FIG. 2F is a schematic diagram illustrating a recover button presented by a video player application in accordance with embodiments of the present disclosure
  • FIG. 3 is a block diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure
  • FIG. 4 is a block diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure
  • FIG. 5 is a schematic diagram illustrating modules of a mobile terminal in accordance with embodiments of the present disclosure.
  • the playing interface refers to an interface provided by a player terminal for presenting videos.
  • the playing window refers to an area actually occupied by video frames in the playing interface.
  • the size of a playing window may be the same with that of a playing interface, or may be different from that of a playing interface.
  • FIG. 1A when a video frame occupies the whole playing interface, the size of the playing window 11 is the same with the size of the playing interface.
  • FIG. 1B when a video frame only occupies the area except the upper area and the bottom area (the upper area and the bottom area are generally black areas in the playing interface when a user is watching a video), the size of the playing window 11 equals the size of the playing interface.
  • Vertex coordinates refer to coordinates of each vertex of a playing window.
  • the coordinate system of vertex coordinates is built based on a horizontal central axis x and a vertical central axis y of a player terminal, denoted as P.
  • the coordinate system P may be the coordinate system as shown in FIG. 1C .
  • Vertex coordinates are coordinates of each vertex of a playing window in the coordinate system.
  • the maximum coordinate value of vertex coordinates is generally 1.
  • FIG. 1C shows vertex coordinates of each vertex of a playing window when the playing window occupies the whole playing interface.
  • those skilled in the art may also set another value as the maximum coordinate value according to the needs.
  • Texture coordinates may include coordinates of at least two vertexes of a zoomed playing window in an un-zoomed version of the current video frame.
  • a coordinate system of the texture coordinates denoted as Q, is set up along an edge of a current video frame with the bottom-left of the video frame as the center.
  • the coordinate system Q may be the coordinate system as shown in FIG. 1D (the maximum coordinate value of the coordinate system is generally set to be 1, and may be set to be another value by those skilled in the art according to the needs).
  • Texture coordinates are coordinates of a to-be-presented target image area in the coordinate system. For example, referring to FIG.
  • the target image area to be presented in the playing window in the current video frame is the area defined by A, B, C and D in the upper figure (the area defined by the dotted lines in the upper figure is an area of an un-zoomed version of the current video frame).
  • the texture coordinates are the coordinates in the coordinate system Q of at least two points of the four points A, B, C and D in the current video frame, e.g., the coordinates of at least two points of the four points A′, B′, C′ and D′ in the lower figure of FIG. 1E .
  • the at least two points may include two diagonal vertexes.
  • FIG. 1F is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure.
  • the method of zooming video images may include the following procedures.
  • a zoom request for zooming a current video frame may be received while a video is being played.
  • a zoom center point and a zoom ratio may be determined according to the zoom request.
  • a target image area to be displayed in a playing window may be determined from a zoomed version of the current video frame according to the zoom center point and the zoom ratio.
  • image content within the target image area of subsequent video frames of the current video frame may be rendered in the playing window when the subsequent video frames are played.
  • the method of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, so that image in the target image area in each frame subsequent to the current frame is rendered in the playing window when each frame subsequent to the current frame is played.
  • the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem related art cannot satisfy demands of users.
  • a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
  • FIG. 2A is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure.
  • the method of zooming video images may include the following procedures.
  • a zoom request for zooming a current video frame may be received while a video is being played.
  • the method may be applied to a video player application which may be an application installed in a mobile terminal device.
  • the mobile terminal device may be a touch-control terminal, e.g., a touch-control mobile phone, a tablet computer, a personal reader, or the like.
  • the user may trigger a zoom request for the current video frame.
  • the video player application may receive the zoom request.
  • this procedure may be implemented in the following two manners.
  • a stretch gesture applied to the current video frame may be received, and determined to be a magnify request.
  • the user when a user is watching a tutorial video and wants to magnify and view tutorial material in the tutorial video, the user may put two fingers on a target position in the playing window, and make a stretch gesture on the playing window.
  • the video player application may determine the stretch gesture received to be a magnify request.
  • the target position refers to the position of the center of an area the user wants to magnify to view.
  • a minify gesture applied to the current video frame may be received, and determined to be a minify request.
  • the user may make a minify gesture in the playing window.
  • the video player application may also present zoom guidance information for guiding the user to minify the video frames.
  • the video player application may present the zoom guidance information after receiving a click signal indicating the user has clicked on the video frame.
  • the video player application may present the zoom guidance information the first time when the video player application is run to play a video.
  • the video player application may always present the zoom guidance information.
  • taking a video player application presents zoom guidance the first time when the video player application is run to play a video as an example, as shown in the upper figure in FIG. 2C , the video player application may present the zoom guidance information as shown in the figure in a popover. In another example, referring to the lower figure of FIG. 2C , the video player application may always present the zoom guidance information 21 as shown in the figure.
  • a zoom center point and a zoom ratio may be determined according to the zoom request.
  • the video player application may determine a zoom center point and a zoom ratio according to the zoom request.
  • the video player application may respectively determine a zoom ratio for the horizontal axis and a zoom ratio for the vertical axis for zooming the current video frame. For example, taking a zoom request for magnifying the current video frame to two times of the original size and the current video frame is magnified according to the same ratio in the horizontal direction and the vertical direction, the video player application may determine the zoom ratio of the horizontal axis to be
  • a reference horizontal coordinate and a reference vertical coordinate may be calculated according to the zoom ratio.
  • this procedure may include:
  • xCord n x * viewHeight viewWidth * ImageWidth ImageHeight ;
  • yCord n y * viewWidth viewHeight * ImageHeight ImageWidth .
  • this procedure may include:
  • the video player application after obtaining the first value, checking by the video player application whether the first value reaches the first threshold; determining by the video player application the reference horizontal coordinate xCord to be the first threshold in response to a determination that the first value reaches the first threshold because the maximum values of the horizontal coordinate and the vertical coordinate are both the first threshold; the first threshold is generally 1;
  • the video player application may check whether the second value reaches the first threshold.
  • the video player application may determine the reference horizontal coordinate yCord to be the first threshold in response to a determination that the second value reaches the first threshold because the maximum value of both the horizontal coordinate and the vertical coordinate can only be the first threshold.
  • a determination may be made that the reference vertical coordinate yCord is the second value in response to a determination that the second value does not reach the first threshold.
  • n is the zoom ratio for the vertical axis of the current video frame
  • viewWidth is the width of the displaying area of the playing window
  • viewHeight is the height of the displaying area of the playing window
  • ImageWidth is the width of the current video frame
  • ImageHeight is the height of the current video frame.
  • the video player application may determine the reference horizontal coordinate and the reference vertical coordinate according to the following method which may include the following procedures.
  • the reference vertical coordinate yCord may be determined to be the first threshold in response to a determination that the first value does not reach the first threshold.
  • a third value may be obtained, which may be
  • n x * viewHeight viewWidth * ImageWidth ImageHeight .
  • n x is the zoom ratio for the horizontal axis of the current video frame.
  • the reference horizontal coordinate xCord may be determined to be the first threshold in response to a determination that the third value reaches the first threshold.
  • the video player application may determine the reference horizontal coordinate to be the first threshold in response to a determination that the obtained third value reaches the first threshold.
  • the reference horizontal coordinate xCord may be determined to be the third value in response to a determination that the third value does not reach the first threshold.
  • this example is taking a video player application calculating the reference horizontal coordinate and the reference vertical coordinate using the above method as an example.
  • a video player application may perform the calculation using other methods, and this is not limited in the present disclosure.
  • texture coordinates may be calculated according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate.
  • the video player application may calculate coordinate values of texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate.
  • the texture coordinates may include coordinates of at least two vertexes of a playing window presenting a zoomed version of the current video frame in an un-zoomed version of the current video frame.
  • the at least two vertexes may include diagonal vertexes of the playing window. This example takes the texture coordinates includes four vertexes as an example.
  • this procedure may include the following steps.
  • a target horizontal coordinate and a target vertical coordinate of the zoom center point may be obtained.
  • the target horizontal coordinate may be:
  • the target vertical coordinate may be:
  • Y 0 y 0 viewHeight * n y .
  • a first horizontal coordinate X 1 of the texture coordinates is the first threshold i
  • a second horizontal coordinate X 2 of the texture coordinates is j
  • a first vertical coordinate Y 1 of the texture coordinates is
  • a second vertical coordinate Y 2 of the texture coordinates is Y 0 .
  • a first horizontal coordinate X 1 of the texture coordinates is
  • a second horizontal coordinate X 2 of the texture coordinates is X 0
  • a first vertical coordinate Y 1 of the texture coordinates is the first threshold i
  • a second vertical coordinate Y 2 of the texture coordinates is the second threshold j.
  • the second horizontal coordinate X 2 is the second threshold j.
  • the first horizontal coordinate X 1 of the texture coordinates is
  • the second horizontal coordinate X 2 of the texture coordinates is X 0
  • the first vertical coordinate Y 1 of the texture coordinates is
  • the second vertical coordinate Y 2 of the texture coordinates is Y 0 .
  • a first horizontal coordinate X 1 of the texture coordinates is the first threshold i
  • a second horizontal coordinate X 2 of the texture coordinates is the second threshold j
  • a first vertical coordinate Y 1 of the texture coordinates is the first threshold i
  • a second vertical coordinate Y 2 of the texture coordinates is the second threshold j.
  • the second threshold j is the minimal coordinate value in the coordinate system of the video frame. For example, when the coordinate system is the coordinate system as shown in FIG. 1D , the second threshold is 0.
  • an area defined by the texture coordinates are determined to be the target image area.
  • the video player application may determine the area defined by the texture coordinates in the current video frame as the target image area.
  • vertex coordinates of the playing window may be calculated according to the reference horizontal coordinate and the reference vertical coordinate.
  • the reference horizontal coordinate may be assumed to be xCord; the reference vertical coordinate to be yCord.
  • the vertex coordinates may be: (xCord, ⁇ yCord), (xCord, yCord), (xCord, yCord), and ( ⁇ xCord, ⁇ yCord).
  • image content in the target image area of the subsequent video frames are rendered in an area defined by the vertex coordinates according to the texture coordinates.
  • the act of a user zooming an area of a video frame means the user is interested in image content in the area within the video frame. Therefore, the video player application may directly render image content in the area defined by the texture coordinates of subsequent video frames in the area defined by the vertex coordinates. The video player application may render the image content using the Open Graphics Library (OpenGL).
  • OpenGL Open Graphics Library
  • the video player application may present image content according to the above method when presenting the k'th frame subsequent to the current video frame because it may take some time for the video player application to calculate the texture coordinates and the vertex coordinates. If the calculation does not take much time, e.g., the video player application has obtained the calculated texture coordinates and the vertex coordinates before playing the next image frame of the current video frame, the video player application may present image content according to the above method when playing the next video frame of the current frame.
  • image content may be presented according to the above method as long as the texture coordinates and the vertex coordinates have been calculated, and the implementation of the method is not limited.
  • the user may selectively dragging image content rendered in the playing window when the user wants to adjust the display position of the video frames. That is, the video player application may also perform the following procedures.
  • a drag request may be received.
  • the drag request is for dragging image content rendered in the playing window, and the image content is image content within the target image area in the k'th video frame of the subsequent video frames.
  • the k is a positive integer.
  • the user when the user wants to drag tutorial content in the video frame as shown in FIG. 1E into the center of the playing window, the user may trigger a drag request by performing a leftward dragging in the video frame presented by the video player application. Accordingly, the video player application may receive the drag request.
  • the target image area is adjusted according to the drag request, and the adjusted target image area includes an area in the k'th video frame to be presented in the playing window after the dragging.
  • the video player application may adjust the target image area according to the received drag request.
  • the adjusted target image area may include an area in the k'th video frame to be presented in the playing window after dragging.
  • this procedure may include: calculating adjusted texture coordinates according to the drag request; taking an area defined by the adjusted texture coordinates as the adjusted target image area.
  • the method of the video player application calculates the adjusted texture coordinates may include: obtaining a dragging displacement corresponding to the drag request; calculating adjusted texture coordinates according to the texture coordinates before the adjustment and the dragging displacement. That is, the texture coordinates may include coordinates of at least two vertexes of a playing window presenting the dragged current video frame in an un-zoomed version of the current video frame. The at least two vertexes may include diagonal vertexes of the playing window.
  • the image area to be presented in the playing window is the area defined by E, F, G and H.
  • the video player application may calculate coordinates of E′, F′, G′ and H′, and take the area defined by E′, F′, G′ and H′ as the adjusted target image area.
  • the video player application may render image content of the video frame that falls within the adjusted target image area in the playing window. This procedure is similar to the procedure in block 207 .
  • the user may perform an action on a recover button presented in the playing interface of the video player application, e.g., the video player application may perform the following procedures.
  • a recover request may be received via the recover button.
  • the user when the user wants to zoom to return to the play mode used before the zooming after watching a video for a period of time under the mode as shown in the upper figure of FIG. 2E , the user may click on the recover button 22 as shown in FIG. 2F , and correspondingly, the video player application may receive the recover request via the recover button 22 .
  • Video frames to be played are played according to the play mode used before the zooming according to the zoom request in response to the recover request.
  • the video player application may use the play mode as shown in FIG. 2B , e.g., using the same play center point and zoom ratio with that of the video frame as shown in FIG. 2B , to play the pending video frames.
  • the method of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, image in the target image area in each frame after the current frame is rendered in the playing window when each frame subsequent to the current frame is played.
  • the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem that related art cannot satisfy demands of users.
  • a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
  • adjusted texture coordinates may be calculated and an area defined by the adjusted texture coordinates may be regarded as the adjusted target image area.
  • image content within the adjusted target image area is rendered in the playing window, such that the user is enabled to adjust content of video frames that is presented in the playing window according to the needs of the user watching the video during play of the video.
  • users' demand for watching a video can be better satisfied.
  • FIG. 3 is a schematic diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure.
  • the apparatus may include: a first receiving module 310 , a parameter determining module 320 , an area determining module 330 and a first presenting module 340 .
  • the first receiving module 310 may receive a zoom request for zooming a current video frame while a video is being played.
  • the parameter determining module 320 may determine a zoom center point and a zoom ratio according to the zoom request.
  • the area determining module 330 may determine a target image area to be presented in a playing window from a zoomed version of the current video frame according to the zoom center point and the zoom ratio.
  • the first presenting module 340 may render in the playing window image content within the target image area of subsequent video frames of the current video frame when playing the subsequent video frames.
  • the apparatus of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, image in the target image area in each frame after the current frame is presented in the playing window when each frame subsequent to the current frame is played.
  • the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem related art cannot satisfy demands of users.
  • a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
  • FIG. 4 is a schematic diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure.
  • the apparatus may include: a first receiving module 410 , a parameter determining module 420 , an area determining module 430 and a first presenting module 440 .
  • the first receiving module 410 may receive a zoom request for zooming a current video frame while a video is being played.
  • the parameter determining module 420 may determine a zoom center point and a zoom ratio according to the zoom request.
  • the area determining module 430 may determine a target image area to be presented in a playing window from a zoomed version of the current video frame according to the zoom center point and the zoom ratio.
  • the first presenting module 440 may render in the playing window image content within the target image area in subsequent video frames of the current video frame when playing the subsequent video frames.
  • the first receiving module 410 may also:
  • the apparatus may also include:
  • an information presenting module 450 to present zoom guidance information for guiding the user to zoom the video frame.
  • the apparatus may also include:
  • a second receiving module 460 to receive a drag request for dragging image content presented in the playing window which is image content within the target image area in the k'th video frame of the subsequent video frames;
  • the k is a positive integer;
  • an adjusting module 470 to adjust the target image area according to the dragging request so that the adjusted target image area includes an area in the k'th video frame to be presented in the playing window after the dragging;
  • a second presenting module 480 to render image content of a video frame that falls in the adjusted target image area in the playing window when playing video frames subsequent to the k'th video frame.
  • the apparatus may also include:
  • a third receiving module 490 to receive a recover request via a recover button
  • a third presenting module 510 to play video frames to be played after the recover request is received according to the play mode used before the zooming according to the zoom request.
  • the area determining module 430 may include:
  • a first calculating unit 413 to calculate a reference horizontal coordinate and a reference vertical coordinate according to the zoom ratio
  • a second calculating unit 432 to calculate coordinate values of texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate;
  • the texture coordinates may include coordinates of at least two vertexes of the playing window in a zoomed version of the current video frame, the at least two vertexes may include diagonal vertexes of the playing window;
  • an area determining unit 433 to determine an area defined by the texture coordinates as the target image area.
  • the first presenting module 440 may include:
  • a third calculating unit 441 to calculate vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate;
  • a content presenting unit 442 to render image content within the target image area of subsequent video frames in an area corresponding to the vertex coordinates according to the texture coordinates.
  • the third calculating unit 442 may also:
  • the first calculating unit 431 may also:
  • xCord n x ⁇ viewHeight viewWidth ⁇ ImageWidth ImageHeight ;
  • yCord n y ⁇ viewWidth viewHeight ⁇ ImageHeight ImageWidth ;
  • n x is the zoom ratio for the horizontal axis of the current video frame
  • n y is the zoom ratio for the vertical axis of the current video frame
  • viewWidth is the width of the displaying area in the playing window
  • viewHeight is the height of the displaying area in the playing window
  • ImageWidth is the width of the current video frame
  • ImageHeight is the height of the current video frame.
  • the first calculating unit 431 may also:
  • n y viewWidth viewHeight ⁇ ImageHeight ImageWidth ;
  • n y is the zoom ratio for the vertical axis of the current video frame; viewWidth is the width of the displaying area of the playing window; viewHeight is the height of the displaying area of the playing window; ImageWidth is the width of the current video frame; ImageHeight is the height of the current video frame.
  • the first calculating unit 431 may also:
  • n x is the zoom ratio for the horizontal axis of the current video frame
  • the second calculating unit 432 may also:
  • a target horizontal coordinate and a target vertical coordinate of the zoom center point obtain a target horizontal coordinate and a target vertical coordinate of the zoom center point; determine the target horizontal coordinate to be:
  • the target vertical coordinate may be:
  • Y 0 y 0 viewHeight * n y .
  • the reference horizontal coordinate is smaller than the first threshold i and the reference vertical coordinate is the first threshold i, determine a first horizontal coordinate X 1 of the texture coordinates to be the first threshold i, a second horizontal coordinate X 2 of the texture coordinates to be a second threshold j, a first vertical coordinate Y 1 of the texture coordinates to be
  • a first horizontal coordinate X 1 of the texture coordinates is
  • a second horizontal coordinate X 2 of the texture coordinates is X 0
  • a first vertical coordinate Y 1 of the texture coordinates is the first threshold i
  • a second vertical coordinate Y 2 of the texture coordinates is the second threshold j; when the first horizontal coordinate X 1 is larger than the first threshold i, determine the first horizontal coordinate X 1 is the first threshold i, the second horizontal coordinate X 2 is
  • the second horizontal coordinate X 2 is the second threshold j;
  • the reference horizontal coordinate and the reference vertical coordinate are both the first threshold, obtain the value of a first parameter a, the value of a second parameter b, the value of a third parameter c and the value of a fourth parameter d may be obtained; determine the first horizontal coordinate X 1 of the texture coordinates is
  • the second horizontal coordinate X 2 of the texture coordinates is X 0
  • the first vertical coordinate Y 1 of the texture coordinates is
  • the second vertical coordinate Y 2 of the texture coordinates is Y 0 ; when the first horizontal coordinate X 1 is larger than the first threshold i, determine the second horizontal coordinate is
  • a first horizontal coordinate X 1 of the texture coordinates is the first threshold i
  • a second horizontal coordinate X 2 of the texture coordinates is the second threshold j
  • a first vertical coordinate Y 1 of the texture coordinates is the first threshold i
  • a second vertical coordinate Y 2 of the texture coordinates is the second threshold j
  • x 0 is the horizontal coordinate of the zoom center point in the current video frame
  • y 0 is the vertical coordinate of the zoom center point in the current video frame
  • a is the first threshold i
  • n x is the zoom ratio for the horizontal axis of the current video frame
  • n y is the zoom ratio for the vertical axis of the current video frame.
  • the apparatus of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, image in the target image area in each frame after the current frame is presented in the playing window when each frame subsequent to the current frame is played.
  • the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem related art cannot satisfy demands of users.
  • a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
  • FIG. 5 is a schematic diagram illustrating modules of a mobile terminal in accordance with embodiments of the present disclosure.
  • a video player application may run in the mobile terminal 600 .
  • the specific components are as follows.
  • the mobile terminal 600 may include a radio frequency (RF) circuit 610 , at least one memory of computer-readable storage medium 620 , an input unit 630 , a display unit 640 , at least one sensor 650 , an audio circuit 660 , a wireless fidelity (WiFi) unit 670 , at least one processor 680 and a power supply 6390 and the like.
  • RF radio frequency
  • the structure as shown in FIG. 5 is not for restricting the terminal device.
  • the terminal device of various examples may include extra components or may include fewer components, or may have some of the components integrated into one component, or may have a different deployment of the components.
  • the RF circuit 610 is capable of sending and receiving signals during a process of information sending/receiving process or a voice communication process.
  • the RF circuit 610 may send downlink information received from a base station to the at least one processor 680 for further processing, and may send uplink data to the base station.
  • the RF circuit 610 may generally include, but not limited to, an antenna, at least one amplifier, a tuner, at least one oscillator, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • SIM subscriber identity module
  • the RF circuit 610 may perform wireless communications with a network and other devices.
  • the wireless communication may adopt any communication standard or protocol, including but not limited to: global system of mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short messaging service (SMS), or the like.
  • GSM global system of mobile communication
  • GPRS general packet radio service
  • CDMA code division multiple access
  • WCDMA wideband code division multiple access
  • LTE long term evolution
  • email short messaging service
  • SMS short messaging service
  • the storage medium 620 may be used for storing software programs and modules.
  • the storage medium 620 may store a pre-determined time list, a software program for collecting voice signals, a software program for identifying key words, a software program for continuous speech recognition, a software program for setting events and alerts, and store relationships which associates wireless access points with user accounts, or the like.
  • the processor 680 may be capable of executing the software programs and modules stored in the storage device 620 to implement various functions and data processing.
  • the storage medium 620 may mainly include program storage sections and data storage sections.
  • the program storage sections may store an operating system, applications required for implementing at least one function (such as video playing function, image display function, touch screen recognition function, or the like).
  • the data storage sections may store data generated during usage of the mobile terminal 600 , or the like.
  • the storage medium 620 may include a high-speed random access memory, and may also include a non-transitory memory, e.g., at least one disk storage, flash memory or other non-transitory solid state storage device and the like.
  • the storage device 620 may also include a storage controller to provide the processor 680 and the inputting unit 630 with access to the storage device 620 .
  • the input unit 630 may receive digits or characters inputted, and generate a keyboard input signal, a mouse input signal, a control lever input signal, an optical input signal, or a track ball input signal which is related with user settings and function controlling.
  • the input unit 630 may include a touch sensitive surface 631 and other inputting devices 632 .
  • the touch sensitive surface 631 also referred to as a touch screen or a touchpad, is capable of collecting touch operations performed by a user on the surface or near the surface (e.g., an operation performed on or near the touch sensitive surface 631 using any proper object or attachment such as a finger or a touch pen and etc.), and driving a connecting apparatus corresponding to the operation according to a pre-defined procedure.
  • the touch sensitive surface 631 may include a touch detecting apparatus and a touch controller.
  • the touch detecting apparatus detects the position touched by the user, detects a signal generated by the touch, and sends the signal to the touch controller.
  • the touch controller receives touch information from the touch detecting apparatus, converts the touch information into coordinates of the touch position, sends the coordinates to the processor 680 , receives a command sent by the processor 680 and executes the command.
  • the touch sensitive surface 631 may be implemented via various types of touch techniques such as resistive touch screen, capacitive touch screen, infrared touch screen and surface acoustic wave touch screen and so on.
  • the input unit 630 may include another input device 632 besides the touch sensitive surface 631 .
  • the another input device 632 may include, but not limited to, at least one of a physical keyboard, a function key (e.g., a volume control key, a power on/off key and etc.), a track ball, a mouse, a control lever and the like.
  • a function key e.g., a volume control key, a power on/off key and etc.
  • a track ball e.g., a mouse, a control lever and the like.
  • the display unit 640 is capable of displaying information inputted by the user, information provided for the user and various graphical user interfaces of the mobile terminal 600 .
  • the graphical user interfaces may include any combination of graphics, texts, icons, videos.
  • the display unit 640 may include a display panel 641 .
  • the display panel 641 may be implemented by Liquid Crystal Display (LCD), Organic Light-Emitting Diode (OLED) and the like.
  • the touch sensitive surface 661 may overlay the display panel 641 . When detecting a touch operation on or near the touch sensitive surface 661 , the touch sensitive surface 661 may send the touch operation to the processor 680 to determine the type of the touch event.
  • the processor 680 may provide visual output on the display panel 641 according to the type of the touch event.
  • the touch sensitive surface 661 and the display panel 641 are depicted as two independent components respectively for input and output in FIG. 5 , the touch sensitive surface 661 and the display panel 641 may be integrated to provide input and output in various examples.
  • the mobile terminal 600 may also include at least one sensor 650 , e.g., an optical sensor, a motion sensor, or other types of sensors.
  • the optical sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor may adjust the brightness of the display panel 641 according to the strength of ambient light.
  • the proximity sensor may close the display panel 641 and/or the light when the mobile terminal 600 is held close to an ear.
  • a gravity sensor is a type of motion sensor, may detect the amount of acceleration in multiple directions (typically XYZ-axis), the amount and the direction of gravity when kept in stationary, and can be used in applications which need to identify phone postures (such as auto screen rotation, games using the sensing result, magnetometer attitude calibration), features related with vibration identify (such as a pedometer, percussion) and the like.
  • the mobile terminal 600 may include other sensors, e.g., a gyroscope, a barometer, a hygrometer, a thermometer, infrared sensors and the like, which are not listed further herein.
  • the audio circuit 660 , the speaker 661 and the microphone 662 may provide an audio interface between the user and the mobile terminal device 600 .
  • An audio circuit 660 may convert received audio data into electrical signals, and send the electrical signals to the speaker 661 .
  • the speaker 661 may convert the electrical signals into sound and outputs the sound.
  • the microphone 662 may convert collected sound signals into electrical signals which are received by the audio circuit 660 .
  • the audio circuit 660 may convert the electrical signals into audio data, and sends the electrical signals to the processor 680 for processing.
  • the processed audio data may be sent to another terminal device via the RF circuit 610 , or be output to the storage device 620 for further processing.
  • the audio circuit 660 may also include an ear jack providing communications between a peripheral earphone and the mobile terminal 600 .
  • the short-distance wireless communication module 670 may be a wireless fidelity (WiFi) module or a Bluetooth module, or the like.
  • the mobile terminal 600 may adopt a WiFi module 270 to provide wireless broadband Internet access to enable a user to send and receive emails, browse webpages and access stream media and so on.
  • the terminal device 600 may not include the WiFi module 670 although it is shown in FIG. 5 .
  • the structure in FIG. 5 is merely an example, modifications can be made as long as they do not change the mechanism of the examples.
  • the processor 680 is a control center of the mobile terminal 600 which interconnects all of the components in the phone using various interfaces and circuits and monitors the phone by running or executing software programs and/or modules stored in the storage device 620 and calling various functions of the mobile terminal 600 and processing data.
  • the processing unit 680 may include one or multiple processing cores.
  • the processing unit 680 may integrate an application processor and a modem processor.
  • the application processor mainly handles the operating system, user interfaces and application programs, and etc.
  • the modem processor mainly handles wireless communications.
  • the modem may not be integrated into the processor 680 .
  • the mobile terminal 600 may also include a power supply 690 (e.g., a battery) providing power for various parts.
  • the power supply may be logically connected with the processor 680 via a power supply management system to implement functions such as charging, discharging, power management and the like.
  • the power supply 690 may also include any components such as one or multiple AC or DC power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator and the like.
  • the mobile terminal 600 may also include a camera, a Bluetooth module and the like, which is not described further herein.
  • the mobile terminal 600 may also include a storage device and at least one program which may be executed by at least one processor to implement the method of zooming video images of various examples.
  • a non-transitory computer-readable storage medium including instructions e.g., a storage device including instructions
  • the instructions may be executable by a processor at the mobile terminal to implement the method of zooming video images.
  • the non-transitory computer-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disc, optical storage device, or the like.
  • the apparatus of zooming video images takes the above modules as an example.
  • the functions may be re-divided to be implemented by different modules, e.g., the apparatus may have a different inner structure composed of different modules to implement all or some of the above functions.
  • the above methods of zooming video images provided by the examples belong to the same conceptive idea. Details have been described in the above, and will not be repeated herein.
  • index numbers of the examples are merely for facilitating description, and should not be interpreted to be representative for the preference order of the examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Circuits (AREA)

Abstract

The present disclosure provides a method and an apparatus for zooming video images, and belongs to video playing technology. The method may include: receiving a zoom request for zooming a current video frame while a video is being played; determining a zoom center point and a zoom ratio according to the zoom request; determining a target image area to be displayed in a playing window from the current video frame after zooming according to the zoom center point and the zoom ratio; and rendering, in the playing window, image content within the target image area of video frames subsequent to the current video frame when playing subsequent video frames. As such, a user is enabled to selectively zoom video frames according to the needs, thus can clearly see details in the video.

Description

The present disclosure is a continuation application of PCT/CN2016/078352, which claims priority to Chinese patent application No. 2015101812006 titled “method and apparatus of zooming video images” filed on Apr. 16, 2015 with the Patent Office of the People's Republic of China, each of which is hereby incorporated by reference in its entirety. Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference in their entirety under 37 CFR 1.57 for all purposes.
TECHNICAL FIELD
The present disclosure relates to video play techniques, and particularly to a method of zooming video images and mobile terminal.
BACKGROUND
When watching a video using a mobile terminal, a user may be unable to clearly see details in video images due to the limited size of the playing window.
In order to help the user see details in the video clearly, a solution may be as follows. While playing a video, a mobile terminal receives from the user a switch request for switching the display mode into full screen. Then the video is displayed at full screen after the switch request is received.
SUMMARY
In order to solve some of problems in related solutions, various embodiments of the present disclosure provide a method of zooming video images and a mobile terminal. The technical schemes are as follows.
In one aspect, various embodiments provide a method of zooming video images which may include:
receiving a zoom request for zooming a current video frame while a video is being played;
determining a zoom center point and a zoom ratio according to the zoom request;
determining a target image area to be displayed in a playing window from the current video frame after zooming according to the zoom center point and the zoom ratio; and
rendering, in the playing window, image content within the target image area of subsequent video frames of the current video frame when playing the subsequent video frames.
In another aspect, embodiments also provide a mobile terminal which may include:
at least one processor; and
a storage device;
the storage device which stores at least one program executable by the at least one processor, the at least one program includes instructions for:
receiving a zoom request for zooming a current video frame while a video is being played;
determining a zoom center point and a zoom ratio according to the zoom request;
determining a target image area to be displayed in a playing window from the current video frame after zooming according to the zoom center point and the zoom ratio; and
rendering, in the playing window, image content within the target image area of subsequent video frames of the current video frame when playing the subsequent video frames.
The technical scheme provided by embodiments of the present disclosure has the following merits.
By receiving a zoom request, determining a target image area to be displayed in a playing window from the current frame after zooming according to the zoom request, image in the target image area in each frame is rendered in the playing window when presenting each frame subsequent to the current frame. Thus, the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem that the related art cannot satisfy user demands. As such, a user is enabled to selectively zoom video frames according to the needs, thus can clearly see details in the video.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to make the technical scheme of embodiments of the present disclosure more clearly, the following is a brief introduction of the drawings used in description of the embodiments. The following drawings are merely some of the embodiments, and based on which other drawings can be obtained by those skilled in the art without doing any inventive work.
FIGS. 1A and 1B are schematic diagrams illustrating a playing window in accordance with embodiments of the present disclosure;
FIG. 1C is a schematic diagram illustrating a coordinate system of vertex coordinates in accordance with embodiments of the present disclosure;
FIG. 1D is a schematic diagram illustrating a coordinate system of texture coordinates in accordance with embodiments of the present disclosure;
FIG. 1E is a schematic diagram illustrating texture coordinates in a coordinate system in accordance with embodiments of the present disclosure;
FIG. 1F is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure;
FIG. 2A is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure;
FIG. 2B is a schematic diagram illustrating a video player application when a user is zooming a current video frame in accordance with embodiments of the present disclosure;
FIG. 2C is a schematic diagram illustrating zoom guidance information presented by a video player application in accordance with embodiments of the present disclosure;
FIG. 2D is a schematic diagram illustrating a user drags image content in accordance with embodiments of the present disclosure;
FIG. 2E is a schematic diagram illustrating a target image area after adjustment in accordance with embodiments of the present disclosure;
FIG. 2F is a schematic diagram illustrating a recover button presented by a video player application in accordance with embodiments of the present disclosure;
FIG. 3 is a block diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure;
FIG. 4 is a block diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure;
FIG. 5 is a schematic diagram illustrating modules of a mobile terminal in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION
Examples are hereinafter described in detail with reference to the accompanying drawings to make the objective, technical scheme and merits more apparent. It should be understood that the embodiments described are merely some examples of the present disclosure, not all of the embodiments. Based on the embodiments of the present disclosure, other embodiments obtained by those skilled in the art without any inventive work done are still within the protection scope of the present disclosure.
In order to make the disclosure more readily understood, several terms involved in the embodiments are briefly introduced herein.
The playing interface refers to an interface provided by a player terminal for presenting videos. The playing window refers to an area actually occupied by video frames in the playing interface. According to examples, the size of a playing window may be the same with that of a playing interface, or may be different from that of a playing interface. For example, referring to FIG. 1A, when a video frame occupies the whole playing interface, the size of the playing window 11 is the same with the size of the playing interface. As shown in FIG. 1B, when a video frame only occupies the area except the upper area and the bottom area (the upper area and the bottom area are generally black areas in the playing interface when a user is watching a video), the size of the playing window 11 equals the size of the playing interface.
Vertex coordinates refer to coordinates of each vertex of a playing window. Generally, the coordinate system of vertex coordinates is built based on a horizontal central axis x and a vertical central axis y of a player terminal, denoted as P. For example, the coordinate system P may be the coordinate system as shown in FIG. 1C. Vertex coordinates are coordinates of each vertex of a playing window in the coordinate system. When the playing window occupies the whole playing interface, the maximum coordinate value of vertex coordinates is generally 1. In an example, FIG. 1C shows vertex coordinates of each vertex of a playing window when the playing window occupies the whole playing interface. In other examples, those skilled in the art may also set another value as the maximum coordinate value according to the needs.
Texture coordinates may include coordinates of at least two vertexes of a zoomed playing window in an un-zoomed version of the current video frame. A coordinate system of the texture coordinates, denoted as Q, is set up along an edge of a current video frame with the bottom-left of the video frame as the center. For example, the coordinate system Q may be the coordinate system as shown in FIG. 1D (the maximum coordinate value of the coordinate system is generally set to be 1, and may be set to be another value by those skilled in the art according to the needs). Texture coordinates are coordinates of a to-be-presented target image area in the coordinate system. For example, referring to FIG. 1E, the target image area to be presented in the playing window in the current video frame is the area defined by A, B, C and D in the upper figure (the area defined by the dotted lines in the upper figure is an area of an un-zoomed version of the current video frame). The texture coordinates are the coordinates in the coordinate system Q of at least two points of the four points A, B, C and D in the current video frame, e.g., the coordinates of at least two points of the four points A′, B′, C′ and D′ in the lower figure of FIG. 1E. The at least two points may include two diagonal vertexes.
There are at least the following disadvantages in a conventional solution: since the screen size of mobile terminals is small, users may be unable to see details in videos even if the video is played at full screen, and the above solution still cannot satisfy the demand of users.
FIG. 1F is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure. The method of zooming video images may include the following procedures.
At block 101, a zoom request for zooming a current video frame may be received while a video is being played.
At block 102, a zoom center point and a zoom ratio may be determined according to the zoom request.
At block 103, a target image area to be displayed in a playing window may be determined from a zoomed version of the current video frame according to the zoom center point and the zoom ratio.
At block 104, image content within the target image area of subsequent video frames of the current video frame may be rendered in the playing window when the subsequent video frames are played.
In view of the foregoing, the method of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, so that image in the target image area in each frame subsequent to the current frame is rendered in the playing window when each frame subsequent to the current frame is played. Thus, the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem related art cannot satisfy demands of users. As such, a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
FIG. 2A is a flowchart illustrating a method of zooming video images in accordance with embodiments of the present disclosure. The method of zooming video images may include the following procedures.
At block 201, a zoom request for zooming a current video frame may be received while a video is being played.
The method may be applied to a video player application which may be an application installed in a mobile terminal device. The mobile terminal device may be a touch-control terminal, e.g., a touch-control mobile phone, a tablet computer, a personal reader, or the like.
When the video player application is playing a video and a user wants to see a certain detail in the video interface, the user may trigger a zoom request for the current video frame. The video player application may receive the zoom request.
In an example, this procedure may be implemented in the following two manners.
According to a first manner, a stretch gesture applied to the current video frame may be received, and determined to be a magnify request.
For example, referring to FIG. 2B, when a user is watching a tutorial video and wants to magnify and view tutorial material in the tutorial video, the user may put two fingers on a target position in the playing window, and make a stretch gesture on the playing window. The video player application may determine the stretch gesture received to be a magnify request. The target position refers to the position of the center of an area the user wants to magnify to view.
According to a second manner, a minify gesture applied to the current video frame may be received, and determined to be a minify request.
Likewise, when a user wants to minify video images, e.g., after watching a magnified video for some time and wants to view the whole image contents, the user may make a minify gesture in the playing window.
In an example, before the minify request is received, the video player application may also present zoom guidance information for guiding the user to minify the video frames. The video player application may present the zoom guidance information after receiving a click signal indicating the user has clicked on the video frame. In another example, the video player application may present the zoom guidance information the first time when the video player application is run to play a video. In another example, the video player application may always present the zoom guidance information.
For example, taking a video player application presents zoom guidance the first time when the video player application is run to play a video as an example, as shown in the upper figure in FIG. 2C, the video player application may present the zoom guidance information as shown in the figure in a popover. In another example, referring to the lower figure of FIG. 2C, the video player application may always present the zoom guidance information 21 as shown in the figure.
At block 202, a zoom center point and a zoom ratio may be determined according to the zoom request.
After receiving the zoom request, the video player application may determine a zoom center point and a zoom ratio according to the zoom request. In an example, the video player application may respectively determine a zoom ratio for the horizontal axis and a zoom ratio for the vertical axis for zooming the current video frame. For example, taking a zoom request for magnifying the current video frame to two times of the original size and the current video frame is magnified according to the same ratio in the horizontal direction and the vertical direction, the video player application may determine the zoom ratio of the horizontal axis to be
2 2 ,
and the zoom ratio of the vertical axis is also
2 2 .
At block 203, a reference horizontal coordinate and a reference vertical coordinate may be calculated according to the zoom ratio.
In an example, when the zoom ratio determined by the video player application is smaller than 1, e.g., the zoom request is a minify request, this procedure may include:
calculating the reference horizontal coordinate to be:
xCord = n x * viewHeight viewWidth * ImageWidth ImageHeight ;
calculating the reference vertical coordinate to be:
yCord = n y * viewWidth viewHeight * ImageHeight ImageWidth .
The n is the zoom ratio for the horizontal axis of the current video frame; n is the zoom ratio for the vertical axis of the current video frame; viewWidth is the width of the displaying area of the playing window; viewHeight is the height of the displaying area of the playing window; ImageWidth is the width of the current video frame; ImageHeight is the height of the current video frame.
In an example, when the zoom ratio determined by the video player application is larger than 1, e.g., the zoom request is a magnify request, this procedure may include:
first, obtaining a first value which is
viewHeight viewWidth * ImageWidth ImageHeight ;
then, determining the reference horizontal coordinate xCord to be the first value in response to a determination that the first value reaches a first threshold;
after obtaining the first value, checking by the video player application whether the first value reaches the first threshold; determining by the video player application the reference horizontal coordinate xCord to be the first threshold in response to a determination that the first value reaches the first threshold because the maximum values of the horizontal coordinate and the vertical coordinate are both the first threshold; the first threshold is generally 1;
thirdly, obtaining a second value which is
n y * viewWidth viewHeight * ImageHeight ImageWidth ;
fourthly, determining the reference vertical coordinate yCord to be the first value in response to a determination that the second value reaches the first threshold.
After obtaining the second value, the video player application may check whether the second value reaches the first threshold. The video player application may determine the reference horizontal coordinate yCord to be the first threshold in response to a determination that the second value reaches the first threshold because the maximum value of both the horizontal coordinate and the vertical coordinate can only be the first threshold.
Fifthly, a determination may be made that the reference vertical coordinate yCord is the second value in response to a determination that the second value does not reach the first threshold.
The n is the zoom ratio for the vertical axis of the current video frame; viewWidth is the width of the displaying area of the playing window; viewHeight is the height of the displaying area of the playing window; ImageWidth is the width of the current video frame; ImageHeight is the height of the current video frame.
It should be noted that after obtaining the first value, when the first value does not reach the first threshold, the video player application may determine the reference horizontal coordinate and the reference vertical coordinate according to the following method which may include the following procedures.
Firstly, the reference vertical coordinate yCord may be determined to be the first threshold in response to a determination that the first value does not reach the first threshold.
Secondly, a third value may be obtained, which may be
n x * viewHeight viewWidth * ImageWidth ImageHeight .
nx is the zoom ratio for the horizontal axis of the current video frame.
Thirdly, the reference horizontal coordinate xCord may be determined to be the first threshold in response to a determination that the third value reaches the first threshold.
Similar to the above method, since the maximum value of both the horizontal coordinate and the vertical coordinate can only be the first threshold, the video player application may determine the reference horizontal coordinate to be the first threshold in response to a determination that the obtained third value reaches the first threshold.
Fourthly, the reference horizontal coordinate xCord may be determined to be the third value in response to a determination that the third value does not reach the first threshold.
It should be noted that this example is taking a video player application calculating the reference horizontal coordinate and the reference vertical coordinate using the above method as an example. In other examples, a video player application may perform the calculation using other methods, and this is not limited in the present disclosure.
At block 204, texture coordinates may be calculated according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate.
After calculating the reference horizontal coordinate and the reference vertical coordinate, the video player application may calculate coordinate values of texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate. The texture coordinates may include coordinates of at least two vertexes of a playing window presenting a zoomed version of the current video frame in an un-zoomed version of the current video frame. The at least two vertexes may include diagonal vertexes of the playing window. This example takes the texture coordinates includes four vertexes as an example.
In an example, this procedure may include the following steps.
A target horizontal coordinate and a target vertical coordinate of the zoom center point may be obtained. The target horizontal coordinate may be:
X 0 = x 0 viewWidth * n x ;
the target vertical coordinate may be:
Y 0 = y 0 viewHeight * n y .
If the reference horizontal coordinate is smaller than the first threshold i and the reference vertical coordinate is the first threshold i, a first horizontal coordinate X1 of the texture coordinates is the first threshold i, a second horizontal coordinate X2 of the texture coordinates is j, a first vertical coordinate Y1 of the texture coordinates is
i n y + Y 0 ,
and a second vertical coordinate Y2 of the texture coordinates is Y0. When the first vertical coordinate Y1 reaches the first threshold i, it is determined the first vertical coordinate Y1 is the first threshold i, the second vertical coordinate Y2 is
Y 1 - i n y .
If the reference vertical coordinate is smaller than the first threshold i and the reference horizontal coordinate is the first threshold i, a first horizontal coordinate X1 of the texture coordinates is
i n x + X 0 ,
a second horizontal coordinate X2 of the texture coordinates is X0, a first vertical coordinate Y1 of the texture coordinates is the first threshold i, a second vertical coordinate Y2 of the texture coordinates is the second threshold j. When the first horizontal coordinate X1 is larger than the first threshold i, it is determined the first horizontal coordinate X1 is the first threshold i, the second horizontal coordinate X2 is
X 1 - i n x .
When the second horizontal coordinate X2 does not reach the second threshold j, it is determined the first horizontal coordinate X1 is
i n x ,
the second horizontal coordinate X2 is the second threshold j.
If the reference horizontal coordinate and the reference vertical coordinate are both the first threshold, the value of a first parameter a, the value of a second parameter b, the value of a third parameter c and the value of a fourth parameter d may be obtained. The first horizontal coordinate X1 of the texture coordinates is
i a / b c + X 0 ,
the second horizontal coordinate X2 of the texture coordinates is X0, the first vertical coordinate Y1 of the texture coordinates is
i a / b d + Y 0 ,
the second vertical coordinate Y2 of the texture coordinates is Y0. When the first horizontal coordinate X1 is larger than the first threshold i, it is determined the second horizontal coordinate is
X 1 - ( i a / b ) c .
When the first vertical coordinate Y1 is larger than the first threshold i, it is determined the second vertical coordinate Y2 is
Y 1 - ( i a / b ) d .
When the second horizontal coordinate is smaller than the second threshold j, it is determined the second horizontal coordinate is the second threshold j, the first horizontal coordinate is
i a / b c .
If the reference horizontal coordinate and the reference vertical coordinate are both smaller than the first threshold i, a first horizontal coordinate X1 of the texture coordinates is the first threshold i, a second horizontal coordinate X2 of the texture coordinates is the second threshold j, a first vertical coordinate Y1 of the texture coordinates is the first threshold i, and a second vertical coordinate Y2 of the texture coordinates is the second threshold j.
The x0 is the horizontal coordinate of the zoom center point in the current video frame; y0 is the vertical coordinate of the zoom center point in the current video frame; a is the first threshold i;
b = n y viewWidth viewHeight ImageHeight ImageWidth ; c = viewWidth viewHeight ImageHeight ImageWidth ;
d is the first threshold i; nx is the zoom ratio for the horizontal axis of the current video frame; ny is the zoom ratio for the vertical axis of the current video frame.
The second threshold j is the minimal coordinate value in the coordinate system of the video frame. For example, when the coordinate system is the coordinate system as shown in FIG. 1D, the second threshold is 0.
At block 205, an area defined by the texture coordinates are determined to be the target image area.
After obtaining the texture coordinates, the video player application may determine the area defined by the texture coordinates in the current video frame as the target image area.
At block 206, when video frames subsequent to the current video frame are played, vertex coordinates of the playing window may be calculated according to the reference horizontal coordinate and the reference vertical coordinate.
In an example, the reference horizontal coordinate may be assumed to be xCord; the reference vertical coordinate to be yCord.
The vertex coordinates may be: (xCord, −yCord), (xCord, yCord), (xCord, yCord), and (−xCord, −yCord).
At block 207, when video frames subsequent to the current video frame are played, image content in the target image area of the subsequent video frames are rendered in an area defined by the vertex coordinates according to the texture coordinates.
The act of a user zooming an area of a video frame means the user is interested in image content in the area within the video frame. Therefore, the video player application may directly render image content in the area defined by the texture coordinates of subsequent video frames in the area defined by the vertex coordinates. The video player application may render the image content using the Open Graphics Library (OpenGL).
In an example, the video player application may present image content according to the above method when presenting the k'th frame subsequent to the current video frame because it may take some time for the video player application to calculate the texture coordinates and the vertex coordinates. If the calculation does not take much time, e.g., the video player application has obtained the calculated texture coordinates and the vertex coordinates before playing the next image frame of the current video frame, the video player application may present image content according to the above method when playing the next video frame of the current frame. According to various examples, image content may be presented according to the above method as long as the texture coordinates and the vertex coordinates have been calculated, and the implementation of the method is not limited.
In addition, after the video player application presents image contents of subsequent video frames in response to the zoom request of the user, the user may selectively dragging image content rendered in the playing window when the user wants to adjust the display position of the video frames. That is, the video player application may also perform the following procedures.
(1) A drag request may be received. The drag request is for dragging image content rendered in the playing window, and the image content is image content within the target image area in the k'th video frame of the subsequent video frames. The k is a positive integer.
For example, referring to FIG. 2D, when the user wants to drag tutorial content in the video frame as shown in FIG. 1E into the center of the playing window, the user may trigger a drag request by performing a leftward dragging in the video frame presented by the video player application. Accordingly, the video player application may receive the drag request.
(2) The target image area is adjusted according to the drag request, and the adjusted target image area includes an area in the k'th video frame to be presented in the playing window after the dragging.
The video player application may adjust the target image area according to the received drag request. The adjusted target image area may include an area in the k'th video frame to be presented in the playing window after dragging. In an example, this procedure may include: calculating adjusted texture coordinates according to the drag request; taking an area defined by the adjusted texture coordinates as the adjusted target image area. In an example, the method of the video player application calculates the adjusted texture coordinates may include: obtaining a dragging displacement corresponding to the drag request; calculating adjusted texture coordinates according to the texture coordinates before the adjustment and the dragging displacement. That is, the texture coordinates may include coordinates of at least two vertexes of a playing window presenting the dragged current video frame in an un-zoomed version of the current video frame. The at least two vertexes may include diagonal vertexes of the playing window.
For example, referring to FIG. 2E, after the user performed the dragging, the image area to be presented in the playing window is the area defined by E, F, G and H. The video player application may calculate coordinates of E′, F′, G′ and H′, and take the area defined by E′, F′, G′ and H′ as the adjusted target image area.
(3) When playing a video frame subsequent to the k'th video frame, image content of the video frame that falls in the adjusted target image area is rendered in the playing window.
Afterwards, when playing a video frame subsequent to the k'th video frame, the video player application may render image content of the video frame that falls within the adjusted target image area in the playing window. This procedure is similar to the procedure in block 207.
In addition, when the user wants to return to the play mode used before the zooming, the user may perform an action on a recover button presented in the playing interface of the video player application, e.g., the video player application may perform the following procedures.
(1) A recover request may be received via the recover button.
For example, referring to FIG. 2F, when the user wants to zoom to return to the play mode used before the zooming after watching a video for a period of time under the mode as shown in the upper figure of FIG. 2E, the user may click on the recover button 22 as shown in FIG. 2F, and correspondingly, the video player application may receive the recover request via the recover button 22.
(2) Video frames to be played are played according to the play mode used before the zooming according to the zoom request in response to the recover request.
After receiving the recover request, the video player application may use the play mode as shown in FIG. 2B, e.g., using the same play center point and zoom ratio with that of the video frame as shown in FIG. 2B, to play the pending video frames.
In view of the foregoing, the method of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, image in the target image area in each frame after the current frame is rendered in the playing window when each frame subsequent to the current frame is played. Thus, the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem that related art cannot satisfy demands of users. As such, a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
By directly rendering image content within the target image area in the playing window without decoding the whole video frame and cropping the video frame to obtain the image content in the target image area, the processing complexity at the video player application can be reduced.
After a drag request is received, adjusted texture coordinates may be calculated and an area defined by the adjusted texture coordinates may be regarded as the adjusted target image area. As such, when subsequent video frames are played, image content within the adjusted target image area is rendered in the playing window, such that the user is enabled to adjust content of video frames that is presented in the playing window according to the needs of the user watching the video during play of the video. Thus, users' demand for watching a video can be better satisfied.
FIG. 3 is a schematic diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure. The apparatus may include: a first receiving module 310, a parameter determining module 320, an area determining module 330 and a first presenting module 340.
The first receiving module 310 may receive a zoom request for zooming a current video frame while a video is being played.
The parameter determining module 320 may determine a zoom center point and a zoom ratio according to the zoom request. The area determining module 330 may determine a target image area to be presented in a playing window from a zoomed version of the current video frame according to the zoom center point and the zoom ratio.
The first presenting module 340 may render in the playing window image content within the target image area of subsequent video frames of the current video frame when playing the subsequent video frames.
In view of the foregoing, the apparatus of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, image in the target image area in each frame after the current frame is presented in the playing window when each frame subsequent to the current frame is played. Thus, the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem related art cannot satisfy demands of users. As such, a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
FIG. 4 is a schematic diagram illustrating an apparatus of zooming video images in accordance with embodiments of the present disclosure. The apparatus may include: a first receiving module 410, a parameter determining module 420, an area determining module 430 and a first presenting module 440.
The first receiving module 410 may receive a zoom request for zooming a current video frame while a video is being played.
The parameter determining module 420 may determine a zoom center point and a zoom ratio according to the zoom request.
The area determining module 430 may determine a target image area to be presented in a playing window from a zoomed version of the current video frame according to the zoom center point and the zoom ratio.
The first presenting module 440 may render in the playing window image content within the target image area in subsequent video frames of the current video frame when playing the subsequent video frames.
In an example, the first receiving module 410 may also:
receive a stretch gesture applied to the current video frame and determine the stretch gesture to be a magnify request; or
receive a shrink gesture applied to the current video frame, and determine the shrink gesture to be a minify request.
In an example, the apparatus may also include:
an information presenting module 450, to present zoom guidance information for guiding the user to zoom the video frame.
In an example, the apparatus may also include:
a second receiving module 460, to receive a drag request for dragging image content presented in the playing window which is image content within the target image area in the k'th video frame of the subsequent video frames; The k is a positive integer;
an adjusting module 470, to adjust the target image area according to the dragging request so that the adjusted target image area includes an area in the k'th video frame to be presented in the playing window after the dragging;
a second presenting module 480, to render image content of a video frame that falls in the adjusted target image area in the playing window when playing video frames subsequent to the k'th video frame.
In an example, the apparatus may also include:
a third receiving module 490, to receive a recover request via a recover button;
a third presenting module 510, to play video frames to be played after the recover request is received according to the play mode used before the zooming according to the zoom request.
In an example, the area determining module 430 may include:
a first calculating unit 413, to calculate a reference horizontal coordinate and a reference vertical coordinate according to the zoom ratio;
a second calculating unit 432, to calculate coordinate values of texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate;
the texture coordinates may include coordinates of at least two vertexes of the playing window in a zoomed version of the current video frame, the at least two vertexes may include diagonal vertexes of the playing window;
an area determining unit 433, to determine an area defined by the texture coordinates as the target image area.
In an example, the first presenting module 440 may include:
a third calculating unit 441, to calculate vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate;
a content presenting unit 442, to render image content within the target image area of subsequent video frames in an area corresponding to the vertex coordinates according to the texture coordinates.
In an example, the third calculating unit 442 may also:
assuming the reference horizontal coordinate to be xCord; the reference vertical coordinate to be yCord;
determine the vertex coordinates to be: (xCord, −yCord), (xCord, yCord), (−xCord, yCord), and (−xCord, −yCord).
In an example, the first calculating unit 431 may also:
if the zoom ratio is smaller than 1,
determine the reference horizontal coordinate to be:
xCord = n x viewHeight viewWidth ImageWidth ImageHeight ;
determine the reference vertical coordinate to be:
yCord = n y viewWidth viewHeight ImageHeight ImageWidth ;
nx is the zoom ratio for the horizontal axis of the current video frame; ny is the zoom ratio for the vertical axis of the current video frame; viewWidth is the width of the displaying area in the playing window; viewHeight is the height of the displaying area in the playing window; ImageWidth is the width of the current video frame; ImageHeight is the height of the current video frame.
In an example, the first calculating unit 431 may also:
if the zoom ratio is larger than 1, obtain a first value which is
viewHeight viewWidth ImageWidth ImageHeight ;
if the first value reaches a first threshold, determine the reference horizontal coordinate xCord to be the first threshold;
obtain a second value which is
n y viewWidth viewHeight ImageHeight ImageWidth ;
if the second value reaches the first threshold, determine the reference vertical coordinate yCord to be the first threshold;
if the second value does not reach the first threshold, determine the reference vertical coordinate yCord to be the second value;
ny is the zoom ratio for the vertical axis of the current video frame; viewWidth is the width of the displaying area of the playing window; viewHeight is the height of the displaying area of the playing window; ImageWidth is the width of the current video frame; ImageHeight is the height of the current video frame.
In an example, the first calculating unit 431 may also:
if the first value does not reach the first threshold, determine the reference vertical coordinate yCord to be the first threshold;
obtain a third value which is
n x viewHeight viewWidth ImageWidth ImageHeight ;
nx is the zoom ratio for the horizontal axis of the current video frame;
if the third value reaches the first threshold, determine the reference horizontal coordinate xCord to be the first threshold;
if the third value does not reach the first threshold, determine the reference horizontal coordinate xCord to be the third value.
In an example, the second calculating unit 432 may also:
obtain a target horizontal coordinate and a target vertical coordinate of the zoom center point; determine the target horizontal coordinate to be:
X 0 = x 0 viewWidth n x ;
the target vertical coordinate may be:
Y 0 = y 0 viewHeight * n y .
if the reference horizontal coordinate is smaller than the first threshold i and the reference vertical coordinate is the first threshold i, determine a first horizontal coordinate X1 of the texture coordinates to be the first threshold i, a second horizontal coordinate X2 of the texture coordinates to be a second threshold j, a first vertical coordinate Y1 of the texture coordinates to be
i n y + Y 0 ,
a second vertical coordinate Y2 of the texture coordinates to be Y0; when the first vertical coordinate Y1 reaches the first threshold i, determine the first vertical coordinate Y1 is the first threshold i, the second vertical coordinate Y2 is
Y 1 - i n y ;
if the reference vertical coordinate is smaller than the first threshold i and the reference horizontal coordinate is the first threshold i, a first horizontal coordinate X1 of the texture coordinates is
i n x + X 0 ,
a second horizontal coordinate X2 of the texture coordinates is X0, a first vertical coordinate Y1 of the texture coordinates is the first threshold i, a second vertical coordinate Y2 of the texture coordinates is the second threshold j; when the first horizontal coordinate X1 is larger than the first threshold i, determine the first horizontal coordinate X1 is the first threshold i, the second horizontal coordinate X2 is
X 1 - i n x ;
when the second horizontal coordinate X2 does not reach the second threshold j, determine the first horizontal coordinate X1 is
i n x ,
the second horizontal coordinate X2 is the second threshold j;
if the reference horizontal coordinate and the reference vertical coordinate are both the first threshold, obtain the value of a first parameter a, the value of a second parameter b, the value of a third parameter c and the value of a fourth parameter d may be obtained; determine the first horizontal coordinate X1 of the texture coordinates is
i a / b * c + X 0 ,
the second horizontal coordinate X2 of the texture coordinates is X0, the first vertical coordinate Y1 of the texture coordinates is
i a / b * d + Y 0 ,
the second vertical coordinate Y2 of the texture coordinates is Y0; when the first horizontal coordinate X1 is larger than the first threshold i, determine the second horizontal coordinate is
X 1 - ( i a / b ) * c ;
when the first vertical coordinate Y1 is larger than the first threshold i, determine the second vertical coordinate Y2 is
Y 1 - ( i a / b ) * d ;
when the second horizontal coordinate is smaller than the second threshold j, determine the second horizontal coordinate is the second threshold j, the first horizontal coordinate is
i a / b * c ;
if the reference horizontal coordinate and the reference vertical coordinate are both smaller than the first threshold i, determine a first horizontal coordinate X1 of the texture coordinates is the first threshold i, a second horizontal coordinate X2 of the texture coordinates is the second threshold j, a first vertical coordinate Y1 of the texture coordinates is the first threshold i, a second vertical coordinate Y2 of the texture coordinates is the second threshold j;
wherein x0 is the horizontal coordinate of the zoom center point in the current video frame; y0 is the vertical coordinate of the zoom center point in the current video frame; a is the first threshold i;
b = n y viewWidth viewHeight * ImageHeight ImageWidth ; c = viewWidth viewHeight * ImageHeight ImageWidth ;
d is the first threshold i; nx is the zoom ratio for the horizontal axis of the current video frame; ny is the zoom ratio for the vertical axis of the current video frame.
In view of the foregoing, the apparatus of various embodiments provides receiving a zoom request, determining a target image area to be displayed in a playing window from a zoomed version of the current frame according to the zoom request, image in the target image area in each frame after the current frame is presented in the playing window when each frame subsequent to the current frame is played. Thus, the technical scheme solves the problem that the user may be unable to see details in a video clearly even after the video is played at full screen, e.g., the problem related art cannot satisfy demands of users. As such, a user is enabled to selectively zooming video frames according to the needs, thus can clearly see details in the video.
FIG. 5 is a schematic diagram illustrating modules of a mobile terminal in accordance with embodiments of the present disclosure. A video player application may run in the mobile terminal 600. The specific components are as follows.
The mobile terminal 600 may include a radio frequency (RF) circuit 610, at least one memory of computer-readable storage medium 620, an input unit 630, a display unit 640, at least one sensor 650, an audio circuit 660, a wireless fidelity (WiFi) unit 670, at least one processor 680 and a power supply 6390 and the like. The structure as shown in FIG. 5 is not for restricting the terminal device. The terminal device of various examples may include extra components or may include fewer components, or may have some of the components integrated into one component, or may have a different deployment of the components.
The RF circuit 610 is capable of sending and receiving signals during a process of information sending/receiving process or a voice communication process. In an example, the RF circuit 610 may send downlink information received from a base station to the at least one processor 680 for further processing, and may send uplink data to the base station. The RF circuit 610 may generally include, but not limited to, an antenna, at least one amplifier, a tuner, at least one oscillator, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. The RF circuit 610 may perform wireless communications with a network and other devices. The wireless communication may adopt any communication standard or protocol, including but not limited to: global system of mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short messaging service (SMS), or the like. The storage medium 620 may be used for storing software programs and modules. For example, the storage medium 620 may store a pre-determined time list, a software program for collecting voice signals, a software program for identifying key words, a software program for continuous speech recognition, a software program for setting events and alerts, and store relationships which associates wireless access points with user accounts, or the like. The processor 680 may be capable of executing the software programs and modules stored in the storage device 620 to implement various functions and data processing. The storage medium 620 may mainly include program storage sections and data storage sections. The program storage sections may store an operating system, applications required for implementing at least one function (such as video playing function, image display function, touch screen recognition function, or the like). The data storage sections may store data generated during usage of the mobile terminal 600, or the like. In addition, the storage medium 620 may include a high-speed random access memory, and may also include a non-transitory memory, e.g., at least one disk storage, flash memory or other non-transitory solid state storage device and the like. Correspondingly, the storage device 620 may also include a storage controller to provide the processor 680 and the inputting unit 630 with access to the storage device 620.
The input unit 630 may receive digits or characters inputted, and generate a keyboard input signal, a mouse input signal, a control lever input signal, an optical input signal, or a track ball input signal which is related with user settings and function controlling. In an example, the input unit 630 may include a touch sensitive surface 631 and other inputting devices 632. The touch sensitive surface 631, also referred to as a touch screen or a touchpad, is capable of collecting touch operations performed by a user on the surface or near the surface (e.g., an operation performed on or near the touch sensitive surface 631 using any proper object or attachment such as a finger or a touch pen and etc.), and driving a connecting apparatus corresponding to the operation according to a pre-defined procedure. In an example, the touch sensitive surface 631 may include a touch detecting apparatus and a touch controller. The touch detecting apparatus detects the position touched by the user, detects a signal generated by the touch, and sends the signal to the touch controller. The touch controller receives touch information from the touch detecting apparatus, converts the touch information into coordinates of the touch position, sends the coordinates to the processor 680, receives a command sent by the processor 680 and executes the command. The touch sensitive surface 631 may be implemented via various types of touch techniques such as resistive touch screen, capacitive touch screen, infrared touch screen and surface acoustic wave touch screen and so on. In an example, the input unit 630 may include another input device 632 besides the touch sensitive surface 631. In an example, the another input device 632 may include, but not limited to, at least one of a physical keyboard, a function key (e.g., a volume control key, a power on/off key and etc.), a track ball, a mouse, a control lever and the like.
The display unit 640 is capable of displaying information inputted by the user, information provided for the user and various graphical user interfaces of the mobile terminal 600. The graphical user interfaces may include any combination of graphics, texts, icons, videos. The display unit 640 may include a display panel 641. In an example, the display panel 641 may be implemented by Liquid Crystal Display (LCD), Organic Light-Emitting Diode (OLED) and the like. In an example, the touch sensitive surface 661 may overlay the display panel 641. When detecting a touch operation on or near the touch sensitive surface 661, the touch sensitive surface 661 may send the touch operation to the processor 680 to determine the type of the touch event. Then the processor 680 may provide visual output on the display panel 641 according to the type of the touch event. Although the touch sensitive surface 661 and the display panel 641 are depicted as two independent components respectively for input and output in FIG. 5, the touch sensitive surface 661 and the display panel 641 may be integrated to provide input and output in various examples.
The mobile terminal 600 may also include at least one sensor 650, e.g., an optical sensor, a motion sensor, or other types of sensors. In an example, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the brightness of the display panel 641 according to the strength of ambient light. The proximity sensor may close the display panel 641 and/or the light when the mobile terminal 600 is held close to an ear. A gravity sensor is a type of motion sensor, may detect the amount of acceleration in multiple directions (typically XYZ-axis), the amount and the direction of gravity when kept in stationary, and can be used in applications which need to identify phone postures (such as auto screen rotation, games using the sensing result, magnetometer attitude calibration), features related with vibration identify (such as a pedometer, percussion) and the like. The mobile terminal 600 may include other sensors, e.g., a gyroscope, a barometer, a hygrometer, a thermometer, infrared sensors and the like, which are not listed further herein.
The audio circuit 660, the speaker 661 and the microphone 662 may provide an audio interface between the user and the mobile terminal device 600. An audio circuit 660 may convert received audio data into electrical signals, and send the electrical signals to the speaker 661. The speaker 661 may convert the electrical signals into sound and outputs the sound. The microphone 662 may convert collected sound signals into electrical signals which are received by the audio circuit 660. The audio circuit 660 may convert the electrical signals into audio data, and sends the electrical signals to the processor 680 for processing. The processed audio data may be sent to another terminal device via the RF circuit 610, or be output to the storage device 620 for further processing. The audio circuit 660 may also include an ear jack providing communications between a peripheral earphone and the mobile terminal 600.
The short-distance wireless communication module 670 may be a wireless fidelity (WiFi) module or a Bluetooth module, or the like. The mobile terminal 600 may adopt a WiFi module 270 to provide wireless broadband Internet access to enable a user to send and receive emails, browse webpages and access stream media and so on. In an example, the terminal device 600 may not include the WiFi module 670 although it is shown in FIG. 5. The structure in FIG. 5 is merely an example, modifications can be made as long as they do not change the mechanism of the examples.
The processor 680 is a control center of the mobile terminal 600 which interconnects all of the components in the phone using various interfaces and circuits and monitors the phone by running or executing software programs and/or modules stored in the storage device 620 and calling various functions of the mobile terminal 600 and processing data. The processing unit 680 may include one or multiple processing cores. In an example, the processing unit 680 may integrate an application processor and a modem processor. The application processor mainly handles the operating system, user interfaces and application programs, and etc., and the modem processor mainly handles wireless communications. The modem may not be integrated into the processor 680.
The mobile terminal 600 may also include a power supply 690 (e.g., a battery) providing power for various parts. In an example, the power supply may be logically connected with the processor 680 via a power supply management system to implement functions such as charging, discharging, power management and the like. The power supply 690 may also include any components such as one or multiple AC or DC power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator and the like.
Although not shown in the figures, the mobile terminal 600 may also include a camera, a Bluetooth module and the like, which is not described further herein.
The mobile terminal 600 may also include a storage device and at least one program which may be executed by at least one processor to implement the method of zooming video images of various examples.
According to an example, a non-transitory computer-readable storage medium including instructions, e.g., a storage device including instructions, is also provided. The instructions may be executable by a processor at the mobile terminal to implement the method of zooming video images. For example, the non-transitory computer-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disc, optical storage device, or the like.
The above description of the apparatus of zooming video images takes the above modules as an example. In practice, the functions may be re-divided to be implemented by different modules, e.g., the apparatus may have a different inner structure composed of different modules to implement all or some of the above functions. In addition, the above methods of zooming video images provided by the examples belong to the same conceptive idea. Details have been described in the above, and will not be repeated herein.
The index numbers of the examples are merely for facilitating description, and should not be interpreted to be representative for the preference order of the examples.
Those skilled in the art can understand that some or all of the steps of the methods provided by the embodiments may be implemented by hardware controlled by software. The software may be stored in a computer-readable storage medium.
The foregoing is only embodiments of the present specification. The protection scope of the present specification, however, is not limited to the above. All the modifications, equivalent replacements or improvements, which can be obtained by those skilled in the art, are included within the protection scope of the present specification.

Claims (17)

The invention claimed is:
1. A method of zooming video images, comprising:
at a video player application, receiving a zoom request for zooming a current video frame while a video is being played in a playing window;
determining a zoom center point and a zoom ratio according to the zoom request;
determining vertex coordinates of a target image area of the current video frame according to the zoom center point and the zoom ratio, the vertex coordinates are coordinates of vertices of the playing window, when a zoomed version of the current video frame corresponding to the zoom request is displayed in the playing window, in a coordinate system established using the current video frame; and
rendering, in the playing window, image content within the target image area defined by the vertex coordinates of video frames subsequent to the current video frame when playing the subsequent video frames;
wherein determining vertex coordinates of a target image area according to the zoom center point and the zoom ratio comprises:
calculating a reference horizontal coordinate and a reference vertical coordinate according to the zoom ratio;
calculating texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate; wherein the texture coordinates comprises at least two of the vertex coordinates; and
determining an area defined by the texture coordinates to be the target image area.
2. The method of claim 1, wherein calculating vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate comprises:
if the zoom ratio is smaller than 1, determining the reference vertical coordinate is
xCord = n x * viewHeight viewWidth * ImageWidth ImageHeight ;
determining the reference vertical coordinate is
yCord = n y * viewWidth viewHeight * ImageHeight ImageWidth ;
wherein, nx is a zoom ratio for a horizontal axis of the current video frame; ny is a zoom ratio for a vertical axis of the current video frame; viewWidth his the width of a displaying area of the playing window; viewHeight is the height of the displaying area of the playing window; ImageWidth his the width of the current video frame; ImageHeight is the height of the current video frame.
3. The method of claim 1, wherein calculating vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate comprises:
if the zoom ratio is larger than 1, obtaining a first value which is
viewHeight viewWidth * ImageWidth ImageHeight ;
if the first value reaches a first threshold, determining the reference horizontal coordinate xCord to be the first threshold;
obtaining a second value which is
n y * viewWidth viewHeight * ImageHeight ImageWidth ;
if the second value reaches the first threshold, determining the reference vertical coordinate yCord to be the first threshold;
if the second value does not reach the first threshold, determining the reference vertical coordinate yCord to be the second value;
wherein, ny is a zoom ratio for a horizontal axis of the current video frame; viewWidth his the width of a displaying area in the playing window; viewHeight is the height of the displaying area in the playing window; ImageWidth is the width of the current video frame; ImageHeight is the height of the current video frame.
4. The method of claim 3, further comprising:
if the first value does not reach the first threshold, determining the reference vertical coordinate yCord to be the first threshold;
obtaining a third value which is
n x * viewHeight viewWidth * ImageWidth ImageHeight ;
wherein nx is the zoom ratio for a horizontal axis of the current video frame;
if the third value reaches the first threshold, determining the reference horizontal coordinate xCord to be the first threshold; and
if the third value does not reach the first threshold, determining the reference horizontal coordinate xCord to be the third value.
5. The method of claim 1, wherein calculating coordinate values of the texture coordinates according to the zoom ratio, the reference horizontal coordinate and the reference vertical coordinate comprises:
obtaining a target horizontal coordinate and a target vertical coordinate of the zoom center point; determining the target horizontal coordinate to be:
X 0 = x 0 viewWidth * n x ;
determining the target vertical coordinate to be:
Y 0 = y 0 viewHeight * n y ;
if the reference horizontal coordinate is smaller than a first threshold i and the reference vertical coordinate is the first threshold i, a first horizontal coordinate X1 of the texture coordinates is the first threshold i, a second horizontal coordinate X2 of the texture coordinates is a second threshold j, a first vertical coordinate Y1 of the texture coordinates is
i n y + Y 0 ,
a second vertical coordinate Y2 of the texture coordinates is Y0; wherein when the first vertical coordinate Y1 reaches the first threshold i, determining the first vertical coordinate Y1 is the first threshold i, the second vertical coordinate Y2 is
Y 1 - i n y ;
if the reference vertical coordinate is smaller than the first threshold i and the reference horizontal coordinate is the first threshold i, a first horizontal coordinate X1 of the texture coordinates is
i n x + X 0 ,
a second horizontal coordinate X2 of the texture coordinates is X0, a first vertical coordinate Y1 of the texture coordinates is the first threshold i, a second vertical coordinate Y2 of the texture coordinates is a second threshold j; wherein when the first horizontal coordinate X1 is larger than the first threshold i, determining the first vertical coordinate X1 is the first threshold i, the second vertical coordinate X2 is
X 1 - i n x ;
when the second horizontal coordinate X2 does not reach the second threshold j, determining the first horizontal coordinate X1 is
i n x ,
the second horizontal coordinate X2 is the second threshold j;
if the reference horizontal coordinate and the reference vertical coordinate are both the first threshold, obtaining a value of a first parameter a, a value of a second parameter b, a value of a third parameter c and a value of a fourth parameter d;
determining the first horizontal coordinate X1 of the texture coordinates is
i a / b * c + X 0 ,
the second horizontal coordinate X2 of the texture coordinates is X0, the first vertical coordinate Y1 of the texture coordinates is
i a / b * d + Y 0 ,
the second vertical coordinate Y2 of the texture coordinates is Y0;
wherein when the first horizontal coordinate X1 is larger than the first threshold i, determining the second horizontal coordinate is
X 1 - ( i a / b ) * c ;
when the first vertical coordinate Y1 is larger than the first threshold i, determining the second vertical coordinate Y2 is
Y 1 - ( i a / b ) * d ;
when the second horizontal coordinate is smaller than the second threshold j, determining the second horizontal coordinate is the second threshold j, the first horizontal coordinate is
i a / b * c ;
if the reference horizontal coordinate and the reference vertical coordinate are both smaller than the first threshold i, a first horizontal coordinate X1 of the texture coordinates is the first threshold i, a second horizontal coordinate X2 of the texture coordinates is a second threshold j, a first vertical coordinate Y1 of the texture coordinates is the first threshold i, a second vertical coordinate of the texture coordinates is the second threshold j;
wherein x0 is a horizontal coordinate of the zoom center point in the current video frame;
y0 is the vertical coordinate of the zoom center point in the current video frame;
a is the first threshold i;
b = n y * viewWidth viewHeight * ImageHeight ImageWidth ; c = viewWidth viewHeight * ImageHeight ImageWidth ;
d is the first threshold i;
nx is a zoom ratio for a horizontal axis of the current video frame;
ny is a zoom ratio for a vertical axis of the current video frame.
6. The method of claim 1, wherein receiving a zoom request for zooming a current video frame comprises:
receiving a stretch gesture applied to the current video frame and determining the stretch gesture to be a magnify request; or
receiving a shrink gesture applied to the current video frame and determining the shrink gesture to be a minify request.
7. The method of claim 1, further comprising:
presenting zoom guidance information for guiding a user to zoom video frames.
8. The method of claim 1, further comprising:
receiving a drag request for dragging image content rendered in the playing window that belongs to the target image area of a k'th video frame, wherein the k'th video frame is one of the video frames subsequent to the current video frame; k is a positive integer;
shifting the vertex coordinates to second vertex coordinates according to the drag request to relocate the target image area, wherein the second vertex coordinates are coordinates of vertices of the playing window, when a dragged version of a zoomed k'th video frame corresponding to the drag request is displayed in the playing window, in a coordinate system established using the k'th video frame; and
rendering, in the playing window, image content that falls in the relocated target image area defined by the second vertex coordinates of a video frame subsequent to the k'th video frame when playing video frames subsequent to the k'th video frame.
9. The method of claim 1, further comprising:
receiving a recover request via a recover button; and
playing, in response to the recover request, video frames to be played according to a play mode used before the zooming according to the zoom request.
10. The method of claim 1, wherein calculating vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate comprises:
assuming the reference horizontal coordinate is xCord; the reference vertical coordinate is yCord;
determining the vertex coordinates to be: (xCord, −yCord), (xCord, yCord), (−xCord, yCord), and (−xCord, −yCord).
11. A mobile terminal, comprising:
at least one processor; and
a storage device;
wherein the storage device stores at least one program executable by the at least one processor, the at least one program comprises instructions for:
receiving a zoom request for zooming a current video frame while a video is being played;
determining a zoom center point and a zoom ratio according to the zoom request;
determining vertex coordinates of a target image area to be displayed in a playing window from the current video frame after zooming according to the zoom center point and the zoom ratio; and
rendering, in the playing window, image content within the target image area defined by the vertex coordinates of video frames subsequent to the current video frame when playing subsequent video frames;
wherein the at least one program comprises instructions for:
receiving a drag request for dragging image content rendered in playing window that belongs to the target image area of a k'th video frame, wherein the k'th video frame is one of the video frames subsequent to the current video frame; k is a positive integer;
shifting the vertex coordinates to second vertex coordinates according to the drag request to relocate the target image area, wherein the second vertex coordinates are coordinates of vertices of the playing window, when a dragged version of a zoomed k'th video frame corresponding to the drag request is displayed in the playing window, in a coordinate system established using the k'th video frame; and
rendering, in the playing window, image content that falls in the relocated target image area defined by the second vertex coordinates of a video frame subsequent to the k'th video frame when playing video frames subsequent to the k'th video frame.
12. The mobile terminal of claim 11, wherein the at least one program comprises instructions for:
calculating a reference horizontal coordinate and a reference vertical coordinate according to the zoom ratio;
calculating texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate;
wherein the texture coordinates comprises at least two vertexes of the playing window presenting a zoomed version of the current video frame in an un-zoomed version of the current video frame, the at least two vertexes comprises diagonal vertexes of the playing window; and
determining an area defined by the texture coordinates to be the target image area.
13. The mobile terminal of claim 12, wherein the at least one program comprises instructions for:
calculating vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate; and
rendering the image content within the target image area of the subsequent video frames in an area corresponding to the vertex coordinates according to the texture coordinates.
14. The mobile terminal of claim 11, wherein the at least one program comprises instructions for:
receiving a stretch gesture applied to the current video frame and determining the stretch gesture to be a magnify request; or
receiving a shrink gesture applied to the current video frame and determining the shrink gesture to be a minify request.
15. The mobile terminal of claim 11, wherein the at least one program comprises instructions for:
presenting zoom guidance information for guiding a user to zoom video frames.
16. The mobile terminal of claim 11, wherein the at least one program comprises instructions for:
receiving a recover request via a recover button; and
playing, in response to the recover request, video frames to be played according to a play mode used before the zooming according to the zoom request.
17. A non-transitory storage medium, comprising computer-readable instructions executable by a processor to:
receive a zoom request for zooming a current video frame while a video is being played;
determine a zoom center point and a zoom ratio according to the zoom request;
determine a target image area to be displayed in a playing window from the current video frame after zooming according to the zoom center point and the zoom ratio; and
render, in the playing window, image content within the target image area of video frames subsequent to the current video frame when playing subsequent video frames;
wherein the instructions that cause the processor to determine a target image area to be displayed in a playing window from the current video frame after zooming according to the zoom center point and the zoom ratio cause the processor to:
calculate a reference horizontal coordinate and a reference vertical coordinate according to the zoom ratio;
calculate texture coordinates according to the zoom center point, the reference horizontal coordinate and the reference vertical coordinate; wherein the texture coordinates comprises coordinates of at least two vertexes of the playing window presenting a zoomed version of the current video frame in an un-zoomed version of the current video frame, the at least two vertexes comprises diagonal vertexes of the playing window; and
determine an area defined by the texture coordinates to be the target image area, wherein rendering in the playing window image content within the target image area of video frames subsequent to the current video frame when playing subsequent video frames comprises:
calculating vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate; and
rendering the image content within the target image area of the subsequent video frames in an area defined by the vertex coordinates according to the texture coordinates;
wherein calculating vertex coordinates of the playing window according to the reference horizontal coordinate and the reference vertical coordinate comprises:
assuming the reference horizontal coordinate is xCord; the reference vertical coordinate is yCord;
determining the vertex coordinates to be: (xCord, −yCord), (xCord, yCord), (−xCord, yCord), and (−xCord, −yCord).
US15/681,192 2015-04-16 2017-08-18 Method of zooming video images and mobile display terminal Active US10397649B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201510181200 2015-04-16
CN201510181200.6A CN104822088B (en) 2015-04-16 2015-04-16 Video image zooming method and apparatus
CN201510181200.6 2015-04-16
PCT/CN2016/078352 WO2016165568A1 (en) 2015-04-16 2016-04-01 Method for scaling video image, and mobile terminal

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/078352 Continuation WO2016165568A1 (en) 2015-04-16 2016-04-01 Method for scaling video image, and mobile terminal

Publications (2)

Publication Number Publication Date
US20170347153A1 US20170347153A1 (en) 2017-11-30
US10397649B2 true US10397649B2 (en) 2019-08-27

Family

ID=53732233

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/681,192 Active US10397649B2 (en) 2015-04-16 2017-08-18 Method of zooming video images and mobile display terminal

Country Status (4)

Country Link
US (1) US10397649B2 (en)
KR (1) KR101951135B1 (en)
CN (1) CN104822088B (en)
WO (1) WO2016165568A1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104822088B (en) 2015-04-16 2019-03-19 腾讯科技(北京)有限公司 Video image zooming method and apparatus
CN106817533A (en) * 2015-11-27 2017-06-09 小米科技有限责任公司 Image processing method and device
CN107547913B (en) * 2016-06-27 2021-06-18 阿里巴巴集团控股有限公司 Video data playing and processing method, client and equipment
CN106373169A (en) * 2016-08-30 2017-02-01 广东成德电子科技股份有限公司 Parameter-driven printed circuit board bitmap duplication method and system
WO2018097632A1 (en) 2016-11-25 2018-05-31 Samsung Electronics Co., Ltd. Method and device for providing an image
CN107577398B (en) * 2017-08-08 2021-03-12 深圳Tcl新技术有限公司 Interface animation control method, device and storage medium
CN109598672B (en) * 2017-09-30 2021-12-14 腾讯科技(深圳)有限公司 Map road rendering method and device
CN108170350A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 Realize method, terminal and the computer readable storage medium of Digital Zoom
CN109121000A (en) * 2018-08-27 2019-01-01 北京优酷科技有限公司 A kind of method for processing video frequency and client
CN110876079B (en) * 2018-08-31 2022-05-06 阿里巴巴集团控股有限公司 Video processing method, device and equipment
CN111277886B (en) * 2018-11-16 2022-10-28 北京字节跳动网络技术有限公司 Panoramic video view field control method and device, electronic equipment and storage medium
CN110933493A (en) * 2018-12-03 2020-03-27 北京仁光科技有限公司 Video rendering system, method and computer-readable storage medium
CN109729408B (en) * 2018-12-19 2022-03-11 四川坤和科技有限公司 A mobile terminal high-definition online video scaling method
CN110275749B (en) * 2019-06-19 2022-03-11 深圳顺盈康��疗设备有限公司 Surface amplifying display method
CN112446904B (en) * 2019-08-30 2024-04-09 西安诺瓦星云科技股份有限公司 Image alignment method, device and system
CN110764764B (en) * 2019-09-16 2024-03-01 平安科技(深圳)有限公司 Webpage end image fixed stretching method and device, computer equipment and storage medium
CN110996150A (en) * 2019-11-18 2020-04-10 咪咕动漫有限公司 Video fusion method, electronic device and storage medium
CN111221455B (en) 2020-01-06 2022-03-04 北京字节跳动网络技术有限公司 Material display method and device, terminal and storage medium
CN111722781A (en) * 2020-06-22 2020-09-29 京东方科技集团股份有限公司 Intelligent interaction method and device, storage medium
CN112218157A (en) * 2020-10-10 2021-01-12 杭州赛鲁班网络科技有限公司 System and method for intelligently focusing video
CN112367559B (en) * 2020-10-30 2022-10-04 北京达佳互联信息技术有限公司 Video display method and device, electronic equipment, server and storage medium
CN112667345A (en) * 2021-01-25 2021-04-16 深圳市景阳信息技术有限公司 Image display method and device, electronic equipment and readable storage medium
CN113259767B (en) * 2021-06-15 2021-09-17 北京新片场传媒股份有限公司 Method and device for zooming audio and video data and electronic equipment
US11847807B2 (en) 2021-07-02 2023-12-19 Genesys Logic, Inc. Image processing system and processing method of video stream
TWI824321B (en) * 2021-07-02 2023-12-01 創惟科技股份有限公司 Image controller, image processing system and image modifying method
CN116233539A (en) * 2023-03-07 2023-06-06 北京字跳网络技术有限公司 A method and device for displaying information

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6400852B1 (en) * 1998-12-23 2002-06-04 Luxsonor Semiconductors, Inc. Arbitrary zoom “on -the -fly”
CN101325040A (en) 2008-07-16 2008-12-17 宇龙计算机通信科技(深圳)有限公司 Mobile terminal capable of adjusting resolution and method for adjusting resolution of the mobile terminal
US20100026721A1 (en) * 2008-07-30 2010-02-04 Samsung Electronics Co., Ltd Apparatus and method for displaying an enlarged target region of a reproduced image
CN102377960A (en) 2010-08-24 2012-03-14 腾讯科技(深圳)有限公司 Video picture displaying method and device
KR20120024058A (en) 2010-09-03 2012-03-14 에스케이플래닛 주식회사 Digital contents service system, methods for creating and providing digital contents
US20120092381A1 (en) * 2010-10-19 2012-04-19 Microsoft Corporation Snapping User Interface Elements Based On Touch Input
US20130009997A1 (en) * 2011-07-05 2013-01-10 Research In Motion Limited Pinch-to-zoom video apparatus and associated method
CN103888840A (en) 2014-03-27 2014-06-25 电子科技大学 Method and device for dragging and zooming video mobile terminal in real time
US8817052B2 (en) * 2009-11-02 2014-08-26 Sony Corporation Information processing apparatus, image enlargement processing method, and computer program product with visible data area enlargement features
US20140282061A1 (en) * 2013-03-14 2014-09-18 United Video Properties, Inc. Methods and systems for customizing user input interfaces
KR20140133081A (en) 2013-05-09 2014-11-19 엘지전자 주식회사 Mobile terminal and sharing contents displaying method thereof
CN104238863A (en) * 2014-08-29 2014-12-24 广州视睿电子科技有限公司 Android-based circle selection scaling method and system
CN104469398A (en) 2014-12-09 2015-03-25 北京清源新创科技有限公司 Network video image processing method and device
CN104822088A (en) 2015-04-16 2015-08-05 腾讯科技(北京)有限公司 Video image zooming method and device
US20150268822A1 (en) * 2014-03-21 2015-09-24 Amazon Technologies, Inc. Object tracking in zoomed video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201352055A (en) * 2012-06-01 2013-12-16 Jinone Inc Apparatus for controlling LED sub-series

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6400852B1 (en) * 1998-12-23 2002-06-04 Luxsonor Semiconductors, Inc. Arbitrary zoom “on -the -fly”
CN101325040A (en) 2008-07-16 2008-12-17 宇龙计算机通信科技(深圳)有限公司 Mobile terminal capable of adjusting resolution and method for adjusting resolution of the mobile terminal
US20100026721A1 (en) * 2008-07-30 2010-02-04 Samsung Electronics Co., Ltd Apparatus and method for displaying an enlarged target region of a reproduced image
US8817052B2 (en) * 2009-11-02 2014-08-26 Sony Corporation Information processing apparatus, image enlargement processing method, and computer program product with visible data area enlargement features
CN102377960A (en) 2010-08-24 2012-03-14 腾讯科技(深圳)有限公司 Video picture displaying method and device
US20130083078A1 (en) 2010-08-24 2013-04-04 Tencent Technology (Shenzhen) Company Limited Method and Apparatus for Presenting a Video Screen
KR20120024058A (en) 2010-09-03 2012-03-14 에스케이플래닛 주식회사 Digital contents service system, methods for creating and providing digital contents
US20120092381A1 (en) * 2010-10-19 2012-04-19 Microsoft Corporation Snapping User Interface Elements Based On Touch Input
US20130009997A1 (en) * 2011-07-05 2013-01-10 Research In Motion Limited Pinch-to-zoom video apparatus and associated method
US20140282061A1 (en) * 2013-03-14 2014-09-18 United Video Properties, Inc. Methods and systems for customizing user input interfaces
KR20140133081A (en) 2013-05-09 2014-11-19 엘지전자 주식회사 Mobile terminal and sharing contents displaying method thereof
US20150268822A1 (en) * 2014-03-21 2015-09-24 Amazon Technologies, Inc. Object tracking in zoomed video
CN103888840A (en) 2014-03-27 2014-06-25 电子科技大学 Method and device for dragging and zooming video mobile terminal in real time
CN104238863A (en) * 2014-08-29 2014-12-24 广州视睿电子科技有限公司 Android-based circle selection scaling method and system
CN104469398A (en) 2014-12-09 2015-03-25 北京清源新创科技有限公司 Network video image processing method and device
CN104822088A (en) 2015-04-16 2015-08-05 腾讯科技(北京)有限公司 Video image zooming method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
International Preliminary Report on Patentability Issued in International Application No. PCT/CN2016/078352 dated Oct. 17, 2017.
International Search Report with Translation for International Application No. PCT/CN2016/078352 dated Jun. 3, 2016.
Office Action Issued in Chinese Application No. 201510181200.6 dated Jun. 2, 2017.
Office Action with Explanation of Relevance Issued in Chinese Application No. 201510181200.6 dated Oct. 9, 2018.
Office Action with Translation for Korean Patent Application No. 10-2017-7018479 dated May 25, 2018.

Also Published As

Publication number Publication date
CN104822088A (en) 2015-08-05
KR101951135B1 (en) 2019-02-21
KR20170089929A (en) 2017-08-04
CN104822088B (en) 2019-03-19
WO2016165568A1 (en) 2016-10-20
US20170347153A1 (en) 2017-11-30

Similar Documents

Publication Publication Date Title
US10397649B2 (en) Method of zooming video images and mobile display terminal
CN111061574B (en) Object sharing method and electronic device
US11054988B2 (en) Graphical user interface display method and electronic device
CN110096326B (en) Screen capture method, terminal device and computer-readable storage medium
US10133480B2 (en) Method for adjusting input-method keyboard and mobile terminal thereof
CN111142991A (en) Application function page display method and electronic equipment
CN108446058B (en) Operation method of a mobile terminal and mobile terminal
CN105975190B (en) Graphical interface processing method, device and system
JP2015007949A (en) Display device, display controlling method, and computer program
CN107193451B (en) Information display method, apparatus, computer equipment, and computer-readable storage medium
CN108664190A (en) page display method, device, mobile terminal and storage medium
CN108920069B (en) Touch operation method and device, mobile terminal and storage medium
US20150089431A1 (en) Method and terminal for displaying virtual keyboard and storage medium
CN108287650A (en) One-handed performance method based on mobile terminal and mobile terminal
CN103399657B (en) The control method of mouse pointer, device and terminal unit
CN110647277A (en) Control method and terminal equipment
CN113050863A (en) Page switching method and device, storage medium and electronic equipment
CN105513098B (en) Image processing method and device
CN103885692A (en) Page changing method, device and terminal
CN110941378B (en) Video content display method and electronic equipment
CN117435109A (en) Content display method and device and computer readable storage medium
CN109032487A (en) Electronic device control method, electronic device control device, storage medium and electronic device
CN108287745A (en) A kind of display methods and terminal device at the interfaces WebApp
CN109104573B (en) Method for determining focusing point and terminal equipment
CN108628534B (en) Character display method and mobile terminal

Legal Events

Date Code Title Description
AS Assignment

Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHI

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZUO, HONGTAO;REEL/FRAME:043347/0306

Effective date: 20170808

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4