US 12,223,601 B2
Three dimensional virtual room-based user interface for a home automation system
Robert P. Madonna, Osterville, MA (US); Maxwell R. Madonna, Santa Monica, CA (US); David W. Tatzel, West Yarmouth, MA (US); Michael A. Molta, Nantucket, MA (US); and Timothy Kallman, Hyannis, MA (US)
Assigned to Savant Systems, Inc., Hyannis, MA (US)
Filed by Savant Systems, Inc., Hyannis, MA (US)
Filed on May 23, 2023, as Appl. No. 18/201,046.
Application 18/201,046 is a continuation of application No. 17/018,886, filed on Sep. 11, 2020, granted, now 11,688,140.
Claims priority of provisional application 62/898,941, filed on Sep. 11, 2019.
Prior Publication US 2023/0290073 A1, Sep. 14, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06T 15/20 (2011.01); G06T 15/50 (2011.01); H04L 12/28 (2006.01)
CPC G06T 19/003 (2013.01) [G06T 15/20 (2013.01); G06T 15/503 (2013.01); G06T 15/506 (2013.01); H04L 12/2814 (2013.01); H04L 12/282 (2013.01); H04L 12/2829 (2013.01); H04L 2012/285 (2013.01)] 28 Claims
OG exemplary drawing
 
1. A method for controlling a home automation system using a user-navigable three-dimensional (3-D) virtual room that corresponds to a physical room, comprising:
rendering and displaying, by a control application (app) executing on an electronic device, the user-navigable 3-D virtual room from a perspective defined by a virtual camera, wherein the control app renders the user-navigable 3-D virtual room based on data from at least one of a plurality of two-dimensional (2-D) images of the physical room captured from different respective positions in the physical room, and the user-navigable 3-D virtual room includes depictions of one or more devices present in the physical room that are under the control of the home automation system, depictions of one or more boundaries of the physical room, and depictions of one or more furnishings present in the physical room;
receiving an explicit navigation command or implicit action from a user;
in response to the explicit navigation command or implicit action, translating or rotating the virtual camera, by the control app, to alter a position or an orientation of the virtual camera;
re-rendering and displaying, by the control app, the user-navigable 3-D virtual room from a new perspective defined by the altered position or orientation, wherein the new perspective does not coincide with the position in the physical room from which any of the 2-D images were captured, and the control app re-renders the user-navigable 3-D virtual room by blending data from multiple 2-D images captured from different positions to show the user-navigable 3-D virtual room from the new perspective;
receiving a user interaction associated with the user-navigable 3-D virtual room; and
in response to the user interaction, causing the home automation system to change a state of a device in the physical room.
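The claimed method can be illustrated with a minimal sketch. This is not the patented implementation — the claim does not specify a blending algorithm — so the inverse-distance weighting over capture positions below, and all class and device names (`Capture`, `VirtualCamera`, `HomeAutomationStub`, `ceiling_light`), are illustrative assumptions standing in for the claim's "blending data from multiple 2-D images captured from different positions":

```python
import math
from dataclasses import dataclass

@dataclass
class Capture:
    """A 2-D image of the physical room and the position it was captured from."""
    position: tuple
    image_id: str

@dataclass
class VirtualCamera:
    """Virtual camera whose position/orientation define the rendered perspective."""
    position: list
    yaw_deg: float = 0.0

    def translate(self, dx, dy):
        # explicit navigation command: move the camera
        self.position[0] += dx
        self.position[1] += dy

    def rotate(self, delta_deg):
        # explicit navigation command: turn the camera
        self.yaw_deg = (self.yaw_deg + delta_deg) % 360

def blend_weights(camera_pos, captures):
    """Illustrative blending: weight each capture by inverse distance
    from the (possibly novel) camera position to its capture position."""
    dists = [math.dist(camera_pos, c.position) for c in captures]
    if any(d == 0 for d in dists):
        # camera coincides with a capture position: use that image alone
        return [1.0 if d == 0 else 0.0 for d in dists]
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    return [w / total for w in inv]

class HomeAutomationStub:
    """Stand-in for the home automation system controlled via the virtual room."""
    def __init__(self):
        self.states = {}

    def set_state(self, device, state):
        self.states[device] = state

# Two 2-D images captured from different positions in the physical room.
captures = [Capture((0.0, 0.0), "img_a"), Capture((4.0, 0.0), "img_b")]

# Navigation moves the camera to a perspective that coincides with
# neither capture position, so the re-render blends both images.
cam = VirtualCamera(position=[0.0, 0.0])
cam.translate(1.0, 0.0)
weights = blend_weights(tuple(cam.position), captures)
# camera at (1, 0): distances 1 and 3, so weights are 0.75 and 0.25

# A user interaction on a device depiction changes the device's state.
hub = HomeAutomationStub()
hub.set_state("ceiling_light", "on")
```

The weights would drive per-pixel compositing of the projected 2-D images in a real renderer; here they only demonstrate how a novel camera position draws on multiple captures at once.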