US 11,722,537 B2
Communication sessions between computing devices using dynamically customizable interaction environments
Charles A. Andon, Nashua, NH (US); and Steven W. Hansen, Groton, MA (US)
Assigned to SaySearch, Inc., Nashua, NH (US)
Filed by SaySearch, Inc., Nashua, NH (US)
Filed on Dec. 30, 2020, as Appl. No. 17/138,637.
Application 17/138,637 is a continuation of application No. 15/959,009, filed on Apr. 20, 2018, granted, now Pat. No. 10,917,445.
Claims priority of provisional application 62/487,871, filed on Apr. 20, 2017.
Prior Publication US 2021/0120054 A1, Apr. 22, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. H04L 65/4053 (2022.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01); H04L 65/1089 (2022.01); H04L 65/403 (2022.01)
CPC H04L 65/4053 (2013.01) [G06T 19/003 (2013.01); G06T 19/20 (2013.01); H04L 65/1089 (2013.01); H04L 65/403 (2013.01)]
20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
providing, by one or more configured computing systems, an extensible communication system that manages visual interactions between users, including:
providing a plurality of system-provided layers that each implement an associated type of functionality available to be included in the visual interactions;
providing extensibility capabilities to enable additional entities to add additional entity-provided layers that each implement an additional type of functionality available to be included in the visual interactions; and
providing integration capabilities to combine visual aspects from multiple selected layers to create resulting visual information for use in the visual interactions, including resolving conflicts between the multiple selected layers using priorities associated with the multiple selected layers to control whether some information from lower-priority layers is included in the resulting visual information; and
using, by the one or more configured computing systems, the extensible communication system to manage a session of visual interactions between multiple users using Web browsers executing on client devices of the multiple users, including:
receiving a selection of multiple layers to use in the session, wherein the multiple layers include at least one system-provided layer and at least one additional entity-provided layer, and wherein each of the multiple layers specifies a data source that provides information to be shown in that layer and specifies one or more functions for users to interact with the provided information for that layer to provide at least one type of functionality for that layer as part of the visual interactions of the users;
using the integration capabilities to combine visual aspects of the selected multiple layers in a resulting visible sphere around a center point that includes views in multiple directions from the center point, wherein the visible sphere includes visual information from each of the multiple layers;
initiating the session by, for each of the multiple users, participating in interactions with a Web browser that is executing on a client device of the user to determine an initial view orientation of the user that is one of the multiple directions from the center point, and to transmit visual information to the Web browser of the user that corresponds to a subset of the visible sphere that is visible from the center point using the determined initial view orientation of the user; and
continuing the session over time by performing further interactions with the Web browsers to update visual information being displayed to the multiple users based on actions of at least some of the multiple users and to perform communications between the multiple users.
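The claim's integration step combines visual aspects from multiple selected layers and resolves conflicts using priorities associated with those layers. The sketch below illustrates one way such priority-based compositing could work; the names (Layer, compositeLayers, render) and the overwrite-on-conflict rule are illustrative assumptions, not the patentee's implementation.

    interface Layer {
      id: string;
      priority: number;                              // higher value wins conflicts (assumption)
      provider: "system" | "entity";                 // system-provided or additional entity-provided
      render(region: string): Map<string, string>;   // region key -> visual fragment from this layer's data source
    }

    // Combine visual aspects from the selected layers into resulting visual
    // information; on a conflicting region key, the higher-priority layer
    // controls whether the lower-priority layer's information is included.
    function compositeLayers(layers: Layer[], region: string): Map<string, string> {
      const result = new Map<string, string>();
      const ordered = [...layers].sort((a, b) => a.priority - b.priority);
      for (const layer of ordered) {                 // lower priority first, higher priority last
        for (const [key, fragment] of layer.render(region)) {
          result.set(key, fragment);                 // later (higher-priority) layers overwrite on conflict
        }
      }
      return result;
    }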
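The session-initiation step transmits to each user's Web browser only the subset of the visible sphere that is visible from the center point in that user's determined view orientation. A minimal sketch of one way to select such a subset follows, assuming the sphere is pre-tiled by yaw and pitch and orientations are expressed in degrees; the tile model and field-of-view parameter are assumptions for illustration.

    interface ViewOrientation { yaw: number; pitch: number; }                 // degrees
    interface SphereTile { yaw: number; pitch: number; payload: Uint8Array; } // one patch of the visible sphere (assumption)

    // Return only the tiles of the visible sphere that fall within the user's
    // field of view, centered on the determined view orientation.
    function visibleSubset(
      tiles: SphereTile[],
      view: ViewOrientation,
      fovDegrees: number,
    ): SphereTile[] {
      const half = fovDegrees / 2;
      const yawDiff = (a: number, b: number): number => {
        const d = Math.abs(a - b) % 360;             // shortest angular distance around the circle
        return d > 180 ? 360 - d : d;
      };
      return tiles.filter(
        (t) => yawDiff(t.yaw, view.yaw) <= half && Math.abs(t.pitch - view.pitch) <= half,
      );
    }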
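For the continuing-session step, one plausible arrangement (again an assumption rather than the claimed implementation) is to recompute a user's visible subset whenever that user reports a new view orientation and push the refreshed view over the existing browser connection, reusing the ViewOrientation, SphereTile, and visibleSubset definitions from the previous sketch.

    interface Participant {
      id: string;
      view: ViewOrientation;
      send(update: SphereTile[]): void;   // e.g. over a WebSocket to the user's Web browser (assumption)
    }

    // Handle one kind of user action (a change of view orientation): update the
    // stored orientation and push the refreshed subset of the visible sphere to
    // that user's browser.
    function onOrientationChange(
      participant: Participant,
      newView: ViewOrientation,
      sphere: SphereTile[],
      fovDegrees: number,
    ): void {
      participant.view = newView;
      participant.send(visibleSubset(sphere, newView, fovDegrees));
    }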