US 12,076,868 B2
Cloud based computer-implemented visually programming method and system for robotic motions in construction
Shih-Chung Kang, Taipei (TW); Liang-Ting Tsai, Taipei (TW); and Cheng-Hsuan Yang, Taipei (TW)
Assigned to ROBIM TECHNOLOGIES INC. OF CANADA, Edmonton (CA)
Filed by SMART BUILDING TECH CO., LTD., Taipei (TW)
Filed on Mar. 31, 2021, as Appl. No. 17/218,653.
Claims priority of provisional application 63/007,060, filed on Apr. 8, 2020.
Prior Publication US 2021/0316458 A1, Oct. 14, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. B25J 9/16 (2006.01); G06F 3/0486 (2013.01); G06F 8/34 (2018.01); G06F 16/957 (2019.01)
CPC B25J 9/1671 (2013.01) [B25J 9/1664 (2013.01); B25J 9/1689 (2013.01); G06F 3/0486 (2013.01); G06F 8/34 (2013.01); G06F 16/9577 (2019.01)] 7 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
providing a computer-assisted cloud-based robotic construction software platform which integrates and comprises an externally-operated building information model (BIM) module for linking BIM data to robot controls, and a robot simulator, for a user to select and operate, to provide comprehensive online cloud-computation-based computer-assisted virtual planning, simulation, and demonstration for a virtual robotic device corresponding to a robotic device in reality, wherein the robotic device in reality performing robot motions is dedicated to prefabricating a plurality of construction components in an off-site robotic semi-automated building construction work in the field of civil engineering in accordance with BIM data imported from the externally-operated BIM module and corresponding to actual building construction information conditions in reality in the off-site robotic semi-automated building construction work;
causing a visual programming panel, comprising a dual robot timeline editor for visual programming that integrates and comprises a first timeline editor and a second timeline editor associated with a first target robot and a second target robot, respectively, a play button, and a plurality of motion blocks enabling a variety of robotic motions, to be displayed in a visualization interface provided by the robot simulator shown on a web browser;
receiving from the user, at the visual programming panel, a selection of at least two motion blocks from the plurality of motion blocks, and adding the at least two motion blocks into the first timeline editor and the second timeline editor, respectively, via a drag-and-drop operation, according to a first temporal sequence of performance and a second temporal sequence of performance in which the first target robot and the second target robot are scheduled to perform the at least two motion blocks in time, to form a first motion configuration and a second motion configuration, respectively, wherein an order of a first spatial arrangement of the at least two motion blocks in the first timeline editor and an order of a second spatial arrangement of the at least two motion blocks in the second timeline editor are the first temporal sequence of performance and the second temporal sequence of performance, respectively;
at the dual robot timeline editor, providing for the user to manually perform a trial-and-error test to find a first collision-free path and a second collision-free path between the first target robot and the second target robot, respectively, including manually activating an execution of a collision check for the first and second motion configurations by clicking the play button on the dual robot timeline editor to detect whether or not the first and second motion configurations are configured to form the first and second collision-free paths for the first and second target robots to follow based on the first and second spatial arrangements in the first and second timeline editors, respectively, highlighting a specific motion causing a collision if the first or second motion configuration fails to pass the collision check in the first or second timeline editor, and manually rearranging or adjusting the first or second spatial arrangement, respectively, if the first or second motion configuration fails to pass the collision check in the first or second timeline editor, until the first and second motion configurations both pass the collision check;
simultaneously and visually simulating, by an animated simulation in the visualization interface provided by the robot simulator shown on the web browser, the first target robot and the second target robot performing the first motion configuration and the second motion configuration represented by the at least two motion blocks according to the first temporal sequence of performance defined in the first timeline editor and the second temporal sequence of performance defined in the second timeline editor, respectively; and
according to the first motion configuration and the second motion configuration at the visual programming panel, automatically generating a program configured to command a first end effector equipped on the first target robot and a second end effector equipped on the second target robot in a work cell to respectively perform the at least two selected robotic motions in the robot simulator.
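
The following sketches are illustrative only and are not part of the claim. As a minimal sketch of the data model behind the dual robot timeline editor and the drag-and-drop step, assuming TypeScript in the browser and invented names (MotionBlock, TimelineEditor, dropBlock), the array order of blocks in each editor can serve as both the spatial arrangement and the temporal sequence of performance:

    // Hypothetical TypeScript data model; all names are illustrative and do not
    // appear in the patent.
    type MotionKind = "moveJoint" | "moveLinear" | "pick" | "place" | "weld";

    interface MotionBlock {
      id: string;
      kind: MotionKind;                 // one of the variety of robotic motions
      params: Record<string, number>;   // e.g. joint angles, target pose, speed
    }

    interface TimelineEditor {
      targetRobot: "robot1" | "robot2";
      blocks: MotionBlock[];            // array order = spatial arrangement = temporal sequence
    }

    // Drag-and-drop handler: dropping a block at a slot index fixes its place in
    // the temporal sequence of performance for that robot.
    function dropBlock(editor: TimelineEditor, block: MotionBlock, slot: number): void {
      editor.blocks.splice(slot, 0, block);
    }

Because the array order doubles as the schedule, no separate time-stamping structure is needed for the collision check or the simulation.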
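
A hedged sketch of the play-button collision check follows: both motion configurations are stepped in lockstep, an assumed simulator query (checkCollision is hypothetical, not the platform's documented API) is consulted at each step, and the specific motion block causing a collision is highlighted so the user can rearrange it and click play again. The MotionBlock and TimelineEditor types from the previous sketch are reused.

    interface CollisionReport {
      collides: boolean;
      robot: "robot1" | "robot2";
      blockId: string;
    }

    function onPlay(
      first: TimelineEditor,
      second: TimelineEditor,
      checkCollision: (a: MotionBlock | null, b: MotionBlock | null) => CollisionReport[]
    ): boolean {
      const steps = Math.max(first.blocks.length, second.blocks.length);
      for (let t = 0; t < steps; t++) {
        const reports = checkCollision(first.blocks[t] ?? null, second.blocks[t] ?? null);
        const hit = reports.find(r => r.collides);
        if (hit) {
          highlightBlock(hit.robot, hit.blockId); // highlight the specific motion causing the collision
          return false;                           // user rearranges the blocks and clicks play again
        }
      }
      return true;                                // both motion configurations pass the collision check
    }

    function highlightBlock(robot: string, blockId: string): void {
      // Assumes each motion block is rendered as a DOM element with id "<robot>-<blockId>".
      document.getElementById(`${robot}-${blockId}`)?.classList.add("collision");
    }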
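
The simultaneous animated simulation of both robots could be driven as below; Simulator.animate is an assumed interface that animates one motion block for one robot in the browser view, and the types from the first sketch are reused.

    interface Simulator {
      animate(robot: "robot1" | "robot2", block: MotionBlock): Promise<void>;
    }

    async function simulateBoth(sim: Simulator, first: TimelineEditor, second: TimelineEditor): Promise<void> {
      const steps = Math.max(first.blocks.length, second.blocks.length);
      for (let t = 0; t < steps; t++) {
        // Both robots advance through step t of their respective temporal sequences at the same time.
        await Promise.all([
          first.blocks[t] ? sim.animate("robot1", first.blocks[t]) : Promise.resolve(),
          second.blocks[t] ? sim.animate("robot2", second.blocks[t]) : Promise.resolve(),
        ]);
      }
    }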
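
Finally, a sketch of automatically generating a command program for each end effector from the corresponding motion configuration; the emitted text format and the end-effector names are invented for illustration and do not reflect any particular robot controller language.

    function generateProgram(editor: TimelineEditor, endEffector: string): string {
      return editor.blocks
        .map((b, i) => `STEP ${i + 1}: ${b.kind.toUpperCase()} ${JSON.stringify(b.params)} USING ${endEffector}`)
        .join("\n");
    }

    // e.g. generateProgram(firstTimeline, "gripper-A") for the first target robot and
    //      generateProgram(secondTimeline, "welder-B") for the second target robot.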