CPC B25J 9/1669 (2013.01) [B07C 3/18 (2013.01); B07C 5/36 (2013.01); B25J 9/0093 (2013.01); B25J 9/1612 (2013.01); B25J 9/1664 (2013.01); B25J 9/1687 (2013.01); B25J 9/1697 (2013.01); B25J 19/04 (2013.01); G05B 19/4183 (2013.01); G05B 2219/32037 (2013.01); G05B 2219/39106 (2013.01); G05B 2219/39295 (2013.01); G05B 2219/39476 (2013.01); G05B 2219/39484 (2013.01); G05B 2219/39504 (2013.01); G05B 2219/39548 (2013.01); G05B 2219/40053 (2013.01); G05B 2219/40078 (2013.01); G05B 2219/40116 (2013.01); G05B 2219/40538 (2013.01); G05B 2219/45045 (2013.01); Y02P 90/02 (2015.11)]
27 Claims

1. An object processing system comprising:
a programmable motion device including an end-effector;
a perception unit for capturing real-time image data of a plurality of objects at an input area;
an interactive display system including a touch screen input display for displaying the real-time image data and through which machine learning grasp input data regarding the plurality of objects is received; and
a control system for accessing the machine learning grasp input data and for providing object grasp information regarding a grasp location for grasping an object of the plurality of objects responsive to the machine learning grasp input data.
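For illustration only, the following is a minimal Python sketch of the data flow recited in claim 1: grasp annotations entered through a touch screen are accumulated as machine learning grasp input data, and a control system retrieves a grasp location from them. All names here (GraspAnnotation, GraspDatabase, ControlSystem) are hypothetical and are not taken from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class GraspAnnotation:
    """One touch-screen grasp label: the pixel an operator tapped."""
    object_id: str
    u: int  # image column of the tapped grasp point
    v: int  # image row of the tapped grasp point

@dataclass
class GraspDatabase:
    """Accumulates machine learning grasp input data from the display."""
    annotations: list = field(default_factory=list)

    def record(self, annotation: GraspAnnotation) -> None:
        self.annotations.append(annotation)

    def lookup(self, object_id: str):
        """Return the most recent annotated grasp point for an object, if any."""
        for a in reversed(self.annotations):
            if a.object_id == object_id:
                return (a.u, a.v)
        return None

class ControlSystem:
    """Maps recorded annotations to a grasp location for the end-effector."""
    def __init__(self, db: GraspDatabase):
        self.db = db

    def grasp_location(self, object_id: str, fallback=(0, 0)):
        return self.db.lookup(object_id) or fallback

# Usage: an operator taps an object in the displayed real-time image;
# the control system later provides that point as the grasp location.
db = GraspDatabase()
db.record(GraspAnnotation("sku-123", u=412, v=188))
control = ControlSystem(db)
print(control.grasp_location("sku-123"))  # (412, 188)
```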
10. An object processing system comprising:
a programmable motion device including an end-effector;
a perception unit for capturing real-time image data of a plurality of objects at an input area;
an interactive display system that includes a touch screen input display for displaying the real-time image data; and
a control system for providing object grasp information regarding a plurality of grasp locations for grasping an object of the plurality of objects with the end-effector, the plurality of grasp locations being derived from machine learning grasp input data regarding the plurality of objects, the machine learning grasp input data including data received via the interactive display system that includes the touch screen input display.
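Claim 10 differs from claim 1 in reciting a plurality of grasp locations derived from the machine learning grasp input data. A minimal sketch, assuming a simple proximity prior in place of a trained grasp-quality model: model-proposed candidates that lie near operator-tapped points are scored higher, and the ranked plurality is returned. The function and parameter names are hypothetical.

```python
import math

def rank_grasp_locations(candidates, operator_points, sigma=25.0):
    """Rank candidate grasp points (u, v, model_score). Candidates near
    points tapped on the touch screen receive a boosted score; the
    Gaussian proximity weight is an assumption, standing in for a
    learned model trained on the grasp input data."""
    ranked = []
    for (u, v, model_score) in candidates:
        prior = 0.0
        for (ou, ov) in operator_points:
            d2 = (u - ou) ** 2 + (v - ov) ** 2
            prior = max(prior, math.exp(-d2 / (2 * sigma ** 2)))
        ranked.append(((u, v), model_score * (1.0 + prior)))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Usage: three model-proposed grasps, one operator tap near the second.
candidates = [(100, 100, 0.6), (410, 190, 0.5), (300, 50, 0.4)]
taps = [(412, 188)]
for point, score in rank_grasp_locations(candidates, taps):
    print(point, round(score, 3))
```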
19. A method of processing objects received at an input area, said method comprising:
providing a programmable motion device with an end-effector;
obtaining first grasp input information for a selected object of a plurality of objects in a container at an input area responsive to machine learning grasp input data;
using the end-effector to move the selected object of the plurality of objects in the container at the input area without grasping the selected object; and
obtaining second grasp input information for the selected object of the plurality of objects in the container at the input area responsive to the machine learning grasp input data.
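The method of claim 19 can be read as a perturb-and-regrasp loop: obtain grasp information, move the object without grasping it (for example, pushing it to change its pose), then obtain grasp information again. A minimal sketch under that reading; the confidence threshold that triggers the non-grasping move is an assumption and is not part of the claimed method, and all function names are hypothetical.

```python
import random

def obtain_grasp_info(object_id: str):
    """Stand-in for the perception and learning pipeline: returns a
    candidate grasp point and a confidence for the selected object."""
    point = (random.randint(0, 640), random.randint(0, 480))
    return point, random.random()

def move_without_grasping(object_id: str) -> None:
    """Stand-in for a non-grasping contact motion, e.g. pushing the
    selected object to expose a better grasp surface."""
    print(f"moving {object_id} without grasping")

def process_selected_object(object_id: str, threshold: float = 0.8):
    """Obtain first grasp info; if confidence is low, move the object
    without grasping it and obtain second grasp info."""
    point, confidence = obtain_grasp_info(object_id)
    if confidence >= threshold:
        return point
    move_without_grasping(object_id)
    point, confidence = obtain_grasp_info(object_id)
    return point

print(process_selected_object("object-1"))
```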