CPC: D05B 19/02 (2013.01) [D05B 69/22 (2013.01)]. 26 Claims.
1. A sewing machine, comprising:
a sewing bed, the sewing bed comprising a feed mechanism and a needle plate, wherein a workpiece can be placed on and moved across the sewing bed;
a sewing head arranged above the sewing bed, the sewing head comprising:
a needle bar extending toward the sewing bed to a distal end;
a needle attached to the distal end of the needle bar, wherein a thread is threaded through the needle;
an accessory bar extending toward the sewing bed to a distal end; and
an accessory attached to the distal end of the accessory bar;
a camera having a field of view encompassing at least a portion of one or more of the sewing bed, the needle plate, the workpiece, the needle, and the accessory, wherein the camera generates a camera data signal related to the portions of the sewing bed, the needle plate, the workpiece, the needle, and the accessory in the field of view of the camera;
a user interface configured to present information to a user of the sewing machine and to receive input from the user of the sewing machine;
an object recognition neural network that is trained to detect and to classify a recognized object from the camera data signal as at least one of the needle plate, the needle, the accessory, the workpiece, the thread, an embroidery hoop, and a foreign object, wherein the object recognition neural network generates an object detection data signal related to at least one of a position, an orientation, and a velocity of the recognized object and an object classification data signal related to an identity of the recognized object, wherein the recognized object comprises a first recognized object and a second recognized object; and
a processor configured to:
receive, from the object recognition neural network, an indication of at least one of the position and the orientation of the recognized object from the object detection data signal;
receive, from the object recognition neural network, an indication of the identity of the recognized object from the object classification data signal;
control the user interface to present at least one of the position, the orientation, and the identity of the recognized object to the user;
receive, from the object recognition neural network, a first indication of at least one of a first position, a first orientation, and a first velocity of the first recognized object from the object detection data signal;
receive, from the object recognition neural network, a second indication of at least one of a second position, a second orientation, and a second velocity of the second recognized object from the object detection data signal; and
determine, based on the first indication and the second indication, whether a collision will occur or has already occurred between the first recognized object and the second recognized object.
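For illustration only, the two data signals recited in the claim can be pictured as structured outputs of the recognition network. The Python sketch below is not part of the claim; the label set mirrors the categories named above, while the coordinate frame, units, and confidence field are assumptions:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Object categories named in the claim; the actual label set would be
# fixed by the training of the object recognition neural network.
class ObjectClass(Enum):
    NEEDLE_PLATE = auto()
    NEEDLE = auto()
    ACCESSORY = auto()
    WORKPIECE = auto()
    THREAD = auto()
    EMBROIDERY_HOOP = auto()
    FOREIGN_OBJECT = auto()

@dataclass
class DetectionSignal:
    """Object detection data signal: kinematic state of a recognized object."""
    position: tuple[float, float]   # (x, y), assumed needle-plate frame, mm
    orientation: float              # rotation about the vertical axis, radians
    velocity: tuple[float, float]   # (vx, vy) in mm/s, estimated across frames

@dataclass
class ClassificationSignal:
    """Object classification data signal: identity of a recognized object."""
    label: ObjectClass
    confidence: float               # assumed network score in [0, 1]
```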
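The collision determination of the final clause can likewise be sketched. Assuming constant-velocity extrapolation over a short horizon and disc-shaped object footprints (neither of which is specified by the claim), a closest-approach test distinguishes a collision that has already occurred from one that is predicted:

```python
import math

def collision_status(first: "DetectionSignal",
                     second: "DetectionSignal",
                     radius_mm: float = 5.0,
                     horizon_s: float = 1.0) -> str:
    """Classify a pair of recognized objects as 'occurred', 'predicted', or 'clear'.

    DetectionSignal is the dataclass from the preceding sketch. Each object
    is treated as a disc of radius `radius_mm`, an illustrative simplification;
    real geometry would come from the detected object extents.
    """
    # Relative position and velocity of the second object w.r.t. the first.
    dx = second.position[0] - first.position[0]
    dy = second.position[1] - first.position[1]
    dvx = second.velocity[0] - first.velocity[0]
    dvy = second.velocity[1] - first.velocity[1]

    if math.hypot(dx, dy) <= 2 * radius_mm:
        return "occurred"      # footprints already overlap

    speed_sq = dvx * dvx + dvy * dvy
    if speed_sq == 0.0:
        return "clear"         # no relative motion, so no future collision

    # Time of closest approach under constant-velocity extrapolation,
    # clamped to the prediction horizon.
    t_star = -(dx * dvx + dy * dvy) / speed_sq
    t_star = min(max(t_star, 0.0), horizon_s)

    min_dist = math.hypot(dx + dvx * t_star, dy + dvy * t_star)
    return "predicted" if min_dist <= 2 * radius_mm else "clear"
```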