CPC G10H 1/0025 (2013.01) [G06F 40/40 (2020.01); G10H 1/0066 (2013.01); G10H 1/361 (2013.01); G10H 2210/005 (2013.01); G10H 2210/111 (2013.01); G10H 2210/391 (2013.01); G10H 2240/311 (2013.01)] (21 Claims)

1. A computer-implemented method comprising:
receiving first input music sequence data that is based, at least in part, on acquiring a first portion of an original audio music signal in real-time at least by sampling the original audio music signal;
based at least in part on the first input music sequence data, generating, by a machine learning (ML) model, first output music sequence data for an output music signal;
while receiving third input music sequence data that is based, at least in part, on acquiring, in real-time, a third portion of the original audio music signal, the third portion being temporally after the first portion and a second portion of the original audio music signal:
generating, by the ML model, second output music sequence data, for the output music signal, based at least in part on a previously generated particular portion of the first output music sequence data;
temporally aligning the first output music sequence data of the output music signal with the third portion of the original audio music signal;
wherein the second output music sequence data is temporally after the first output music sequence data of the output music signal.
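The generate-ahead loop recited in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: `toy_model` is a hypothetical stand-in for the ML model, and input portions are plain lists standing in for sampled audio sequence data. The key structure it mirrors is that output portion k+1 is generated from the previously generated output portion k while input portion k+2 is (notionally) being acquired.

```python
def toy_model(context):
    """Hypothetical stand-in for the ML model: maps a context window of
    sequence data to the next chunk of output sequence data."""
    return [x + 1 for x in context]

def streaming_generate(input_portions):
    """Sketch of the claimed pipeline.

    While the third (and each later) input portion arrives, the next
    output portion is generated from the previously generated output,
    so generation runs ahead of the incoming signal by one portion.
    """
    outputs = []
    for k, portion in enumerate(input_portions):
        if k == 0:
            # First output: generated from the first input portion.
            outputs.append(toy_model(portion))
        elif k >= 2:
            # While portion k arrives, generate the next output portion
            # from the previously generated output; the earlier output
            # is what gets temporally aligned with the arriving portion.
            outputs.append(toy_model(outputs[-1]))
    return outputs

# Three input portions arriving in order; two output portions result,
# the second generated while the third input portion "arrives".
result = streaming_generate([[1, 2], [3, 4], [5, 6]])
```

Note the deliberate one-portion lag: nothing new is generated while the second portion arrives, which gives the system headroom so that output is ready before the corresponding input has fully arrived.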