US 12,266,330 B2
Generating music accompaniment
Ilia Belikov, Dilijan (AM); Aleksandr Alekseev, Erlangen (DE); Vazgen Hakobjanyan, Yerevan (AM); and Robert Joseph Pfeifer, New Paltz, NY (US)
Assigned to MacDougal Street Technology, Inc., New Paltz, NY (US)
Filed by MACDOUGAL STREET TECHNOLOGY, INC., New Paltz, NY (US)
Filed on Dec. 17, 2023, as Appl. No. 18/542,718.
Claims priority of provisional application 63/465,470, filed on May 10, 2023.
Claims priority of provisional application 63/433,908, filed on Dec. 20, 2022.
Prior Publication US 2024/0203387 A1, Jun. 20, 2024
Int. Cl. G10H 1/00 (2006.01); G06F 40/40 (2020.01); G10H 1/36 (2006.01)
CPC G10H 1/0025 (2013.01) [G06F 40/40 (2020.01); G10H 1/0066 (2013.01); G10H 1/361 (2013.01); G10H 2210/005 (2013.01); G10H 2210/111 (2013.01); G10H 2210/391 (2013.01); G10H 2240/311 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
receiving first input music sequence data that is based, at least in part, on acquiring a first portion of an original audio music signal in real-time at least by sampling the original audio music signal;
based at least in part on the first input music sequence data, generating, by a machine learning (ML) model, first output music sequence data for an output music signal;
while receiving third input music sequence data that is based, at least in part, on acquiring a third portion of the original audio music signal in real-time, which is temporally after the first portion and a second portion of the original audio music signal:
generating, by the ML model, second output music sequence data, for the output music signal, based at least in part on a previously generated particular portion of the first output music sequence data;
temporally aligning the first output music sequence data of the output music signal with the third portion of the original audio music signal;
wherein the second output music sequence data is temporally after the first output music sequence data of the output music signal.
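As a very rough illustration of the streaming pattern recited in claim 1, the sketch below mimics the claimed timing only: while a later portion of the input signal is being acquired, the next output portion is generated from previously generated output, and earlier output is temporally aligned with the newly acquired portion. Everything here is hypothetical; the toy averaging "model" and all names are stand-ins, not the patented ML implementation.

```python
# Toy sketch of the claim-1 timing pattern. All names are hypothetical;
# toy_model() is a stand-in for the patent's ML model, not its method.

CHUNK = 4  # samples per acquired portion (arbitrary toy value)

def toy_model(input_portion, prev_output_portion):
    # Stand-in "ML model": each output sample mixes the current input
    # sample with the last previously generated output sample.
    prev = prev_output_portion[-1] if prev_output_portion else 0.0
    return [0.5 * x + 0.5 * prev for x in input_portion]

def generate_accompaniment(signal):
    # Split the "real-time" signal into sequentially acquired portions.
    portions = [signal[i:i + CHUNK] for i in range(0, len(signal), CHUNK)]
    generated = []  # output music sequence portions, in temporal order
    aligned = []    # (input_portion_index, output_portion) pairs
    for idx, portion in enumerate(portions):
        if idx >= 2:
            # While the third portion (idx 2) is being acquired, the
            # first output portion is aligned with it, as in the claim.
            aligned.append((idx, generated[idx - 2]))
        # Generate the next output portion based in part on the
        # previously generated output portion.
        prev = generated[-1] if generated else []
        generated.append(toy_model(portion, prev))
    return generated, aligned
```

With a constant input of 1.0 over three portions, the first output portion is [0.5, 0.5, 0.5, 0.5], and it is the portion aligned with the third input portion; each later output portion is temporally after the one before it, matching the "wherein" clause.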