US 12,322,363 B2
Techniques for generating a musical plan based on both explicit user parameter adjustments and automated parameter adjustments based on a conversational interface
Edward Balassanian, Austin, TX (US); Andrew C. Sorensen, Launceston (AU); and Patrick E. Hutchings, Melbourne (AU)
Assigned to AiMi Inc., Littleton, CO (US)
Filed by AiMi Inc., Littleton, CO (US)
Filed on Aug. 28, 2024, as Appl. No. 18/817,787.
Claims priority of provisional application 63/640,705, filed on Apr. 30, 2024.
Claims priority of provisional application 63/579,859, filed on Aug. 31, 2023.
Prior Publication US 2025/0078790 A1, Mar. 6, 2025
Int. Cl. G10H 1/00 (2006.01); G06F 40/20 (2020.01)
CPC G10H 1/0025 (2013.01) [G06F 40/20 (2020.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
    a computing system generating a musical plan, including:
        initializing a context of a large language model, including:
            providing a text-based schema for the musical plan;
            providing rules for responding to user conversational interactions, including one or more rules that instruct the model to generate the musical plan according to the schema based on at least one category of user conversational input;
        generating, by the large language model, an initial version of the musical plan based on the context and one or more conversational user inputs;
        adding the initial version of the musical plan to the context;
        modifying the initial version of the musical plan to generate a modified plan in the context, based on non-conversational user input that indicates changes to one or more parameters of the initial version of the musical plan;
        generating, by the large language model, an output version of the musical plan based on the context that includes the modified plan; and
    producing, by the computing system, a music file that specifies generative music composed according to the output version of the musical plan.
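
The following is a minimal Python sketch of the flow recited in claim 1: initialize an LLM context with a plan schema and response rules, generate an initial plan from conversational input, record it in the context, apply a non-conversational parameter edit, regenerate the output plan, and render a file. Every identifier here (PLAN_SCHEMA, call_llm, generate_music_file) and every schema field is a hypothetical illustration, not taken from the patent, and the LLM call is stubbed so the example runs end to end.

```python
import json

# Hypothetical schema for the musical plan; the patent does not publish one,
# so these fields (tempo_bpm, key, mood, sections) are illustrative only.
PLAN_SCHEMA = {
    "type": "object",
    "properties": {
        "tempo_bpm": {"type": "integer"},
        "key": {"type": "string"},
        "mood": {"type": "string"},
        "sections": {"type": "array", "items": {"type": "string"}},
    },
}

# Illustrative rules for responding to conversational input.
RULES = (
    "When the user describes music conversationally, reply with a plan that "
    "conforms to the schema. Preserve fields the user has set directly."
)


def call_llm(context: list[dict]) -> dict:
    """Placeholder for a large-language-model call.

    A real system would send `context` to an LLM and parse its JSON reply;
    here we return a fixed plan so the sketch is runnable.
    """
    return {
        "tempo_bpm": 120,
        "key": "A minor",
        "mood": "calm",
        "sections": ["intro", "loop", "outro"],
    }


def generate_music_file(plan: dict, path: str) -> None:
    """Stand-in for a generative-music engine: writes the plan to disk."""
    with open(path, "w") as fh:
        json.dump(plan, fh, indent=2)


# 1. Initialize the LLM context with the text-based schema and the rules.
context = [
    {"role": "system", "content": json.dumps(PLAN_SCHEMA)},
    {"role": "system", "content": RULES},
]

# 2. Conversational user input drives the initial version of the plan.
context.append({"role": "user", "content": "Something calm for late-night focus."})
plan = call_llm(context)

# 3. Add the initial version of the plan to the context.
context.append({"role": "assistant", "content": json.dumps(plan)})

# 4. Non-conversational input (e.g., a tempo slider) changes a parameter directly.
plan["tempo_bpm"] = 90
context.append({"role": "system", "content": "User set tempo_bpm=90 via UI control."})

# 5. Regenerate the output version of the plan from the updated context,
#    keeping the user's direct edit, then render a music file from it.
output_plan = call_llm(context) | {"tempo_bpm": plan["tempo_bpm"]}
generate_music_file(output_plan, "generative_track_plan.json")
```

In this sketch the direct parameter edit bypasses the conversational path entirely, while the regeneration step still runs through the model with the updated context, mirroring the claim's distinction between conversational and non-conversational user input.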