US 12,326,882 B2
Green knowledge recommendation method based on characteristic similarity and user demands, electronic device and computer readable storage medium thereof
Qingdi Ke, Hefei (CN); Zhen Zhang, Hefei (CN); Zisheng Li, Hefei (CN); and Lei Zhang, Hefei (CN)
Assigned to HEFEI UNIVERSITY OF TECHNOLOGY, Hefei (CN)
Filed by HEFEI UNIVERSITY OF TECHNOLOGY, Hefei (CN)
Filed on Nov. 30, 2024, as Appl. No. 18/964,413.
Application 18/964,413 is a continuation of application No. PCT/CN2023/118564, filed on Sep. 13, 2023.
Claims priority of application No. 202310103329 (CN), filed on Feb. 13, 2023.
Prior Publication US 2025/0094451 A1, Mar. 20, 2025
Int. Cl. G06F 16/28 (2019.01); G06F 16/21 (2019.01); G06F 16/2457 (2019.01); G06F 40/30 (2020.01); G06N 5/022 (2023.01)
CPC G06F 16/285 (2019.01) [G06F 16/213 (2019.01); G06F 16/2457 (2019.01); G06N 5/022 (2013.01); G06F 40/30 (2020.01)] 3 Claims
 
1. A knowledge recommendation method based on characteristic similarity and user demands, executed by a processor of an electronic device, the method comprising:
step 1, receiving a current-search text $e$ from a user $u$, and obtaining a historical-search-texts set $E_u$ from the user $u$, $E_u = \{e_{1,u}, e_{2,u}, \ldots, e_{n_1,u}, \ldots, e_{N_1,u}\}$, wherein the $e_{n_1,u}$ represents the $n_1$-th historical-search text, $1 \le n_1 \le N_1$; the $N_1$ represents a total number of historical-search texts;
step 2, constructing a topics dictionary and a subtopics dictionary, and decomposing the current-search text $e$ and the historical-search-texts set $E_u$ according to the dictionaries; the topics dictionary represents a large type and the subtopics dictionary represents a small type under the large type; the step 2 comprising steps 2.1~2.6;
step 2.1, constructing a topics dictionary X of a knowledge base, $X = \{x_1, x_2, \ldots, x_{n_2}, \ldots, x_{N_2}\}$, wherein the $x_{n_2}$ represents the $n_2$-th topic, the $N_2$ represents a total number of topics in the dictionary X;
constructing a subtopics dictionary Y of the knowledge base, $Y = \{y_1, y_2, \ldots, y_{n_3}, \ldots, y_{N_3}\}$, wherein the $y_{n_3}$ represents the $n_3$-th subtopic, the $N_3$ represents a total number of subtopics in the dictionary Y;
constructing a daily-expressions dictionary C of a set of users, $C = \{c_1, c_2, \ldots, c_{n_4}, \ldots, c_{N_4}\}$, wherein the $c_{n_4}$ represents the $n_4$-th daily expression, the $N_4$ represents a total number of daily expressions in the dictionary C; the daily expressions comprise everyday words such as "I", "you", "he", "whatever", and "want";
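By way of illustration only, a minimal Python sketch of the three dictionaries of step 2.1; the entries shown are hypothetical stand-ins, not the patent's actual knowledge-base vocabulary.

    # Hypothetical dictionaries for illustration; real X, Y, C would be
    # built from the knowledge base and the users' query logs.
    X = {"energy", "recycling", "emission"}     # topics (large types)
    Y = {"solar", "scrap", "turbine"}           # subtopics (small types)
    C = {"i", "you", "he", "whatever", "want"}  # daily expressions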
step 2.2, decomposing $e$ and $e_{n_1,u}$ according to dictionaries X, Y, C to obtain two text-vector sets $w^e$ and $w^{n_1}$ correspondingly; the $w^e$ being about the current-search text $e$, $w^e = \{w_1^e, w_2^e, \ldots, w_{i_e}^e, \ldots, w_{I_e}^e\}$; the $w^{n_1}$ being about the $n_1$-th historical-search text $e_{n_1,u}$, $w^{n_1} = \{w_1^{n_1}, w_2^{n_1}, \ldots, w_i^{n_1}, \ldots, w_{I_{n_1}}^{n_1}\}$;
wherein the $w_{i_e}^e$ represents the $i_e$-th word of the current-search text $e$, the $I_e$ represents a total number of words in the current-search text $e$; the $w_i^{n_1}$ represents the $i$-th word of the $n_1$-th historical-search text $e_{n_1,u}$, the $I_{n_1}$ represents a total number of words in the $n_1$-th historical-search text $e_{n_1,u}$;
defining $t_{i_e}^e$ being a label of the $w_{i_e}^e$; if the $t_{i_e}^e$ belongs to the dictionary X, defining $w_{i_e}^e \in X$; if the $t_{i_e}^e$ belongs to the dictionary Y, defining $w_{i_e}^e \in Y$; if the $t_{i_e}^e$ belongs to the dictionary C, defining $w_{i_e}^e \in C$; otherwise defining $w_{i_e}^e \in \varnothing$;
defining $t_i^{n_1}$ being a label of the $w_i^{n_1}$; if the $t_i^{n_1}$ belongs to the dictionary X, defining $w_i^{n_1} \in X$; if the $t_i^{n_1}$ belongs to the dictionary Y, defining $w_i^{n_1} \in Y$; if the $t_i^{n_1}$ belongs to the dictionary C, defining $w_i^{n_1} \in C$; otherwise defining $w_i^{n_1} \in \varnothing$;
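A sketch of the decomposition and labeling of step 2.2, assuming simple whitespace tokenization (the claim does not fix a segmenter) and the hypothetical dictionaries sketched above.

    def decompose(text, X, Y, C):
        """Split a search text into words and label each word with the
        dictionary it falls in: 'X', 'Y', 'C', or None (the empty set)."""
        words = text.lower().split()   # stand-in for a real word segmenter
        labels = []
        for w in words:
            if w in X:
                labels.append("X")
            elif w in Y:
                labels.append("Y")
            elif w in C:
                labels.append("C")
            else:
                labels.append(None)    # corresponds to the label set Ø
        return words, labels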
step 2.3, obtaining a weight $L_i^{n_1}$ of the $i$-th word $w_i^{n_1}$ by a formula (1);

$L_i^{n_1} = \begin{cases} \delta_1, & \text{if } w_i^{n_1} \in X \\ \delta_2, & \text{if } w_i^{n_1} \in Y \\ 0, & \text{otherwise} \end{cases}$  (1)

in the formula, the $\delta_1$ representing a first weight, the $\delta_2$ representing a second weight, and $0 < \delta_2 < \delta_1 < 1$;
step 2.4, obtaining the weight $L_{i_e}^e$ of the $i_e$-th word $w_{i_e}^e$ in the same way as step 2.3;
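Under the piecewise reading of formula (1) above, the weighting of steps 2.3~2.4 can be sketched as follows; the numeric values of δ1 and δ2 are chosen arbitrarily for illustration.

    DELTA_1, DELTA_2 = 0.8, 0.4   # hypothetical weights, 0 < δ2 < δ1 < 1

    def weight(label):
        """Weight L of a word, from its dictionary label (formula (1))."""
        if label == "X":
            return DELTA_1
        if label == "Y":
            return DELTA_2
        return 0.0                # words labeled C or Ø carry no weight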
step 2.5, obtaining a similarity between the $w_{i_e}^e$ and the $w_i^{n_1}$ by a formula (2);

[formula (2) is printed as an image in the Official Gazette]
step 2.6, obtaining the similarities between each pair of words respectively from the two text-vector sets $w^e$ and $w^{n_1}$ in the same way as step 2.5, and collecting the words with similarity higher than the other words into a candidate-words set, from which one candidate word would be selected for the $i_e$-th word of the $w^e$; a valid-text set $V_{i_e}^e$ being defined by all candidate-words sets, $V_{i_e}^e = \{v_{1,i_e}^e, v_{2,i_e}^e, \ldots, v_{p,i_e}^e, \ldots, v_{P,i_e}^e\}$, wherein the $v_{p,i_e}^e$ represents the $p$-th candidate word of the $i_e$-th word $w_{i_e}^e$, the $P$ represents a total number of candidate words;
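Formula (2) is not reproduced in the Gazette text, so the sketch below substitutes a simple weighted exact-match score purely as a stand-in; the top_p cutoff is likewise an assumed reading of "similarity higher than the other words".

    def similarity(w1, l1, w2, l2):
        """Stand-in for formula (2), which the Gazette does not reproduce:
        a weighted exact-match score, for illustration only."""
        return weight(l1) * weight(l2) * (1.0 if w1 == w2 else 0.0)

    def candidate_words(current, historical, top_p=3):
        """Step 2.6: for each word of w^e, keep the historical words with
        the highest similarity as its candidate-words set."""
        cur_words, cur_labels = current
        valid = {}
        for w1, l1 in zip(cur_words, cur_labels):
            scored = [(similarity(w1, l1, w2, l2), w2)
                      for words, labels in historical
                      for w2, l2 in zip(words, labels)]
            scored.sort(reverse=True)
            valid[w1] = [w for s, w in scored[:top_p] if s > 0]
        return valid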
step 3, picking words in the $w^e$ and the $V_{i_e}^e$ that belong to the two dictionaries X and Y; the step 3 comprising steps 3.1~3.6;
step 3.1, picking words in the $w^e$ that belong to the dictionary X;
when $L_{i_e}^e = \delta_1$, $x_{i_e}^e$ being defined to mean the word corresponding to the $w_{i_e}^e$ and also from the dictionary X, and a first words set being defined by the $x_{i_e}^e$ accordingly; the $L_{i_e}^e$ being the weight of the $w_{i_e}^e$;
step 3.2, picking words in the $V_{i_e}^e$ that belong to the dictionary X;
when $L_{p,i_e}^e = \delta_1$, $x_{p,i_e}^e$ being defined to mean the word corresponding to the $v_{p,i_e}^e$ and also from the dictionary X, and a second words set being defined by the $x_{p,i_e}^e$ accordingly; the $L_{p,i_e}^e$ being the weight of the $v_{p,i_e}^e$;
step 3.3, a subject terms set Z being defined by the first words set and the second words set, $Z = \{z_1^X, z_2^X, \ldots, z_{n_5}^X, \ldots, z_{N_5}^X\}$, wherein the $z_{n_5}^X$ represents the $n_5$-th subject term, $1 \le n_5 \le N_5$, and the $N_5$ represents a total number of subject terms;
step 3.4, picking words in the $w^e$ that belong to the dictionary Y; when $L_{i_e}^e = \delta_2$, $y_{i_e}^e$ being defined to mean the word corresponding to the $w_{i_e}^e$ and also from the dictionary Y;
step 3.5, picking words in the $V_{i_e}^e$ that belong to the dictionary Y; when $L_{p,i_e}^e = \delta_2$, $y_i^{valid}$ being defined to mean the word corresponding to the $V_{i_e}^e$ and also from the dictionary Y;
step 3.6, a subject terms set V being defined by the words picked from the $w^e$ and the $V_{i_e}^e$, $V = \{v_1^Y, v_2^Y, \ldots, v_{n_6}^Y, \ldots, v_{N_6}^Y\}$, wherein the $v_{n_6}^Y$ represents the $n_6$-th subject term, $1 \le n_6 \le N_6$, and the $N_6$ represents a total number of subject terms;
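A sketch of the picking of steps 3.1~3.6, assuming the dictionaries and weights from the earlier sketches and the reading that topic words carry weight δ1 and subtopic words weight δ2.

    def subject_terms(current, valid):
        """Steps 3.1~3.6: collect topic words (weight δ1) into Z and
        subtopic words (weight δ2) into V, from w^e and the valid sets."""
        cur_words, cur_labels = current
        Z, V = set(), set()
        for w, l in zip(cur_words, cur_labels):
            pool = [(w, l)]
            for v in valid.get(w, []):   # candidate words from step 2.6
                pool.append((v, "X" if v in X else "Y" if v in Y else None))
            for word, label in pool:
                if weight(label) == DELTA_1:
                    Z.add(word)          # subject terms from dictionary X
                elif weight(label) == DELTA_2:
                    V.add(word)          # subject terms from dictionary Y
        return Z, V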
step 4, finding the knowledge; the step 4 comprising steps 4.1˜4.6;
step 4.1, acquiring a knowledge $a$ to be identified, and calculating a frequency of each word appearing in the knowledge to be identified after decomposition, under the dictionary X and the subject terms set V; wherein the $s_{x_{n_2}}^a$ represents the frequency of the $n_2$-th topic $x_{n_2}$ appearing in the knowledge to be identified, $0 \le s_{x_{n_2}}^a \le 1$; and the $s_{v_{n_6}^Y}^a$ represents the frequency of the $n_6$-th subtopic $v_{n_6}^Y$ appearing in the knowledge to be identified, $0 \le s_{v_{n_6}^Y}^a \le 1$;
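The claim only requires each frequency to lie in [0, 1]; below is a sketch that normalizes a term's count by the item's total word count, which is an assumption rather than the patent's stated definition.

    def frequencies(knowledge_words, terms):
        """Step 4.1: frequency of each term in a decomposed knowledge item,
        normalized to [0, 1] by the item's total word count (assumed)."""
        n = max(len(knowledge_words), 1)
        return {t: knowledge_words.count(t) / n for t in terms}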
step 4.2, assigning a value to each word in the subject terms set V, a weighting function $H(v_{n_6}^Y)$ of the words in the subject terms set V being defined as formula (3), and the value of each word in the subject terms set V being defined as the weighting function;

[formula (3) is printed as an image in the Official Gazette]
step 4.3, a user-demand degree function $Q(v_{n_6}^Y)$ being defined as formula (4);

[formula (4) is printed as an image in the Official Gazette]

in the formula, the $k$ representing the users' satisfaction, $k \in (0, 100\%)$;
step 4.4, receiving a topic $x_{user}$ required by the user in the topics dictionary X, and calculating a closing degree $d_1^a$ between the topic $x_{user}$ and the knowledge to be identified, $d_1^a = 1 - s_{x_{user}}^a$, wherein the $s_{x_{user}}^a$ represents the frequency of the topic $x_{user}$ appearing in the knowledge to be identified;
step 4.5, calculating a user's demand degree by using the user-demand degree function for each subject term in the subject terms set V, and calculating the user's closing degree $d_2^a$ to all of the subject terms;

[the formula for $d_2^a$ is printed as an image in the Official Gazette]
step 4.6, calculating the closing degree $d^a$ between the user's demand degree and the knowledge to be identified, $d^a = d_1^a + d_2^a$; obtaining the closing degrees of all of the knowledge, and feeding back to the user the knowledge whose closing degrees are lower than those of the other knowledge.
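Formulas (3) and (4) are not reproduced in the Gazette text, so the closing sketch below takes the user-demand degree function Q as a given callable and assumes an absolute-difference form for $d_2^a$; the top_k cutoff stands in for "knowledge with closing degree lower than other knowledge".

    def closing_degree(x_user, s_topic, s_sub, Q, V):
        """Steps 4.4~4.6: combine the topic closeness d1 with the demand
        closeness d2; a lower value means a closer match."""
        d1 = 1.0 - s_topic.get(x_user, 0.0)                  # step 4.4
        d2 = sum(abs(Q(v) - s_sub.get(v, 0.0)) for v in V)   # step 4.5 (assumed form)
        return d1 + d2                                       # step 4.6

    def recommend(items, x_user, Q, V, top_k=5):
        """Rank knowledge items by closing degree and feed back the closest."""
        scored = []
        for item_id, words in items.items():
            s_topic = frequencies(words, X)   # topic frequencies under X
            s_sub = frequencies(words, V)     # subtopic frequencies under V
            scored.append((closing_degree(x_user, s_topic, s_sub, Q, V), item_id))
        scored.sort()                         # lower closing degree first
        return [item_id for d, item_id in scored[:top_k]]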