mirror of
https://github.com/hexastack/hexabot
synced 2025-06-26 18:27:28 +00:00
feat: implement nlp based blocks prioritization strategy
feat: add weight to nlp entity schema and readapt
feat: remove commented obsolete code
feat: restore settings
feat: apply feedback
fix: re-adapt unit tests
feat: priority scoring re-calculation & enabling weight modification in builtin nlp entities
fix: remove obsolete code
feat: refine unit tests, apply mr coderabbit suggestions
fix: minor refactoring
feat: add nlp cache map type
feat: refine builtin nlp entities weight updates
feat: add more test cases and refine edge case handling
feat: add weight validation in UI
fix: apply feedback
feat: add a penalty factor & fix unit tests
feat: add documentation
fix: correct syntax
fix: remove stale log statement
fix: enforce nlp entity weight restrictions
fix: correct typo in docs
fix: typos in docs
fix: fix formatting for function comment
fix: restore matchNLP function previous code
fix: remove blank line, make updateOne asynchronous
fix: add AND operator in docs
fix: handle dependency injection in chat module
feat: refactor to use findAndPopulate in block score calculation
feat: refine caching mechanisms
feat: add typing and enforce safety checks
fix: remove typo
fix: remove async from block score calculation
fix: remove typo
fix: correct linting
fix: refine nlp pattern type check
fix: decompose code into helper utils, add nlp entity dto validation, remove type casting
fix: minor refactoring
feat: refactor current implementation
This commit is contained in:
parent
0db40680dc
commit
bab2e3082f
102
api/docs/nlp/README.md
Normal file
@@ -0,0 +1,102 @@
# NLP Block Scoring

## Purpose

**NLP Block Scoring** is a mechanism used to select the most relevant response block based on:

- Matching patterns between user input and block definitions
- Configurable weights assigned to each entity type
- Confidence values provided by the NLU engine for detected entities

It enables more intelligent and context-aware block selection in conversational flows.

## Core Use Cases

### Standard Matching

A user input contains entities that directly match a block’s patterns.

```ts
Example: Input: intent = enquiry & subject = claim
Block A: Patterns: intent: enquiry & subject: claim
Block A will be selected.
```
### High Confidence, Partial Match

A block may match only some patterns but have high-confidence input on those matched ones, making it a better candidate than others with full matches but low-confidence entities.

**Note: Confidence is multiplied by a pre-defined weight for each entity type.**

```ts
Example:
Input: intent = issue (confidence: 0.92) & subject = claim (confidence: 0.65)
Block A: Pattern: intent: issue
Block B: Pattern: subject: claim
➤ Block A gets a high score based on confidence × weight (assuming both weights are equal to 1).
```
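The arithmetic behind this example can be checked directly. A minimal sketch, assuming both weights equal `1` (the stated default):

```ts
// Worked arithmetic for the partial-match example above.
// Weights are assumed to be 1 for both entity types.
const weight = 1;
const scoreBlockA = 0.92 * weight; // Block A matched intent = issue
const scoreBlockB = 0.65 * weight; // Block B matched subject = claim

console.log(scoreBlockA > scoreBlockB); // Block A is preferred
```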
### Multiple Blocks with Similar Patterns

```ts
Input: intent = issue & subject = insurance
Block A: intent = enquiry & subject = insurance
Block B: subject = insurance
➤ Block B is selected — Block A mismatches on intent.
```
### Exclusion Due to Extra Patterns

If a block contains patterns that require entities not present in the user input, the block is excluded from scoring altogether. No penalties are applied — the block simply isn't considered a valid candidate.

```ts
Input: intent = issue & subject = insurance
Block A: intent = enquiry & subject = insurance & location = office
Block B: subject = insurance & time = morning
➤ Neither block is selected due to unmatched required patterns (`location`, `time`).
```
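The exclusion rule can be sketched as a simple candidacy check. The names and types below are illustrative, not Hexabot's actual API: a block stays in the running only if every entity its patterns require is present in the input.

```ts
// Hypothetical sketch of the exclusion rule: blocks requiring entities
// absent from the input are dropped before any scoring happens.
type PatternSketch = { entity: string; value?: string };

function isCandidate(
  patterns: PatternSketch[],
  inputEntities: string[],
): boolean {
  return patterns.every((p) => inputEntities.includes(p.entity));
}

// Block A requires `location` and Block B requires `time`; the input
// provides neither, so both blocks are excluded outright.
const input = ['intent', 'subject'];
const blockA = [{ entity: 'intent' }, { entity: 'subject' }, { entity: 'location' }];
const blockB = [{ entity: 'subject' }, { entity: 'time' }];
console.log(isCandidate(blockA, input), isCandidate(blockB, input)); // false false
```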
### Tie-Breaking with Penalty Factors

When multiple blocks receive similar scores, penalty factors can help break the tie — especially in cases where patterns are less specific (e.g., using `Any` as a value).

```ts
Input: intent = enquiry & subject = insurance

Block A: intent = enquiry & subject = Any
Block B: intent = enquiry & subject = insurance
Block C: subject = insurance

Scoring Summary:
- Block A matches both patterns, but subject = Any is considered less specific.
- Block B matches both patterns with fully specific values.
- Block C matches only one pattern.

➤ Block A and Block B have similar raw scores.
➤ A penalty factor is applied to Block A due to its use of Any, reducing its final score.
➤ Block B is selected.
```
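Plugging in illustrative numbers makes the tie-break concrete. The confidences, weight, and penalty factor below are assumptions for the example, not Hexabot defaults:

```ts
// Worked tie-break arithmetic with assumed confidences and penalty.
const confidence = { intent: 0.9, subject: 0.85 };
const weight = 1;
const penaltyFactor = 0.95;

// Block A: intent matches a specific value; subject = Any is a wildcard,
// so its contribution is penalized.
const scoreA =
  confidence.intent * weight + confidence.subject * weight * penaltyFactor;

// Block B: both patterns are fully specific, so no penalty applies.
const scoreB = confidence.intent * weight + confidence.subject * weight;

// Block C: matches the subject pattern only.
const scoreC = confidence.subject * weight;

console.log(scoreB > scoreA && scoreA > scoreC); // Block B wins the tie
```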
## How Scoring Works

### Matching and Confidence

For each entity in the block's pattern:

- If the entity matches an entity in the user input:
  - The score is increased by `confidence × weight`.
    - `Confidence` is a value between 0 and 1, returned by the NLU engine.
    - `Weight` (default value is `1`) is a configured importance factor for that specific entity type.
- If the match is a wildcard (i.e., the block accepts any value):
  - A **penalty factor** is applied to slightly reduce its contribution: `confidence × weight × penaltyFactor`. This encourages more specific matches when available.
### Scoring Formula Summary

For each matched entity:

```ts
score += confidence × weight × [optional penalty factor if wildcard]
```

The total block score is the sum of all matched patterns in that block.
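A minimal, self-contained sketch of this accumulation. The type names and the `scoreBlock` helper are illustrative; the real implementation lives in Hexabot's `BlockService`:

```ts
// Hedged sketch of the scoring formula:
//   score += confidence × weight × (penaltyFactor for wildcard matches)
// Type names and `scoreBlock` are illustrative, not Hexabot's actual API.
type NlpPatternSketch = {
  entity: string;
  match: 'entity' | 'value';
  value?: string;
};
type ParsedEntitySketch = { entity: string; value: string; confidence: number };

function scoreBlock(
  patterns: NlpPatternSketch[],
  entities: ParsedEntitySketch[],
  weights: Record<string, number>,
  penaltyFactor = 0.95,
): number {
  return patterns.reduce((score, pattern) => {
    // Find a parsed entity matching this pattern: by exact value, or by
    // entity name alone for an entity-level ("wildcard") match.
    const matched = entities.find(
      (e) =>
        e.entity === pattern.entity &&
        (pattern.match !== 'value' || e.value === pattern.value),
    );
    if (!matched) return score; // unmatched patterns contribute nothing
    const weight = weights[pattern.entity] ?? 1; // weight defaults to 1
    const penalty = pattern.match === 'entity' ? penaltyFactor : 1;
    return score + matched.confidence * weight * penalty;
  }, 0);
}
```

With `confidence = 0.9` and `weight = 1`, a specific value match contributes `0.9` while an entity-level wildcard contributes `0.9 × 0.95 = 0.855`.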
### Penalty Factor

The **penalty factor** is a global multiplier (typically less than `1`, e.g., `0.8`) applied when the match type is less specific — such as wildcard or loose entity type matches. It allows the system to:

- Break ties in favor of more precise blocks
- Discourage overly generic blocks from being selected when better matches are available
@@ -16,6 +16,7 @@ import { AttachmentModel } from '@/attachment/schemas/attachment.schema';
import { AttachmentService } from '@/attachment/services/attachment.service';
import { ChannelModule } from '@/channel/channel.module';
import { CmsModule } from '@/cms/cms.module';
import { NlpModule } from '@/nlp/nlp.module';
import { UserModule } from '@/user/user.module';

import { BlockController } from './controllers/block.controller';
@@ -68,6 +69,7 @@ import { SubscriberService } from './services/subscriber.service';
    AttachmentModule,
    EventEmitter2,
    UserModule,
    NlpModule,
  ],
  controllers: [
    CategoryController,
@@ -20,6 +20,15 @@ import { LanguageRepository } from '@/i18n/repositories/language.repository';
import { LanguageModel } from '@/i18n/schemas/language.schema';
import { I18nService } from '@/i18n/services/i18n.service';
import { LanguageService } from '@/i18n/services/language.service';
import { LoggerService } from '@/logger/logger.service';
import { NlpEntityRepository } from '@/nlp/repositories/nlp-entity.repository';
import { NlpSampleEntityRepository } from '@/nlp/repositories/nlp-sample-entity.repository';
import { NlpValueRepository } from '@/nlp/repositories/nlp-value.repository';
import { NlpEntityModel } from '@/nlp/schemas/nlp-entity.schema';
import { NlpSampleEntityModel } from '@/nlp/schemas/nlp-sample-entity.schema';
import { NlpValueModel } from '@/nlp/schemas/nlp-value.schema';
import { NlpEntityService } from '@/nlp/services/nlp-entity.service';
import { NlpValueService } from '@/nlp/services/nlp-value.service';
import { PluginService } from '@/plugins/plugins.service';
import { SettingService } from '@/setting/services/setting.service';
import { InvitationRepository } from '@/user/repositories/invitation.repository';
@@ -93,6 +102,9 @@ describe('BlockController', () => {
        RoleModel,
        PermissionModel,
        LanguageModel,
        NlpEntityModel,
        NlpSampleEntityModel,
        NlpValueModel,
      ]),
    ],
    providers: [
@@ -116,6 +128,12 @@ describe('BlockController', () => {
      PermissionService,
      LanguageService,
      PluginService,
      LoggerService,
      NlpEntityService,
      NlpEntityRepository,
      NlpSampleEntityRepository,
      NlpValueRepository,
      NlpValueService,
      {
        provide: I18nService,
        useValue: {
@@ -8,6 +8,8 @@

import { z } from 'zod';

import { BlockFull } from '../block.schema';

import { PayloadType } from './button';

export const payloadPatternSchema = z.object({
@@ -57,3 +59,19 @@ export const patternSchema = z.union([
]);

export type Pattern = z.infer<typeof patternSchema>;

export type NlpPatternMatchResult = {
  block: BlockFull;
  matchedPattern: NlpPattern[];
};

export function isNlpPattern(pattern: NlpPattern) {
  return (
    (typeof pattern === 'object' &&
      pattern !== null &&
      'entity' in pattern &&
      'match' in pattern &&
      pattern.match === 'entity') ||
    pattern.match === 'value'
  );
}
@@ -31,6 +31,14 @@ import { LanguageRepository } from '@/i18n/repositories/language.repository';
import { LanguageModel } from '@/i18n/schemas/language.schema';
import { I18nService } from '@/i18n/services/i18n.service';
import { LanguageService } from '@/i18n/services/language.service';
import { NlpEntityRepository } from '@/nlp/repositories/nlp-entity.repository';
import { NlpSampleEntityRepository } from '@/nlp/repositories/nlp-sample-entity.repository';
import { NlpValueRepository } from '@/nlp/repositories/nlp-value.repository';
import { NlpEntityModel } from '@/nlp/schemas/nlp-entity.schema';
import { NlpSampleEntityModel } from '@/nlp/schemas/nlp-sample-entity.schema';
import { NlpValueModel } from '@/nlp/schemas/nlp-value.schema';
import { NlpEntityService } from '@/nlp/services/nlp-entity.service';
import { NlpValueService } from '@/nlp/services/nlp-value.service';
import { PluginService } from '@/plugins/plugins.service';
import { SettingService } from '@/setting/services/setting.service';
import {
@@ -43,12 +51,23 @@ import {
  blockGetStarted,
  blockProductListMock,
  blocks,
  mockModifiedNlpBlock,
  mockModifiedNlpBlockOne,
  mockModifiedNlpBlockTwo,
  mockNlpBlock,
  mockNlpPatternsSetOne,
  mockNlpPatternsSetThree,
  mockNlpPatternsSetTwo,
} from '@/utils/test/mocks/block';
import {
  contextBlankInstance,
  subscriberContextBlankInstance,
} from '@/utils/test/mocks/conversation';
import { nlpEntitiesGreeting } from '@/utils/test/mocks/nlp';
import {
  mockNlpCacheMap,
  mockNlpEntitiesSetOne,
  nlpEntitiesGreeting,
} from '@/utils/test/mocks/nlp';
import {
  closeInMongodConnection,
  rootMongooseTestModule,
@@ -56,7 +75,7 @@ import {
import { buildTestingMocks } from '@/utils/test/utils';

import { BlockRepository } from '../repositories/block.repository';
import { Block, BlockModel } from '../schemas/block.schema';
import { Block, BlockFull, BlockModel } from '../schemas/block.schema';
import { Category, CategoryModel } from '../schemas/category.schema';
import { LabelModel } from '../schemas/label.schema';
import { FileType } from '../schemas/types/attachment';
@@ -75,6 +94,7 @@ describe('BlockService', () => {
  let hasPreviousBlocks: Block;
  let contentService: ContentService;
  let contentTypeService: ContentTypeService;
  let nlpEntityService: NlpEntityService;

  beforeAll(async () => {
    const { getMocks } = await buildTestingMocks({
@@ -91,6 +111,9 @@ describe('BlockService', () => {
        AttachmentModel,
        LabelModel,
        LanguageModel,
        NlpEntityModel,
        NlpSampleEntityModel,
        NlpValueModel,
      ]),
    ],
    providers: [
@@ -106,6 +129,14 @@ describe('BlockService', () => {
      ContentService,
      AttachmentService,
      LanguageService,
      NlpEntityRepository,
      NlpValueRepository,
      NlpSampleEntityRepository,
      NlpEntityService,
      {
        provide: NlpValueService,
        useValue: {},
      },
      {
        provide: PluginService,
        useValue: {},
@@ -145,12 +176,14 @@ describe('BlockService', () => {
      contentTypeService,
      categoryRepository,
      blockRepository,
      nlpEntityService,
    ] = await getMocks([
      BlockService,
      ContentService,
      ContentTypeService,
      CategoryRepository,
      BlockRepository,
      NlpEntityService,
    ]);
    category = (await categoryRepository.findOne({ label: 'default' }))!;
    hasPreviousBlocks = (await blockRepository.findOne({
@@ -291,6 +324,7 @@ describe('BlockService', () => {
        blockGetStarted,
      );
      expect(result).toEqual([
        [
          {
            entity: 'intent',
            match: 'value',
@@ -300,23 +334,153 @@ describe('BlockService', () => {
            entity: 'firstname',
            match: 'entity',
          },
        ],
      ]);
    });

    it('should return undefined when it does not match nlp patterns', () => {
    it('should return empty array when it does not match nlp patterns', () => {
      const result = blockService.matchNLP(nlpEntitiesGreeting, {
        ...blockGetStarted,
        patterns: [[{ entity: 'lastname', match: 'value', value: 'Belakhel' }]],
      });
      expect(result).toEqual(undefined);
      expect(result).toEqual([]);
    });

    it('should return undefined when unknown nlp patterns', () => {
    it('should return empty array when unknown nlp patterns', () => {
      const result = blockService.matchNLP(nlpEntitiesGreeting, {
        ...blockGetStarted,
        patterns: [[{ entity: 'product', match: 'value', value: 'pizza' }]],
      });
      expect(result).toEqual(undefined);
      expect(result).toEqual([]);
    });
  });

  describe('matchBestNLP', () => {
    it('should return the block with the highest NLP score', async () => {
      jest
        .spyOn(nlpEntityService, 'getNlpMap')
        .mockResolvedValue(mockNlpCacheMap);
      const blocks = [mockNlpBlock, blockGetStarted]; // You can add more blocks with different patterns and scores
      const nlp = mockNlpEntitiesSetOne;
      // Spy on calculateBlockScore to check if it's called
      const calculateBlockScoreSpy = jest.spyOn(
        blockService,
        'calculateBlockScore',
      );
      const bestBlock = await blockService.matchBestNLP(blocks, nlp);

      // Ensure calculateBlockScore was called at least once for each block
      expect(calculateBlockScoreSpy).toHaveBeenCalledTimes(2); // Called for each block

      // Restore the spy after the test
      calculateBlockScoreSpy.mockRestore();
      // Assert that the block with the highest NLP score is selected
      expect(bestBlock).toEqual(mockNlpBlock);
    });

    it('should return the block with the highest NLP score applying penalties', async () => {
      jest
        .spyOn(nlpEntityService, 'getNlpMap')
        .mockResolvedValue(mockNlpCacheMap);
      const blocks = [mockNlpBlock, mockModifiedNlpBlock]; // You can add more blocks with different patterns and scores
      const nlp = mockNlpEntitiesSetOne;
      // Spy on calculateBlockScore to check if it's called
      const calculateBlockScoreSpy = jest.spyOn(
        blockService,
        'calculateBlockScore',
      );
      const bestBlock = await blockService.matchBestNLP(blocks, nlp);

      // Ensure calculateBlockScore was called at least once for each block
      expect(calculateBlockScoreSpy).toHaveBeenCalledTimes(2); // Called for each block

      // Restore the spy after the test
      calculateBlockScoreSpy.mockRestore();
      // Assert that the block with the highest NLP score is selected
      expect(bestBlock).toEqual(mockNlpBlock);
    });

    it('another case where it should return the block with the highest NLP score applying penalties', async () => {
      jest
        .spyOn(nlpEntityService, 'getNlpMap')
        .mockResolvedValue(mockNlpCacheMap);
      const blocks = [mockModifiedNlpBlockOne, mockModifiedNlpBlockTwo]; // You can add more blocks with different patterns and scores
      const nlp = mockNlpEntitiesSetOne;
      // Spy on calculateBlockScore to check if it's called
      const calculateBlockScoreSpy = jest.spyOn(
        blockService,
        'calculateBlockScore',
      );
      const bestBlock = await blockService.matchBestNLP(blocks, nlp);

      // Ensure calculateBlockScore was called at least once for each block
      expect(calculateBlockScoreSpy).toHaveBeenCalledTimes(3); // Called for each block

      // Restore the spy after the test
      calculateBlockScoreSpy.mockRestore();
      // Assert that the block with the highest NLP score is selected
      expect(bestBlock).toEqual(mockModifiedNlpBlockTwo);
    });

    it('should return undefined if no blocks match or the list is empty', async () => {
      jest
        .spyOn(nlpEntityService, 'getNlpMap')
        .mockResolvedValue(mockNlpCacheMap);
      const blocks: BlockFull[] = []; // Empty block array
      const nlp = mockNlpEntitiesSetOne;

      const bestBlock = await blockService.matchBestNLP(blocks, nlp);

      // Assert that undefined is returned when no blocks are available
      expect(bestBlock).toBeUndefined();
    });
  });

  describe('calculateBlockScore', () => {
    it('should calculate the correct NLP score for a block', async () => {
      jest
        .spyOn(nlpEntityService, 'getNlpMap')
        .mockResolvedValue(mockNlpCacheMap);
      const score = await blockService.calculateBlockScore(
        mockNlpPatternsSetOne,
        mockNlpEntitiesSetOne,
      );
      const score2 = await blockService.calculateBlockScore(
        mockNlpPatternsSetTwo,
        mockNlpEntitiesSetOne,
      );

      expect(score).toBeGreaterThan(0);
      expect(score2).toBe(0);
      expect(score).toBeGreaterThan(score2);
    });

    it('should calculate the correct NLP score for a block and apply penalties', async () => {
      jest
        .spyOn(nlpEntityService, 'getNlpMap')
        .mockResolvedValue(mockNlpCacheMap);
      const score = await blockService.calculateBlockScore(
        mockNlpPatternsSetOne,
        mockNlpEntitiesSetOne,
      );
      const score2 = await blockService.calculateBlockScore(
        mockNlpPatternsSetThree,
        mockNlpEntitiesSetOne,
      );

      expect(score).toBeGreaterThan(0);
      expect(score2).toBeGreaterThan(0);
      expect(score).toBeGreaterThan(score2);
    });

    it('should return 0 if no matching entities are found', async () => {
      jest.spyOn(nlpEntityService, 'getNlpMap').mockResolvedValue(new Map());
      const score = await blockService.calculateBlockScore(
        mockNlpPatternsSetTwo,
        mockNlpEntitiesSetOne,
      );

      expect(score).toBe(0); // No matching entity, so score should be 0
    });
  });
@@ -16,6 +16,8 @@ import { CONSOLE_CHANNEL_NAME } from '@/extensions/channels/console/settings';
import { NLU } from '@/helper/types';
import { I18nService } from '@/i18n/services/i18n.service';
import { LanguageService } from '@/i18n/services/language.service';
import { NlpCacheMapValues } from '@/nlp/schemas/types';
import { NlpEntityService } from '@/nlp/services/nlp-entity.service';
import { PluginService } from '@/plugins/plugins.service';
import { PluginType } from '@/plugins/types';
import { SettingService } from '@/setting/services/setting.service';
@@ -53,6 +55,7 @@ export class BlockService extends BaseService<
    private readonly pluginService: PluginService,
    protected readonly i18n: I18nService,
    protected readonly languageService: LanguageService,
    protected readonly entityService: NlpEntityService,
  ) {
    super(repository);
  }
@@ -181,20 +184,21 @@ export class BlockService extends BaseService<
        .shift();

      // Perform an NLP Match

      if (!block && nlp) {
        // Find block pattern having the best match of nlp entities
        let nlpBest = 0;
        filteredBlocks.forEach((b, index, self) => {
          const nlpPattern = this.matchNLP(nlp, b);
          if (nlpPattern && nlpPattern.length > nlpBest) {
            nlpBest = nlpPattern.length;
            block = self[index];
          }
        });
        // Use the `reduce` function to iterate over `filteredBlocks` and accumulate a new array `matchesWithPatterns`.
        // This approach combines the matching of NLP patterns and filtering of blocks with empty or invalid matches
        // into a single operation. This avoids the need for a separate mapping and filtering step, improving performance.
        // For each block in `filteredBlocks`, we call `matchNLP` to find patterns that match the NLP data.
        // If `matchNLP` returns a non-empty list of matched patterns, the block and its matched patterns are added
        // to the accumulator array `acc`, which is returned as the final result.
        // This ensures that only blocks with valid matches are kept, and blocks with no matches are excluded,
        // all while iterating through the list only once.

        block = await this.matchBestNLP(filteredBlocks, nlp);
      }
    }
    // Unknown event type => return false;
    // this.logger.error('Unable to recognize event type while matching', event);

    return block;
  }

@@ -304,7 +308,7 @@ export class BlockService extends BaseService<
  matchNLP(
    nlp: NLU.ParseEntities,
    block: Block | BlockFull,
  ): NlpPattern[] | undefined {
  ): NlpPattern[][] | undefined {
    // No nlp entities to check against
    if (nlp.entities.length === 0) {
      return undefined;
@@ -313,14 +317,13 @@ export class BlockService extends BaseService<
    const nlpPatterns = block.patterns?.filter((p) => {
      return Array.isArray(p);
    }) as NlpPattern[][];

    // No nlp patterns found
    if (nlpPatterns.length === 0) {
      return undefined;
    }

    // Find NLP pattern match based on best guessed entities
    return nlpPatterns.find((entities: NlpPattern[]) => {
    return nlpPatterns.filter((entities: NlpPattern[]) => {
      return entities.every((ev: NlpPattern) => {
        if (ev.match === 'value') {
          return nlp.entities.find((e) => {
@@ -338,6 +341,139 @@ export class BlockService extends BaseService<
    });
  }

  /**
   * Matches the provided NLU parsed entities with patterns in a set of blocks and returns
   * the block with the highest matching score.
   *
   * For each block, it checks the patterns against the NLU parsed entities, calculates
   * a score for each match, and selects the block with the highest score.
   *
   * @param {BlockFull[]} blocks - An array of BlockFull objects representing potential matches.
   * @param {NLU.ParseEntities} nlp - The NLU parsed entities used for pattern matching.
   *
   * @returns {Promise<BlockFull | undefined>} - A promise that resolves to the BlockFull
   * with the highest match score, or undefined if no matches are found.
   */
  async matchBestNLP(
    blocks: BlockFull[],
    nlp: NLU.ParseEntities,
  ): Promise<BlockFull | undefined> {
    const scoredBlocks = await Promise.all(
      blocks.map(async (block) => {
        const matchedPatterns = this.matchNLP(nlp, block) || [];

        const scores = await Promise.all(
          matchedPatterns.map((pattern) =>
            this.calculateBlockScore(pattern, nlp),
          ),
        );

        const maxScore = scores.length > 0 ? Math.max(...scores) : 0;

        return { block, score: maxScore };
      }),
    );

    const best = scoredBlocks.reduce(
      (acc, curr) => (curr.score > acc.score ? curr : acc),
      { block: undefined, score: 0 },
    );

    return best.block;
  }

  /**
   * Computes the NLP score for a given block using its matched NLP patterns and parsed NLP entities.
   *
   * Each pattern is evaluated against the parsed NLP entities to determine matches based on entity name,
   * value, and confidence. A score is computed using the entity's weight and the confidence level of the match.
   * A penalty factor is optionally applied for entity-level matches to adjust the scoring.
   *
   * The function uses a cache (`nlpCacheMap`) to avoid redundant database lookups for entity metadata.
   *
   * @param patterns - The NLP patterns associated with the block.
   * @param nlp - The parsed NLP entities from the user input.
   * @param nlpCacheMap - A cache to reuse fetched entity metadata (e.g., weights and valid values).
   * @param nlpPenaltyFactor - A multiplier applied to scores when the pattern match type is 'entity'.
   * @returns A numeric score representing how well the block matches the given NLP context.
   */
  async calculateBlockScore(
    patterns: NlpPattern[],
    nlp: NLU.ParseEntities,
  ): Promise<number> {
    if (!patterns.length) return 0;

    const nlpCacheMap = await this.entityService.getNlpMap();
    // @TODO Make nluPenaltyFactor configurable in UI settings
    const nluPenaltyFactor = 0.95;
    // Compute individual pattern scores using the cache
    const patternScores: number[] = patterns.map((pattern) => {
      const entityData = nlpCacheMap.get(pattern.entity);
      if (!entityData) return 0;

      const matchedEntity: NLU.ParseEntity | undefined = nlp.entities.find(
        (e) => this.matchesEntityData(e, pattern, entityData),
      );

      return this.computePatternScore(
        matchedEntity,
        pattern,
        entityData,
        nluPenaltyFactor,
      );
    });

    // Sum the scores
    return patternScores.reduce((sum, score) => sum + score, 0);
  }

  /**
   * Checks if a given `ParseEntity` from the NLP model matches the specified pattern
   * and if its value exists within the values provided in the cache for the specified entity.
   *
   * @param e - The `ParseEntity` object from the NLP model, containing information about the entity and its value.
   * @param pattern - The `NlpPattern` object representing the entity and value pattern to be matched.
   * @param entityData - The `NlpCacheMapValues` object containing cached data, including entity values and weight, for the entity being matched.
   *
   * @returns A boolean indicating whether the `ParseEntity` matches the pattern and entity data from the cache.
   *
   * - The function compares the entity type between the `ParseEntity` and the `NlpPattern`.
   * - If the pattern's match type is not `'value'`, it checks if the entity's value is present in the cache's `values` array.
   * - If the pattern's match type is `'value'`, it further ensures that the entity's value matches the specified value in the pattern.
   * - Returns `true` if all conditions are met, otherwise `false`.
   */
  private matchesEntityData(
    e: NLU.ParseEntity,
    pattern: NlpPattern,
    entityData: NlpCacheMapValues,
  ): boolean {
    return (
      e.entity === pattern.entity &&
      entityData?.values.some((v) => v === e.value) &&
      (pattern.match !== 'value' || e.value === pattern.value)
    );
  }

  /**
   * Computes the score for a given entity based on its confidence, weight, and penalty factor.
   *
   * @param entity - The `ParseEntity` to check, which may be `undefined` if no match is found.
   * @param pattern - The `NlpPattern` object that specifies how to match the entity and its value.
   * @param entityData - The cached data for the given entity, including `weight` and `values`.
   * @param nlpPenaltyFactor - The penalty factor applied when the pattern's match type is 'entity'.
   * @returns The computed score based on the entity's confidence, the cached weight, and the penalty factor.
   */
  private computePatternScore(
    entity: NLU.ParseEntity | undefined,
    pattern: NlpPattern,
    entityData: NlpCacheMapValues,
    nlpPenaltyFactor: number,
  ): number {
    if (!entity || !entity.confidence) return 0;
    const penalty = pattern.match === 'entity' ? nlpPenaltyFactor : 1;
    return entity.confidence * entityData.weight * penalty;
  }

  /**
   * Matches an outcome-based block from a list of available blocks
   * based on the outcome of a system message.
@@ -33,6 +33,14 @@ import { LanguageRepository } from '@/i18n/repositories/language.repository';
import { LanguageModel } from '@/i18n/schemas/language.schema';
import { I18nService } from '@/i18n/services/i18n.service';
import { LanguageService } from '@/i18n/services/language.service';
import { NlpEntityRepository } from '@/nlp/repositories/nlp-entity.repository';
import { NlpSampleEntityRepository } from '@/nlp/repositories/nlp-sample-entity.repository';
import { NlpValueRepository } from '@/nlp/repositories/nlp-value.repository';
import { NlpEntityModel } from '@/nlp/schemas/nlp-entity.schema';
import { NlpSampleEntityModel } from '@/nlp/schemas/nlp-sample-entity.schema';
import { NlpValueModel } from '@/nlp/schemas/nlp-value.schema';
import { NlpEntityService } from '@/nlp/services/nlp-entity.service';
import { NlpValueService } from '@/nlp/services/nlp-value.service';
import { PluginService } from '@/plugins/plugins.service';
import { SettingService } from '@/setting/services/setting.service';
import { installBlockFixtures } from '@/utils/test/fixtures/block';
@@ -100,6 +108,9 @@ describe('BlockService', () => {
        MenuModel,
        ContextVarModel,
        LanguageModel,
        NlpEntityModel,
        NlpSampleEntityModel,
        NlpValueModel,
      ]),
      JwtModule,
    ],
@@ -131,6 +142,11 @@ describe('BlockService', () => {
      ContextVarService,
      ContextVarRepository,
      LanguageService,
      NlpEntityService,
      NlpEntityRepository,
      NlpSampleEntityRepository,
      NlpValueRepository,
      NlpValueService,
      {
        provide: HelperService,
        useValue: {},
@@ -139,6 +139,7 @@ describe('BaseNlpHelper', () => {
        updatedAt: new Date(),
        builtin: false,
        lookups: [],
        weight: 1,
      },
      entity2: {
        id: new ObjectId().toString(),
@@ -147,6 +148,7 @@ describe('BaseNlpHelper', () => {
        updatedAt: new Date(),
        builtin: false,
        lookups: [],
        weight: 1,
      },
    });
    jest.spyOn(NlpValue, 'getValueMap').mockReturnValue({
@@ -207,6 +209,7 @@ describe('BaseNlpHelper', () => {
        updatedAt: new Date(),
        builtin: false,
        lookups: [],
        weight: 1,
      },
    });
@ -30,6 +30,14 @@ import { MenuModel } from '@/cms/schemas/menu.schema';
|
||||
import { ContentService } from '@/cms/services/content.service';
|
||||
import { MenuService } from '@/cms/services/menu.service';
|
||||
import { I18nService } from '@/i18n/services/i18n.service';
|
||||
import { NlpEntityRepository } from '@/nlp/repositories/nlp-entity.repository';
|
||||
import { NlpSampleEntityRepository } from '@/nlp/repositories/nlp-sample-entity.repository';
|
||||
import { NlpValueRepository } from '@/nlp/repositories/nlp-value.repository';
|
||||
import { NlpEntityModel } from '@/nlp/schemas/nlp-entity.schema';
|
||||
import { NlpSampleEntityModel } from '@/nlp/schemas/nlp-sample-entity.schema';
|
||||
import { NlpValueModel } from '@/nlp/schemas/nlp-value.schema';
|
||||
import { NlpEntityService } from '@/nlp/services/nlp-entity.service';
|
||||
import { NlpValueService } from '@/nlp/services/nlp-value.service';
|
||||
import { NlpService } from '@/nlp/services/nlp.service';
|
||||
import { PluginService } from '@/plugins/plugins.service';
|
||||
import { SettingService } from '@/setting/services/setting.service';
|
||||
@ -75,6 +83,9 @@ describe('TranslationController', () => {
|
||||
BlockModel,
|
||||
ContentModel,
|
||||
LanguageModel,
|
||||
NlpEntityModel,
|
||||
NlpSampleEntityModel,
|
||||
NlpValueModel,
|
||||
]),
|
||||
],
|
||||
providers: [
|
||||
@ -130,6 +141,11 @@ describe('TranslationController', () => {
|
||||
},
|
||||
LanguageService,
|
||||
LanguageRepository,
|
||||
NlpEntityRepository,
|
||||
NlpEntityService,
|
||||
NlpValueRepository,
|
||||
NlpValueService,
|
||||
NlpSampleEntityRepository,
|
||||
],
|
||||
});
|
||||
[translationService, translationController] = await getMocks([
|
||||
|
@@ -6,6 +6,7 @@
* 2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).
*/

import { CACHE_MANAGER } from '@nestjs/cache-manager';
import {
BadRequestException,
MethodNotAllowedException,
@@ -67,6 +68,12 @@ describe('NlpEntityController', () => {
NlpValueService,
NlpSampleEntityRepository,
NlpValueRepository,
{
provide: CACHE_MANAGER,
useValue: {
del: jest.fn(),
},
},
],
});
[nlpEntityController, nlpValueService, nlpEntityService] = await getMocks([
@@ -109,6 +116,7 @@ describe('NlpEntityController', () => {
) as NlpEntityFull['values'],
lookups: curr.lookups!,
builtin: curr.builtin!,
weight: curr.weight!,
});
return acc;
},
@@ -163,6 +171,7 @@ describe('NlpEntityController', () => {
name: 'sentiment',
lookups: ['trait'],
builtin: false,
weight: 1,
};
const result = await nlpEntityController.create(sentimentEntity);
expect(result).toEqualPayload(sentimentEntity);
@@ -214,6 +223,7 @@ describe('NlpEntityController', () => {
updatedAt: firstNameEntity!.updatedAt,
lookups: firstNameEntity!.lookups,
builtin: firstNameEntity!.builtin,
weight: firstNameEntity!.weight,
};
const result = await nlpEntityController.findOne(firstNameEntity!.id, [
'values',
@@ -238,6 +248,7 @@ describe('NlpEntityController', () => {
doc: '',
lookups: ['trait'],
builtin: false,
weight: 1,
};
const result = await nlpEntityController.updateOne(
firstNameEntity!.id,
@@ -258,7 +269,7 @@ describe('NlpEntityController', () => {
).rejects.toThrow(NotFoundException);
});

it('should throw exception when nlp entity is builtin', async () => {
it('should throw an exception if entity is builtin but weight not provided', async () => {
const updateNlpEntity: NlpEntityCreateDto = {
name: 'updated',
doc: '',
@@ -269,6 +280,57 @@ describe('NlpEntityController', () => {
nlpEntityController.updateOne(buitInEntityId!, updateNlpEntity),
).rejects.toThrow(MethodNotAllowedException);
});

it('should update weight if entity is builtin and weight is provided', async () => {
const updatedNlpEntity: NlpEntityCreateDto = {
name: 'updated',
doc: '',
lookups: ['trait'],
builtin: false,
weight: 4,
};
const findOneSpy = jest.spyOn(nlpEntityService, 'findOne');
const updateWeightSpy = jest.spyOn(nlpEntityService, 'updateWeight');

const result = await nlpEntityController.updateOne(
buitInEntityId!,
updatedNlpEntity,
);

expect(findOneSpy).toHaveBeenCalledWith(buitInEntityId!);
expect(updateWeightSpy).toHaveBeenCalledWith(
buitInEntityId!,
updatedNlpEntity.weight,
);
expect(result.weight).toBe(updatedNlpEntity.weight);
});

it('should update only the weight of the builtin entity', async () => {
const updatedNlpEntity: NlpEntityCreateDto = {
name: 'updated',
doc: '',
lookups: ['trait'],
builtin: false,
weight: 4,
};
const originalEntity: NlpEntity | null = await nlpEntityService.findOne(
buitInEntityId!,
);

const result: NlpEntity = await nlpEntityController.updateOne(
buitInEntityId!,
updatedNlpEntity,
);

// Check weight is updated
expect(result.weight).toBe(updatedNlpEntity.weight);

Object.entries(originalEntity!).forEach(([key, value]) => {
if (key !== 'weight' && key !== 'updatedAt') {
expect(result[key as keyof typeof result]).toEqual(value);
}
});
});
});
describe('deleteMany', () => {
it('should delete multiple nlp entities', async () => {
@@ -157,10 +157,19 @@ export class NlpEntityController extends BaseController<
this.logger.warn(`Unable to update NLP Entity by id ${id}`);
throw new NotFoundException(`NLP Entity with ID ${id} not found`);
}

if (nlpEntity.builtin) {
throw new MethodNotAllowedException(
`Cannot update builtin NLP Entity ${nlpEntity.name}`,
// Only allow weight update for builtin entities
if (updateNlpEntityDto.weight) {
return await this.nlpEntityService.updateWeight(
id,
updateNlpEntityDto.weight,
);
} else {
throw new MethodNotAllowedException(
`Cannot update builtin NLP Entity ${nlpEntity.name} except for weight`,
);
}
}

return await this.nlpEntityService.updateOne(id, updateNlpEntityDto);
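The controller hunk above enforces one rule: builtin entities reject every update except a weight change, while regular entities accept the full update. The guard can be sketched in isolation as plain TypeScript (the `EntityRecord` shape and `applyUpdate` helper are hypothetical stand-ins, not the NestJS controller or Mongoose schema):

```typescript
// Hypothetical standalone model of the builtin-entity guard from the diff:
// builtin entities may only have their weight updated.
type EntityRecord = { name: string; builtin: boolean; weight: number; doc?: string };

function applyUpdate(entity: EntityRecord, dto: Partial<EntityRecord>): EntityRecord {
  if (entity.builtin) {
    if (dto.weight) {
      // Only the weight may change on a builtin entity.
      return { ...entity, weight: dto.weight };
    }
    throw new Error(`Cannot update builtin NLP Entity ${entity.name} except for weight`);
  }
  // Non-builtin entities accept the full update.
  return { ...entity, ...dto };
}
```

Note that, like the controller's `if (updateNlpEntityDto.weight)`, this truthiness check treats `weight: 0` the same as a missing weight; the DTO's `@Min(1)` constraint makes 0 invalid before it reaches this point anyway.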
@@ -372,6 +372,7 @@ describe('NlpSampleController', () => {
lookups: ['trait'],
doc: '',
builtin: false,
weight: 1,
};
const priceValueEntity = await nlpEntityService.findOne({
name: 'intent',
@@ -6,6 +6,7 @@
* 2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).
*/

import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { BadRequestException, NotFoundException } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';

@@ -57,6 +58,12 @@ describe('NlpValueController', () => {
NlpSampleEntityRepository,
NlpEntityService,
NlpEntityRepository,
{
provide: CACHE_MANAGER,
useValue: {
del: jest.fn(),
},
},
],
});
[nlpValueController, nlpValueService, nlpEntityService] = await getMocks([
@@ -1,5 +1,5 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -11,10 +11,13 @@ import {
IsArray,
IsBoolean,
IsIn,
IsInt,
IsNotEmpty,
IsNumber,
IsOptional,
IsString,
Matches,
Min,
} from 'class-validator';

import { DtoConfig } from '@/utils/types/dto.types';
@@ -47,6 +50,17 @@ export class NlpEntityCreateDto {
@IsBoolean()
@IsOptional()
builtin?: boolean;

@ApiPropertyOptional({
description: 'Nlp entity associated weight for next block triggering',
type: Number,
minimum: 1,
})
@IsNumber()
@IsOptional()
@Min(1, { message: 'Weight must be a positive integer' })
@IsInt({ message: 'Weight must be an integer' })
weight?: number;
}

export type NlpEntityDto = DtoConfig<{
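The decorator stack on `weight` above (`@IsOptional`, `@IsNumber`, `@IsInt`, `@Min(1)`) boils down to a single predicate: absent is fine, but a present weight must be a positive integer. A minimal sketch of the same rule without class-validator (`isValidWeight` is a hypothetical helper, not part of the codebase):

```typescript
// weight is optional; when present it must be a positive integer (>= 1),
// mirroring @IsOptional + @IsInt + @Min(1) on NlpEntityCreateDto.
function isValidWeight(weight?: number): boolean {
  if (weight === undefined) return true; // @IsOptional
  return Number.isInteger(weight) && weight >= 1; // @IsInt + @Min(1)
}
```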
@@ -1,5 +1,5 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -1,5 +1,5 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -58,6 +58,12 @@ export class NlpEntityStub extends BaseSchema {
@Prop({ type: Boolean, default: false })
builtin: boolean;

/**
* Entity's weight used to determine the next block to trigger in the conversational flow.
*/
@Prop({ type: Number, default: 1, min: 0 })
weight: number;

/**
* Returns a map object for entities
* @param entities - Array of entities
@@ -1,5 +1,5 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -25,3 +25,11 @@ export enum NlpSampleState {
test = 'test',
inbox = 'inbox',
}

export type NlpCacheMap = Map<string, NlpCacheMapValues>;

export type NlpCacheMapValues = {
id: string;
weight: number;
values: string[];
};
@@ -6,6 +6,7 @@
* 2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).
*/

import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { MongooseModule } from '@nestjs/mongoose';

import { nlpEntityFixtures } from '@/utils/test/fixtures/nlpentity';
@@ -20,7 +21,11 @@ import { buildTestingMocks } from '@/utils/test/utils';
import { NlpEntityRepository } from '../repositories/nlp-entity.repository';
import { NlpSampleEntityRepository } from '../repositories/nlp-sample-entity.repository';
import { NlpValueRepository } from '../repositories/nlp-value.repository';
import { NlpEntity, NlpEntityModel } from '../schemas/nlp-entity.schema';
import {
NlpEntity,
NlpEntityFull,
NlpEntityModel,
} from '../schemas/nlp-entity.schema';
import { NlpSampleEntityModel } from '../schemas/nlp-sample-entity.schema';
import { NlpValueModel } from '../schemas/nlp-value.schema';

@@ -48,6 +53,12 @@ describe('nlpEntityService', () => {
NlpValueService,
NlpValueRepository,
NlpSampleEntityRepository,
{
provide: CACHE_MANAGER,
useValue: {
del: jest.fn(),
},
},
],
});
[nlpEntityService, nlpEntityRepository, nlpValueRepository] =
@@ -117,6 +128,77 @@ describe('nlpEntityService', () => {
expect(result).toEqualPayload(entitiesWithValues);
});
});
describe('NlpEntityService - updateWeight', () => {
let createdEntity: NlpEntity;
beforeEach(async () => {
createdEntity = await nlpEntityRepository.create({
name: 'testentity',
builtin: false,
weight: 3,
});
});

it('should update the weight of an NLP entity', async () => {
const newWeight = 8;

const updatedEntity = await nlpEntityService.updateWeight(
createdEntity.id,
newWeight,
);

expect(updatedEntity.weight).toBe(newWeight);
});

it('should handle updating weight of non-existent entity', async () => {
const nonExistentId = '507f1f77bcf86cd799439011'; // Example MongoDB ObjectId

try {
await nlpEntityService.updateWeight(nonExistentId, 5);
fail('Expected error was not thrown');
} catch (error) {
expect(error).toBeDefined();
}
});

it('should use default weight of 1 when creating entity without weight', async () => {
const createdEntity = await nlpEntityRepository.create({
name: 'entityWithoutWeight',
builtin: true,
// weight not specified
});

expect(createdEntity.weight).toBe(1);
});

it('should throw an error if weight is less than 1', async () => {
const invalidWeight = 0;

await expect(
nlpEntityService.updateWeight(createdEntity.id, invalidWeight),
).rejects.toThrow('Weight must be a positive integer');
});

it('should throw an error if weight is a decimal', async () => {
const invalidWeight = 2.5;

await expect(
nlpEntityService.updateWeight(createdEntity.id, invalidWeight),
).rejects.toThrow('Weight must be a positive integer');
});

it('should throw an error if weight is negative', async () => {
const invalidWeight = -3;

await expect(
nlpEntityService.updateWeight(createdEntity.id, invalidWeight),
).rejects.toThrow('Weight must be a positive integer');
});

afterEach(async () => {
// Clean the collection after each test
await nlpEntityRepository.deleteOne(createdEntity.id);
});
});

describe('storeNewEntities', () => {
it('should store new entities', async () => {
@@ -150,4 +232,47 @@ describe('nlpEntityService', () => {
expect(result).toEqualPayload(storedEntites);
});
});
describe('getNlpMap', () => {
it('should return a NlpCacheMap with the correct structure', async () => {
// Arrange
const firstMockValues = {
id: '1',
weight: 1,
};
const firstMockEntity = {
name: 'intent',
...firstMockValues,
values: [{ value: 'buy' }, { value: 'sell' }],
} as unknown as Partial<NlpEntityFull>;
const secondMockValues = {
id: '2',
weight: 5,
};
const secondMockEntity = {
name: 'subject',
...secondMockValues,
values: [{ value: 'product' }],
} as unknown as Partial<NlpEntityFull>;
const mockEntities = [firstMockEntity, secondMockEntity];

// Mock findAndPopulate
jest
.spyOn(nlpEntityService, 'findAllAndPopulate')
.mockResolvedValue(mockEntities as unknown as NlpEntityFull[]);

// Act
const result = await nlpEntityService.getNlpMap();

expect(result).toBeInstanceOf(Map);
expect(result.size).toBe(2);
expect(result.get('intent')).toEqual({
name: 'intent',
...firstMockEntity,
});
expect(result.get('subject')).toEqual({
name: 'subject',
...secondMockEntity,
});
});
});
});
@@ -1,13 +1,18 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
* 2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).
*/

import { Injectable } from '@nestjs/common';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Inject, Injectable } from '@nestjs/common';
import { OnEvent } from '@nestjs/event-emitter';
import { Cache } from 'cache-manager';

import { NLP_MAP_CACHE_KEY } from '@/utils/constants/cache';
import { Cacheable } from '@/utils/decorators/cacheable.decorator';
import { BaseService } from '@/utils/generics/base-service';

import { Lookup, NlpEntityDto } from '../dto/nlp-entity.dto';
@@ -17,7 +22,7 @@ import {
NlpEntityFull,
NlpEntityPopulate,
} from '../schemas/nlp-entity.schema';
import { NlpSampleEntityValue } from '../schemas/types';
import { NlpCacheMap, NlpSampleEntityValue } from '../schemas/types';

import { NlpValueService } from './nlp-value.service';

@@ -30,6 +35,7 @@ export class NlpEntityService extends BaseService<
> {
constructor(
readonly repository: NlpEntityRepository,
@Inject(CACHE_MANAGER) private readonly cacheManager: Cache,
private readonly nlpValueService: NlpValueService,
) {
super(repository);
@@ -46,6 +52,28 @@ export class NlpEntityService extends BaseService<
return await this.repository.deleteOne(id);
}

/**
* Updates the `weight` field of a specific NLP entity by its ID.
*
* This method is part of the NLP-based blocks prioritization strategy.
* The weight influences the scoring of blocks when multiple blocks match a user's input.
* @param id - The unique identifier of the entity to update.
* @param updatedWeight - The new weight to assign. Must be a positive integer.
* @throws Error if the weight is not a positive integer.
* @returns A promise that resolves to the updated entity.
*/
async updateWeight(id: string, updatedWeight: number): Promise<NlpEntity> {
if (!Number.isInteger(updatedWeight) || updatedWeight < 1) {
throw new Error('Weight must be a positive integer');
}

return await this.repository.updateOne(
id,
{ weight: updatedWeight },
{ new: true },
);
}

/**
* Stores new entities based on the sample text and sample entities.
* Deletes all values relative to this entity before deleting the entity itself.
@@ -97,4 +125,49 @@ export class NlpEntityService extends BaseService<
);
return Promise.all(findOrCreate);
}

/**
* Clears the NLP map cache
*/
async clearCache() {
await this.cacheManager.del(NLP_MAP_CACHE_KEY);
}

/**
* Event handler for Nlp Entity updates. Listens to 'hook:nlpEntity:*' events
* and invalidates the cache for nlp entities when triggered.
*/
@OnEvent('hook:nlpEntity:*')
async handleNlpEntityUpdateEvent() {
this.clearCache();
}

/**
* Event handler for Nlp Value updates. Listens to 'hook:nlpValue:*' events
* and invalidates the cache for nlp values when triggered.
*/
@OnEvent('hook:nlpValue:*')
async handleNlpValueUpdateEvent() {
this.clearCache();
}

/**
* Retrieves NLP entity lookup information for the given list of entity names.
*
* This method queries the database for lookups that match any of the provided
* entity names, transforms the result into a map structure where each key is
* the entity name and each value contains metadata (id, weight, and list of values),
* and caches the result using the configured cache key.
*
* @param entityNames - Array of entity names to retrieve lookup data for.
* @returns A Promise that resolves to a map of entity name to its corresponding lookup metadata.
*/
@Cacheable(NLP_MAP_CACHE_KEY)
async getNlpMap(): Promise<NlpCacheMap> {
const entities = await this.findAllAndPopulate();
return entities.reduce((acc, curr) => {
acc.set(curr.name, curr);
return acc;
}, new Map());
}
}
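The `getNlpMap` reduction above folds populated entities into a `Map` keyed by entity name; per its doc comment, each cached value carries the entity's id, weight, and value list. That fold can be sketched standalone as plain TypeScript (the simplified `EntityDoc` shape and `buildNlpMap` helper are assumptions for illustration, not the real populated Mongoose schema):

```typescript
// Simplified stand-ins for the populated entity and the cached value shape
// (the latter mirrors the NlpCacheMapValues type from the diff).
type EntityDoc = { id: string; name: string; weight: number; values: { value: string }[] };
type CacheValues = { id: string; weight: number; values: string[] };

// Fold populated entities into a name-keyed cache map, keeping only the
// metadata described in the getNlpMap doc comment (id, weight, values).
function buildNlpMap(entities: EntityDoc[]): Map<string, CacheValues> {
  return entities.reduce((acc, curr) => {
    acc.set(curr.name, {
      id: curr.id,
      weight: curr.weight,
      values: curr.values.map((v) => v.value),
    });
    return acc;
  }, new Map<string, CacheValues>());
}
```

Because the result is memoized under `NLP_MAP_CACHE_KEY` via `@Cacheable`, the `hook:nlpEntity:*` and `hook:nlpValue:*` handlers above must invalidate it on every entity or value change, which is exactly what `clearCache` does.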
@@ -6,6 +6,7 @@
* 2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).
*/

import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { MongooseModule } from '@nestjs/mongoose';

import { LanguageRepository } from '@/i18n/repositories/language.repository';
@@ -76,6 +77,12 @@ describe('NlpSampleEntityService', () => {
NlpSampleEntityService,
NlpEntityService,
NlpValueService,
{
provide: CACHE_MANAGER,
useValue: {
del: jest.fn(),
},
},
],
});
[
@@ -6,6 +6,7 @@
* 2. All derivative works must include clear attribution to the original creator and software, Hexastack and Hexabot, in a prominent location (e.g., in the software's "About" section, documentation, and README file).
*/

import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { MongooseModule } from '@nestjs/mongoose';

import { BaseSchema } from '@/utils/generics/base-schema';
@@ -58,6 +59,12 @@ describe('NlpValueService', () => {
NlpEntityRepository,
NlpValueService,
NlpEntityService,
{
provide: CACHE_MANAGER,
useValue: {
del: jest.fn(),
},
},
],
});
[
@@ -18,3 +18,5 @@ export const LANGUAGES_CACHE_KEY = 'languages';
export const DEFAULT_LANGUAGE_CACHE_KEY = 'default_language';

export const ALLOWED_ORIGINS_CACHE_KEY = 'allowed_origins';

export const NLP_MAP_CACHE_KEY = 'nlp_map';
api/src/utils/test/fixtures/nlpentity.ts (5 changes, vendored)
@@ -1,5 +1,5 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -17,18 +17,21 @@ export const nlpEntityFixtures: NlpEntityCreateDto[] = [
lookups: ['trait'],
doc: '',
builtin: false,
weight: 1,
},
{
name: 'first_name',
lookups: ['keywords'],
doc: '',
builtin: false,
weight: 1,
},
{
name: 'built_in',
lookups: ['trait'],
doc: '',
builtin: true,
weight: 1,
},
];

@@ -16,7 +16,7 @@ import { ButtonType, PayloadType } from '@/chat/schemas/types/button';
import { CaptureVar } from '@/chat/schemas/types/capture-var';
import { OutgoingMessageFormat } from '@/chat/schemas/types/message';
import { BlockOptions, ContentOptions } from '@/chat/schemas/types/options';
import { Pattern } from '@/chat/schemas/types/pattern';
import { NlpPattern, Pattern } from '@/chat/schemas/types/pattern';
import { QuickReplyType } from '@/chat/schemas/types/quick-reply';

import { modelInstance } from './misc';
@@ -246,6 +246,121 @@ export const blockGetStarted = {
message: ['Welcome! How are you ? '],
} as unknown as BlockFull;

export const mockNlpPatternsSetOne: NlpPattern[] = [
{
entity: 'intent',
match: 'value',
value: 'greeting',
},
{
entity: 'firstname',
match: 'value',
value: 'jhon',
},
];

export const mockNlpPatternsSetTwo: NlpPattern[] = [
{
entity: 'intent',
match: 'value',
value: 'affirmation',
},
{
entity: 'firstname',
match: 'value',
value: 'mark',
},
];

export const mockNlpPatternsSetThree: NlpPattern[] = [
{
entity: 'intent',
match: 'value',
value: 'greeting',
},
{
entity: 'firstname',
match: 'entity',
},
];

export const mockNlpBlock: BlockFull = {
...baseBlockInstance,
name: 'Mock Nlp',
patterns: [
'Hello',
'/we*lcome/',
{ label: 'Mock Nlp', value: 'MOCK_NLP' },
mockNlpPatternsSetOne,
[
{
entity: 'intent',
match: 'value',
value: 'greeting',
},
{
entity: 'firstname',
match: 'value',
value: 'doe',
},
],
],
trigger_labels: customerLabelsMock,
message: ['Good to see you again '],
} as unknown as BlockFull;

export const mockModifiedNlpBlock: BlockFull = {
...baseBlockInstance,
name: 'Modified Mock Nlp',
patterns: [
'Hello',
'/we*lcome/',
{ label: 'Modified Mock Nlp', value: 'MODIFIED_MOCK_NLP' },
mockNlpPatternsSetThree,
],
trigger_labels: customerLabelsMock,
message: ['Hello there'],
} as unknown as BlockFull;

export const mockModifiedNlpBlockOne: BlockFull = {
...baseBlockInstance,
name: 'Modified Mock Nlp One',
patterns: [
'Hello',
'/we*lcome/',
{ label: 'Modified Mock Nlp One', value: 'MODIFIED_MOCK_NLP_ONE' },
mockNlpPatternsSetTwo,
[
{
entity: 'firstname',
match: 'entity',
},
],
],
trigger_labels: customerLabelsMock,
message: ['Hello Sir'],
} as unknown as BlockFull;

export const mockModifiedNlpBlockTwo: BlockFull = {
...baseBlockInstance,
name: 'Modified Mock Nlp Two',
patterns: [
'Hello',
'/we*lcome/',
{ label: 'Modified Mock Nlp Two', value: 'MODIFIED_MOCK_NLP_TWO' },
[
{
entity: 'firstname',
match: 'entity',
},
],
mockNlpPatternsSetThree,
],
trigger_labels: customerLabelsMock,
message: ['Hello Madam'],
} as unknown as BlockFull;
const patternsProduct: Pattern[] = [
'produit',
[
@@ -285,3 +400,5 @@ export const blockCarouselMock = {
} as unknown as BlockFull;

export const blocks: BlockFull[] = [blockGetStarted, blockEmpty];

export const nlpBlocks: BlockFull[] = [blockGetStarted, mockNlpBlock];
@@ -1,5 +1,5 @@
/*
* Copyright © 2024 Hexastack. All rights reserved.
* Copyright © 2025 Hexastack. All rights reserved.
*
* Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
* 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -7,6 +7,7 @@
*/

import { NLU } from '@/helper/types';
import { NlpCacheMap } from '@/nlp/schemas/types';

export const nlpEntitiesGreeting: NLU.ParseEntities = {
entities: [
@@ -27,3 +28,52 @@ export const nlpEntitiesGreeting: NLU.ParseEntities = {
},
],
};

export const mockNlpEntitiesSetOne: NLU.ParseEntities = {
entities: [
{
entity: 'intent',
value: 'greeting',
confidence: 0.999,
},
{
entity: 'firstname',
value: 'jhon',
confidence: 0.5,
},
],
};

export const mockNlpEntitiesSetTwo: NLU.ParseEntities = {
entities: [
{
entity: 'intent',
value: 'greeting',
confidence: 0.94,
},
{
entity: 'firstname',
value: 'doe',
confidence: 0.33,
},
],
};

export const mockNlpCacheMap: NlpCacheMap = new Map([
[
'intent',
{
id: '67e3e41eff551ca5be70559c',
weight: 1,
values: ['greeting', 'affirmation'],
},
],
[
'firstname',
{
id: '67e3e41eff551ca5be70559d',
weight: 1,
values: ['jhon', 'doe'],
},
],
]);
@@ -121,7 +121,9 @@
"file_error": "File not found",
"audio_error": "Audio not found",
"video_error": "Video not found",
"missing_fields_error": "Please make sure that all required fields are filled"
"missing_fields_error": "Please make sure that all required fields are filled",
"weight_required_error": "Weight is required or invalid",
"weight_positive_integer_error": "Weight must be a positive integer"
},
"menu": {
"terms": "Terms of Use",
@@ -348,6 +350,7 @@
"nlp_lookup_trait": "Trait",
"doc": "Documentation",
"builtin": "Built-in?",
"weight": "Weight",
"dataset": "Dataset",
"yes": "Yes",
"no": "No",
@@ -121,7 +121,9 @@
"file_error": "Fichier introuvable",
"audio_error": "Audio introuvable",
"video_error": "Vidéo introuvable",
"missing_fields_error": "Veuillez vous assurer que tous les champs sont remplis correctement"
"missing_fields_error": "Veuillez vous assurer que tous les champs sont remplis correctement",
"weight_positive_integer_error": "Le poids doit être un nombre entier positif",
"weight_required_error": "Le poids est requis ou bien invalide"
},
"menu": {
"terms": "Conditions d'utilisation",
@@ -347,6 +349,7 @@
"nlp_lookup_trait": "Trait",
"synonyms": "Synonymes",
"doc": "Documentation",
"weight": "Poids",
"builtin": "Intégré?",
"dataset": "Données",
"yes": "Oui",
@@ -156,8 +156,7 @@ function StackComponent<T extends GridValidRowModel>({
            disabled={
              (isDisabled && isDisabled(params.row)) ||
              (params.row.builtin &&
-                (requires.includes(PermissionAction.UPDATE) ||
-                  requires.includes(PermissionAction.DELETE)))
+                requires.includes(PermissionAction.DELETE))
            }
            onClick={() => {
              action && action(params.row);
@@ -167,6 +167,16 @@ const NlpEntity = () => {
      resizable: false,
      renderHeader,
    },
+    {
+      maxWidth: 210,
+      field: "weight",
+      headerName: t("label.weight"),
+      renderCell: (val) => <Chip label={val.value} variant="title" />,
+      sortable: true,
+      disableColumnMenu: true,
+      resizable: false,
+      renderHeader,
+    },
    {
      maxWidth: 90,
      field: "builtin",
@@ -60,6 +60,7 @@ export const NlpEntityVarForm: FC<ComponentFormProps<INlpEntity>> = ({
      name: nlpEntity?.name || "",
      doc: nlpEntity?.doc || "",
      lookups: nlpEntity?.lookups || ["keywords"],
+      weight: nlpEntity?.weight || 1,
    },
  });
  const validationRules = {
@@ -82,6 +83,7 @@
      reset({
        name: nlpEntity.name,
        doc: nlpEntity.doc,
+        weight: nlpEntity.weight,
      });
    } else {
      reset();
@@ -121,6 +123,7 @@
            required
            autoFocus
            helperText={errors.name ? errors.name.message : null}
+            disabled={nlpEntity?.builtin}
          />
        </ContentItem>
        <ContentItem>
@@ -128,6 +131,33 @@
            label={t("label.doc")}
            {...register("doc")}
            multiline={true}
+            disabled={nlpEntity?.builtin}
          />
        </ContentItem>
+        <ContentItem>
+          <Input
+            label={t("label.weight")}
+            {...register("weight", {
+              valueAsNumber: true,
+              required: t("message.weight_required_error"),
+              min: {
+                value: 1,
+                message: t("message.weight_positive_integer_error"),
+              },
+              validate: (value) =>
+                value && Number.isInteger(value) && value! > 0
+                  ? true
+                  : t("message.weight_positive_integer_error"),
+            })}
+            type="number"
+            inputProps={{
+              min: 1,
+              step: 1,
+              inputMode: "numeric",
+              pattern: "[1-9][0-9]*",
+            }}
+            error={!!errors.weight}
+            helperText={errors.weight?.message}
+          />
+        </ContentItem>
      </ContentContainer>
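The weight rule enforced by the form above (required, positive integer, minimum 1) can be expressed as a standalone predicate. This is only an illustrative sketch: `isValidWeight` is a hypothetical helper name, not part of the Hexabot codebase.

```typescript
// Hypothetical helper mirroring the form's weight validation:
// a weight is valid only if it is an integer strictly greater than zero.
function isValidWeight(value: unknown): boolean {
  return typeof value === "number" && Number.isInteger(value) && value > 0;
}

console.log(isValidWeight(1)); // → true
console.log(isValidWeight(0)); // → false (must be >= 1)
console.log(isValidWeight(2.5)); // → false (must be an integer)
console.log(isValidWeight("3")); // → false (must be a number)
```

Note that `valueAsNumber: true` in the form makes react-hook-form coerce the input to a number before the `validate` callback runs, which is why the integer check alone is sufficient there.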
@@ -1,5 +1,5 @@
 /*
- * Copyright © 2024 Hexastack. All rights reserved.
+ * Copyright © 2025 Hexastack. All rights reserved.
  *
  * Licensed under the GNU Affero General Public License v3.0 (AGPLv3) with the following additional terms:
  * 1. The name "Hexabot" is a trademark of Hexastack. You may not use this name in derivative works without express written permission.
@@ -19,6 +19,7 @@ export interface INlpEntityAttributes {
   lookups: Lookup[];
   doc?: string;
   builtin?: boolean;
+  weight?: number;
 }

 export enum NlpLookups {