Generate Summary
Generate Summary is a feature provided by askone.ai for quickly generating summaries from content of interest. The feature requires a text-processing large language model to generate summaries from that content.
1. Selecting a Model
You must choose an accessible AI service provider and a text-processing large language model. For the current list, please refer to AI Models.
If an AI service provider is unavailable in your list, it may be due to one of the following reasons:
- It offers no supported large language models for generating summaries;
- You have not configured its access keys;
- Your subscription plan does not support it.
If a large language model is unavailable in your list, it may be due to one of the following reasons:
- It is not available for generating summaries;
- You have not configured access keys for it;
- Your subscription plan does not support it.
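As a rough illustration of what this choice amounts to, the sketch below pairs a provider with one of its text-processing models. The type shape and the identifier values are assumptions for illustration only, not askone.ai's actual configuration format.

```typescript
// Illustrative sketch only: askone.ai exposes this choice through its settings UI.
// The type shape and the provider/model identifiers below are assumptions.
interface SummaryModelSelection {
  provider: string; // an AI service provider from your AI Models list
  model: string;    // a text-processing large language model offered by that provider
}

// Hypothetical example values; use whatever appears in your own AI Models list.
const selection: SummaryModelSelection = {
  provider: "example-provider",
  model: "example-text-model",
};
```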
2. Enable / Disable
The Enable status means that the selected large language model will be used to generate a summary every time it is called for content of interest. Each summary request uses the model and parameters you selected, and if you choose a paid model, you will pay the corresponding fees.
The Disable status turns off summary generation for the selected large language model, so no summaries are generated when the model is called.
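As a minimal sketch of how the toggle behaves, the function below only calls the model when the feature is enabled. All names here are hypothetical and do not describe askone.ai's implementation.

```typescript
// Minimal sketch of the Enable / Disable behavior; all names are hypothetical.
async function callSelectedModel(content: string): Promise<string> {
  // Placeholder for a call to the selected text-processing model.
  return `Summary of: ${content.slice(0, 20)}...`;
}

async function generateSummary(content: string, enabled: boolean): Promise<string | null> {
  if (!enabled) {
    // Disabled: the model is never called, so no summary is produced and no fees accrue.
    return null;
  }
  // Enabled: every request uses the selected model and parameters; paid models incur fees.
  return callSelectedModel(content);
}
```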
3. Access Methods
We provide two ways to access the large language model: Unified Access and Direct Access. For questions related to privacy and data protection, please refer to the Privacy Policy.
3.1 Unified Access
We recommend choosing Unified Access for large language models. Based on an evaluation of each model's comprehension capabilities, we optimize your input and the model's output to provide the best summary generation experience. We also merge access requests for the same model from other features, reducing the number of calls and the number of tokens consumed, thereby saving you costs.
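The cost-saving idea behind Unified Access can be sketched as simple request merging: pending requests that target the same model are combined into one call. The names and the merging format below are assumptions, not a description of askone.ai's actual pipeline.

```typescript
// Sketch of the Unified Access idea: merge pending requests per model into one call.
// All names and the prompt format are assumptions.
type SummaryRequest = { feature: string; content: string };

function mergeRequestsByModel(
  pendingByModel: Map<string, SummaryRequest[]>, // model id -> pending requests
): Array<{ model: string; mergedPrompt: string }> {
  const calls: Array<{ model: string; mergedPrompt: string }> = [];
  for (const [model, pending] of pendingByModel) {
    // One call per model instead of one call per request, so fewer accesses
    // and fewer repeated instruction tokens.
    const mergedPrompt = pending
      .map((request, i) => `### Task ${i + 1} (${request.feature})\n${request.content}`)
      .join("\n\n");
    calls.push({ model, mergedPrompt });
  }
  return calls;
}
```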
3.2 Direct Access
If you are particularly concerned about privacy and security, you can choose Direct Access to the third-party large language model. Your input is sent directly to the large language model, and the model's output is returned directly to you. There is no intermediate processing, and the result you receive is the raw output of the model.
This ensures that your input and output do not pass through our servers. However, it may increase the number of calls and the number of tokens consumed, so you may pay higher fees.
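A direct call of this kind can be sketched as a single HTTP request made with your own access key, whose raw response is returned untouched. The endpoint, header names, and payload shape below are assumptions; consult your provider's API documentation for the real ones.

```typescript
// Sketch of Direct Access: one request straight to the third-party provider,
// raw response returned untouched. Endpoint, headers, and payload shape are assumptions.
async function directAccessSummary(
  endpoint: string, // the third-party provider's API endpoint
  apiKey: string,   // your own access key
  input: string,
): Promise<string> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ input }),
  });
  // No intermediate processing: the provider's raw output is returned as-is.
  return response.text();
}
```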
4. Size Configuration
You can set the number of sentences the large language model generates for each content size, or set a size to Ignore if you do not want to enable the feature for that size. For more information, please refer to Content Size.
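A size configuration of this kind might look like the sketch below, where each content size maps either to a sentence count or to Ignore. The size names and the shape of the mapping are assumptions; see Content Size for the actual sizes.

```typescript
// Illustrative sketch of a size configuration; size names and shape are assumptions.
type SentenceSetting = number | "Ignore";

const sentencesBySize: Record<string, SentenceSetting> = {
  small: "Ignore", // summary generation is skipped for this size
  medium: 3,       // ask the model for a 3-sentence summary
  large: 5,        // ask the model for a 5-sentence summary
};
```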