jonaspoelmans / laravel-gpt
This package provides integration with LLM APIs such as OpenAI and Mistral.
v1.1.3
2024-09-14 16:50 UTC
Requires
- php: ^8.1
- guzzlehttp/guzzle: ^7.8
- illuminate/config: ^10.48
- illuminate/contracts: ^10.48
- illuminate/support: ^10.48
Requires (Dev)
- mockery/mockery: ^1.6
- phpoption/phpoption: ^1.9
- phpunit/phpunit: ^10.5
- psr/log: ^3.0
- vlucas/phpdotenv: ^5.6
README
Documentation, Installation, and Usage Instructions
Laravel
Add this package to your composer.json and run a composer update. This will download the package and the laravel-gpt library.
composer require jonaspoelmans/laravel-gpt
You also need to create an OpenAI API key and add it to your .env file:
OPENAI_API_KEY=xxxxxxxxxxxxx
What it does
This package lets you connect via API to large language models such as OpenAI ChatGPT and Mistral.
After installation you can create a prompt containing any text you would like to feed into OpenAI.
// Create your prompt for the LLM
$prompt = "Give me the list of 10 best PHP frameworks.";

// Instantiate the Laravel GPT service
$laravelGPT = new LaravelGPTService();

// Retrieve a response from ChatGPT
$response = $laravelGPT->generateOpenAIResponse($prompt);
Besides a single prompt, you can also feed OpenAI a history of prior messages.
// Prior messages are generated by a user, the system or the chat assistant
$priorMessageUser = new OpenAIMessage('user', 'I love Laravel but exclude it from the list.');
$priorMessageAssistant = new OpenAIMessage('assistant', 'Which list?');
$history = [$priorMessageUser, $priorMessageAssistant];

// The prompt can be fed to ChatGPT alongside the chat history
$response = $laravelGPT->generateOpenAIResponse($prompt, $history);
If the request succeeds, the response is an associative array; if an error occurs, it is an empty string.
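Because the return type differs between success and failure, it can be worth checking before use. A minimal sketch, assuming the service class shown above; the `is_array` check follows directly from the contract stated here, but the exact shape of the array (shown as an OpenAI-style `choices` payload) is an assumption not confirmed by this README:

```php
<?php

$response = $laravelGPT->generateOpenAIResponse($prompt);

if (is_array($response)) {
    // Success: the decoded API payload. The 'choices' path below assumes
    // the standard OpenAI chat completions shape; adjust to the actual array.
    $text = $response['choices'][0]['message']['content'] ?? '';
} else {
    // Failure: an empty string was returned. If 'openai_logging' is enabled
    // in the config, details will be in Laravel's log.
    $text = '';
}
```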
The LLM parameters can be configured via the laravelgpt.php config file. You can publish the config file with:
php artisan vendor:publish --tag="laravel-gpt-config"
For OpenAI you can tweak the following parameters:
// The base URI for the OpenAI API.
// This is the endpoint where all API requests will be sent.
'openai_base_uri' => 'https://api.openai.com/v1/',

// The default model to be used for generating responses.
// You can change this to any valid model identifier provided by OpenAI,
// such as 'gpt-3.5-turbo' or 'gpt-4-1106-preview'.
'openai_model' => 'gpt-4-1106-preview',

// The maximum number of tokens to generate in the response.
// Tokens can be thought of as pieces of words. The maximum number
// of tokens allowed is determined by the model you are using.
'openai_max_tokens' => 4000,

// The temperature setting for the response generation.
// Temperature controls the randomness of the output.
// A value closer to 0 makes the output more deterministic and repetitive,
// while a value closer to 1 makes it more random.
'openai_temperature' => 0.7,

// Enable or disable logging of errors.
// When set to true, any errors encountered while using the API
// will be logged using Laravel's built-in logging system.
'openai_logging' => true,
Contributing
If you would like to contribute, please send me a direct message.
Testing
composer test
Security
If you discover any security-related issues, please email jonas.poelmans@gmail.com instead of using the issue tracker.
Postcardware
You are free to use this package, but if it makes it into your production environment I would greatly appreciate you sending me a message :) I reply to all messages!
Credits
License
The MIT License (MIT). Please see the License File for more information.