ardagnsrn / ollama-php
This is a PHP library for Ollama. Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.
Requires
- php: ^8.1
- guzzlehttp/guzzle: ^7.9
Requires (Dev)
- laravel/pint: ^1.0
- pestphp/pest: ^2.20
- spatie/ray: ^1.28
README
☕️ Buy Me a Coffee
Whether you are using this project, have learned something from it, or just like it, please consider supporting it by buying me a coffee, so I can dedicate more time to open-source projects like this :)
Get Started
You can find the official Ollama documentation here.
First, install Ollama PHP via the Composer package manager:
composer require ardagnsrn/ollama-php
Then, you can create a new Ollama client instance:
// with default base URL
$client = \ArdaGnsrn\Ollama\Ollama::client();

// or with custom base URL
$client = \ArdaGnsrn\Ollama\Ollama::client('http://localhost:11434');
Usage
Completions Resource
create
Generates a response for the given prompt using the provided model.
$completions = $client->completions()->create([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
]);

$completions->response; // '...in a land far, far away...'

$completions->toArray(); // ['model' => 'llama3.1', 'response' => '...in a land far, far away...', ...]
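The Ollama generate endpoint also accepts sampling options. Assuming the client forwards an options array through to the endpoint verbatim (this is not shown in this README, so treat it as an untested sketch), a request with custom sampling might look like:

$completions = $client->completions()->create([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
    'options' => [               // forwarded to Ollama's sampling options (assumption)
        'temperature' => 0.2,    // lower temperature for more deterministic output
        'num_predict' => 128,    // cap the number of generated tokens
    ],
]);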
createStreamed
Generates a response for the given prompt using the provided model and streams the response.
$completions = $client->completions()->createStreamed([
    'model' => 'llama3.1',
    'prompt' => 'Once upon a time',
]);

foreach ($completions as $completion) {
    echo $completion->response;
}
// 1. Iteration: '...in'
// 2. Iteration: ' a'
// 3. Iteration: ' land'
// 4. Iteration: ' far,'
// ...
Chat Resource
create
Generates a chat response for the given conversation using the provided model.
$response = $client->chat()->create([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a llama.'],
        ['role' => 'user', 'content' => 'Hello!'],
        ['role' => 'assistant', 'content' => 'Hi! How can I help you today?'],
        ['role' => 'user', 'content' => 'I need help with my taxes.'],
    ],
]);

$response->message->content; // 'Ah, taxes... *chew chew* Hmm, not really sure how to help with that.'

$response->toArray(); // ['model' => 'llama3.1', 'message' => ['role' => 'assistant', 'content' => 'Ah, taxes...'], ...]
Additionally, you can provide custom functions to the chat with the tools parameter. Note that the tools parameter cannot be used together with the createStreamed method.
$response = $client->chat()->create([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather today in Paris?'],
    ],
    'tools' => [
        [
            'type' => 'function',
            'function' => [
                'name' => 'get_current_weather',
                'description' => 'Get the current weather',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'location' => [
                            'type' => 'string',
                            'description' => 'The location to get the weather for, e.g. San Francisco, CA',
                        ],
                        'format' => [
                            'type' => 'string',
                            'description' => 'The format to return the weather in, e.g. celsius or fahrenheit',
                            'enum' => ['celsius', 'fahrenheit'],
                        ],
                    ],
                    'required' => ['location', 'format'],
                ],
            ],
        ],
    ],
]);

$toolCall = $response->message->toolCalls[0];

$toolCall->function->name; // 'get_current_weather'
$toolCall->function->arguments; // ['location' => 'Paris', 'format' => 'celsius']

$response->toArray(); // ['model' => 'llama3.1', 'message' => ['role' => 'assistant', 'toolCalls' => [...]], ...]
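The model only requests the tool call; executing the function and reporting its result back is up to you. A minimal sketch of that round trip, assuming a hypothetical get_current_weather() implementation on your side and a 'tool' role message for the result (as in the Ollama chat API; the exact message shape may vary):

// Hypothetical local implementation of the tool the model asked for.
function get_current_weather(string $location, string $format): string
{
    // A real integration would query a weather service; hard-coded for illustration.
    return "22 degrees {$format} in {$location}";
}

$toolCall = $response->message->toolCalls[0];

$result = get_current_weather(
    $toolCall->function->arguments['location'],
    $toolCall->function->arguments['format']
);

// Feed the result back as a 'tool' message so the model can phrase the final answer.
$followUp = $client->chat()->create([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather today in Paris?'],
        ['role' => 'tool', 'content' => $result],
    ],
]);

echo $followUp->message->content;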
createStreamed
Generates a chat response for the given conversation using the provided model and streams the response.
$responses = $client->chat()->createStreamed([
    'model' => 'llama3.1',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a llama.'],
        ['role' => 'user', 'content' => 'Hello!'],
        ['role' => 'assistant', 'content' => 'Hi! How can I help you today?'],
        ['role' => 'user', 'content' => 'I need help with my taxes.'],
    ],
]);

foreach ($responses as $response) {
    echo $response->message->content;
}
// 1. Iteration: 'Ah,'
// 2. Iteration: ' taxes'
// 3. Iteration: '... '
// 4. Iteration: ' *chew,'
// ...
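Since each iteration yields only a small fragment of the reply, a common pattern is to render chunks as they arrive while also buffering the complete message for later use, for example:

$buffer = '';

foreach ($responses as $response) {
    $chunk = $response->message->content;
    echo $chunk;        // render incrementally, e.g. to a console or an SSE stream
    $buffer .= $chunk;  // accumulate the full assistant message
}

// $buffer now holds the complete reply.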
Models Resource
list
Lists all available models.
$response = $client->models()->list();

$response->toArray(); // ['models' => [['name' => 'llama3.1', ...], ['name' => 'llama3.1:80b', ...], ...]]
show
Shows the details of a specific model.
$response = $client->models()->show('llama3.1');

$response->toArray(); // ['modelfile' => '...', 'parameters' => '...', 'template' => '...']
create
Creates a new model.
$response = $client->models()->create([
    'name' => 'mario',
    'modelfile' => "FROM llama3.1\nSYSTEM You are mario from Super Mario Bros."
]);

$response->status; // 'success'
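A Modelfile can carry more than a SYSTEM prompt; for instance, PARAMETER lines tune sampling (standard Ollama Modelfile syntax; the model name and values here are illustrative):

$response = $client->models()->create([
    'name' => 'mario-creative',
    'modelfile' => implode("\n", [
        'FROM llama3.1',
        'SYSTEM You are mario from Super Mario Bros.',
        'PARAMETER temperature 1.2', // more adventurous sampling
        'PARAMETER num_ctx 4096',    // larger context window
    ]),
]);

$response->status; // 'success'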
createStreamed
Creates a new model and streams the response.
$responses = $client->models()->createStreamed([
    'name' => 'mario',
    'modelfile' => "FROM llama3.1\nSYSTEM You are mario from Super Mario Bros."
]);

foreach ($responses as $response) {
    echo $response->status;
}
copy
Copies an existing model.
$client->models()->copy('llama3.1', 'llama3.2'); // bool
delete
Deletes a model.
$client->models()->delete('mario'); // bool
pull
Pulls a model from the Ollama server.
$response = $client->models()->pull('llama3.1');

$response->toArray(); // ['status' => 'downloading digestname', 'digest' => 'digestname', 'total' => 2142590208, 'completed' => 241970]
pullStreamed
Pulls a model from the Ollama server and streams the response.
$responses = $client->models()->pullStreamed('llama3.1');

foreach ($responses as $response) {
    echo $response->status;
}
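Since the streamed pull chunks carry the same total and completed fields shown above, you can derive a rough progress indicator. A minimal sketch, assuming each chunk exposes the same toArray() shape and that status-only chunks omit the byte counts:

$responses = $client->models()->pullStreamed('llama3.1');

foreach ($responses as $response) {
    $data = $response->toArray();

    if (isset($data['total'], $data['completed']) && $data['total'] > 0) {
        // Download progress as a percentage of the layer's total size.
        $percent = round($data['completed'] / $data['total'] * 100, 1);
        echo "\r{$data['status']}: {$percent}%";
    } else {
        echo "\n{$data['status']}";
    }
}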
push
Pushes a model to the Ollama server.
$response = $client->models()->push('llama3.1');

$response->toArray(); // ['status' => 'uploading digestname', 'digest' => 'digestname', 'total' => 2142590208]
pushStreamed
Pushes a model to the Ollama server and streams the response.
$responses = $client->models()->pushStreamed('llama3.1');

foreach ($responses as $response) {
    echo $response->status;
}
runningList
Lists all currently running models.
$response = $client->models()->runningList();

$response->toArray(); // ['models' => [['name' => 'llama3.1', ...], ['name' => 'llama3.1:80b', ...], ...]]
Blobs Resource
exists
Checks whether a blob exists.
$client->blobs()->exists('blobname'); // bool
create
Creates a new blob.
$client->blobs()->create('blobname'); // bool
Embed Resource
create
Generates embeddings for the given input using the provided model.
$response = $client->embed()->create([
    'model' => 'llama3.1',
    'input' => [
        "Why is the sky blue?",
    ]
]);

$response->toArray(); // ['model' => 'llama3.1', 'embedding' => [0.1, 0.2, ...], ...]
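Embedding vectors are typically compared with cosine similarity. A minimal sketch in plain PHP, assuming $vectorA and $vectorB are two equal-length vectors you have extracted from embed responses (the exact response key, 'embedding' above versus 'embeddings' in the Ollama API, may vary):

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

echo cosineSimilarity($vectorA, $vectorB); // close to 1.0 for semantically similar texts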
Testing
composer test
Changelog
Please see CHANGELOG for more information on what has changed recently.
Contributing
Please see CONTRIBUTING for details.
Credits
License
The MIT License (MIT). Please see the License File for more information.