8 changes: 0 additions & 8 deletions README.md
@@ -34,11 +34,3 @@ Help Symfony by [sponsoring](https://symfony.com/sponsor) its development!
## Contributing

Thank you for considering contributing to Symfony AI! You can find the [contribution guide here](CONTRIBUTING.md).

## Fixture Licenses

For testing multi-modal features, the repository contains binary media content, with the following owners and licenses:

* `tests/Fixture/image.jpg`: Chris F., Creative Commons, see [pexels.com](https://www.pexels.com/photo/blauer-und-gruner-elefant-mit-licht-1680755/)
* `tests/Fixture/audio.mp3`: davidbain, Creative Commons, see [freesound.org](https://freesound.org/people/davidbain/sounds/136777/)
* `tests/Fixture/document.pdf`: Chem8240ja, Public Domain, see [Wikipedia](https://en.m.wikipedia.org/wiki/File:Re_example.pdf)
128 changes: 128 additions & 0 deletions examples/huggingface/README.md
@@ -0,0 +1,128 @@
# Symfony Hugging Face Examples

This directory contains various examples of how to use Symfony AI with [Hugging Face](https://huggingface.co/),
built on top of the [Hugging Face Inference API](https://huggingface.co/inference-api).

The Hugging Face Hub provides access to a wide range of pre-trained open source models for various AI tasks, which you
can directly use via Symfony AI's Hugging Face Platform Bridge.
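At their core, the examples all follow one pattern: create a platform, then invoke a model with a task. A minimal sketch, reusing the `env()` and `http_client()` helpers from the examples' `bootstrap.php`; the `use` statements are assumed from the bridge's namespace:

```php
<?php

use Symfony\AI\Platform\Bridge\HuggingFace\PlatformFactory;
use Symfony\AI\Platform\Bridge\HuggingFace\Task;

require_once dirname(__DIR__).'/bootstrap.php';

// Create the platform with your Hugging Face access token.
$platform = PlatformFactory::create(env('HUGGINGFACE_KEY'), httpClient: http_client());

// Invoke any Hub model by its ID, selecting the task explicitly.
$result = $platform->invoke('katanemo/Arch-Router-1.5B', 'The quick brown fox jumps over the lazy', [
    'task' => Task::TEXT_GENERATION,
]);
```

The individual example scripts in this directory differ mainly in the model, the input type (text, image, audio), and the selected `Task` constant.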

## Getting Started

Hugging Face offers a free tier for their Inference API, which you can use to get started. You need to create an
account on [Hugging Face](https://huggingface.co/join), generate an
[access token](https://huggingface.co/settings/tokens), and add it to the `.env.local` file in the root of the
examples directory as `HUGGINGFACE_KEY`.

```bash
echo 'HUGGINGFACE_KEY=hf_your_access_key' >> .env.local
```

Unlike other platforms, Hugging Face provides close to 50,000 models for various AI tasks, which enables you to
easily try out different, specialized models for your use case. Common use cases can be found in this examples directory.

## Running the Examples

You can run an example by executing the following command:

```bash
# Run all examples with the runner:
./runner huggingface

# Or run a specific example standalone, e.g., object detection:
php huggingface/object-detection.php
```

## Available Models

When running the examples, you might find that some models are unavailable and encounter an error like:

```
Model, provider or task not found (404).
```

This can happen when a model pre-selected in the examples is no longer available or not "warmed up" on
Hugging Face's side. You can switch to another model by updating the model name in the example script.

To find available models for a specific task, you can check out the [Hugging Face Model Hub](https://huggingface.co/models)
and filter by the desired task, or you can use the `huggingface/_model.php` script.

### Listing Available Models

List _all_ models:

```bash
php huggingface/_model.php ai:huggingface:model-list
```
The listing is limited to 1000 results by default.

Limit models to a specific _task_, e.g., object-detection:

```bash
php huggingface/_model.php ai:huggingface:model-list --task=object-detection
```

Limit models to a specific _provider_, e.g., "hf-inference":

```bash
# Single provider:
php huggingface/_model.php ai:huggingface:model-list --provider=hf-inference

# Multiple providers:
php huggingface/_model.php ai:huggingface:model-list --provider=sambanova,novita
```

Search for models matching a specific term, e.g., "gpt":

```bash
php huggingface/_model.php ai:huggingface:model-list --search=gpt
```

List only currently _warm_ models:

```bash
php huggingface/_model.php ai:huggingface:model-list --warm
```

You can combine the task and provider filters, or the task and warm filters, but not the provider and warm filters:

```bash
# Combine provider and task:
php huggingface/_model.php ai:huggingface:model-list --provider=hf-inference --task=object-detection

# Combine task and warm:
php huggingface/_model.php ai:huggingface:model-list --task=object-detection --warm

# Search for warm gpt model for text-generation:
php huggingface/_model.php ai:huggingface:model-list --warm --task=text-generation --search=gpt
```
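The listing command is a thin wrapper around the bridge's `ApiClient`, so the same filters are available programmatically. A sketch based on the `getModels()` signature introduced in this PR, again using the examples' `http_client()` helper:

```php
<?php

use Symfony\AI\Platform\Bridge\HuggingFace\ApiClient;

require_once dirname(__DIR__).'/bootstrap.php';

$apiClient = new ApiClient(http_client());

// Warm text-generation models matching "gpt", equivalent to the last CLI call above.
$models = $apiClient->getModels(task: 'text-generation', search: 'gpt', warm: true);

echo \count($models)." models found\n";
```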

### Model Information

To get detailed information about a specific model, you can use the `huggingface/_model.php` script:

```bash
php huggingface/_model.php ai:huggingface:model-info google/vit-base-patch16-224

Hugging Face Model Information
==============================

Model: google/vit-base-patch16-224
----------- -----------------------------
ID google/vit-base-patch16-224
Downloads 2985836
Likes 889
Task image-classification
Warm yes
----------- -----------------------------

Inference Provider:
----------------- -----------------------------
Provider hf-inference
Status live
Provider ID google/vit-base-patch16-224
Task image-classification
Is Model Author no
----------------- -----------------------------
```

It is important to understand what you can use a model for and on which providers it is available.
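The model-info output shown above maps directly onto `ApiClient::getModel()`, which this PR documents with an explicit array shape. A sketch of reading it programmatically:

```php
<?php

use Symfony\AI\Platform\Bridge\HuggingFace\ApiClient;

require_once dirname(__DIR__).'/bootstrap.php';

$apiClient = new ApiClient(http_client());

// Returns id, downloads, likes, pipeline_tag, inference and
// inferenceProviderMapping, per the phpdoc on getModel().
$info = $apiClient->getModel('google/vit-base-patch16-224');

echo $info['id'].' handles task: '.($info['pipeline_tag'] ?? 'n/a')."\n";
```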
48 changes: 0 additions & 48 deletions examples/huggingface/_model-listing.php

This file was deleted.

27 changes: 27 additions & 0 deletions examples/huggingface/_model.php
@@ -0,0 +1,27 @@
<?php

/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <fabien@symfony.com>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/

use Symfony\AI\Platform\Bridge\HuggingFace\ApiClient;
use Symfony\AI\Platform\Bridge\HuggingFace\Command\ModelInfoCommand;
use Symfony\AI\Platform\Bridge\HuggingFace\Command\ModelListCommand;
use Symfony\Component\Console\Application;

require_once dirname(__DIR__).'/bootstrap.php';

$apiClient = new ApiClient(http_client());

$app = new Application('Hugging Face Model Commands');
$app->addCommands([
new ModelListCommand($apiClient),
new ModelInfoCommand($apiClient),
]);

$app->run();
2 changes: 1 addition & 1 deletion examples/huggingface/object-detection.php
@@ -17,7 +17,7 @@

$platform = PlatformFactory::create(env('HUGGINGFACE_KEY'), httpClient: http_client());

$image = Image::fromFile(dirname(__DIR__, 2).'/fixtures/image.jpg');
$image = Image::fromFile(dirname(__DIR__, 2).'/fixtures/accordion.jpg');
$result = $platform->invoke('facebook/detr-resnet-50', $image, [
'task' => Task::OBJECT_DETECTION,
]);
4 changes: 2 additions & 2 deletions examples/huggingface/table-question-answering.php
@@ -19,12 +19,12 @@
$input = [
'query' => 'select year where city = beijing',
'table' => [
'year' => [1896, 1900, 1904, 2004, 2008, 2012],
'year' => ['1896', '1900', '1904', '2004', '2008', '2012'],
'city' => ['athens', 'paris', 'st. louis', 'athens', 'beijing', 'london'],
],
];

$result = $platform->invoke('microsoft/tapex-base', $input, [
$result = $platform->invoke('google/tapas-base-finetuned-wtq', $input, [
'task' => Task::TABLE_QUESTION_ANSWERING,
]);

2 changes: 1 addition & 1 deletion examples/huggingface/text-generation.php
@@ -16,7 +16,7 @@

$platform = PlatformFactory::create(env('HUGGINGFACE_KEY'), httpClient: http_client());

$result = $platform->invoke('gpt2', 'The quick brown fox jumps over the lazy', [
$result = $platform->invoke('katanemo/Arch-Router-1.5B', 'The quick brown fox jumps over the lazy', [
'task' => Task::TEXT_GENERATION,
]);

8 changes: 8 additions & 0 deletions fixtures/README.md
@@ -0,0 +1,8 @@
# Fixture Licenses

For testing multi-modal features, the repository contains binary media content, with the following owners and licenses:

* `tests/Fixture/accordion.jpg`: Jefferson Lucena, Creative Commons, see [pexels.com](https://www.pexels.com/photo/man-playing-accordion-10153219/)
* `tests/Fixture/audio.mp3`: davidbain, Creative Commons, see [freesound.org](https://freesound.org/people/davidbain/sounds/136777/)
* `tests/Fixture/document.pdf`: Chem8240ja, Public Domain, see [Wikipedia](https://en.m.wikipedia.org/wiki/File:Re_example.pdf)
* `tests/Fixture/image.jpg`: Chris F., Creative Commons, see [pexels.com](https://www.pexels.com/photo/blauer-und-gruner-elefant-mit-licht-1680755/)
Binary file added fixtures/accordion.jpg
66 changes: 64 additions & 2 deletions src/platform/src/Bridge/HuggingFace/ApiClient.php
@@ -11,6 +11,7 @@

namespace Symfony\AI\Platform\Bridge\HuggingFace;

use Symfony\AI\Platform\Exception\RuntimeException;
use Symfony\AI\Platform\Model;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Contracts\HttpClient\HttpClientInterface;
@@ -27,17 +28,78 @@ public function __construct(
}

/**
* @return array{
* id: string,
* downloads: int,
* likes: int,
* pipeline_tag: string|null,
* inference: string|null,
* inferenceProviderMapping: array<string, array{
* status: 'live'|'staging',
* providerId: string,
* task: string,
* isModelAuthor: bool,
* }>|null,
* }
*/
public function getModel(string $modelId): array
{
$result = $this->httpClient->request('GET', 'https://huggingface.co/api/models/'.$modelId, [
'query' => [
'expand' => ['downloads', 'likes', 'pipeline_tag', 'inference', 'inferenceProviderMapping'],
],
]);

$data = $result->toArray(false);

if (isset($data['error'])) {
throw new RuntimeException(\sprintf('Error fetching model info for "%s": "%s"', $modelId, $data['error']));
}

return $data;
}

/**
* @param ?string $provider Filter by inference provider (see Provider::*)
* @param ?string $task Filter by task (see Task::*)
* @param ?string $search Search term to filter models by
* @param bool $warm Filter for models with warm inference available
*
* @return Model[]
*/
public function models(?string $provider, ?string $task): array
public function getModels(?string $provider = null, ?string $task = null, ?string $search = null, bool $warm = false): array
{
$result = $this->httpClient->request('GET', 'https://huggingface.co/api/models', [
'query' => [
'inference_provider' => $provider,
'pipeline_tag' => $task,
'search' => $search,
...$warm ? ['inference' => 'warm'] : [],
],
]);

return array_map(fn (array $model) => new Model($model['id']), $result->toArray());
$data = $result->toArray(false);

if (isset($data['error'])) {
throw new RuntimeException(\sprintf('Error fetching models: "%s"', $data['error']));
}

return array_map($this->convertToModel(...), $data);
}

/**
* @param array{
* id: string,
* pipeline_tag?: string,
* } $data
*/
private function convertToModel(array $data): Model
{
return new Model(
$data['id'],
options: [
'tags' => isset($data['pipeline_tag']) ? [$data['pipeline_tag']] : [],
],
);
}
}