
Releases: reworkd/tarsier

v0.6.0 - Microsoft OCR Support

13 Jun 00:48

Highlights πŸ”₯

  • Added support for the Azure OCR service; previously the only provider was AWS
  • Improved positioning of text chunks and fonts

What's Changed πŸ‘€

New Contributors ❀️

Full Changelog: v0.5.0...v0.6.0

v0.5.0 - Multiple Tag Types

05 Dec 18:07
b488eda

What's Changed

New Contributors

Full Changelog: v0.4.0...v0.5.0

v0.4.0 - Improved Tagging

15 Nov 05:52
ae5a749

πŸŽ‰ What's Changed

  • ✍️ Fix readme citation link by @Krupskis in #3
  • ✍️ Fix Citation Repository URL in Readme by @debanjum in #4
  • πŸš€ Remove Annotations and Tag All text elements (optionally) by @awtkns in #8
  • πŸ†‘ Make spans have red background with white text by @awtkns in #9

πŸ‘€ New Contributors

Full Changelog: v0.3.1...v0.4.0

v0.3.1 - Initial Release

11 Nov 19:52


πŸ™ˆ Vision utilities for web interaction agents πŸ™ˆ


πŸ”— Main site β€’ 🐦 Twitter β€’ πŸ“’ Discord

Announcing Tarsier

If you've tried using GPT-4(V) to automate web interactions, you've probably run into questions like:

  • How do you map LLM responses back into web elements?
  • How can you mark up a page so an LLM can better understand its action space?
  • How do you feed a "screenshot" to a text-only LLM?

At Reworkd, we found ourselves reusing the same utility libraries to solve these problems across multiple projects.
Because of this, we're now open-sourcing this simple utility library for multimodal web agents... Tarsier!
The video below demonstrates Tarsier usage by feeding a page snapshot into a LangChain agent and letting it take actions.

tarsier.mp4

How does it work?

Tarsier works by visually "tagging" interactable elements on a page via brackets + an id such as [1].
In doing this, we provide a mapping between elements and ids for GPT-4(V) to take actions upon.
We define interactable elements as buttons, links, or input fields that are visible on the page.
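
For illustration only (the tag numbers and XPaths below are made up, not real Tarsier output), the mapping returned alongside a tagged page might look roughly like this:

tag_to_xpath = {
    1: "//a[text()='Hacker News']",
    2: "//a[text()='new']",
    3: "//input[@name='q']",
}

When the LLM answers with something like "click [2]", the agent looks up tag 2 in this dictionary to find the concrete element to act on.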

Tarsier can also provide a purely textual representation of the page, which means it enables deeper interaction even for non-multimodal LLMs.
This matters given the performance issues of existing vision-language models.
To support this, Tarsier provides OCR utilities that convert a page screenshot into a whitespace-structured string an LLM without vision can understand.
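
As a rough, made-up sketch (not actual Tarsier output), that whitespace-structured text might look something like this, with spacing mirroring the on-screen layout:

[1] Hacker News      [2] new   [3] past   [4] comments            [5] login

 1. [6] Example article title                    (example.com)
        120 points by someone 3 hours ago  |  [7] 42 comments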

Usage

Visit our cookbook for agent examples using Tarsier.

Otherwise, basic Tarsier usage might look like the following:

import asyncio

from playwright.async_api import async_playwright
from tarsier import Tarsier, GoogleVisionOCRService

async def main():
    # Google Cloud service-account credentials for the Vision OCR service (fill in before running)
    google_cloud_credentials = {}

    ocr_service = GoogleVisionOCRService(google_cloud_credentials)
    tarsier = Tarsier(ocr_service)

    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        page = await browser.new_page()
        await page.goto("https://news.ycombinator.com")

        page_text, tag_to_xpath = await tarsier.page_to_text(page)

        print(tag_to_xpath)  # Mapping of tag ids to XPaths
        print(page_text)  # Text representation of the page


if __name__ == '__main__':
    asyncio.run(main())
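
From here, an agent loop would typically feed page_text to an LLM, parse the tag it picks, and resolve that tag back to an element via tag_to_xpath. A minimal, hypothetical sketch (continuing inside main() above, right after page_to_text; the chosen tag and the click are illustrative, not part of Tarsier's API):

        # Suppose the LLM, given page_text, decided to click tag 1.
        chosen_tag = 1
        xpath = tag_to_xpath[chosen_tag]

        # Playwright can locate elements directly by XPath.
        await page.click(f"xpath={xpath}")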

Supported OCR Services

Special shoutout to @KhoomeiK for making this happen! ❀️