Conversation

jjmachan (Member) commented on Jan 28, 2024

Unified the calls to the LLM, embeddings, and json_loader with the following logic:

# requires: import asyncio; from functools import partial
if is_async:
    # await the native async implementation directly
    return await self._asafe_load(text=text, llm=llm, callbacks=callbacks)
else:
    # run the blocking sync implementation in an executor
    # so it does not block the event loop
    loop = asyncio.get_event_loop()
    safe_load = partial(
        self._safe_load, text=text, llm=llm, callbacks=callbacks
    )
    return await loop.run_in_executor(
        None,
        safe_load,
    )
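
For context, here is a minimal, self-contained sketch of the same dispatch pattern; the Loader class and its _safe_load/_asafe_load methods are illustrative stand-ins, not the actual Ragas internals:

import asyncio
import json
from functools import partial

class Loader:
    """Illustrative stand-in for the Ragas json loader; not the real class."""

    def _safe_load(self, text: str) -> dict:
        # blocking, synchronous parse (stand-in for the real sync path)
        return json.loads(text)

    async def _asafe_load(self, text: str) -> dict:
        # native async parse (stand-in for the real async path)
        return json.loads(text)

    async def safe_load(self, text: str, is_async: bool = True) -> dict:
        if is_async:
            # await the async implementation directly
            return await self._asafe_load(text=text)
        # push the blocking implementation to a thread so the
        # event loop is never blocked
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, partial(self._safe_load, text=text))

print(asyncio.run(Loader().safe_load('{"ok": true}', is_async=False)))  # {'ok': True}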

LLM

async def generate(
	self,
	prompt: PromptValue,
	n: int = 1,
	temperature: float = 1e-8,
	stop: t.Optional[t.List[str]] = None,
	callbacks: Callbacks = [],
	is_async: bool = True,
) -> LLMResult:
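
For illustration only, a hedged caller sketch: llm is assumed to be a concrete BaseRagasLLM and prompt a PromptValue built elsewhere, and the result access assumes the LangChain-style LLMResult layout.

async def ask(llm, prompt) -> str:
    # The caller awaits either way; is_async only selects the execution path.
    result = await llm.generate(prompt, n=1, is_async=True)   # native async call
    # with is_async=False the same call would run the sync client in an executor
    return result.generations[0][0].text  # List[List[Generation]] layout (assumption)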

Embeddings

async def embed_texts(
	self, texts: List[str], is_async: bool = True
) -> t.List[t.List[float]]:
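
Similarly, a hedged sketch for the embeddings entry point, assuming embeddings is a concrete Ragas embeddings wrapper:

async def embed_corpus(embeddings, texts: list) -> list:
    # One awaitable entry point; the sync branch is off-loaded to an
    # executor internally, so the event loop stays responsive either way.
    vectors = await embeddings.embed_texts(texts, is_async=False)
    assert len(vectors) == len(texts)  # one float vector per input text
    return vectors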

JSON Load

async def safe_load(
	self,
	text: str,
	llm: BaseRagasLLM,
	callbacks: Callbacks = None,
	is_async: bool = True,
):
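
And a caller-side sketch for the loader; json_loader and llm are assumed to be instances available in the calling scope:

async def parse_llm_output(json_loader, raw_text: str, llm):
    # Same unified contract as above: always await, pick the path via is_async.
    return await json_loader.safe_load(text=raw_text, llm=llm, is_async=True)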

jjmachan requested a review from shahules786 on January 28, 2024.
shahules786 merged commit a04ac58 into explodinggradients:main on January 28, 2024.
jjmachan deleted the feat/full_async branch on January 29, 2024.