Server and Client
The entire Lumos API is also available in a server/client setup, so you can deploy it as a service and call the API remotely.
A separate service can provide useful isolation for:
- Running compute heavy book parsing operations
- A centralised AI server for your org
Deploy (Server)
Simply host the FastAPI server, with requests authenticated by an API key:
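As a sketch, assuming the server is exposed as an ASGI app (the `lumos.server:app` module path, the port, and the `LUMOS_API_KEY` environment variable are illustrative assumptions, not confirmed Lumos interfaces), deployment could look like:

```shell
# Install the package and an ASGI server (package names assumed)
pip install lumos uvicorn

# API key the server will require from clients (env var name is an assumption)
export LUMOS_API_KEY="your-secret-key"

# Serve the FastAPI app; the module path is an assumption
uvicorn lumos.server:app --host 0.0.0.0 --port 8000
```

In practice you would front this with your usual deployment tooling (Docker, a reverse proxy, TLS) rather than exposing uvicorn directly.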
Client SDK
Once deployed, you can access the service through the LumosClient, which fully mirrors the Python API. You can then run the same operations as you would locally:
```python
from pydantic import BaseModel

from lumos import LumosClient

# Constructor parameter names are illustrative; point the client at your
# deployed server and pass the API key it was configured with.
client = LumosClient(base_url="http://localhost:8000", api_key="your-secret-key")

class Response(BaseModel):
    steps: list[str]
    final_answer: str

client.call_ai(
    messages=[
        {"role": "system", "content": "You are a mathematician."},
        {"role": "user", "content": "What is 100 * 100?"},
    ],
    response_format=Response,
    model="gpt-4o-mini",
)
```
# Response(steps=['Multiply 100 by 100.', '100 * 100 = 10000.'], final_answer='10000')