A core promise of LLMs is the ability to answer questions and solve tasks of arbitrary complexity over an arbitrary number of data sources. The world has started to shift from simple RAG stacks, which are mostly good for answering pointed questions, to agents that can reason more autonomously over a diverse set of inputs, interleaving retrieval and tool use to produce sophisticated outputs.
Building reliable multi-agent systems is challenging. There is a core question of developer ergonomics and production deployment: what makes sense outside a notebook setting? In this talk we outline some core building blocks for building advanced research assistants:
1. Advanced data and retrieval modules
2. Advanced agent workflows that balance dynamic agent reasoning with constraints
3. A service architecture that can run agents in production
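The second building block, balancing dynamic agent reasoning with constraints, can be illustrated with a minimal sketch: an agent loop that interleaves retrieval and tool calls under a hard step budget. All names here (`retrieve`, `TOOLS`, `run_agent`, the toy corpus) are hypothetical illustrations, not the API of any particular framework.

```python
# Hypothetical sketch: an agent loop that interleaves retrieval and tool
# use, constrained by a hard step budget. Names are illustrative only.
from typing import Callable

# Toy "data source": maps keywords to retrieved snippets.
CORPUS = {
    "revenue": "Q3 revenue was 120",
    "costs": "Q3 costs were 80",
}

def retrieve(query: str) -> str:
    """Return the first snippet whose key appears in the query."""
    for key, snippet in CORPUS.items():
        if key in query:
            return snippet
    return ""

# Toy tool registry: name -> callable.
TOOLS: dict[str, Callable[[int, int], int]] = {
    "subtract": lambda a, b: a - b,
}

def run_agent(question: str, max_steps: int = 4) -> int:
    """Interleave retrieval steps and a tool call, bounded by max_steps."""
    notes: list[int] = []
    for term in ("revenue", "costs"):
        if max_steps <= 0:
            raise RuntimeError("step budget exhausted")
        snippet = retrieve(term)
        notes.append(int(snippet.split()[-1]))  # pull the number out
        max_steps -= 1
    # Final step: call a tool on the retrieved facts.
    return TOOLS["subtract"](notes[0], notes[1])

print(run_agent("What was Q3 profit?"))  # 120 - 80 -> 40
```

The step budget is the "constraint" side of the balance: the agent is free to decide which retrievals and tools to invoke, but the loop cannot run unboundedly, which is one simple way to keep dynamic reasoning deployable.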
Space for this event is limited, so please register if you plan to attend.
*** Exact location details will be shared via email ***
For questions, email events@wandb.com