Programming with LLMs
A deeper dive into what you can do when programming with LLMs that is harder to do in a chat interface.
Slides
Resources
- (15m) Choosing a model
- Overview of major providers: OpenAI, Anthropic, Google, Ollama (local models)
- Tradeoffs: capability, context length, speed, and cost
- Activity: ask the same question, changing one string to switch models, e.g. chat("openai"), chat("anthropic")
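The one-string switch above can be sketched as a small dispatcher. The chat() helper, provider registry, and model names below are illustrative stand-ins, not the API of any particular library; packages like chatlas expose per-provider classes (e.g. ChatOpenAI, ChatAnthropic) behind one shared interface, which is what makes this switch cheap.

```python
# Hypothetical sketch: switch LLM providers by changing one string.
# The model names are examples only; a real client would also read an
# API key and open a connection here.
PROVIDERS = {
    "openai": "gpt-4o-mini",
    "anthropic": "claude-3-5-sonnet",
    "google": "gemini-1.5-flash",
    "ollama": "llama3",
}

def chat(provider: str) -> dict:
    """Return a minimal client config for the named provider."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return {"provider": provider, "model": PROVIDERS[provider]}

# Same question, different backend: only one string changes.
for p in ("openai", "anthropic"):
    client = chat(p)
    print(client["provider"], client["model"])
```

Because everything else in the program stays the same, the activity doubles as a quick capability/cost comparison across providers.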
- (15m) Multi-modal input (vision, PDF)
- Activity: images of food and ask for recipes
- Activity: take a PDF of a recipe, turn it into markdown
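Under the hood, multi-modal input usually means the file is base64-encoded and embedded in the message payload. The helper below is a hand-rolled sketch of the common "data URL inside an image part" message shape, not any specific provider's schema; libraries like chatlas wrap this plumbing in helpers such as content_image_file.

```python
# Assumed message shape: a user message whose content mixes a text part
# and an inline base64-encoded image part (a widely used convention).
import base64

def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build one user message carrying text plus an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# For the recipe activity: attach a food photo and ask for a recipe.
msg = image_message("What recipe could I make with this?", b"\x89PNG fake bytes")
```

PDF input works the same way in spirit: the bytes travel inside the message, so a "PDF of a recipe to markdown" prompt is just a different part type plus a different instruction.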
- (15m) Structured output
- Explain ellmer::type_*() or a pydantic model in chatlas
- Activity: Extract rich data from the recipe PDF
- Note use_attribute_docstrings
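The structured-output pattern is: declare a schema, ask the model to reply with JSON matching it, then validate the reply into a typed object. In chatlas that schema is a pydantic model (where use_attribute_docstrings lets field docstrings become field descriptions), and in ellmer it is built from type_*() helpers; a stdlib dataclass stands in below so the sketch runs without dependencies, and Recipe and its fields are made up for the recipe activity.

```python
# Stand-in for a pydantic model: a dataclass schema plus a validation
# step over the model's JSON reply.
import json
from dataclasses import dataclass, fields

@dataclass
class Recipe:
    title: str
    servings: int
    ingredients: list

def parse_recipe(model_json: str) -> Recipe:
    """Validate a JSON reply against the Recipe schema, dropping extras."""
    data = json.loads(model_json)
    allowed = {f.name for f in fields(Recipe)}
    return Recipe(**{k: v for k, v in data.items() if k in allowed})

# A reply like the one a structured-output call might return:
reply = '{"title": "Pancakes", "servings": 4, "ingredients": ["flour", "milk", "egg"]}'
recipe = parse_recipe(reply)
```

Real structured-output APIs go further (they send the schema to the provider so the reply is constrained up front), but the declare/validate loop is the same.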
- (10m) Parallel/batch calls
- Activity: Extract recipe data in parallel or batch
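Since LLM calls are I/O-bound, the extraction activity fans out naturally over a thread pool. In the sketch below, extract_one is a placeholder for one real structured-extraction call per recipe PDF (the filenames and returned fields are invented); provider batch APIs are the higher-latency, lower-cost alternative to this client-side parallelism.

```python
# Client-side parallelism over many extraction calls.
from concurrent.futures import ThreadPoolExecutor

def extract_one(pdf_name: str) -> dict:
    """Placeholder for one structured-extraction call to a model."""
    return {"source": pdf_name, "title": pdf_name.removesuffix(".pdf").title()}

pdfs = ["pancakes.pdf", "soup.pdf", "salad.pdf"]

# map() preserves input order even though calls run concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract_one, pdfs))
```

Swapping extract_one for a real API call (with a retry wrapper and a rate limit) is all the activity needs.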