[Cuis-dev] [Loose Discussion] LLM agents in Smalltalk

Michał Olszewski miolszewski at outlook.com
Sun Jan 4 11:30:52 PST 2026


Hi all,

I'd like to start a loose discussion around a trend of the past two 
years, namely using LLM agents for rapidly building prototypes and 
applications alike (infamously known as "vibe coding" when you don't 
know what you're doing :); the more neutral term is "AI-assisted 
workflow"). The state of the art has advanced to the point where it's 
possible to generate, refactor and document entire codebases without 
breaking a sweat, using multi-agent workflows, MCP servers, 
task-oriented instructions etc. - see the Claude Code (Sonnet 4.5, 
Opus 4.5) ecosystem for example [*1*].

Since Smalltalk environments are quite walled gardens (code pretty 
much lives in the binary image, with attempts from Cuis and others to 
store packages in textual format), there hasn't been much motion 
towards integrating LLM workflows with the internal tooling: it 
requires a dedicated communication protocol (any packages for that 
already? :)), and besides that, there hasn't been an opportunity to 
train on large chunks of ST sources.
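On the "dedicated communication protocol" point, here is a minimal 
sketch of what the agent-facing side could look like. Everything in it 
(the tool names, the JSON framing, the toy class index) is hypothetical 
and stands in for queries a real package would answer from the live 
image:

```python
import json

# Toy stand-in for a live image: class name -> selectors it implements.
# A real integration would answer these queries from the image itself.
FAKE_IMAGE = {
    "OrderedCollection": ["add:", "removeFirst", "do:"],
    "Dictionary": ["at:put:", "at:ifAbsent:", "do:"],
}

def make_request(request_id, tool, params):
    """Frame a tool call as a JSON message (loosely JSON-RPC shaped)."""
    return json.dumps({"id": request_id, "tool": tool, "params": params})

def dispatch(raw):
    """Answer a framed request against the toy index."""
    req = json.loads(raw)
    if req["tool"] == "selectorsOf":
        result = FAKE_IMAGE.get(req["params"]["class"], [])
    elif req["tool"] == "implementorsOf":
        sel = req["params"]["selector"]
        result = [c for c, sels in FAKE_IMAGE.items() if sel in sels]
    else:
        result = None
    return {"id": req["id"], "result": result}

print(dispatch(make_request(1, "implementorsOf", {"selector": "do:"})))
# -> {'id': 1, 'result': ['OrderedCollection', 'Dictionary']}
```

The point being: the agent never sees source files, only framed 
questions and answers about the image.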

Open-ended questions (with my opinion on each of them):

  * assuming proper integration existed (fine-tuning, a dedicated
    package for interfacing, a set of human-written instructions
    etc.), what do you think about using LLM agents for: 1) rapid
    building of prototypes or entire applications, 2) progress
    verification, e.g. checking whether an implementation matches a
    functionality spec, 3) knowledge finding and example generation?
    For 1) and 2) see the director-implementor pattern [*2*].
  * do you think Smalltalk-like systems are more suitable for LLMs
    than file-based languages? The tight tool-system integration is
    already there - there is no need to implement heavy MCP servers or
    RAG; just ask/explore the system for the answer! There is also the
    question of token usage - context windows no longer need to hold
    entire text blocks, only the relationships provided by the tooling.
  * given the above, could local, task-oriented LLMs provide a
    first-class experience for us, just as one-size-fits-all models do
    for the broader world?
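To make the token-usage point concrete, here is a toy comparison of 
"file-based" context versus "ask the image" context. The method sources 
below are rough approximations for illustration, not actual Cuis code:

```python
# Toy comparison: shipping whole method sources to the model vs.
# shipping only a relationship answer the tooling can expand on demand.
FULL_SOURCE = {
    "OrderedCollection>>do:":
        "do: aBlock\n\tfirstIndex to: lastIndex do:\n"
        "\t\t[:i | aBlock value: (array at: i)]",
    "Dictionary>>do:":
        "do: aBlock\n\tself valuesDo: aBlock",
    "Bag>>do:":
        "do: aBlock\n\tcontents keysAndValuesDo:\n"
        "\t\t[:k :n | n timesRepeat: [aBlock value: k]]",
}

def context_as_text(sources):
    """File-style context: concatenate whole method bodies."""
    return "\n\n".join(sources.values())

def context_as_relationships(sources):
    """Image-style context: only names the tooling can resolve later."""
    classes = [key.split(">>")[0] for key in sources]
    return "do: is implemented by: " + ", ".join(classes)

full = context_as_text(FULL_SOURCE)
rel = context_as_relationships(FULL_SOURCE)
print(len(rel), "<", len(full))  # the relationship answer is far shorter
```

Scale that up from three methods to a whole package and the difference 
in context-window pressure is exactly what the bullet above is after.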

References:

 1. https://www.anthropic.com/engineering/claude-code-best-practices
 2. https://github.com/maxim-ist/elixir-architect/blob/main/skills/elixir-architect/SKILL.md

Cheers,
Michał