[Cuis-dev] [Loose Discussion] LLM agents in Smalltalk

Luciano Notarfrancesco luchiano at gmail.com
Sun Jan 4 23:41:00 PST 2026


I use ChatGPT, Claude and Gemini daily, but not for writing code. (I don’t
even use refactorings much!) I think the reason is that Smalltalk systems
tend to be very simple and small, and I often would rather go over the code
myself, review it and tweak it carefully, and I enjoy doing that. Also, the
time I spend actually writing code is very small; I spend much more time
reading code and thinking about it than writing it. So I’m not inclined to
look for automated code-writing (or code-transforming) tools.

But as I said, I use the chat bots daily. Sometimes I use them as rubber
ducks (https://en.wikipedia.org/wiki/Rubber_duck_debugging), in the sense
that I feel I extract value from the process of explaining a problem to
them, and I don’t even care much about their response. Other times I find
them useful for quickly going over conventions in UI design, which helps me
avoid reinventing the wheel and makes my UIs more consistent with existing
ones. Or I ask them how different pieces of software approach a problem or
implement something, which saves me the time of reading lots of
documentation or source code (with the caveat that they sometimes
hallucinate, and ultimately I might have to go and read the documentation
or source code anyway). So I find them very useful while coding, but not
for writing code.

And I share Ken’s concerns about the impact on the environment. It also
bothers me a lot that big corporations own them, and that they are ripping
all our content to train them. The world is going backwards: the open
internet for sharing ideas between humans seems to be dying, and we’re
moving towards centralization, control and enshittification, and away from
human freedom.

That said, I expect to be surprised sometime in the future. I know people
are working on tools to integrate LLMs with Smalltalk, and they might
create something amazing and useful.

Cheers,
Luciano

On Mon, Jan 5, 2026 at 02:31 Michał Olszewski via Cuis-dev <
cuis-dev at lists.cuis.st> wrote:

> Hi all,
>
> I'd like to start a loose discussion around a trend that has been happening
> in the past two years, namely using LLM agents for rapidly building
> prototypes and applications alike (infamously known as "vibe coding" if you
> don't know what you're doing :); a more neutral term is "AI-assisted
> workflow"). The "state of the art" has advanced to the point where it is
> possible to generate, refactor and document entire codebases without
> breaking a sweat, using multi-agent workflows, MCP servers, task-oriented
> instructions etc. - see the Claude Code (Sonnet 4.5, Opus 4.5) ecosystem
> for example [*1*].
>
> Since Smalltalk environments are quite walled gardens (code pretty much
> lives in the binary image, with attempts from Cuis and others to store
> packages in textual format), there hasn't been much motion towards
> integrating LLM workflows with the internal tooling, as it requires a
> dedicated communication protocol (any packages for that already? :)) and,
> besides that, there hasn't been much opportunity to train on large bodies
> of ST sources.
>
> Open-ended questions (with my opinion on each of them):
>
>    - given proper integration (fine-tuning, a dedicated package for
>    interfacing, a set of human-written instructions, etc.), what do you
>    think about using LLM agents for: 1) rapid building of prototypes or
>    entire applications, 2) progress verification, e.g. whether an
>    implementation matches its functionality spec, 3) knowledge finding and
>    example generation? For 1) and 2) see the director-implementor pattern
>    [*2*].
>    - do you think Smalltalk-like systems are more suitable for LLMs than
>    file-based languages? The tight integration between tools and system is
>    already there: there is no need to implement heavy MCP servers or RAG,
>    just ask/explore the system for the answer (see the sketch after this
>    list)! There is also the question of token usage: context windows no
>    longer need to hold entire text blocks, only the relationships provided
>    by the tooling.
>    - given the above, would local, task-oriented LLMs provide a first-class
>    experience for us, just as one-size-fits-all models do for the broader
>    world?
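>
>    As a concrete illustration of "just ask the system": the snippet below is
>    a hypothetical sketch of the kind of queries an agent-facing tool could
>    evaluate directly in the live image, with no file indexing or RAG layer
>    on top. These are standard reflection messages from the Squeak lineage;
>    exact selector names may differ slightly in Cuis.
>
>        | cls |
>        cls := Smalltalk at: #OrderedCollection.
>        cls comment.                       "class comment, for a summary"
>        cls selectors asSortedCollection.  "every message it implements"
>        cls sourceCodeAt: #add:.           "full source of one method"
>        cls allSubclasses collect: [:c | c name].  "who specializes it"
>        Smalltalk allClasses size.         "rough size of the whole system"
>
>    Each expression can be evaluated with print-it in a Workspace; an
>    integration package would mostly need to wrap such expressions behind
>    whatever communication protocol the agent speaks.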
>
> References:
>
>    1. https://www.anthropic.com/engineering/claude-code-best-practices
>    2. https://github.com/maxim-ist/elixir-architect/blob/main/skills/elixir-architect/SKILL.md
>
> Cheers,
> Michał
> --
> Cuis-dev mailing list
> Cuis-dev at lists.cuis.st
> https://lists.cuis.st/mailman/listinfo/cuis-dev
>