<div><div dir="auto">I use ChatGPT, Claude and Gemini daily, but not for writing code. (I don’t even use refactorings much!) I think the reason is that Smalltalk systems tend to be very simple and small, and I often prefer to go over the code myself and review it and tweak it carefully, and I enjoy doing that. Also, the time I spend actually writing code is very small; I spend much more time reading code and thinking about it rather than writing it. So I’m not inclined to look for automated code writing (or transforming) tools.</div><div dir="auto"><br></div><div dir="auto">But as I said, I use the chat bots daily. Sometimes I use them as rubber ducks (<a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging" style="font-size:inherit" target="_blank">https://en.wikipedia.org/wiki/Rubber_duck_debugging</a><span style="font-size:inherit">), in the sense that I feel I extract value from the process of explaining a problem to them, and I don’t even care much about their response. Other times I find them useful to quickly go over conventions about UI design, which helps me avoid reinventing the wheel and makes my UIs more consistent with existing UIs. Or I ask them how different pieces of software approach a problem or implement something, which saves me the time of reading lots of documentation or source code (with the caveat that sometimes they hallucinate, and ultimately I might have to go and read the documentation or source code anyway).
So I find them very useful while coding, but not for writing code.</span></div><div dir="auto"><span style="font-size:inherit"><br></span></div><div dir="auto"><span style="font-size:inherit"><div style="font-size:inherit" dir="auto"><span style="font-size:inherit;font-style:normal;font-weight:400;letter-spacing:normal;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline!important;background-color:rgba(0,0,0,0);border-color:rgb(0,0,0);color:rgb(0,0,0)">And I share Ken’s concerns about the impact on the environment. And it bothers me a lot that big corporations own them, and that they are ripping all our content to train them. The world is going backwards: the open internet for sharing ideas between humans seems to be dying, and we’re moving towards centralization, control and enshittification, and away from human freedom.</span></div><br></span></div><div dir="auto"><span style="font-size:inherit">That said, I expect to be surprised sometime in the future. I know people are working on tools to integrate LLMs with Smalltalk, and they might create something amazing and useful.</span></div></div><div><div dir="auto"><br></div><div dir="auto">Cheers,</div><div dir="auto">Luciano</div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jan 5, 2026 at 02:31 Michał Olszewski via Cuis-dev <<a href="mailto:cuis-dev@lists.cuis.st" target="_blank">cuis-dev@lists.cuis.st</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><u></u>
<div>
<p>Hi all,</p>
<p>I'd like to start a loose discussion around a trend that has been
happening over the past two years, namely using LLM agents for
rapidly building prototypes and applications alike (infamously
known as "vibe coding" if you don't know what you're doing :); a
more neutral term is "AI-assisted workflow"). The "state of the
art" has advanced to the point where it's possible to generate,
refactor and document entire codebases without breaking a sweat,
using multi-agent workflows, MCP servers, task-oriented
instructions etc. - see the Claude Code (Sonnet 4.5, Opus 4.5)
ecosystem for an example [<b>1</b>].</p>
<p>Since Smalltalk environments are quite walled gardens (code
pretty much lives in the binary image, with attempts from Cuis and
others to store packages in textual format), there hasn't been much
motion towards integrating LLM workflows with the internal
tooling, as it requires a dedicated communication protocol (any
packages for that already? :)) and, besides that, there hasn't
been much opportunity to train on large chunks of Smalltalk sources.</p>
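<p>For concreteness, the "dedicated communication protocol" could be as thin as newline-delimited JSON requests that the image answers with reflection data. Here is a minimal sketch in Python; everything in it (the <code>implementorsOf</code> tool name, the message shape, the fake image standing in for a live Smalltalk system) is hypothetical, not an existing package:</p>

```python
import json

# Hypothetical image-side data. A real bridge would query the live
# Smalltalk image; a tiny in-memory "system" stands in here.
FAKE_IMAGE = {
    "Collection>>do:": "do: aBlock ...",
    "Interval>>collect:": "collect: aBlock ...",
}

def handle_request(line):
    """Answer one newline-delimited JSON request with reflection data."""
    req = json.loads(line)
    if req.get("tool") == "implementorsOf":
        selector = req["args"]["selector"]
        # Return the method names whose selector matches the query.
        result = [name for name in FAKE_IMAGE
                  if name.endswith(">>" + selector)]
        return json.dumps({"id": req["id"], "result": result})
    return json.dumps({"id": req["id"], "error": "unknown tool"})

# An agent asking the image: who implements #do:?
request = json.dumps(
    {"id": 1, "tool": "implementorsOf", "args": {"selector": "do:"}})
print(handle_request(request))  # → {"id": 1, "result": ["Collection>>do:"]}
```

<p>The point of the sketch is only that the agent never needs the full source text in its context window; it asks the system and gets back just the relationships it needs.</p>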
<p>Open-ended questions (with my opinion on each of them):</p>
<ul>
<li>given proper integration (fine-tuning, a dedicated package
for interfacing, a set of human-written instructions etc.),
what do you think about using LLM agents for: 1) rapid building
of prototypes or entire applications, 2) progress verification,
e.g. whether an implementation matches its functionality spec,
3) knowledge finding and example generation? For 1) and 2) see
the director-implementor pattern [<b>2</b>].</li>
<li>do you think Smalltalk-like systems are more suitable for LLMs
than file-based languages? The tight integration between tools
and system is already there - there is no need to implement
heavy MCP servers or RAG, just ask/explore the system for the
answer! There is also the question of token usage - context
windows don't need to store entire text blocks anymore, only
the relationships provided by the tooling.</li>
<li>given the above, would local, task-oriented LLMs provide a
first-class experience for us, just as one-size-fits-all models
do for the broader world?</li>
</ul>
<p>References:<br>
</p>
<ol>
<li><a href="https://www.anthropic.com/engineering/claude-code-best-practices" target="_blank">https://www.anthropic.com/engineering/claude-code-best-practices</a></li>
<li><a href="https://github.com/maxim-ist/elixir-architect/blob/main/skills/elixir-architect/SKILL.md" target="_blank">https://github.com/maxim-ist/elixir-architect/blob/main/skills/elixir-architect/SKILL.md</a></li>
</ol>
<p>Cheers,<br>
Michał</p></div><div>
</div>
-- <br>
Cuis-dev mailing list<br>
<a href="mailto:Cuis-dev@lists.cuis.st" target="_blank">Cuis-dev@lists.cuis.st</a><br>
<a href="https://lists.cuis.st/mailman/listinfo/cuis-dev" rel="noreferrer" target="_blank">https://lists.cuis.st/mailman/listinfo/cuis-dev</a><br>
</blockquote></div></div>
</div>