On April 9 I joined another Claude Code meetup in Copenhagen.
I always enjoy these meetups. It is good to be around fellow developers and engineers, and I love the energy of people sharing knowledge. This piece is partly a summary of what I saw there, and partly an extension of what I think is still missing.
When I talk about efficiency with AI, I mean simple things. Spending less money on tokens. Getting better results. Finishing tasks with less friction. A tool is useful when it clears the way. If it gets in the way, it is not helping.
What was discussed

There were four or five talks.
The first talk was about spec-driven development with Linear. That was interesting to me because we also use Linear at Dreamdata. The presenters were using the Linear Claude integration together with one of the open spec libraries. The idea was simple: start from a specification, let Claude create tasks and subtasks in Linear, and let the tooling move the work forward as those tasks get completed.
What was interesting there was the reaction in the room. Someone from the audience said they were not a fan, because it started sounding too close to older waterfall habits where everything has to be defined through a large specification up front. I understood that concern. The talk showed that the workflow is possible. It also raised the question of when that kind of structure helps and when it becomes heavy.
Another talk was about Jumbo CLI, built here in Copenhagen. As I understood it, it was trying to deal with the memory problem. Each new session starts close to zero, so you end up restating context and decisions again and again. The speaker framed it through short-term and long-term memory and showed a CLI approach to storing and recalling context depending on the task.
Another short talk was about building tools in Claude Code. The speaker described a path from a simple SKILL.md file to references and then to scripts. That is a useful pattern. If you keep repeating the same prompt and the model keeps doing roughly the same thing, it makes sense to abstract that behavior instead of paying for the same repetition over and over again.
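To make that progression concrete: a Claude Code skill starts as little more than a Markdown file with YAML frontmatter. Here is a minimal sketch in the spirit of the talk; the skill name, wording, and script path are my own illustration, not the speaker's actual files:

```markdown
---
name: population-report
description: Fetch population figures for a city and report year-over-year growth
---

# Population report

1. Call the public statistics API for the requested city and years.
2. Format the results as a small table.
3. Compute year-over-year growth as a percentage for each pair of years.

For the API call, use the script in scripts/fetch_population.py.
```

The path the speaker described is then just gradually moving the repeatable parts of those instructions out of the prompt and into referenced files and scripts.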
The demo there was about the population of Copenhagen. The tool called a public API, formatted the output, and showed year-over-year growth. The point was not the population number itself. The point was showing how a repeated prompt can become a reusable tool.
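The year-over-year part of that demo is a tiny calculation. A sketch with made-up numbers (the demo used a live public API, and these figures are illustrative, not real census data):

```python
def yoy_growth(populations: dict[int, int]) -> dict[int, float]:
    """Year-over-year growth in percent, keyed by the later year."""
    years = sorted(populations)
    return {
        year: round((populations[year] - populations[prev]) / populations[prev] * 100, 2)
        for prev, year in zip(years, years[1:])
    }

# Illustrative numbers only.
copenhagen = {2021: 638_000, 2022: 644_000, 2023: 653_000}
print(yoy_growth(copenhagen))
```

The logic is trivial, which is exactly the point of the talk: once the model has done this reliably a few times from a prompt, it is cheaper to freeze it into a script the tool can call.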
There were also broader talks about the latest models, and Paperclip came up as an orchestrator for running many agents in parallel.
What still feels missing

What still feels missing to me is that most of these talks remain disconnected from the regular workflow of a real organization.
A lot of these workflows look great for side projects, solo founders, or small AI-first teams. In those environments there is very little decision friction. You do not have to align with multiple departments. You do not have to deal with years of accumulated systems and process.
But many companies do not work like that. One team is in Linear. Another team is in Notion. Sales is in HubSpot or Salesforce. Design has its own tools and review process. When that is the reality, these demos stop feeling so complete.
That first talk with Linear is a good example. It works well if the company can centralize a lot of its work into Linear. But different teams need different tools, and Linear is a very technical, task-oriented system. The hard question is not whether Claude can create tasks there. The hard question is how the work stays coherent across the rest of the organization.
This is also why the idea that a feature should carry its own explanation is relevant here. We keep getting better at generating output inside one function, usually engineering, but we are still weak at carrying the explanation, context, and downstream artifacts across the company.
The capabilities here are still very interesting to me. What I keep thinking about is a clearer source of truth inside the organization, where decisions are made, comments are made, research is collected, and then downstream artifacts can be produced from that point. Documentation. Internal notes. Website changes. Support summaries. Sales context. Static sites updated through CI. Not blindly, of course. There still has to be a human in the loop. But the system should not force people to recreate the same explanation five times in five different tools.
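One concrete shape the CI part could take: a job that rebuilds the public site whenever the source-of-truth docs change, gated behind a protected environment so a human approves before anything deploys. This is my own sketch, not something shown at the meetup; the paths and names are invented:

```yaml
# Rebuild the static site when the canonical docs change.
name: publish-docs
on:
  push:
    branches: [main]
    paths: ["docs/**"]

jobs:
  publish:
    runs-on: ubuntu-latest
    # A protected environment with required reviewers keeps a human in the loop.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build_site.sh   # invented path, stands in for your generator
      - run: ./scripts/deploy_site.sh  # invented path, stands in for your deploy step
```

The mechanism matters less than the direction: artifacts flow outward from one source of truth instead of being rewritten by hand in five tools.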
For that to happen, companies will probably have to simplify parts of their tooling and be more deliberate about where knowledge lives.
The main thing still matters

The main point I forgot to say clearly is this: if you want to get the most efficiency out of AI, especially on existing codebases, your existing code quality still matters a lot.
If the codebase is structured well, both people and AI become more efficient. If the system follows good design principles, if responsibilities are clear, if files and folders make sense, if functions are broken out properly, if code is reusable, if the architecture reflects intent, then there is less waste for everyone.
You can call it SOLID. You can call it DRY. You can call it DDD or simply good engineering. The label is not the important part. The important part is that a well-structured system is easier to scan, easier to reason about, and easier to change.
That applies to AI directly. A model in a clean system has to read fewer files, make fewer guesses, and deal with less hidden coupling. The same is true for humans.
The ability to break a problem into smaller problems is still one of the most important engineering skills. If you give an AI a vague prompt like “build me a feature” inside a messy codebase with weak team guidelines, then of course you are going to get uneven results.
What has worked best for me and my colleagues is much more incremental. We book a meeting room, sit together for thirty minutes, an hour, sometimes more, and go through the spec and the plan. We pressure-test it. What is missing? What is unclear? What have we not thought through yet? Then we execute in small steps.
That workflow has been extremely effective for us. Small pushes to production. One agent as a companion. Incremental execution. Fast feedback.
And the old workflow fundamentals still matter. How often do you push code? How long does it sit on your machine? How big are the pull requests? How quickly do reviews happen? If a PR takes three days to review because it is too large, that is still a bad system with or without AI.
Small PRs are easier for people to review. They are easier for AI to review too. Small changes are easier to verify, easier to trust, and easier to ship. In my experience, this is still one of the best ways to improve a team workflow, with or without AI.
A lot of this is still just Markdown

Another thought I kept coming back to during the meetup is that many of these fancy tools still reduce to a bunch of Markdown files, prompt files, references, and scripts.
That does not make them useless. We use them every day. They help. But it does mean that a lot of the current wave still feels like we are doing the same thing in a more structured way, rather than reaching something fundamentally different.
What I would like to see is a cheap way to adapt models to our own data and our own way of working. I would love to build a knowledge base of my own work over a year, how I prompt, how I think, how I break problems down, how I review, and then eventually use that to adapt a model to my own habits and judgment.
That is much more interesting to me than yet another wrapper around prompt files.
The talk I still want to see

I am still waiting for a talk that shows how AI gets introduced into an existing company slowly and with some integrity.
Not only inside engineering. Across the organization.
How do you integrate it into existing systems? How do you keep security and review in place? How do you make sure it is not only an individual productivity boost, but part of a coherent workflow across teams?
Most meetups and videos still focus on solo people, tiny teams, or companies that started AI-first. Those cases are real, but they are the easy cases.
The harder case is the older company. The company with legacy systems, multiple teams, and accumulated complexity. That is where I think the next serious work is.
The agent is not the bottleneck. The organization is.