{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Aleksandar Grbic",
  "home_page_url": "https://aleksandar.xyz/",
  "feed_url": "https://aleksandar.xyz/feed.json",
  "description": "Articles on software engineering, development workflows, and technology.",
  "language": "en",
  "authors": [
    {
      "name": "Aleksandar Grbic",
      "url": "https://aleksandar.xyz"
    }
  ],
  "items": [
    {
      "id": "https://aleksandar.xyz/blog/2026-04-14-how-to-do-effective-code-reviews",
      "url": "https://aleksandar.xyz/blog/2026-04-14-how-to-do-effective-code-reviews",
      "title": "How to do effective code reviews",
      "content_text": "import Resources from \"../../components/Resources.astro\";\n\n<Tldr>AI didn't create a new bottleneck. It exposed the process problems that were already there. The fix is the same as it was before: automate style, keep PRs small, review your own code first, and be kind.</Tldr>\n\nI talked about this a couple of years ago in [a video](https://www.youtube.com/watch?v=csEeefdFQaA). Back then I thought the points were obvious. Break your PRs down. Don't argue in comments. Review your own code before asking others to. Be kind. If you have been on enough teams, none of this should surprise you.\n\nLately I keep reading the same take online: PRs are the new bottleneck. People can generate so much code with AI now that reviews can't keep up. The process is holding us back.\n\nI don't buy it. The bottleneck was never how fast you could write code. It was always how you worked. PR size, task breakdown, how you communicate, how you treat each other in reviews. That was true before AI and it is true now. The only thing that changed is that it's easier to produce a huge mess faster.\n\nI have been doing code reviews for a long time. I have seen reviews that made people better and reviews that made people dread opening a pull request. I have seen a lot of reviews that were just noise pretending to be rigor.\n\nMost of the problems are not technical. They are human. That part has not changed.\n\n## Myth versus reality\n\n![A person posing on a pedestal while another sits unimpressed](/images/posts/review-myth-vs-reality-dark-transparent.webp)\n\nA lot of developers treat code reviews as a performance. They leave comments to show how much they know, how many edge cases they can spot. That's not reviewing code. That's showing off.\n\nYou are there to understand what your colleague has been working on. To learn the domain. To catch things they might have missed because they have been staring at the same code for three days. 
Not to prove anything about yourself.\n\nIf you are good at what you do, you uplift others. You help them. You do not put them down. Even when seniority requires you to push back, you do it with kindness, because we never know how our words affect others.\n\n## Set up your team before you review anything\n\n![A wrench tightening a gear](/images/posts/review-setup-your-team-dark-transparent.webp)\n\nCode reviews do not work well out of the box. You have to set up your team for them.\n\nThe biggest waste of time in reviews is arguing about style. Formatting. Naming conventions. Bracket placement. None of that belongs in a pull request discussion. It belongs in your tooling.\n\nSet up pre-commit hooks. Run linters, formatters, and whatever static analysis your language supports before code even leaves someone's machine. Then run them again in CI. If something would fail in your continuous integration pipeline, it should never leave a developer's local environment. If it does, the setup is wrong.\n\nOnce you remove style from the equation, reviews become about what actually matters: functionality.\n\nThe other thing that saves enormous time is agreeing on code principles as a team and writing them down. How do you structure modules? How do you write tests? Where do utility functions go? Do you use classes or plain functions? Every team answers these differently, and there are dozens of decisions like this.\n\nDocument them. Then when a discussion comes up during review, you point to the document. That is where the discussion ends. You do not relitigate the same thing every two weeks.\n\nThis also helps when onboarding new people. 
They read the conventions, understand how the team works, and can start contributing without guessing.\n\n## Focus on functionality, not style\n\n![A magnifying glass over code](/images/posts/review-focus-functionality-dark-transparent.webp)\n\nOnce the tooling handles formatting and the team has agreed on principles, your job as a reviewer is to focus on the feature itself.\n\nDoes it work? Does it deliver what the ticket described? Are there regressions? Are there edge cases the author might have missed?\n\nUse the review as a chance to learn what your colleagues are building and why. Stay in context. Know what features are being worked on across the team. If someone adds you as a reviewer and you have no idea what the feature is, that is partly on you.\n\nYou can still leave comments about small things. But frame them as opinions, not demands. \"I would probably do this differently\" is very different from requesting a change on something that is a matter of taste.\n\n## Stop arguing in GitHub comments\n\n![Two people talking face to face at a table](/images/posts/review-stop-arguing-comments-dark-transparent.webp)\n\nThis one I catch myself doing almost every month, and I have to actively pull myself out of it.\n\nIf a PR discussion goes past five messages back and forth, stop. Close the laptop. Call your colleague. Get on a video call. Sit together if you are in the same office.\n\nArguing in GitHub comments for two hours does not increase anyone's productivity. It widens the disagreement because text strips out tone and intention. One of you gets triggered, and from that point on you are not even discussing the code anymore. You are defending positions.\n\nIt also sets a bad example for the rest of the team. Other people read those threads. Long comment wars feel divisive. They make the process feel adversarial. 
And the resolution of a two-hour thread is almost never worth the combined time that went into it.\n\nIf something needs a discussion, have it in real time.\n\n## Use the right medium\n\n![A hand holding a phone showing a video call](/images/posts/review-right-medium-dark-transparent.webp)\n\nNot all reviews are the same, and not all of them should happen the same way.\n\nFor simple changes, comments work fine. For anything more complex, give the reviewer something to look at. Screenshots, videos, whatever helps them understand the change without having to pull the branch and run it locally.\n\nIf you did UI work, drop a screenshot of before and after right in the PR description. We do this on every visual change. A screenshot of how it looks now and a screenshot of how it looks with the change. The reviewer can instantly see what changed without reading a single line of code first. It takes thirty seconds to do and saves minutes of context-building on the other end.\n\nVideos help too, even for non-visual work. Record your screen for two minutes, walk through what you built, explain why you made certain decisions. Paste the link in the PR. It brings clarity that text never will.\n\nFor the big changes that could not be broken down further, pair programming is the way to go. Sit together, go through it, even if it takes a couple of hours. If the feature is big enough to justify two thousand lines of code, it is big enough to justify a focused review session.\n\n## Large PRs are the enemy\n\n![A towering stack of papers about to topple](/images/posts/review-large-prs-dark-transparent.webp)\n\nThere is a well-known paradox in code reviews. You push a twenty-line commit, someone leaves ten comments. You push two thousand lines, you get one comment: \"looks good.\"\n\nLarge PRs are overwhelming. People open the diff, see two hundred files, and give up. 
They rubber-stamp it because the alternative is spending half a day trying to understand something they do not have context for.\n\nBreaking tasks down into smaller chunks is the fix, and it is one of the skills developers are worst at. It matters more than most technical skills. Good task breakdown leads to small PRs. Small PRs lead to reviews that actually catch things.\n\nThis is exactly why the \"PRs are the new bottleneck\" take is wrong. People see AI generating code faster and conclude that reviews are what's slowing them down. But that was never the hard part. The hard part was always making sure the code is correct, that it fits the system, that somebody else on the team can look at it and understand what happened. None of that got faster because a machine typed it. If anything, it got harder, because now you can produce a three-thousand-line diff in an afternoon without even thinking about how to break it down.\n\n## Ship incrementally, review incrementally\n\n![A hand stacking building blocks one at a time](/images/posts/review-ship-incrementally-dark-transparent.webp)\n\nFeature flags make this practical. Hide the work behind a flag, push to production from day one, and build the feature in slices.\n\nSay you are building a dashboard with a handful of widgets. Do not build the whole thing and open one massive PR at the end. Push an empty dashboard page. Review it. Merge it behind the flag. Then add the first widget. Review, merge. Then the next. Each PR is small, focused, and easy to review. The feature grows in production, and anyone who needs to follow the progress can just toggle the flag and see where things stand.\n\nSame thing on the backend. You are building an API. First PR: a single endpoint that returns hello world. Merge it. Next: add authentication. Next: hook up the database and define the schema. Next: the actual business logic. 
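\n\nThe gate itself can stay boring. Here is a minimal sketch of the pattern as a shell function (the `NEW_DASHBOARD` variable and the function are hypothetical, purely to show the shape; real systems usually read flags from a config service or per-user store):\n\n```bash\n# Hypothetical flag gate: everything ships, the flag decides visibility.\nrender_home() {\n  if [ \"${NEW_DASHBOARD:-false}\" = \"true\" ]; then\n    echo \"new dashboard\"  # merged and deployed, hidden until the flag flips\n  else\n    echo \"old dashboard\"\n  fi\n}\n\nrender_home\n```\n\nEvery merged slice sits behind the same check, so reviewers get small diffs while users see nothing until the flag turns on.\n\n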
Each step is reviewable in isolation, each step works on its own, and at no point does someone open a diff with forty files and give up.\n\nThis keeps PRs small without artificial splitting. It gives stakeholders visibility into progress without you having to schedule a demo or write a status update. The work is always there, always accessible, always moving forward.\n\nIt also changes how the team thinks about task breakdown. When you know every slice has to be deployable on its own, you start breaking work down differently. You stop thinking in terms of \"the feature\" and start thinking in terms of the smallest useful increment you can ship next. That skill transfers to everything else.\n\n## When there is nothing to say\n\n![A thumbs up](/images/posts/review-nothing-to-say-dark-transparent.webp)\n\nCode reviews are not a requirement to leave a comment. If you look at a PR and everything looks good, approve it. You do not have to agree with every line. You do not have to find something to nitpick.\n\nIf there are no problems, approve and move on.\n\n## Review your own code first\n\n![A person looking into a mirror examining their reflection](/images/posts/review-own-code-first-dark-transparent.webp)\n\nBefore you open a pull request, review your own code.\n\nPush your branch. Create a draft PR. Open the diff view. Read through it yourself. Remove the debug statements, the commented-out code, the things you forgot to clean up. Do not make your colleagues spend their time catching things that you should have caught in thirty seconds.\n\nThis goes double when AI wrote some of the code. If you didn't write every line yourself, you owe it to your colleagues to at least read and understand the diff before you ask them to. You can't push code you don't fully understand and then expect someone else to make sense of it for you.\n\nOnce you have reviewed it yourself, run it through whatever AI code review tools you have access to. There are plenty of them now. 
Let them catch the obvious stuff before a human ever looks at it. Unused imports, potential bugs, missing edge cases, inconsistent naming. If a machine can flag it in seconds, there is no reason a colleague should be spending their time on it.\n\nYour colleagues' time is the most expensive resource on the team. By the time someone opens your PR, it should have already passed your own eyes and whatever automated reviewers are available. What is left for the human review is the stuff that actually needs a human: architecture decisions, business logic, whether this fits the broader system.\n\n## Hierarchy does not make you right\n\n![A figure with a crown leaning down to listen to a smaller figure making a point](/images/posts/review-hierarchy-dark-transparent.webp)\n\nIf you are a senior developer, a tech lead, a CTO, your title does not make your point stronger. Having a role does not mean other people cannot have a better argument.\n\nI say this as a tech lead myself. Build a culture where the best idea wins, not the highest title. If a junior developer has a valid point, that point stands on its own. If you use your seniority to shut down discussion, people will stop bringing ideas to you. They will stop pushing back. And your team will get worse.\n\n## Exceptions will happen\n\n![A ruler snapped in half](/images/posts/review-exceptions-dark-transparent.webp)\n\nEverything I have said here applies most of the time. Not all of the time.\n\nThere will be Fridays when something breaks and you have to push a fix without a proper review. There will be emergencies where someone has to approve their own PR. There will be large changes that could not be broken down further.\n\nThat is fine. Especially in startups. But the goal is to make those exceptions rare. The baseline matters. 
The habits matter.\n\n## Be kind\n\n![Two open hands reaching toward each other](/images/posts/review-be-kind-dark-transparent.webp)\n\nIf I had to reduce everything about code reviews to one sentence, it would be this: be kind.\n\nBe kind in how you phrase your comments and how you talk about someone else's work. Code was written in a specific context, under specific constraints, with whatever information was available at the time. Do not trash it. Do not call it bad.\n\nBecause if you do, nobody will want to talk to you. People will be reluctant to ask you questions. They will dread your reviews.\n\nCode reviews exist to make your team better. To catch mistakes before they reach users. They are not a competition.\n\nShow up with humility. Remove your ego.\n\n<Resources links={[\n  { title: \"How to do effective code reviews (video)\", url: \"https://www.youtube.com/watch?v=csEeefdFQaA\", note: \"The original video this article is based on\" },\n]} />",
      "date_published": "2026-04-14T06:42:38.000Z",
      "tags": [
        "Engineering",
        "Leadership"
      ],
      "image": "https://aleksandar.xyz/og/2026-04-14-how-to-do-effective-code-reviews.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-04-13-a-practical-guide-to-cutting-claude-code-token-usage-by-50-plus",
      "url": "https://aleksandar.xyz/blog/2026-04-13-a-practical-guide-to-cutting-claude-code-token-usage-by-50-plus",
      "title": "A practical guide to cutting Claude Code token usage by 50%+",
      "content_text": "import Resources from \"../../components/Resources.astro\";\n\n<Tldr>Tokens leak from four places: input bloat, output bloat, thinking budget, and stale context. A CLAUDE.md with strict rules, capped thinking tokens, early compaction, and RTK for command output compression can cut usage by half or more. But none of it helps if the codebase is a mess.</Tldr>\n\nIf you use Claude Code daily, you already know the problem. Sessions burn through tokens faster than you'd expect, long refactors get cut off mid-thought, and regardless of what plan you're on, the only thing under your control is how much context you generate and consume per task.\n\nThis is a working guide. Everything below is something you can apply today, ordered by effort-to-impact. No meme stack, no hype. Just the levers that actually move the meter.\n\n## The mental model: four places tokens leak\n\nBefore reaching for tools, it helps to know where your budget actually goes. In a typical Claude Code session, tokens disappear into four buckets.\n\n*Input bloat.* Command output, file contents, grep hits, test logs. Raw text pouring into the context window every time you run something. This is usually the single biggest line item and the least visible.\n\n*Output bloat.* Claude's own prose. Preambles, recaps, \"here's what I did,\" full-file echoes after a one-line edit, offering three alternatives when you asked for one.\n\n*Thinking bloat.* Extended thinking tokens. By default, Claude Code allocates close to 32K thinking tokens per request. That default exists for backward compatibility with older Opus limits, not because your task needs it.\n\n*Context rot.* Long sessions where old, now-irrelevant context sits in the window consuming budget every turn until auto-compact finally fires at 95% full. By that point, compaction itself is expensive.\n\nEvery technique below targets one of these four. 
When you're choosing what to adopt, ask which bucket it drains.\n\n## Tier 1: Free wins, do these first\n\n### Measure before you optimize\n\nInstall `ccusage` before anything else. It reads the JSONL files Claude Code already writes locally and reports daily, session, and 5-hour-block spend. There's a statusline integration so you see live consumption in your prompt.\n\n```bash\nnpx ccusage@latest daily\nnpx ccusage@latest blocks --live\n```\n\nWithout this you're guessing. With it, every change below has a number attached.\n\n### Write a CLAUDE.md that refuses waste\n\nClaude Code auto-loads a `CLAUDE.md` from your repo root (or `~/.claude/CLAUDE.md` globally). This is the single most effective change you can make and it takes five minutes. The rules that matter:\n\n- No preamble, no flattery, no \"Great question.\"\n- No summaries of what you just did unless asked.\n- Use Edit/patch. Never re-emit a whole file after a small change.\n- Don't echo file contents back after modifying them.\n- Don't offer alternatives I didn't ask for. Pick one.\n- For small tasks, skip the plan and act. For larger ones, 3-5 bullets then stop.\n\nThere's a good [token-efficient CLAUDE.md template](https://github.com/drona23/claude-token-efficient/blob/main/CLAUDE.md) you can drop into your repo as a starting point. Expect a 15-25% drop in output tokens from this alone.\n\n### Cap the thinking budget\n\nAdd one line to your `~/.zshrc` (or `~/.bashrc` if you're on bash):\n\n```bash\nexport MAX_THINKING_TOKENS=8000\n```\n\nThis persists across every terminal session. Eight thousand is enough for most day-to-day work. Reserve `/effort high` for the hard problems: architecture decisions, subtle bugs, cross-file refactors. This is the cheapest trick on the list and the one most people don't know exists.\n\n### Run `/compact` at 60%, not 95%\n\nAuto-compact fires when your context is nearly full. By then, half the window is stale and compaction is working against a bloated transcript. 
Run it manually around 60% and *tell it what to keep*:\n\n```\n/compact keep: current task, file paths in play, last failing test output\n```\n\nThis is a habit, not a tool. It's also the single biggest improvement to session *quality*, not just cost. Long sessions stop degrading into confused soup.\n\n## Tier 2: Install once, save forever\n\n### RTK: compress command output before it hits context\n\n`rtk-ai/rtk` is a small Rust binary that sits between your shell and Claude Code. When the agent runs `pytest`, `npm test`, `cargo build`, `grep`, `find`, and 100+ other common commands, RTK intercepts the output, strips noise, and compresses it before it enters the context window. Reported reductions land in the 60-90% range on filtered commands.\n\n```bash\ncurl -fsSL https://raw.githubusercontent.com/rtk-ai/rtk/master/install.sh | bash\nrtk init -g\n```\n\nZero dependencies, single binary, under 10ms overhead. It's hook-transparent, so you don't change how you invoke anything. Of everything in this guide, RTK is the one I'd call underappreciated.\n\n### Route subagent work to Haiku\n\nOpus and Sonnet are expensive because they think. Subagents usually don't need to. Grep, file reads, doc lookups, pattern-matching: that's Haiku territory.\n\nThe simplest way to do this globally is one environment variable in your `~/.zshrc`:\n\n```bash\nexport CLAUDE_CODE_SUBAGENT_MODEL=haiku\n```\n\nThis routes all subagent calls to Haiku by default. If you have custom subagents that need more horsepower, you can override per-subagent by setting `model: sonnet` or `model: opus` in that subagent's YAML frontmatter.\n\nThe price delta between Haiku and Opus is roughly an order of magnitude. Once you've identified which parts of your workflow are routine, the savings compound fast.\n\n## Tier 3: Situational, still useful\n\n### Caveman mode, unironically\n\n`JuliusBrussee/caveman` is a Claude Code skill that forces the model into caveman-speak for prose output. 
Code, paths, commands, and technical tokens pass through untouched; only narrative English gets compressed. Output-token reduction is reported around 65-75%.\n\nIt reads as a joke. It also works. There's a March 2026 paper (\"Brevity Constraints Reverse Performance Hierarchies in Language Models\") showing that forcing brief outputs actually improved benchmark accuracy. Useful on prose-heavy tasks like code review comments, documentation sweeps, and exploratory conversations.\n\n```bash\nnpx skills add JuliusBrussee/caveman\n```\n\n### Prefer Edit over Write; prefer partial reads over whole files\n\nTwo habits worth training:\n- When modifying a file, insist on `Edit` (minimal diff) over `Write` (full rewrite). Your CLAUDE.md should already enforce this, but it's worth watching for.\n- When reading a 1,200-line file, read the range you care about. Reading \"just to be safe\" is how context fills with material the model will never use.\n\n## A recommended rollout order\n\nIf you're starting from zero, do it in this order. Each step's savings are measurable against the previous baseline thanks to `ccusage`.\n\n1. ccusage, for baseline measurement\n2. CLAUDE.md rules, for output-side discipline\n3. `MAX_THINKING_TOKENS=8000`, for thinking-side discipline\n4. `/compact` at 60%, for context hygiene\n5. RTK, for input-side compression\n6. Haiku routing for subagents, for model-tier discipline\n7. Caveman mode, for prose compression\n\nDo steps 1-4 in a single afternoon. You'll see the shift immediately. Steps 5-6 take a little more setup but pay back within a day of normal use.\n\n## None of this helps if the codebase is a mess\n\nTools and environment variables can only compress what the model already has to read. If your codebase is poorly structured, with unclear responsibilities, hidden coupling, and 800-line files that do five things, then the model has to read more, guess more, and produce worse output. 
You pay for that in tokens every single session.\n\nI wrote about this recently in [AI Is Not the Bottleneck](/blog/2026-04-10-ai-is-not-the-bottleneck-your-organization-is). The short version: a model working in a clean system reads fewer files, makes fewer mistakes, and generates smaller diffs. SOLID, DRY, clear folder structure, single-responsibility files. The label does not matter. The result does. A well-organized codebase is cheaper to operate with AI, the same way it's cheaper to operate with humans.\n\nThe same applies to how you work. Break tasks into small, specific pieces before handing them to the agent. Give it one clear job, not \"build me a feature.\" Write a short spec. Pressure-test it before you start. Then execute incrementally, with small pushes to production and fast feedback loops. The agents that burn the most tokens are the ones given vague instructions inside large, tangled systems.\n\nSmall PRs help here too. They're easier for people to review, easier for AI to review, and easier to verify. If a pull request sits for three days because it's too large, that's a workflow problem no token-saving trick will fix.\n\n## The bigger picture\n\nSmaller context is cleaner, full stop. Models stay sharper, sessions stay coherent, and you stop waiting for giant responses when a three-line diff would do.\n\nToken discipline is, more than anything, just a way of being more specific about what you're asking for. The tools above are scaffolding for a habit that makes the agent better regardless of price.\n\n---\n\n## Quick setup scripts\n\nThese scripts add the environment variables to your shell rc, install RTK, and set up global hooks. 
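\n\nIf you'd rather wire it up by hand, the same effect comes from a guarded append (a sketch of the pattern, not the scripts' actual contents):\n\n```bash\n# Append the export only if an identical line is not already present.\nRC=\"${RC:-$HOME/.zshrc}\"\nLINE='export MAX_THINKING_TOKENS=8000'\ngrep -qxF \"$LINE\" \"$RC\" 2>/dev/null || echo \"$LINE\" >> \"$RC\"\n```\n\n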
They're idempotent, so running them twice won't duplicate anything.\n\n**zsh** (macOS default):\n\n```bash\ncurl -fsSL https://aleksandar.xyz/scripts/claude-token-setup-zsh.sh | zsh\n```\n\n**bash**:\n\n```bash\ncurl -fsSL https://aleksandar.xyz/scripts/claude-token-setup-bash.sh | bash\n```\n\nOpen a new terminal after running, or `source` your rc file.\n\n<Resources links={[\n  { title: \"Claude Code cost management — Anthropic docs\", url: \"https://code.claude.com/docs/en/costs\", note: \"Official docs on model costs, token budgets, and billing\" },\n  { title: \"Claude Code CLI Guide & Reference — Blake Crosley\", url: \"https://blakecrosley.com/guides/claude-code\", note: \"Thorough guide covering hooks, MCP, subagents, and more\" },\n  { title: \"drona23/claude-token-efficient\", url: \"https://github.com/drona23/claude-token-efficient/blob/main/CLAUDE.md\", note: \"Drop-in CLAUDE.md template for token-efficient sessions\" },\n  { title: \"UltraThink is Dead. Long Live Extended Thinking — Decode Claude\", url: \"https://decodeclaude.com/ultrathink-deprecated/\", note: \"Why MAX_THINKING_TOKENS replaced the old UltraThink approach\" },\n  { title: \"rtk-ai/rtk\", url: \"https://github.com/rtk-ai/rtk\", note: \"Rust binary that compresses command output before it enters context\" },\n  { title: \"ryoppippi/ccusage\", url: \"https://github.com/ryoppippi/ccusage\", note: \"Track daily and per-block Claude Code token spend\" },\n  { title: \"JuliusBrussee/caveman\", url: \"https://github.com/JuliusBrussee/caveman\", note: \"Claude Code skill that compresses prose output via caveman-speak\" },\n]} />",
      "date_published": "2026-04-13T12:00:00.000Z",
      "tags": [
        "Engineering"
      ],
      "image": "https://aleksandar.xyz/og/2026-04-13-a-practical-guide-to-cutting-claude-code-token-usage-by-50-plus.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-04-10-ai-is-not-the-bottleneck-your-organization-is",
      "url": "https://aleksandar.xyz/blog/2026-04-10-ai-is-not-the-bottleneck-your-organization-is",
      "title": "AI Is Not the Bottleneck. Your Organization Is",
      "summary": "Notes from a Copenhagen Claude Code meetup, and why code quality, small PRs, and organizational coherence still matter more than agent tricks.",
      "content_text": "import Resources from \"../../components/Resources.astro\";\n\n<Tldr>Most AI demos work great for solo devs and side projects. The harder case is the existing company with legacy systems and multiple teams. Code quality, small PRs, and clear architecture still matter more than agent tricks. The agent is not the bottleneck. The organization is.</Tldr>\n\nOn April 9 I joined another [Claude Code meetup in Copenhagen](https://luma.com/jgvmj9ji?tk=zsSIAz).\n\nI always enjoy these meetups. It is good to be around fellow developers and engineers, and I love the energy of people sharing knowledge. This piece is partly a summary of what I saw there, and partly an extension of what I think is still missing.\n\nWhen I talk about efficiency with AI, I mean simple things. Spending less money on tokens. Getting better results. Finishing tasks with less friction. A tool is useful when it clears the way. If it gets in the way, it is not helping.\n\n## What was discussed\n\n![A small group of people sitting in chairs facing a presentation screen](/images/posts/bottleneck-what-was-discussed-dark-transparent.webp)\n\nThere were four or five talks.\n\nThe first talk was about spec-driven development with Linear. That was interesting to me because we also use Linear at Dreamdata. The presenters were using the [Linear Claude integration](https://linear.app/integrations/claude) together with one of the open spec libraries. The idea was simple: start from a specification, let Claude create tasks and subtasks in Linear, and let the tooling move the work forward as those tasks get completed.\n\nWhat was interesting there was the reaction in the room. Someone from the audience said they were not a fan, because it started sounding too close to older waterfall habits where everything has to be defined through a large specification up front. I understood that concern. The talk showed that the workflow is possible. 
It also raised the question of when that kind of structure helps and when it becomes heavy.\n\nAnother talk was about [Jumbo CLI](https://github.com/jumbocontext/jumbo.cli), built here in Copenhagen. As I understood it, it was trying to deal with the memory problem. Each new session starts close to zero, so you end up restating context and decisions again and again. The speaker framed it through short-term and long-term memory and showed a CLI approach to storing and recalling context depending on the task.\n\nAnother short talk was about building tools in Claude Code. The speaker described a path from a simple [`SKILL.md` file](https://support.claude.com/en/articles/12512198-how-to-create-custom-skills) to references and then to scripts. That is a useful pattern. If you keep repeating the same prompt and the model keeps doing roughly the same thing, it makes sense to abstract that behavior instead of paying for the same repetition over and over again.\n\nThe demo there was about the population of Copenhagen. The tool called a public API, formatted the output, and showed year-over-year growth. The point was not the population number itself. The point was showing how a repeated prompt can become a reusable tool.\n\nThere were also broader talks about the latest models, and [Paperclip](https://github.com/paperclipai/paperclip) came up as an orchestrator for running many agents in parallel.\n\n## What still feels missing\n\n![A puzzle with one piece missing, floating nearby](/images/posts/bottleneck-still-missing-dark-transparent.webp)\n\nWhat still feels missing to me is that most of these talks remain disconnected from the regular workflow of a real organization.\n\nA lot of these workflows look great for side projects, solo founders, or small AI-first teams. In those environments there is very little decision friction. You do not have to align with multiple departments. 
You do not have to deal with years of accumulated systems and process.\n\nBut many companies do not work like that. One team is in Linear. Another team is in Notion. Sales is in HubSpot or Salesforce. Design has its own tools and review process. When that is the reality, these demos stop feeling so complete.\n\nThat first talk with Linear is a good example. It works well if the company can centralize a lot of its work into Linear. But different teams need different tools, and Linear is a very technical, task-oriented system. The hard question is not whether Claude can create tasks there. The hard question is how the work stays coherent across the rest of the organization.\n\nThis is also why [A feature should carry its own explanation](/blog/2026-04-08-a-feature-should-carry-its-own-explanation) is related to this. We keep getting better at generating output inside one function, usually engineering, but we are still weak at carrying the explanation, context, and downstream artifacts across the company.\n\nThe capabilities here are still very interesting to me. What I keep thinking about is a clearer source of truth inside the organization, where decisions are made, comments are made, research is collected, and then downstream artifacts can be produced from that point. Documentation. Internal notes. Website changes. Support summaries. Sales context. Static sites updated through CI. Not blindly, of course. There still has to be a human in the loop. 
But the system should not force people to recreate the same explanation five times in five different tools.\n\nFor that to happen, companies will probably have to simplify parts of their tooling and be more deliberate about where knowledge lives.\n\n## The main thing still matters\n\n![A solid foundation of stacked building blocks with a small crane on top](/images/posts/bottleneck-main-thing-dark-transparent.webp)\n\nThe main point I forgot to say clearly is this: if you want to get the most efficiency out of AI, especially on existing codebases, your existing code quality still matters a lot.\n\nIf the codebase is structured well, both people and AI become more efficient. If the system follows good design principles, if responsibilities are clear, if files and folders make sense, if functions are broken out properly, if code is reusable, if the architecture reflects intent, then there is less waste for everyone.\n\nYou can call it SOLID. You can call it DRY. You can call it DDD or simply good engineering. The label is not the important part. The important part is that a well-structured system is easier to scan, easier to reason about, and easier to change.\n\nThat applies to AI directly. A model in a clean system has to read fewer files, make fewer guesses, and deal with less hidden coupling. The same is true for humans.\n\nThe ability to break a problem into smaller problems is still one of the most important engineering skills. If you give an AI a vague prompt like \"build me a feature\" inside a messy codebase with weak team guidelines, then of course you are going to get uneven results.\n\nWhat has worked best for me and my colleagues is much more incremental. We book a meeting room, sit together for thirty minutes, an hour, sometimes more, and go through the spec and the plan. We pressure-test it. What is missing? What is unclear? What have we not thought through yet? Then we execute in small steps.\n\nThat workflow has been extremely effective for us. 
Small pushes to production. One agent as a companion. Incremental execution. Fast feedback.\n\nAnd the old workflow fundamentals still matter. How often do you push code? How long does it sit on your machine? How big are the pull requests? How quickly do reviews happen? If a PR takes three days to review because it is too large, that is still a bad system with or without AI.\n\nSmall PRs are easier for people to review. They are easier for AI to review too. Small changes are easier to verify, easier to trust, and easier to ship. In my experience, this is still one of the best ways to improve a team workflow, with or without AI.\n\n## A lot of this is still just Markdown\n\n![A sheet of paper with lines of text sketched as simple strokes](/images/posts/bottleneck-just-markdown-dark-transparent.webp)\n\nAnother thing I kept thinking during the meetup is that many of these fancy tools still reduce to a bunch of Markdown files, prompt files, references, and scripts.\n\nThat does not make them useless. We use them every day. They help. But it does mean that a lot of the current wave still feels like we are doing the same thing in a more structured way, rather than reaching something fundamentally different.\n\nWhat I would like to see is a cheap way to adapt models to our own data and our own way of working. I would love to build a knowledge base of my own work over a year, how I prompt, how I think, how I break problems down, how I review, and then eventually use that to adapt a model to my own habits and judgment.\n\nThat is much more interesting to me than yet another wrapper around prompt files.\n\n## The talk I still want to see\n\n![An empty stage with a single microphone stand](/images/posts/bottleneck-talk-i-want-dark-transparent.webp)\n\nI am still waiting for a talk that shows how AI gets introduced into an existing company slowly and with some integrity.\n\nNot only inside engineering. Across the organization.\n\nHow do you integrate it into existing systems? 
How do you keep security and review in place? How do you make sure it is not only an individual productivity boost, but part of a coherent workflow across teams?\n\nMost meetups and videos still focus on solo people, tiny teams, or companies that started AI-first. Those cases are real, but they are the easy cases.\n\nThe harder case is the older company. The company with legacy systems, multiple teams, and accumulated complexity. That is where I think the next serious work is.\n\nThe agent is not the bottleneck. The organization is.\n\n<Resources links={[\n  { title: \"Claude Code meetup Copenhagen\", url: \"https://luma.com/jgvmj9ji?tk=zsSIAz\", note: \"The April 9 event on Luma\" },\n  { title: \"Jumbo CLI\", url: \"https://github.com/jumbocontext/jumbo.cli\", note: \"Context memory for AI coding sessions\" },\n  { title: \"Claude Code custom skills\", url: \"https://support.claude.com/en/articles/12512198-how-to-create-custom-skills\", note: \"Official docs on SKILL.md files\" },\n  { title: \"Paperclip\", url: \"https://github.com/paperclipai/paperclip\", note: \"Multi-agent orchestrator\" },\n  { title: \"A feature should carry its own explanation\", url: \"/blog/2026-04-08-a-feature-should-carry-its-own-explanation\", note: \"Related post on this blog\" },\n]} />",
      "date_published": "2026-04-10T12:00:00.000Z",
      "tags": [
        "Engineering"
      ],
      "image": "https://aleksandar.xyz/og/2026-04-10-ai-is-not-the-bottleneck-your-organization-is.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-04-08-a-feature-should-carry-its-own-explanation",
      "url": "https://aleksandar.xyz/blog/2026-04-08-a-feature-should-carry-its-own-explanation",
      "title": "A feature should carry its own explanation",
      "content_text": "<Tldr>We can build faster than ever, but the process still feels slow because every feature gets re-explained five times across five tools. The feature is not done when the code ships. It's done when the rest of the company understands it. I want a process where the explanation gets built as the work happens, not after.</Tldr>\n\nLately I have been thinking a lot about product building.\n\nNot discovery alone, not design alone, not implementation alone. The whole thing, end to end.\n\nFor a long time, the hardest part was building the thing. Or at least that is how most of us experienced it. Writing the code, wiring the backend, dealing with infrastructure, dealing with limitations, dealing with time. Even when teams had good ideas, it still took a lot of effort to turn them into something real.\n\nThat is no longer the world we live in.\n\nToday, with the tools we have, we can move much faster in almost every part of the process. We can explore ideas faster, prototype faster, write code faster, ship faster.\n\nAnd yet, the overall process still feels slow.\n\nThe work itself is not slow. The system around the work is still static.\n\nA feature gets discussed. Then researched. Then designed. Then built. Then someone has to update the documentation. Someone has to explain it to support. Someone has to tell sales what changed. Someone has to write internal notes. Someone has to capture screenshots. Someone has to prepare release notes. 
Someone has to make sure the rest of the company even knows the thing exists.\n\nThe feature is done, but it is not really done.\n\nThat is the part I keep thinking about.\n\n## The problem is not only building\n\n![A wave overwhelming a small boat](/images/posts/feature-explain-overwhelmed-dark-transparent.webp)\n\nA lot of teams can now build much more than they can comfortably absorb.\n\nThat is the strange part of the moment we are in.\n\nWe talk a lot about faster engineering, faster prototyping, faster delivery. But very little about what happens after that. What happens when the pace of change increases, but the rest of the organization still depends on manual handoffs, repeated explanations, scattered Slack messages, half-written tickets, and somebody remembering to update the help center.\n\nWe are getting better at producing change than at carrying that change through the company.\n\nI think that is becoming one of the more important problems in product work.\n\nBecause the feature is not only the feature. The feature is also the explanation. The support context. The screenshots. The documentation. The changelog. The internal rollout. The sales context. The marketing context. The \"what changed, why did it change, and who needs to care\" part.\n\nThat work is usually treated as something that happens later.\n\nI think that is wrong.\n\n## What I am interested in now\n\n![A hand holding a magnifying glass over a tangled knot](/images/posts/feature-explain-interested-dark-transparent.webp)\n\nWhat interests me now is not only how to use AI to build faster. That part is already obvious.\n\nWhat interests me more is whether we can use these tools to make the whole product process less static. Not only the coding part, but everything around it.\n\nCan an idea become a better brief faster? Can design feedback become a useful decision log instead of disappearing into meetings and chat threads? 
And once something ships, can the explanation of that change be generated as part of the same flow instead of becoming a separate manual project?\n\nThat is the kind of system I have been thinking about. A system where one product change can move through ideation, discovery, design, engineering, documentation, and internal communication without being recreated from scratch every time it moves from one person to the next.\n\nBecause if you think about it, that is what a lot of this friction really is. Re-creation. The same feature gets re-explained again and again, in different words, for different people, in different tools, at different times.\n\nThat is expensive. It is slow. And at this point, it feels unnecessary.\n\n## One change, many outputs\n\n![A single root branching into many separate lines](/images/posts/feature-explain-one-to-many-dark-transparent.webp)\n\nIf AI is useful for anything here, it is translation.\n\nOne rough idea can become a better-defined problem statement. One feature brief can become engineering tasks. One set of design decisions can become implementation notes. One shipped change can become a draft for documentation, a support update, a changelog entry, and an internal announcement.\n\nThe underlying thing is still the same. What changes is the audience.\n\nEngineering needs one version. Support needs another. Sales needs another. But those versions should come from the same source of truth. They should not be invented independently by five different people trying to reconstruct what happened.\n\nThat is the part I want to improve. I want a process where the work carries its own context forward. Not perfectly. Not without human judgment. 
But much better than it does now.\n\n## What I would like product building to look like\n\n![A stack of layered documents growing behind each other](/images/posts/feature-explain-living-artifact-dark-transparent.webp)\n\nThe way I think about it right now is simple.\n\nEvery meaningful feature should have one living artifact behind it.\n\nAt the beginning, it is just an idea. Then it becomes a clearer problem. Then a proposal. Then it accumulates research, design references, tradeoffs, decisions, open questions, technical notes, rollout implications, and eventually the actual implementation.\n\nBy the time the feature ships, that same artifact should already contain most of what the rest of the company needs in order to understand the change.\n\nThen AI can help transform that into the right outputs. A docs draft. A changelog entry. A support summary. An internal post. Maybe screenshots too. Maybe a short explanation for sales or a cleaner summary for leadership.\n\nNot because humans should disappear from the process. Humans still need to review things, decide what matters, exercise judgment. But humans should stop starting from zero every single time.\n\nThat is the key thing for me.\n\nI am not interested in blind automation. I do not want random generated text being pushed to customers or posted internally without anybody thinking. But I do think we should start building systems where the explanation grows with the work.\n\nInstead of \"we built it, now let's explain it,\" I want the explanation to get built as the work happens.\n\nThat feels much closer to the kind of process our tools should enable.\n\n## Why this matters\n\n![A small box trailing a long tangled tail of loops and knots](/images/posts/feature-explain-tail-dark-transparent.webp)\n\nThis is not only about speed.\n\nSpeed is nice, obviously. But the deeper value is reducing the cost of change.\n\nA lot of good product improvements are not blocked because they are impossible to build. 
They get slowed down because every change creates a tail around it. Docs need updating. Other teams need context. Internal communication needs to happen. Screenshots need to change. Support needs to understand what customers will see.\n\nThat tail is real work.\n\nAnd if we want teams to become more autonomous, then this is part of the problem too. Autonomy is not only the ability to write and ship code. A team that is actually autonomous can move without creating confusion around itself. The rest of the company can keep up with the product as it changes.\n\nThat is where AI becomes interesting to me. Not only as a coding tool, but as something that connects the stages of product work. From ideation through discovery, design, implementation, documentation, and internal rollout. One connected flow instead of a series of disconnected retellings.\n\n## The direction I want to explore\n\n![A person stepping onto a path stretching into the distance](/images/posts/feature-explain-direction-dark-transparent.webp)\n\nI do not have a final answer here. I am still working through what this should look like in practice. What should be automated, what should stay manual, what should be reviewed, what should be generated, what should be captured early so that the rest becomes easier later.\n\nBut I am pretty convinced about the direction.\n\nWe should stop thinking about AI only as something that helps us type faster. The more interesting opportunity is building a product process that does not go static the moment a feature leaves engineering.\n\nA process where the feature does not stop at implementation.\n\nA process where the feature carries its own explanation.",
      "date_published": "2026-04-08T12:00:00.000Z",
      "tags": [
        "Product"
      ],
      "image": "https://aleksandar.xyz/og/2026-04-08-a-feature-should-carry-its-own-explanation.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-04-04-feedback",
      "url": "https://aleksandar.xyz/blog/2026-04-04-feedback",
      "title": "Feedback",
      "content_text": "<Tldr>Feedback is the single most important thing for growth, and one of the hardest to do well. Most people say they want it but actually want validation. Seek it actively, take it without defending yourself in the moment, and build a life around people who will tell you the truth.</Tldr>\n\nFor anything in life to work, both sides need to want to get better. Marriage, friendship, a working relationship. It does not matter the context. Both people have to be looking forward, willing to change, willing to hear things they might not want to hear.\n\nFeedback is no different. In my experience, feedback is the single most important thing for us as individuals to grow. It is also one of the hardest things to do well, because it requires both people to be honest and neither to be defensive.\n\nThe best companies I have worked with were the ones where feedback was part of the culture. Not a quarterly ritual, not a form you fill out, but something people actually valued and practiced. My managers over the years have been very different from each other, different styles, different approaches. But the ones I learned the most from gave very good feedback. Not always comfortable, but very good.\n\n## When you are not ready to hear it\n\n![A person with arms crossed turning away from an extended hand](/images/posts/feedback-not-ready-dark-transparent.webp)\n\nWhen you are inexperienced, and when you are undeveloped in terms of emotional and social intelligence, you are extremely prone to getting offended by any feedback. The offense does not come from the manager directly, as in \"why did she tell me this?\" It is a mix of ego, self-perception, and self-disappointment with what you have been told. You hear the feedback and instead of processing it, you fight it internally because it challenges the image you have of yourself.\n\nA lot of people think they crave feedback. A lot of people will tell you, in casual conversation, that they lack it. 
But many of those same people are not ready to take it. They cannot sit with it, hold it as information, and process it. Instead they take it to heart, get offended, get defensive. What they actually want is validation, not feedback.\n\nThe ideal response to feedback is not a response at all in the moment. Take it. Let it sit. Sleep on it. Then come back, go through it with your manager, figure out what is right, what needs more context, what you disagree with and why. That is the productive version.\n\nAdam Grant has a concept he calls the \"second score,\" which I think captures this well. The idea is simple: when someone gives you feedback, that is your first score. Maybe it is a three out of ten. You cannot change that. What you can do is try to get a ten for how well you took the three. Score yourself on how you received it, not on the feedback itself. It shifts your focus from defending yourself to proving that you can listen. I find this works especially well for people who are still building the emotional muscle to sit with criticism.\n\n## You have to seek it\n\n![A person reaching out with an open hand](/images/posts/feedback-seek-it-dark-transparent.webp)\n\nLike everything else, to be able to receive feedback, you also need to be open to it. You need to build a relationship with your manager where you are not just waiting for feedback to arrive. You actively ask what you can do better. You put the ball in their court a little so you can relieve yourself of some of the anxiety.\n\nThere is a subtlety here in how you ask. Grant makes a point that stuck with me: when you ask someone \"can you give me feedback,\" you tend to get either cheerleaders or critics. Cheerleaders tell you what you did well, critics tell you what you did wrong. Neither is particularly useful. What you actually want is a coach. 
And the way to get people into coaching mode is to ask \"what is one thing I could do better next time?\" That shifts the conversation from evaluating the past to improving the future. It is a small change in phrasing, but it tends to produce much more useful input. The relationship still has to be there for this to work, but how you frame the question matters more than most people realize.\n\nA lot of people I see, including some of my colleagues, fall into an anxiety trap because they are not getting any feedback at all. And what makes it worse is that they are not seeking it either. When feedback finally comes, they react badly because it feels uncomfortable, or because it is not what they wanted to hear.\n\nThis is exactly like a relationship. If you have ever dated someone, especially when you were younger, you know how it goes. One of you sits on something for months. Then six months later it explodes: \"you did this, and then this, and then this.\" And the logical response is always the same: \"Why didn't you tell me when the first thing happened? We could have sorted it out.\"\n\nThis applies directly to feedback at work. If it comes too late, if it accumulates, it arrives all at once and by then the process has already failed.\n\n## It goes both ways\n\n![Two people facing each other with hands extended in a mirrored gesture](/images/posts/feedback-both-ways-dark-transparent.webp)\n\nYour relationship with your manager is not one-directional. Your manager is not there to monitor you, to program you, to knock on your door, to stand behind your shoulder. Your manager is someone who hopefully cares about you, but also someone who, as a function of their role, exists for you. If you need advice, if you need feedback, you go to that person.\n\nIf you have not built a good relationship with your manager, that is also a problem, because it usually comes down to trust. And if the trust is not there, you need to step back and ask: what can I do to improve this? 
Because through improving trust, you open the door for honest feedback in both directions.\n\nAnd it does go both ways. A manager must also be able to take feedback. This is not a one-way street. Nobody is going to sit there and listen to a manager who refuses to hear anything back. If you want to give feedback but are not willing to receive it, people will stop taking your input seriously. They will see you as someone who talks but does not listen.\n\n## Three types of feedback\n\n![Three arrows hitting a target at different distances from the center](/images/posts/feedback-three-types-dark-transparent.webp)\n\nThere are three types of feedback.\n\n**Outcome feedback** tells you the final result but not why. It shows overall performance: a test score, a grade, audience applause. It says good or bad but offers no advice on how to improve.\n\n**Informational feedback** tells you what you are doing wrong. It provides a direct reaction to your actions but does not tell you how to fix it. An error message in code. A native speaker looking confused when you speak their language. Missing the dartboard.\n\n**Corrective feedback** is the most useful. It tells you what is wrong and how to correct it. It comes from a mentor, a coach, an expert. It is the kind of feedback that makes the most difference at higher skill levels, because it directly addresses what to change and how.\n\nIn my experience, corrective feedback is ideal when it is possible. But a manager also has to be careful not to overdo it. If every piece of feedback is corrective, the person on the receiving end starts to feel like everything they do needs fixing. The mix matters. Using all three types, adjusting based on context, that is what makes feedback sustainable.\n\nAvoiding corrective feedback because you are worried about hurting someone is a mistake. 
But I have seen managers avoid it for a different reason entirely: not because they care about the other person's feelings, but because it is not worth the effort to them. They avoid responsibility by not putting themselves in those situations at all.\n\nThat is a bigger problem.\n\nIf you are a manager and you are not having these conversations, you should be talked to. Nobody enjoys telling people they are doing something wrong. But that is the job. That is how people grow. If you avoid it, you put people in a position where, months or years from now, they are confused about why they are not getting promoted, why their career is not moving, because nobody told them anything honest for a very long time.\n\n## Build your life around it\n\n![A person standing with arms wide open in a posture of openness](/images/posts/feedback-build-around-it-dark-transparent.webp)\n\nFeedback is not just a work thing. It is a life thing. Through your judgment, your thinking, your self-awareness, your willingness to be wrong, you need to start seeking feedback everywhere. From close friends. From family. You have to tell your friends: if I am talking too much, if I am saying something stupid, tell me.\n\nSurround yourself with people you can argue with, disagree with, people who think differently than you. Not people who confirm your biases. Build yourself into a person who is open to hearing things that are hard to hear.\n\nWhen you do that, it becomes visible to others. They see that you are humble, that you welcome critique, that giving you a piece of advice is not going to turn into a fight. And on the other side, if you build yourself up defensively, if every time someone approaches you with something you instantly start explaining and justifying, people stop trying. They see that it is pointless. 
They see that helping you means fighting your battles for you, and nobody wants to take on that responsibility.\n\nIf someone is trying to help you and you keep pushing that help away, the likelihood of them trying again is very, very low.",
      "date_published": "2026-04-04T11:30:00.000Z",
      "tags": [
        "Leadership"
      ],
      "image": "https://aleksandar.xyz/og/2026-04-04-feedback.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-04-02-judgment",
      "url": "https://aleksandar.xyz/blog/2026-04-02-judgment",
      "title": "Judgment",
      "content_text": "import Quote from \"../../components/Quote.astro\";\n\n<Tldr>We treat \"trust is impossible to rebuild\" like settled science. It's not. It's a choice that becomes self-fulfilling. Between an event and our response there's a gap, and judgment lives in that gap. If nobody gave the person feedback along the way, the silence is the problem, not the trust.</Tldr>\n\nWe judge everything. Situations, decisions, processes, people. It's constant, and it's ours to fight. Most of us barely try.\n\nTake this one:\n\n<Quote>Once trust is lost, it's almost impossible to get back.</Quote>\n\nA close colleague said this to me recently. I've been hearing it in professional settings constantly, and nobody ever pushes back. As if it's settled science.\n\nIt's one of the most damaging things we repeat to each other.\n\nThink about what we're actually saying. We're saying that a person who loses our trust is, for all practical purposes, permanently diminished in our eyes. That no amount of changed behavior or growth can undo the label we've placed on them. We've turned a single event, or a pattern of events, into a life sentence.\n\nThat is not wisdom. That's ego dressed up as principle.\n\n## The axiom we never question\n\n![A cracked stone tablet](/images/posts/judgment-axiom-dark-transparent.webp)\n\nTrust takes years to build, seconds to break, and forever to repair. People say it with such conviction, like they're quoting gravity. But it's not a law. It's a choice. A choice we've repeated so many times it feels inevitable.\n\nAnd once you accept it as a rule, you stop trying. You stop giving people room to correct course, stop looking for evidence that contradicts the judgment you've already made. 
The axiom becomes self-fulfilling: trust can't be rebuilt because we've decided it can't.\n\n## What judgment actually does\n\n![A face seen through a distorting lens](/images/posts/judgment-what-it-does-dark-transparent.webp)\n\nWhen someone loses our trust, we don't just become cautious. We reorganize our entire perception of them. Everything they do gets filtered through the verdict we've already reached. A good action becomes suspicious. A neutral one becomes confirmation of something darker. No benefit of the doubt, no room for correction.\n\nWe essentially decide: this person did this thing, therefore this person _is_ this thing, and this person will never be anything else.\n\nIt's absurd. And yet it's exactly how most of us operate.\n\nThe irony is, I don't disagree with my colleague. We _are_ wired this way. I can't prove it genetically, but the pattern is everywhere. Once judgment locks in, we don't fight it. We feed it. We look for reasons we're right. We build a case for why the loss of trust was justified, why our reaction is proportional. We convince ourselves that giving another chance would be naive.\n\nWe never ask whether the judgment itself might be the problem.\n\n## The workplace version\n\n![A person at a desk with an empty speech bubble](/images/posts/judgment-workplace-silence-dark-transparent.webp)\n\nThis gets sharper in professional settings, because at work there's usually a system around the person. Managers, processes, feedback loops.\n\nSay someone makes the same mistake ten times. You've watched it happen. Your trust in them is gone. Fair enough, on the surface. But the harder question is worth asking: did anyone actually tell them? Did they get clear feedback after the first time, the second, the fifth? 
Or did everyone just quietly accumulate judgment while the person kept operating in the dark?\n\nWhen someone loses trust through repeated mistakes, and nobody along the way gave them a chance to correct course, the silence is the problem. The system had the information. The feedback never happened. And now we call it a trust issue when it might be something else entirely.\n\n## What the Stoics understood\n\n![Event, then judgment, then reaction](/images/posts/judgment-stoic-diagram-dark-transparent.webp)\n\nStoicism is my favorite corner of philosophy, and it's no accident that judgment sits at the center of almost everything the Stoics wrote about. Marcus Aurelius, Epictetus, Seneca: they all understood that between an event and our response, there's a gap. Judgment lives in that gap.\n\nSomething happens. We judge it. Then we act on the judgment, not on the event itself.\n\nThe Stoics would say: the event is neutral. Your judgment is what gives it weight. And that judgment is shaped by everything you've lived through, feared, and protected. It feels like truth. It's interpretation.\n\nThis is also why no serious Stoic would ever call themselves one. They knew that mastering judgment isn't a destination. It's a lifelong fight you never fully win.\n\n## The harder path\n\n![A person questioning their own reflection](/images/posts/judgment-harder-path-dark-transparent.webp)\n\nStart with yourself. Challenge your own judgment. Detach enough to ask whether the story you've built about someone is the full picture or just the comfortable one. Locking in on a belief because it's simpler doesn't solve anything.\n\nBut the internal work alone isn't enough. Ask yourself: why did this person lose my trust? What did I do to prevent it? What am I willing to do to restore it? 
Am I prepared to prove my own judgment wrong?\n\nWe are all human beings with our own judgments, and if we want to work together, especially in a professional setting, we have to give each other more than a verdict. The benefit of the doubt. The feedback. The processes and tools to make trust easier to build than it is to lose.\n\nThat's the only version of this that actually works.",
      "date_published": "2026-04-02T12:38:24.000Z",
      "tags": [
        "Leadership",
        "Philosophy"
      ],
      "image": "https://aleksandar.xyz/og/2026-04-02-judgment.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-03-24-a-morning-conversation-about-leadership",
      "url": "https://aleksandar.xyz/blog/2026-03-24-a-morning-conversation-about-leadership",
      "title": "A Morning Conversation About Leadership",
      "summary": "A conversation with my CTO about why leadership is so hard to write about, and why most leadership advice falls apart the moment you try to use it.",
      "content_text": "import Resources from \"../../components/Resources.astro\";\n\n<Tldr>Leadership is almost impossible to write about well because it depends entirely on context. Most leadership advice falls apart the moment you try to use it on a specific team. It's better consumed as stories, not guidelines. The reading gives you vocabulary. The work gives you judgment.</Tldr>\n\nThis morning I spoke to my CTO James at work. We were the only two in the office, it was early, and we got to talking about some of the blog posts I have been writing recently. He praised the [Just Make a Decision](/blog/2026-03-23-just-make-a-decision) one, and then we ended up on the topic of leadership. I asked him something that has been on my mind: how would you write about leadership? What really qualifies as a good leadership article?\n\nHis answer was that leadership is very hard to write about because it depends so much on context. Of course, most of us sort of know this. But when you think a little deeper about it, you realize just how much that matters. It is not just different from organization to organization. It is different from team to team, from dynamic to dynamic, from person to person.\n\nYou see a bunch of this content online. A bunch of courses, processes, frameworks, all kinds of things. And a lot of these things simply do not apply to a given team in a given context at a given size. It does not transfer. It looks good on paper, or in a LinkedIn post, or in a book, and then it falls apart the moment you try to use it somewhere specific. This reminds me of the article I wrote a couple of days ago called [Make $5](/blog/2026-03-19-the-truth-about-go-to-market-strategy), which was about go-to-market strategy from a founder meetup I went to. It is more or less the same problem. For some companies, something is going to work. For others it is not going to even remotely be the same. It might not even be similar. It might actually be harmful. 
The advice that saves one team will break another.\n\nIt seems absurd when you watch some of the courses or go to some of the leadership conferences or read some of the popular leadership content. A lot of it is formulated in a way that feels like the only truth. Like this is how you lead, this is what works. In retrospect, I do not know if that is actually done on purpose, if these people believe that, or maybe they are assuming that their readers will understand that the advice is context-specific. Either way, when you read a lot of it, it reads as if it was the ultimate guide to leadership.\n\nMaybe leadership, when it is read through a lot of these articles written based on the experience of others, should be observed exactly like that. As a story. This is also what James was saying this morning. He prefers to read articles where people write about their experiences of certain things, whether that is leadership or infrastructure or programming. He said he likes to read examples where people did something that they either failed at or succeeded at. Not guidelines. Stories.\n\nMaybe leadership content should be consumed that way too. Maybe aligning with certain experiences is more useful than aligning with general principles. For instance, reading about experiences from teams of a similar size, or in similar circumstances, might work better because you can actually see yourself in those situations. You can connect the advice to something concrete.\n\nIt seems to me that no matter what subject we discuss, whether that is leadership or programming or code reviews or anything else, all of it becomes very abstract and very general if not applied to a given problem, a given context, a given example. Maybe even I, writing these articles, should add more context. As James said, maybe talk more about personal examples, personal experiences. 
If you try to write about leadership in a very general way, it might attract some people, but it might also push a lot of people away, because you just might sound as wrong as you sound right.\n\n## The question of how you quantify this\n\n![A person reaching for a book on a tall bookshelf](/images/posts/leadership-quantify-dark-transparent.webp)\n\nSo the question becomes, how do you quantify this? I have read many good leadership articles. I want to become better as a person and in my career as I work towards being an engineering manager. You can read all of this content. You can enroll in courses. I am currently taking the [LeadDev Together: Pillars of Engineering Management](https://leaddev.com/course/together-pillars22) course through work. And as I discussed this morning with James, there are so many principles out there that are simply made for larger scale. For enterprises. For organizations with layers and layers of management.\n\nWhen you are on a small team, a lot of it does not apply. Some of it is counterproductive. Processes designed for 200-person engineering orgs create overhead that kills a 10-person team. Feedback frameworks built for formal review cycles feel absurd when you sit two meters from the person every day.\n\nIt is almost ridiculous, in a way, writing about this. Because some of the things that I might write about might be completely obvious to some people, and obviously wrong to others. 
Each one of these rules, each one of these dynamics really boils down to the context, to your team, to your organization, to the timeline when it is executed, to the circumstances, and to the competencies that people have in a given company.\n\n## A toolbox, not a manual\n\n![An open toolbox with various tools](/images/posts/leadership-toolbox-dark-transparent.webp)\n\nAs I get deeper into leadership, I think more and more that the only rule worth following is to apply certain tools in a given context at a given time. Leadership is a toolbox. It means being very open to change, open to ambiguity and lack of clarity, and treating advice as a set of tools rather than something that must be applied in a certain order.\n\nTake something as simple as direct feedback. In one team, very direct feedback can build trust because everyone knows the intent is good and nobody is playing politics. In another team, if trust is weak or people already feel unsafe, the exact same approach can create more distance and more damage. Same principle. Different context. Completely different outcome.\n\nOne could say that a lot of these leadership principles or rules or advice could literally be consumed as functions. You remind yourself of these functions when the time comes, when the situation calls for it. Each function has its own conditions. Each one works in some situations and not in others. Your job is to know which one to reach for. No course teaches you that part. That comes from working with people long enough to start recognizing the patterns.\n\n## What is underneath all of this\n\n![A person sitting quietly on a chair, looking ahead](/images/posts/leadership-underneath-dark-transparent.webp)\n\nIf I try to summarize my thoughts, and if we think about leadership meaning essentially just working with people, it boils down to this. In all the sea of various principles and strategies, what you are really trying to build is judgment. 
More social intelligence as a person. More strategy. More patience. Less rush to judge.\n\nThat means gaining enough experience to recognize certain events when they happen. Certain disagreements. Certain frictions. Certain conflicts. And then being able to handle them as they appear, not as a framework told you they should appear.\n\nThat is the part nobody can teach you. You can read about it. You can take courses on it. Those things help, the same way studying music theory helps a musician. But the actual skill comes from being in the room when things go wrong. From being there when people disagree, when nobody knows the right answer. And learning to act anyway.\n\nThe reading gives you vocabulary. The work gives you judgment.\n\n<Resources links={[\n  { title: \"LeadDev Together: Pillars of Engineering Management\", url: \"https://leaddev.com/course/together-pillars22\", note: \"The course I'm currently taking through work\" },\n]} />",
      "date_published": "2026-03-24T09:30:00.000Z",
      "tags": [
        "Leadership"
      ],
      "image": "https://aleksandar.xyz/og/2026-03-24-a-morning-conversation-about-leadership.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-03-23-just-make-a-decision",
      "url": "https://aleksandar.xyz/blog/2026-03-23-just-make-a-decision",
      "title": "Just Make a Decision",
      "content_text": "<Tldr>Building software is faster than ever. The bottleneck now is decision-making. Timebox your decisions, always have something visible to discuss, and stop dwelling. Decisions don't need to be final. They need to happen.</Tldr>\n\nLately, working with my PM and designer, I feel we've come to a simple realization: now that building software is easier than ever, especially when it comes to UIs and APIs, we need to get much faster at making decisions.\n\nI wrote about this in [From Inspired to Expired](/blog/2026-03-04-marty-we-need-a-new-book), about the bottleneck moving from engineering to product discovery. Once you optimize one bottleneck, it moves somewhere else. Right now it's on decision-making and alignment.\n\nWhoever manages to discover, learn, and research fast, then actually align on a decision, is going to win. In the past couple of days I've been feeling this more and more.\n\n## Timebox everything\n\n![Narrowing from many options to one](/images/posts/decision-ten-to-one-dark-transparent.webp)\n\nWhat we've started doing is timeboxing the decisions that matter. If we're discussing something, we say \"let's make this decision in three hours.\" Or by tomorrow. Or by end of day. Whatever makes sense. The point is to put a limit on it so something actionable comes out the other end, not another open-ended discussion that trails off.\n\nThe process itself is what I described in that article. Come up with 10 variations. The designer produces a number of solutions, the team goes through ideation quickly. Then narrow it down to two or three. Sit again. Narrow further. 
Then build.\n\n## Stop discussing invisible things\n\n![Two people at a table with an empty thought bubble](/images/posts/decision-discussing-invisible-dark-transparent.webp)\n\nInstead of abstractly discussing something that's not even in front of your eyes, always have something to look at before you decide. A sketch, a mockup, a rough prototype. Anything visible.\n\n## Decisions don't need to be final\n\n![A hand pressing a button decisively](/images/posts/decision-new-discipline-dark-transparent.webp)\n\nThe revelation I keep having is straightforward. Decisions need to be fast. People need to be aligned, at least to some degree. Not fully, not in any way, shape, or form. But aligned enough to move.\n\nAt the speed we can iterate on software today, there's no reason to sit on a decision longer than you have to.\n\nAs a tech lead, I'm going to push my team harder on this. Less dwelling on possibilities, more starting with prototypes. More producing variations and then narrowing down. Stop trying to prolong or vaguely agree on something that might not ever be mentioned again.\n\nJust make a decision. The new hero on the block.",
      "date_published": "2026-03-23T18:39:50.000Z",
      "tags": [
        "Product",
        "Leadership"
      ],
      "image": "https://aleksandar.xyz/og/2026-03-23-just-make-a-decision.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-03-22-the-offloaded-mind",
      "url": "https://aleksandar.xyz/blog/2026-03-22-the-offloaded-mind",
      "title": "The Offloaded Mind",
      "summary": "On learning like an interpreter, handing cognition to capable systems, and what happens when organizations expect speed without room to understand.",
      "content_text": "import Resources from \"../../components/Resources.astro\";\n\n<Tldr>As we offload more cognitive work to AI, we gain speed but risk losing depth. The tools are not the problem. The problem is when organizations expect the speed without leaving room for understanding. Output without understanding has a cost, and someone always pays for it eventually.</Tldr>\n\nA few months ago I was reading Mustafa Suleyman's [_The Coming Wave_](https://the-coming-wave.com/). He talks about what he calls ACI, artificial capable intelligence, and I've been thinking a lot about that lately. As AI advances, and as we keep offloading more and more tasks to these capable systems, I wonder what humanity looks like five or ten years from now.\n\nI am not even talking mainly about jobs or the market or whether some roles disappear and others appear. Humanity is dynamic. We usually find some way to correct and adapt. What worries me more is something else: I do not think that in history, at least not to this degree, we have ever given ourselves this much mental freedom by offloading so many cognitive tasks.\n\nI am dictating this piece through [Wispr Flow](https://wisprflow.ai/). I sit at the PC, talk into a microphone, and the text appears. It is useful. It means I do not have to worry about every single word or character while I am still thinking. I can think more freely and just talk. That is great. It is also part of what worries me.\n\n## Learning like an interpreter\n\n![A person reading code line by line, finger tracing down a page](/images/posts/offloaded-mind-interpreter-dark-transparent.webp)\n\nFor a long time, no matter whether we were juniors or seniors, there was a path we had to take to understand the things we were doing. As we understood them, new paths opened. If you hit a roadblock, you usually had two options. 
Either a more senior colleague helped unblock you, even if you did not fully understand yet, or you had to stop, learn something, and then move on.\n\nIn that sense, we almost had to work like an interpreter. You move forward step by step. If the interpreter cannot understand a line, it breaks. And for a long time our growth worked a bit like that too. When you did not understand something deeply enough, the work pushed back.\n\nThat did not mean everyone learned perfectly. Of course not. People have always worked around gaps. But the gaps were harder to hide. You had to come into contact with the thing itself.\n\n## What changed on my side\n\n![Two hands hovering above a keyboard without touching it](/images/posts/offloaded-mind-hands-off-keyboard-dark-transparent.webp)\n\nI am a programmer. I am a tech lead. I have been in this industry for a long time. And I have not written a single meaningful line of code by hand in months. Maybe longer. Maybe close to a year.\n\nYes, I still work. I write specifications. I think about the problem. I think about the architecture. I think about the solution and how it lands in a system that already has history. Then I offload a lot of the implementation to these tools.\n\nOf course that is possible only because of the experience I gained over ten or fifteen years or more. I have done this work many times before. I know what to look for. I know which questions to ask. But still, it feels a little scary.\n\nThere is something strange about realizing that a lot of the hard cognitive work I used to do myself is now being handed to a machine. All of that is great. All of that is useful. But it still leaves me with the question: what does that do to us over time?\n\n## Reading, writing, and surface-level thinking\n\n![A person skimming the surface of water with their fingertips](/images/posts/offloaded-mind-surface-reading-dark-transparent.webp)\n\nPeople now automate writing emails. People automate writing articles. 
People summarize books. People summarize papers. Maybe people will summarize this article too because they will not have the patience to read it fully.\n\nSo then the question becomes: what happens to our writing abilities? What happens to our reading abilities? What happens to patience? What happens to the ability to stay with something instead of reducing it immediately to the shortest possible version?\n\nOf course, a lot of this can also be useful. If a tool gives you quick access to the one point you care about, that can help. It can save time. It can make learning more accessible in some cases. I am not denying that.\n\nBut I do not know. A lot of this feels like one vicious circle. If we stop writing, if we stop reading fully, if we keep training ourselves on summaries and surface-level information, then do we also start thinking that way? Do we train ourselves, as a species, to stay at the surface?\n\nHumanity already likes absolute thinking. We already like simple binaries. Good and bad. Left and right. Enemy and ally. Everything is simplified, everything is flattened, everything wants closure. If capable systems reinforce that by making shallow understanding even easier, where does that end?\n\nSometimes I think about _Idiocracy_, not as a literal prediction, but as a question. What happens when the shallow path becomes the easiest path, the fastest path, and eventually the rewarded path?\n\n## What happens when nothing new gets made\n\n![A bucket lowered into a dry well](/images/posts/offloaded-mind-empty-well-dark-transparent.webp)\n\nAnother thing I keep wondering about is the systems themselves.\n\nIf these models need fresh data to improve, what happens if fewer people keep producing new work? What happens if people stop writing open source software in public and start closing more of it because the bargain has changed? 
What happens if more and more of the internet becomes recycled knowledge, synthetic output, and repeated summaries of old material?\n\nThen the question becomes: who builds better systems? Who pushes things forward? Who actually adds something new instead of only recombining what already exists?\n\nI am not claiming I know the answer. I am saying it feels like a question worth sitting with.\n\n## Curiosity is not the main problem\n\n![A wide-open eye looking through a magnifying glass](/images/posts/offloaded-mind-curious-eye-dark-transparent.webp)\n\nI do not actually think the main problem is human curiosity. I think plenty of people still want to learn. I do. I still want to understand things on a deeper level. I read philosophy. I read Stoicism. I like going deep. I do not think the human desire to learn has disappeared.\n\nThe issue, to me, is pressure from the other direction.\n\n## The pressure from the other direction\n\n![A person being squeezed between two large clock hands](/images/posts/offloaded-mind-clock-pressure-dark-transparent.webp)\n\nIf we train each other that writing code or solving complex problems is now just a prompt or two, maybe sent to ten different models at the same time, and if organizations start expecting an order of magnitude more output per day than before, then where does the time for understanding come from?\n\nHow do people ship so much stuff and still remain knowledgeable about what they are doing? How do you generate so much code, or product, or output, and still have time to understand the consequences of it?\n\nThere is also an ownership problem here. People on the internet like to ask who is responsible when generated code fails. To me the answer is simple. The person who ships it is responsible. If you use AI, you are still responsible for both the good and the bad. 
That makes deep understanding even more important, not less important.\n\nBecause if AI becomes the norm, and if that speed becomes the expectation, then you end up with massive pressure on programmers and other professionals. You have to produce more, but you also have to understand more. And the people setting those expectations will often be the same people who do not understand how long it takes to really understand a complex system in the first place.\n\n## Juniors, axioms, and the hard part\n\nThere is a huge discussion right now about junior employees. The market is rough for them, obviously. But I think the deeper issue is older than that.\n\nA junior person in any field still has to learn the axioms of the profession. Programming, law, accounting, math, whatever it is. Of course a junior programmer today will have a very different path than we did in the 1990s or 2000s. That is not the problem.\n\nThe real question is whether they will really understand the consequences of their code, or of their product, and how much they will be able to reason about it without constantly falling back to AI.\n\nThere is nothing wrong with AI. AI can be very accurate. But if you do not have someone in the room who really understands the underlying thing, who is there to confirm what is true and what is false? If nobody understands the axioms, and everything becomes prediction on top of prediction, then what is the difference between a person with a formal title and anyone else who can prompt well?\n\nThat is one of the hardest questions in this whole thing for me.\n\n## The open edge\n\n![A figure standing at an open doorway looking out into empty space](/images/posts/offloaded-mind-open-door-dark-transparent.webp)\n\nI do not know what humanity looks like in five or ten years. I do not think anybody really knows. That is what makes all of this both exciting and intimidating.\n\nWhat I worry about is not only whether we lose jobs or whether the market changes. 
I worry about what happens if we normalize a way of working where output stays high, but understanding gets thinner. I worry about what happens if companies keep taking the speed but do not create the time for people to actually understand what they are building.\n\nMaybe some kind of self-correction happens. Maybe expectations and reality collide hard enough that a new balance gets forced on us. Maybe companies push too far, ask for too much output, put understanding on the bottom shelf, and then reality pushes back through failures, bad decisions, lost clients, broken trust, and errors that are simply too expensive to keep making.\n\nIn simple words, maybe over time we discover that moving faster and faster eventually becomes too fast. Not because people suddenly become wiser, but because systems break, customers leave, and businesses learn the hard way that output without understanding has a cost.\n\nTo me that is the open edge of the problem. The tools are not going away. The question is whether we build workplaces, expectations, and habits that still leave room for depth, or whether depth becomes something you are expected to pursue after hours, on your own, like a hobby.\n\nI also think we owe something to ourselves here. As leaders, as individuals, no matter where we are, we owe ourselves the discipline to stay pragmatic when it comes to knowledge. We still need to read books. We need to resist the urge to always have things summarized and handed to us. We still need to think, to argue, to discuss, and to evolve our thinking beyond the surface.\n\nIn simple words, almost like a junkie has to refuse another shot, we sometimes have to refuse the temptation to offload everything just because we can. These tools are amazing. That is exactly why the temptation is so strong.\n\nNo matter what the future brings, I still believe the people who matter most will be the ones who stay curious. 
The ones who are still willing to run the marathon, finish the book, and learn how the things they build actually work.\n\nI hope it is the first one. I really do. I am just not sure it will be.\n\n<Resources links={[\n  { title: \"The Coming Wave by Mustafa Suleyman\", url: \"https://the-coming-wave.com/\", note: \"On artificial capable intelligence and what comes next\" },\n  { title: \"Wispr Flow\", url: \"https://wisprflow.ai/\", note: \"Voice-to-text dictation tool used to write this piece\" },\n]} />",
      "date_published": "2026-03-22T14:00:00.000Z",
      "tags": [
        "Engineering"
      ],
      "image": "https://aleksandar.xyz/og/2026-03-22-the-offloaded-mind.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-03-21-we-are-the-culture",
      "url": "https://aleksandar.xyz/blog/2026-03-21-we-are-the-culture",
      "title": "We Are the Culture",
      "summary": "Culture is not what a company says. It is what people repeat, what leaders reward, and what everyone learns is normal.",
      "content_text": "<Tldr>Culture is not what a company says. It's what people repeat, what leaders reward, and what everyone learns is normal. Individual behavior shapes the room, but leadership shapes the rules of the room. You don't \"have\" a culture. You make one, every day.</Tldr>\n\nEvery morning my kid and I bike to school together. Along the way I say good morning to every parent, every kid, every teacher. I hold doors. I move out of the way. Not because someone told me to. Because that's how you shape the space around you.\n\nThat's culture.\n\nI've been living in Denmark for around ten years, and one thing I've come to appreciate is how much small everyday gestures matter. On the street, on the bike lane, at school, people smile, say good morning, hold the door, make room. Small things, but they add up. They create a loop. One small act makes the next one more likely.\n\nThat is part of culture too. Good habits spread. Good acts cascade. And this is true almost anywhere: on the street, at work, at home, in a family, in a company. If you show up in a kind and constructive way, you make that behavior more likely in other people too. We do not exist in isolation. We live among other people, and the mood of the room belongs to all of us.\n\nOf course, nobody is like this all the time. I'm not. Sometimes we're tired or simply not up for much. But I still think we should aim for it. Not because we owe the world a performance, but because the way we show up affects other people, and they affect it in return.\n\nWork is no different. A company is just another place where people teach each other what kind of behavior is normal.\n\n## How culture gets made\n\n![A hand holding a door open for someone walking through](/images/posts/culture-how-it-gets-made-dark-transparent.webp)\n\nCulture is one of the most discussed words in the startup world, and one of the least understood. 
Most people talk about it as if it were a company trait, something that exists independently of the people inside it.\n\nIt isn't.\n\nCulture is the accumulation of repeated behavior. The small choices people make when no one is watching. Whether they say good morning. Whether they help when it costs them nothing. Whether they tell the truth when it's uncomfortable. Over time those choices become expectations, expectations become norms, and norms become culture.\n\nSo yes, individual behavior matters. A lot. Doing your job well is baseline. Writing clean code or keeping clean books is your job, not your culture. Culture is everything around the work: how you treat people and whether you make the room lighter or heavier when you walk in.\n\nBut that is only half the story.\n\nCulture is also what a group rewards, tolerates, and protects. If people who hoard information get promoted, that is culture. If someone can behave badly as long as they perform, that is culture too.\n\nYou do not \"have\" a culture. You make one, every day, individually and collectively.\n\n## The company makes it too\n\n![A person alone at a desk with TRUST and VALUES written on the wall behind them](/images/posts/culture-company-makes-it-too-dark-transparent.webp)\n\nThis is also where companies get it wrong.\n\nIf you want openness but punish honesty, that is culture. If you say people matter but keep rewarding the ones who damage trust because they perform, that is also culture.\n\nCompanies love talking about culture as if it were a vibe problem. Usually, it's an incentive problem. Or a leadership problem. Often both. People learn very quickly what actually matters by watching who gets promoted, who gets protected, and what behavior gets excused under pressure.\n\nThat is why culture cannot be reduced to individual behavior. We all shape the room, but the company shapes the rules of the room. 
It tells people what is safe, what is rewarded, and what will be ignored.\n\nYou cannot ask people to be generous in a system that rewards selfishness. You cannot ask people to speak up in a system that punishes candor. If the company itself keeps teaching the wrong lesson, calling it a \"culture issue\" is dishonest.\n\n## Professional enough to disagree\n\n![Two people in direct conversation, one gesturing openly](/images/posts/culture-professional-disagree-dark-transparent.webp)\n\nThere is another part of culture people avoid talking about: professionalism.\n\nEven in imperfect systems, individuals are still responsible for how they handle tension. No normal person enjoys conflict. I don't. Most people don't. But professionalism means raising concerns directly, respectfully, and in the room where something can actually change.\n\nWhat happens instead is passive aggression. People avoid the conversation, then move the conflict somewhere safer: gossip, resentment, sarcasm, complaints over beers, side channels. None of that solves anything. It teaches people that honesty is dangerous and politics is safer. And over time, those behaviors become the culture everyone claims to dislike.\n\nBad cultures can make directness harder. But avoided conflict does not disappear. It becomes culture.\n\nHealthy culture requires both institutional safety and individual courage.\n\n## What leadership can do\n\n![A team gathered around a table with one person standing apart, watching](/images/posts/culture-what-leadership-can-do-dark-transparent.webp)\n\nIndividual behavior does a lot of the daily work, but leadership shapes the conditions. Leaders decide what gets rewarded, what gets ignored, what gets confronted, and who feels safe to speak. They do not create culture with slides. They create it with consequences.\n\nOnce trust exists, one of the best things leaders can do is stop overmanaging. Give people room to take initiative, solve problems directly, and act like owners. 
But autonomy only works when standards are clear and people feel protected.\n\nGood leaders do both. They get out of the way when they should, and step in when they must. They refuse to reward behavior that corrodes trust, no matter how valuable the person seems.\n\nYou cannot ask your team to be open and direct if you are political or afraid of hard conversations yourself. People learn culture less from what leaders say than from what leaders allow.\n\nSo yes, we are the culture. But that `we` includes leadership, incentives, structure, and consequences, not just individual goodwill.\n\nCulture is not what you expect. It is what people learn will happen here.\n\nIt is what you repeat, reward, tolerate, and protect.",
      "date_published": "2026-03-21T12:00:00.000Z",
      "tags": [
        "Leadership"
      ],
      "image": "https://aleksandar.xyz/og/2026-03-21-we-are-the-culture.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-03-19-the-truth-about-go-to-market-strategy",
      "url": "https://aleksandar.xyz/blog/2026-03-19-the-truth-about-go-to-market-strategy",
      "title": "Make $5",
      "summary": "Takeaway from a Copenhagen founder panel: early go-to-market is less about playbooks and more about finding the first customer willing to pay.",
      "content_text": "import Panel from \"../../components/Panel.astro\";\nimport Resources from \"../../components/Resources.astro\";\n\n<Tldr>Most go-to-market advice is non-transferable. What worked for one founder has nothing to do with what will work for you. The only thing the founders at this panel agreed on: find one customer willing to pay you, learn from that, and keep going. GTM strategy is just another way of saying perseverance.</Tldr>\n\nI just came back from [Founders Circle: From Launch to Traction. Go-To-Market Lessons from Founders](https://luma.com/heqfvt6h?tk=dJv2HC), a meetup at the [Corti](https://www.corti.ai/) office here in Copenhagen. I've recently started my own company, and I've been trying to semi-commercialize products for a long time, so when I saw a GTM meetup it felt like the right moment to get out. Good excuse to reconnect with some ex-colleagues from [Airtame](https://airtame.com/) too.\n\nThree founders on stage, moderated by my former colleague Simon:\n\n<Panel\n  panelists={[\n    {\n      name: \"Tony Beltramelli\",\n      role: \"Founder & CEO\",\n      company: \"Uizard (acquired by Miro)\",\n    },\n    { name: \"Dennis Green-Lieber\", role: \"Founder & CEO\", company: \"Propane\" },\n    { name: \"Tegan Spinner\", role: \"Founder & CEO\", company: \"Worthmore\" },\n  ]}\n  moderator=\"Simon Hansen, CPO at Airtame\"\n/>\n\n## Why GTM advice breaks down\n\nListening to all three founders, it became very clear why so much go-to-market advice breaks down, especially at the earliest stage.\n\nWhat worked for one founder often had nothing to do with what worked for another. And it goes further than that. The same strategy that built one company might completely break another. What looks like a strategy in hindsight often includes timing, repetition, brute force, and some luck. 
The common thread was the ability to keep going long enough to learn what actually worked for their specific market.\n\nThat tells you something about entrepreneurship: it isn't for everyone.\n\nEvery playbook, every manual, every YouTube video, every AI-generated answer about GTM starts to look much weaker once you put a few different founders in the same room. You realize how much of it depends on what you're building, who you're selling to, and when you happen to enter the market.\n\n## What they did agree on\n\nFirst, know your user group. Find the niche your product is built for and sell to them as quickly as possible. Don't be shy about asking for money.\n\nTony shared a story from the early [Uizard](https://uizard.io/) days about pricing. They spent a lot of time gathering data and building graphs and charts to find the right pricing model. What they did in the end could have been a five-minute exercise: just check what competitors are charging and position yourself around that number. Pricing is often not rocket science.\n\nOne of the other founders took it further. If you're a very early-stage founder, your go-to-market strategy can be as simple as: \"I want to make $5.\" Find one customer willing to pay you. The number is arbitrary. It could be $5, it could be $500. The point is to prove that somebody will exchange money for what you're making. When that happens, find a similar person and see if they'll buy too.\n\nAt that stage, you don't need a full go-to-market machine yet. You need one customer. Once you have that, you find the next one. That's how the real strategy starts.\n\nThere was also a lot of discussion about what I'd call social selling, even though nobody used that term. People follow personalities. They buy from a person, not a brand. Posting on LinkedIn, building a community, organizing events in person. 
All panelists agreed that building your physical network, showing up at events, appearing next to other names in your industry, gets you out there more than any strategy deck.\n\nAnd then there was value proposition. One of the founders talked about selling the product and later discovering that customers were using features the founder hadn't checked in months. They'd forgotten those features existed. Customers wanted things nobody planned for. Knowing what you actually solve, and for whom, that kept coming up.\n\n## Advice that doesn't travel\n\nBeyond those points, most of the discussion was specific to each founder's industry and timing.\n\nTony from Uizard had a good example. They were one of the first AI design tools on the market, years before the current LLM wave. They'd been positioning themselves in SEO as an \"AI design tool\" for a long time. When the AI hype hit, they were suddenly dominating search rankings across every related category. His investors had actually advised against the SEO investment. Simon asked the panel about their biggest failures, and Tony turned his into a win: \"We invested into SEO. A lot of investors thought that was wrong. But when AI became a thing, it turned into a massive win that brought a lot of traffic and users.\"\n\nIt's a great story. But it's his story. It happened because of one product, at one moment, with one stroke of luck. You can't copy it.\n\nMost GTM advice is non-transferable. What worked for a design tool acquired by Miro has nothing to do with what will work for your B2B SaaS, your marketplace, or your consulting business.\n\nPeople hear these stories and treat them as recipes. They're not. They're survival stories. And survivors often can't tell you exactly why they survived. They just did, then reverse-engineered a narrative around it.\n\n## The real lesson\n\nGo-to-market strategy is just another way of saying perseverance. There's no playbook. 
The founders who make it are the ones who can allow themselves to fail enough times. Most people can't. That's why 90% of startups fail.\n\nFind your person, make $5, and keep going.\n\n<Resources links={[\n  { title: \"Founders Circle: Go-To-Market Lessons\", url: \"https://luma.com/heqfvt6h?tk=dJv2HC\", note: \"The Copenhagen meetup on Luma\" },\n  { title: \"Corti\", url: \"https://www.corti.ai/\", note: \"AI for patient safety, hosted the event\" },\n  { title: \"Airtame\", url: \"https://airtame.com/\", note: \"Wireless screen sharing for meetings and classrooms\" },\n  { title: \"Uizard\", url: \"https://uizard.io/\", note: \"AI-powered design tool, acquired by Miro\" },\n]} />",
      "date_published": "2026-03-19T12:00:00.000Z",
      "tags": [
        "Product"
      ],
      "image": "https://aleksandar.xyz/og/2026-03-19-the-truth-about-go-to-market-strategy.png"
    },
    {
      "id": "https://aleksandar.xyz/blog/2026-03-04-marty-we-need-a-new-book",
      "url": "https://aleksandar.xyz/blog/2026-03-04-marty-we-need-a-new-book",
      "title": "From Inspired to expired (Marty, we need a new book)",
      "content_text": "import Resources from \"../../components/Resources.astro\";\n\n<Tldr>Inspired was written for a world where code was the bottleneck. That world is gone. The bottleneck moved to product discovery and decision-making, and engineers who only write code are no longer enough. We need product engineers, blurred role boundaries, and organizations willing to let the best argument win regardless of title.</Tldr>\n\nFor many years now, and for many of us, Marty Cagan's book [Inspired](https://www.svpg.com/books/inspired-how-to-create-tech-products-customers-love-2nd-edition/) has been a guiding star when it comes to product development.\n\nThe idea is heavily centered around [dual track agile](https://www.svpg.com/dual-track-agile/) and expert product teams that consist of a tech lead, PM, designer and a couple of software engineers. In essence, while developers were writing the code, the rest of the team would spend time on research, defining the problem, and optimizing the process around what used to be the most expensive part of building digital products, writing the actual code.\n\n![Dual Track Agile then vs the new reality now](/images/posts/dual-track-then-vs-now.svg)\n\nInspired was written for a world where code was the bottleneck and product teams existed to feed the engineering machine. That world is gone.\n\n## The bottleneck moved\n\nThis shift didn't start with AI. If we go back 10 to 15 years, software engineers were rare, very technical, and obsessed with code quality, performance, patterns, testing. Then JavaScript took the whole landscape by storm. Suddenly, with technologies like Electron, two or three people could ship a cross-platform app that used to require 10 to 15 engineers working in C++ or Qt. Sure, you'd get a front-end app that weighs 100 megabytes, but businesses were fine with that because they could ship faster with fewer people.\n\nThat was the first move from engineering-centric thinking to product-centric thinking. 
Quality declined, but speed went up, and speed won. The trend was already in motion long before AI showed up.\n\nNow, with AI, that same shift went into overdrive. As always happens, once you optimize one thing, the [bottleneck moves elsewhere](https://www.squid-club.com/blog/the-product-velocity-paradox-when-your-engineers-outrun-your-product-team).\n\nWe've reached a point where, as engineers, we can output way more than we can agree on. Product discovery, \"what should we build, and even more importantly, when\", became the new bottleneck, a new frontier that needs optimization.\n\nThings are moving really fast. The future is literally now, not even tomorrow. This isn't about values or morality, about whether AI is good or bad. I hear opinions from every bracket and ideology, and they are all equally right or wrong. And equally meaningless. The core bottleneck at this point is people's ability to adjust, accept the new reality and embrace it.\n\n## The new engineer\n\nAs I've discussed in my video [Death of the Software Engineer (and the Rise of the Product Engineer)](https://www.youtube.com/watch?v=mGb9vtWSQxo), we don't only need to optimize the discovery process. Software engineers also have to understand that they are no longer valuable just because they can write code.\n\nWhen I'm hiring people for my team, I don't look at how many frameworks they know. I look at how much they care about the product. What was the outcome for the customer? For the business? Did we make money or lose it? Was this a waste of time or not? That's what matters now.\n\nAny AI model today can write an API. Let's not pretend that's where our value is. If a business has to make compromises, and they always do, they will skip premature optimization every single time. Nobody is going to write perfect code for a billion users when they have none. 
The value is shifting towards being semi-PMs, semi-designers, people who can navigate the full picture.\n\nThe value of engineers hasn't decreased. If anything, it has increased. But only as long as we understand what our new roles are, and what they no longer are.\n\n## The new workflow\n\nThe fact is, we can build products so much faster, which means we need new workflows and new ways to validate them.\n\nWe used to spend months in waterfall or agile rituals, researching every edge case, trying to figure out every potential thing that can go wrong before writing a single line of code. That's over. You no longer need to spend two months building something to see whether it works. You can build ten things in two days, throw away seven the same day, then test the other three with your customers.\n\nThe quality of that code can be amazing or absolutely terrible, and it makes no difference. Once you know that what you're building is what your customers actually want, your engineers will refactor and make that code production-ready in hours, if not minutes.\n\n## Blur the lines\n\nSpeed alone isn't enough. We also need to rethink who does what.\n\nAs engineers, our job is no longer just to build the product. It's to build the product in a way that enables everyone on the team to contribute directly. PMs, designers, even stakeholders. Not by writing code, but by prompting their way into things.\n\nThis is already happening. Think about component libraries and tools like Storybook. If we break our UI into well-isolated, self-contained libraries, a designer can maintain and iterate on visual components without ever worrying about the underlying codebase. They prompt, AI writes the code, engineers review it. The designer stays in their domain, the code stays clean, and the whole team ships faster.\n\nThe same applies to discovery. 
Tools like Gemini Deep Research, NotebookLM, MCPs and countless other AI tools are making it possible for non-technical team members to do research, synthesize data, and arrive at product decisions that used to require weeks of cross-functional meetings. The entire team becomes more autonomous.\n\nThe boundaries between roles are dissolving. Everyone on the team can now touch parts of the product that used to be locked behind technical gatekeeping. Our job as engineers is to architect for that, to build systems that are open enough for the whole team to move through, not just us.\n\n## The harder problem\n\nEngineers changing is only half of it. Organizations have to change too, and that's the harder problem.\n\nIf your company still operates with rigid, hardcoded views on who does what, none of this works. If an engineer can't challenge a PM's assumptions because of hierarchy, if a designer can't push back on a product decision because \"that's not their role\", you're stuck in the old model no matter how good your AI tools are.\n\nThis requires a culture where the best argument wins, not the highest title. Where an engineer can say \"I don't think we should build this\" and be taken seriously. Where a designer can contribute to the product backlog, not just the Figma file.\n\nYes, this makes things more complicated. Clear responsibility becomes harder to define when everyone can touch everything. Team dynamics get messier. That's just where we are. The alternative is pretending the old boundaries still make sense, and they don't.\n\nWe need a new playbook, one that acknowledges that the hardest part of building products is no longer writing the code, but figuring out what to write and why. 
And that figuring it out is no longer one person's job.\n\nMarty, we need a new book.\n\n<Resources links={[\n  { title: \"Inspired by Marty Cagan\", url: \"https://www.svpg.com/books/inspired-how-to-create-tech-products-customers-love-2nd-edition/\", note: \"The book that shaped product development for a generation\" },\n  { title: \"Dual Track Agile\", url: \"https://www.svpg.com/dual-track-agile/\", note: \"SVPG's original framing of discovery and delivery\" },\n  { title: \"The Product Velocity Paradox\", url: \"https://www.squid-club.com/blog/the-product-velocity-paradox-when-your-engineers-outrun-your-product-team\", note: \"When engineers outrun the product team\" },\n  { title: \"Death of the Software Engineer (video)\", url: \"https://www.youtube.com/watch?v=mGb9vtWSQxo\", note: \"My talk on the rise of the product engineer\" },\n]} />",
      "date_published": "2026-03-04T13:05:34.000Z",
      "tags": [
        "Product",
        "Engineering"
      ],
      "image": "https://aleksandar.xyz/og/2026-03-04-marty-we-need-a-new-book.png"
    }
  ]
}