Communicating ideas and exploring them quickly
While the term vibe coding has been controversial, I do think the sentiment hints at a way you might want to work with these tools. Leaving aside what exactly is meant by the term, the way I would like to work is this: I have a fragment of an idea, I communicate that idea to my computer, and it shows me a mostly working version of it.
An idea can be about design, UX, system design, code design, or really anything I can think of. The thing with ideas is that most of them need iteration. It is rare for an initial idea to be the final one, and most ideas should be thrown away.
Nothing about this has really changed since before AI. It's just that pre-AI, an idea was much more expensive to explore: you had to place a larger bet on it up front, given the time involved.
Models can't tell you what's good
What’s good is up to us. As you might have realised, these tools think most of your ideas are good ideas, and they are not… They can help you explore ideas you may not have considered, but in my experience they can’t be trusted to judge what is good.
Quality still depends on us
When it comes to quality, that’s up to us too. The model may produce something that looks visually correct but lacks the architecture needed for our maintainability goals. This is where we can use our does-this-look-good meter to tweak and communicate with the computer until the quality is where we want it.
Switching approaches and models
We might then reach a stage where we know what we want done, but after many attempts the model just isn’t outputting what we were aiming for. That’s where some knowledge about different models and their strengths and weaknesses is helpful: switching to a different model might get you the result you are looking for. The editor still exists too, so if you need to take over to unblock your coding agent, that option is always available. Being flexible in your approach to whatever you are working on will give you the best results.
Non-determinism means no one right approach
I think it's too early in the world of AI coding to land on exact practices for getting the best results. We are also talking about non-deterministic systems working on dynamic problems. Even two people using identical prompts and starting points are going to get different results.
Guiding the model with skills and communication
What we can land on now for the best results is leveraging your own technical skills and communication skills (along with some knowledge about the models you are using) to guide the models to achieve the result you are looking for.
Don’t look for intent, look for output
While I do think these models are starting to show some likeness to intelligence (or at least advanced pattern matching), we need to dehumanise our expectations of their output: treat them more like output machines and less like team-mates. Don’t waste time asking a model why it did something; erase its memory and move on. The less time you spend questioning the machine about its reasoning, the sooner you will find another approach, prompt, or model that outputs what you are asking for.
Collaboration surfaces
Using Cursor is very much a solo endeavour, and having an AI coding model available seems to have reduced how much we pair on writing code. The rate at which you can build up a massive PR around an idea that’s unaligned with the rest of the team has increased. While there is less sunk cost in generated code, it still exists, and that can be a frustrating experience for you and the teammates who will need to review your code. Communicating your ideas early and often seems to me just as important as it has been in the past, and maybe even more so. Working like this might mean scoping out your idea with a messy AI coding session to explore the pros and cons, then making a Loom or having an old-fashioned meeting to discuss what you have found.
One area I'm very interested in is how PRs are evolving with AI coding, possibly morphing into more of a collaboration space than strictly a place to review someone else's code. With Codex today you can sort of achieve this: several people can comment on a PR, have a conversation, make a decision, and then tag Codex to generate the agreed changes in the cloud. Areas like this are where we might see a real uplift in dev satisfaction, output speed, and quality, and we should keep exploring opportunities like them. The individual process people go through with tools like Cursor is much harder to distil into clear guidelines that will uplift everyone’s experience of working with it and the output it provides.
Looking ahead
Who knows what advances we are going to see in the coming years, but I think communicating ideas and judging the quality of output aren't going anywhere anytime soon. We are only a couple of years into AI coding (and only one into decent coding models), so the more we experiment, the better our outputs will often be. Even if we find a few workflows that work really well, they are going to be eclipsed by the latest model release, or by model degradation (which companies like Anthropic have admitted to). Keeping this experimentation mindset will help us find the next best thing through the avalanche of changes.