Tools: From Diagram to Code with Kiro CLI & Claude Opus 4.6


Source: Dev.to

Contents:

- From Picture to Diagram
- From Picture to Diagram - Second Version
- The dual-write problem
- From diagram to code
- From Characteristics to Fitness Functions
- Continue talking to your diagram
- Conclusion

A chalk talk is a conference session with a lot of interactivity: the speakers ask questions to the audience, draw diagrams on a whiteboard, and the audience asks questions back to the speakers. That's one of my preferred session types when I go to AWS re:Invent. Here I chose one I attended in 2025 to explore a few techniques I use with GenAI and architectural diagrams. I also took the opportunity that Claude Opus 4.6 is out and already available within Kiro CLI (an equivalent to Claude Code at AWS) to push this further.

During a chalk talk (as could happen in any architecture session), the AWS architecture is hand drawn. In this particular session, named "Beyond microservices: Event-driven patterns for modern applications", we designed a website backend, and for scalability reasons we wanted to use an EDA (Event-Driven Architecture) workflow to process shipment events (a parcel was shipped, for example). Here is a picture I took of the architecture during the session:

## 1. From Picture to Diagram

First I started kiro-cli, uploaded my photo in the IDE, and chose the Claude Opus 4.6 model.

## The transformation prompt

`Create a diagram from 20251205_085619.jpg`

## The result

I obtained a nice Mermaid diagram in a markdown file, as you can see below.
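To give an idea of what such a file contains, here is a hand-written Mermaid sketch of the v1 flow. This is not Kiro's actual output, and the node labels are my own approximation of the components described in this post:

```mermaid
flowchart LR
    OMS[Order Management] -->|shipment event| Q[SQS queue]
    Q --> FN[Shipping update Lambda]
    FN --> DB[(DynamoDB table)]
    FN --> BUS[EventBridge bus]
    BUS --> SNS[SNS notifications]
    BUS --> EH[Event handler Lambda]
```

Note that in this first version the Lambda has two outgoing arrows, which is exactly what we will revisit below.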
Mermaid is a diagram-as-code notation that allows describing diagrams with text. However, I prefer to use Drawio because I can easily modify my diagram with my mouse using the Drawio extension in Kiro IDE (an IDE based on the VSCode open source project, maintained by AWS).

`Create a drawio diagram from architecture-diagram.md using 2025 icons`

Which gave me this result:

N.B.: I could also ask to directly generate a Drawio diagram from the picture.

## 2. From Picture to Diagram - Second Version

Then during the chalk talk, after some discussions, we made this architecture evolve. So I uploaded another picture and asked the same kind of question, which created another Drawio diagram of the architecture:

`Create a drawio diagram from architecture-diagram-v2.md using 2025 icons`

## Why did we update the initial architecture?

You may ask yourself: why did we change our minds? This is quite common in design and architecture; the first idea that comes to mind is not always the best one, even if you have a lot of people in the room 😊

## Simply ask Kiro why we replaced a part of the diagram

If it's not obvious (and it's not for a lot of people), you can simply ask kiro-cli:

`Why Event Driven Shipping flow has a better architecture in architecture-diagram-v2.drawio than in architecture-diagram-v1.drawio?`

The answer was quite impressive, because that's exactly the reason why we changed our minds during the chalk talk. Here is a chunk of the LLM answer:

## The dual-write problem

So, in brief, in the first architecture the problem is that the Lambda writes to both the EventBridge bus and the DynamoDB table. That's a trap we can easily fall into because technically... it works. However, as you may know, software is not 100% reliable, and it can happen that you send a notification to the customer but (due to an unexpected error) you don't write to the DynamoDB table. That means you will have an inconsistent state in your system: for example, your customer might receive some information that your customer support team doesn't see. That will make things confusing for your users if they have to talk to each other.
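To make that failure mode concrete, here is a minimal Python simulation. The classes and handler names are toy stand-ins for the table, the bus, and the Lambda, not the actual generated code:

```python
class FlakyBus:
    """Toy stand-in for EventBridge: publishing can fail transiently."""
    def __init__(self, fail_times=0):
        self.events = []
        self.fail_times = fail_times

    def put_event(self, event):
        if self.fail_times > 0:
            self.fail_times -= 1
            raise RuntimeError("transient publish error")
        self.events.append(event)


def dual_write_handler(record, table, bus):
    """v1 anti-pattern: two independent writes in one handler."""
    table.append(record)   # the "DynamoDB" write succeeds...
    bus.put_event(record)  # ...but the "EventBridge" write can fail


def stream_pipe(table, bus):
    """v2 stand-in for DynamoDB Stream + EventBridge Pipe: events are
    derived from committed writes, so a failed publish can be retried."""
    for record in table[len(bus.events):]:
        bus.put_event(record)


# v1: a failed publish leaves a permanently inconsistent state.
table, bus = [], FlakyBus(fail_times=1)
try:
    dual_write_handler({"parcel": "A1", "status": "shipped"}, table, bus)
except RuntimeError:
    pass
assert (len(table), len(bus.events)) == (1, 0)  # stored, but never announced

# v2: the same failure is recovered by simply retrying the pipe.
table2, bus2 = [{"parcel": "A1", "status": "shipped"}], FlakyBus(fail_times=1)
try:
    stream_pipe(table2, bus2)
except RuntimeError:
    pass
stream_pipe(table2, bus2)  # retry (Streams + Pipes give you this for free)
assert (len(table2), len(bus2.events)) == (1, 1)  # consistent again
```

The key difference: in v2 the event is derived from what was actually committed, so the two stores cannot permanently diverge.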
That's the "dual-write" problem. In the picture below we have that dual write (the Lambda writes to DynamoDB and EventBridge):

In the picture below we have NO dual write (the Lambda writes to the DynamoDB table only):

## Solving the dual-write problem

The solution is to write the message in one place only, no more. In this case the user will receive a notification only if each step of the process was successful. That's why we added a DynamoDB Stream, and an EventBridge Pipe that can pull messages from the DynamoDB Stream and write to the EventBridge bus (EventBridge Pipes avoid maintaining a Lambda for that).

N.B.: For EDA experts, later in the chalk talk we added a dead-letter queue to store the failing messages.

## From diagram to code

A technique I find very useful is to generate the CDK (Cloud Development Kit) code from my diagrams. The generated CDK code can be executed to build a template that can be deployed to AWS CloudFormation (an Infrastructure as Code engine), which atomically creates all the components on AWS. That's super powerful.

## 1. Create a Kiro steering file

First, I created a steering file (i.e. a rule file) that I call my ProfesionnalTwin.

## 2. Generate code from the diagram

Then I used Kiro CLI and asked:

`architecture-diagram-v2.drawio generate "Event-Driven Shipping Flow" application`

And after a few minutes I got that kind of code.

## 3. Deploy the generated application

I ran a `cdk deploy` and the deployment worked right away.

## 4. Generate an end-to-end test

Testing an EDA workflow is not easy, but I can rely on Kiro CLI for that, so I asked:

`Create a test that will put a message on the sqs queue and verifiy the workflow output`

It generated a test suite that I could execute to verify the workflow was working.

## From Characteristics to Fitness Functions

## Listing architecture characteristics

Architecture has characteristics (security, scalability, ...), so I asked what sort of characteristics Kiro CLI could identify in this architecture:

`What architecture characteristics can I identify in my shipping flow architecture`

## Listing fitness functions

If you want to prevent a drift of your architecture characteristics (scalability reduction, security reduction, ...), you can use what we call "Fitness Functions".
These are static or dynamic tests that verify the compliance of your architecture with its characteristics. That's super important if you want to make your architecture evolve while minimizing risks.

`What fitness functions can I create for these characteristics?`

## Visualize your fitness functions

To better visualize what fitness functions are and where they apply in my architecture, I simply asked Kiro:

`Create a new Drawio diagram from architecture-diagram-v2.drawio with representation of the implemented fitness functions`

The result is an updated diagram with all the fitness functions (focused on scalability only in this case). For example, for the application to be scalable we shouldn't use FIFO queues (they have limited throughput compared to standard queues), and at runtime, message age should stay below 60 seconds.

## Prevent dual-write problem with a fitness function

I asked for an implementation of the previous scalability fitness functions, but I also wanted to focus on our dual-write problem. This fitness function should spot the issue in the future if someone (including me) evolves the architecture and forgets about it.

`What fitness function can I implement to avoid dual-write implementation?`

The result is a test suite containing CDK Assertions. Concretely, the test scans my CloudFormation template and detects whether the Lambda is allowed to write to more than one target service. Therefore, if someone later updates the Lambda with a new target, the test will fail when executed locally or in the CI/CD pipeline.

## Continue talking to your diagram

There are multiple ways you can continue to use your diagram to extract some information or generate other artifacts. Here are a few examples, and I don't claim to be exhaustive here.

## Show Architecture Patterns

Maybe you prefer to work with EIP (Enterprise Integration Patterns), or at least identify these patterns, for better communication, or simply to generalize your architecture in order to use it on another platform.
Here is an example of what you can do:

`Redraw a new shipping flow with Enterprise Integration Patterns`

We see, for example, that the shipping-update Lambda function is a "Message Translator", EventBridge a "Publish/Subscribe Channel", ...

## Show Architecture Constructs

Having a monolithic piece of code to describe your architecture can have some advantages, but for reuse purposes you might want to split it into modules, called CDK Constructs in the CDK vocabulary. You can ask Kiro for some help with this:

`Create a new drawio diagram from architecture-diagram-v2.drawio with well defined constructs`

Then you can adjust this diagram to your needs and ask Kiro to update your CDK code from your diagram.

## Compute Architecture Cost

Another way you can use your diagram is to estimate the cost of your architecture. In order to be accurate, I installed the AWS Pricing MCP Server.

`architecture-diagram-v2.drawio how much would this shipping flow cost with 10000 events per hour`

And I got a report with the cost breakdown.

## Conclusion

Chalk talks are really interesting and fun. The one I'm talking about in this blog post was an opportunity to learn more about AWS services and Event-Driven Architecture (EDA). It was also an opportunity to explore the Kiro CLI capabilities with pictures and diagrams, especially with the new Claude Opus 4.6 (I guess we can get quite similar results with Claude Sonnet 4.5, by the way). That led me to explore more techniques, like asking why one architecture is better than another, then listing architecture characteristics, generating fitness functions, and preventing future architecture flaws (the dual-write problem in this case). When I see all these possibilities, I really feel that we'll be able to build more sophisticated software more easily, and software architect jobs will get more and more exciting!
The chunk of the LLM answer comparing v1 and v2:

```
The Lambda only writes to DynamoDB. The event propagation happens automatically
through the stream and pipe. Here's why that matters across several
architectural characteristics:

1. Data Consistency (the biggest win)
   - v1: If the Lambda writes to DynamoDB but the put_events call to EventBridge
     fails (timeout, throttle, transient error), you have data in DynamoDB but
     no event was published. Consumers never learn about the shipping update.
     The system is in an inconsistent state.
   - v2: The Lambda does a single write. The DynamoDB Stream guarantees that
     every successful write produces a stream record. EventBridge Pipes reads
     that stream. There's no dual-write — the event is derived from the data,
     not published separately.
...
```

The steering file:

```
---
inclusion: always
---
<!------------------------------------------------------------------------------------
Add rules to this file or a short description and have Kiro refine them for you.
Learn about inclusion modes: https://kiro.dev/docs/steering/#inclusion-modes
-------------------------------------------------------------------------------------->
- I work with python 3.10
- I work with CDK V2
- I work in AWS eu-west-1 region
```

The architecture characteristics identified by Kiro:

```
Architecture characteristics I can identify:

1. Scalability / Elasticity
   - All components are serverless and scale independently (SQS, Lambda,
     DynamoDB on-demand, EventBridge)
   - No provisioned capacity or concurrency caps anywhere

2. Loose Coupling
   - Components communicate through events, not direct calls
   - SQS decouples the upstream order management from the shipping-update Lambda
   - EventBridge bus decouples the DynamoDB state change from downstream
     consumers (SNS, event-handler Lambda)
   - Adding new consumers requires only a new EventBridge rule — no changes
     to producers
...
```

The suggested fitness functions:

```
Based on the characteristics I identified, here are fitness functions you could
create beyond the scalability ones you already have:

## Loose Coupling — Static
- **EventBridge bus exists between producers and consumers** — ensures no
  direct Lambda-to-Lambda invocations
- **No Lambda environment variables referencing other Lambda ARNs** — proves
  components don't call each other directly
- **EventBridge rule count matches expected consumer count** — detects
  accidental tight coupling through hardcoded targets
...
```

The dual-write fitness function (static, CDK Assertions): Lambda functions must have at most one write permission. If a Lambda has `grant_write_data` to DynamoDB AND `grant_put_events_to` on EventBridge, that's a dual-write risk.

```python
def test_no_lambda_dual_write():
    """Lambdas must write to at most one destination — use streams/pipes for fan-out."""
    template = _template()
    resources = template.find_resources("AWS::IAM::Policy")
    for logical_id, resource in resources.items():
        statements = resource["Properties"]["PolicyDocument"]["Statement"]
        write_actions = set()
        for stmt in statements:
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            for action in actions:
                if any(w in action for w in [
                    "dynamodb:PutItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:BatchWriteItem",
                    "events:PutEvents",
                    "sqs:SendMessage",
                    "sns:Publish",
                ]):
                    write_actions.add(action.split(":")[0])
        assert len(write_actions) <= 1, (
            f"{logical_id} has write permissions to {write_actions} "
            f"— dual-write risk. Use DynamoDB Streams + EventBridge Pipes instead."
        )
```