The CLI (Command Line Interface) is something I have been using a lot, and I prefer it to a GUI (Graphical User Interface) because it is far more flexible and powerful.
For decades, working in a terminal has meant having a kind of "conversation" with your computer: you type commands (with the AWS CLI, for example) and it responds. So much so that non-technical people watching me work sometimes have the feeling I am talking to the machine. However, a traditional CLI is quite limited if you don't give it exactly the right inputs, and there is very little intelligence in its answers.
But today, with Amazon Q, you can combine the standard CLI with GenAI, which is super powerful. That's what I discovered while testing the new capabilities of the Amazon Q Developer CLI agent with architecture diagrams.
The Prerequisites
There is already a lot on how to configure Amazon Q Developer desktop (not Q Developer CLI) for diagrams in this blog post: https://dev.to/welcloud-io/from-diagram-to-code-with-amazon-q-developer-2da4
But you can now add a new companion called Amazon Q for command line (Q CLI for simplicity here), which can be installed on Mac or Linux. Here is the procedure you can follow to install it:
https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing.html
N.B.: My installation did not integrate well right away with my VS Code terminal, so I had to use Q CLI in a separate shell window. I believe it's quite a challenge to build a solution that works everywhere, and maybe my environment configuration is not standard.
If you encounter any issues, you can also use this blog post:
https://dev.to/aws/the-essential-guide-to-installing-amazon-q-developer-cli-on-windows-lmh
Once installed, open a terminal, and type:
q chat
Then you are good to go!
The sample application
I always use the same simple feedback application. It records feedback and acknowledges that it has been recorded by sending an email back to the user.
The code of this application is right here:
The draw.io diagram of this application is right here:
Using the code and the diagram, you can reproduce what I explained in this blog post :)
N.B. This is generative AI, and results can be slightly different on your side.
1- Extract diagram from code
My first idea with the Q CLI was to extract an architecture diagram of the application from the code. So, I gave the prompt below to Amazon Q CLI:
create a file containing a mermaid diagram of this application
And not surprisingly (even if I am always surprised it works right away :)), it generates code, asks me to approve it, and creates a markdown (.md) file in my folder containing a mermaid diagram (diagram as code) that I can preview!
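Your result will differ from run to run, but the generated markdown file contains a mermaid code block along these lines (this snippet is my own illustrative sketch, not the exact Q CLI output; all node names are made up):

```mermaid
flowchart LR
    User((User)) -->|submit feedback| Api[API Gateway]
    Api --> Record[Record Feedback Lambda]
    Record --> Table[(DynamoDB table)]
    Record --> Topic[SNS topic]
    Topic -->|confirmation email| User
```

GitHub and many markdown previewers render such a block directly as a boxes-and-arrows diagram.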
2- Update the diagram
Now that I have my application diagram, I would like to change it: instead of sending an email, I want to send an SMS to the user.
So I ask Q CLI:
update this diagram so now I send an sms instead of an email
And it proposes an update, that I can accept or not.
Of course I accepted it by typing 'y' (i.e. yes) in the CLI, and here is the result: it replaced 'Email' with 'SMS' and 'User Email' with 'User Phone'.
3- Generate code from diagram
Now I would like to generate code from a diagram. Even though it's quite common to sketch an architecture diagram before implementing it, the difference now, with GenAI, is that we can ask for the infrastructure and application code to be generated from this diagram.
So I started with my (always the same) draw.io diagram in a folder and I gave Q CLI my (always the same) prompt.
can you generate application from the drawio diagram (I want the code of the lambdas to be written in python and the infrastructure as code with the python cdk v2)
And I obtained a complete folder structure with everything I need to deploy this app!
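For reference, the generated stack looks roughly like the sketch below (a hand-written approximation, assuming aws-cdk-lib v2; resource names and asset paths are made up, not the actual output). Everything sits side by side in one stack, built from "technical" L2 constructs:

```python
from aws_cdk import (
    Stack,
    aws_dynamodb as dynamodb,
    aws_lambda as _lambda,
    aws_sns as sns,
)
from constructs import Construct

class FeedbackAppStack(Stack):
    """One flat stack: every resource is a direct child of the stack."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Table that stores the submitted feedback
        feedback_table = dynamodb.Table(
            self, "FeedbackTable",
            partition_key=dynamodb.Attribute(
                name="id", type=dynamodb.AttributeType.STRING
            ),
        )

        # Topic used to send the confirmation email
        feedback_topic = sns.Topic(self, "FeedbackTopic")

        # Function that records the feedback
        record_feedback = _lambda.Function(
            self, "RecordFeedbackFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="record_feedback.handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # Wire up permissions between the resources
        feedback_table.grant_write_data(record_feedback)
        feedback_topic.grant_publish(record_feedback)
```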
Note that I say "always the same", because this is exactly the prompt I used in a previous blog post (https://dev.to/welcloud-io/from-diagram-to-code-with-amazon-q-developer-2da4) about generating code from diagrams.
But, at the time, that was with Amazon Q Desktop and the VS Code extension, where I used the Q /dev agent or Q @workspace.
What I think is nice here is that Q CLI combines, from my point of view, the advantages of both the /dev agent and @workspace in the VS Code Q chat extension.
Actually, /dev can create or update files in your folder (which Q Desktop + @workspace doesn't do), which reduces copy/paste work.
However, @workspace, which also knows about the files in your folder, streams the response in the chat, so you get an answer much faster than with /dev.
In brief, Q CLI knows about the files in your folder (like /dev and @workspace), it writes/updates files in your folder (no copy/pasting needed), and it splits and streams the response (faster answers with more interaction).
Note that I didn't test what was generated at this point; I assumed it was working. Let's see later if I was too confident 😊
4- Update and Improve my architecture
My architecture diagram is quite simple, and everything is contained in one single CDK app made of CDK Level 2 "technical" constructs like DynamoDB, SNS, ...
The Construct concept is very important in the CDK, and the beauty is that you can create your own constructs (let's say more business-domain-oriented, autonomous constructs).
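To make this concrete, here is a minimal sketch (my own illustration, assuming aws-cdk-lib v2; the class and parameter names are hypothetical) of such a business-oriented construct wrapping the notification part:

```python
from aws_cdk import aws_sns as sns, aws_sns_subscriptions as subscriptions
from constructs import Construct

class FeedbackNotifications(Construct):
    """Domain-oriented construct: the rest of the app only sees
    'notifications', not the SNS details hidden inside."""

    def __init__(self, scope: Construct, construct_id: str, *, user_email: str) -> None:
        super().__init__(scope, construct_id)

        # The topic is the single point of contact with the rest
        # of the architecture
        self.topic = sns.Topic(self, "Topic")

        # How the user is actually notified is an internal detail
        self.topic.add_subscription(subscriptions.EmailSubscription(user_email))
```

Inside a stack you would then instantiate `FeedbackNotifications(self, "FeedbackNotifications", user_email=...)` and grant publish rights on its topic, without the stack knowing how notifications are delivered.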
So I asked Q CLI to redraw my diagram with higher-abstraction constructs (this is where I have the feeling that I can talk to my diagram through Q CLI 😊), and I wrote this prompt:
can you update the drawio diagram file with well defined cdk constructs?
N.B. I discovered later that it's not the best prompt if you want to repeat that. Check the 'Epilogue' section at the end of this blog post to find a better one. I keep this one though, for the sake of the story.
Then it proposed some modifications, and I typed 'y' (yes) to accept them...
... and the result is quite impressive to me!
I obtained a new diagram with the CDK constructs grouped into colored boxes, plus a diagram legend, as you can see below.
Note that I didn't have to ask for the colors and the legend in my prompt, but I found it a very good idea.
And, by the way, when you look at the result, I guess you can better understand what I mean by a CDK construct 😊
But even better...
...Q CLI naturally proposed to change my code to use these CDK constructs, so my code and diagram stay in sync!
Of course I accepted this proposal, got some new code proposed, and accepted that as well.
Does this work when I deploy?
This is an important question to me, and I was worried that, with all the code that had been generated, I would find a lot of errors. After all, I had generated code and changed it without testing it yet (which I wouldn't recommend as a professional, but I wanted to see how far we could go 😊).
So I deployed the CDK code and... I still cannot get used to it... it worked right away!
And when I clicked on the landing page link of my application here is what I got:
Did I ask for all this in my prompt? No, but the diagram contains enough information for Q to understand what the landing page could look like. That's crazy!
The infrastructure code worked right away but, to be honest, with this prompt I always get one little issue: when I click the "Submit feedback" button after the first deployment, the "record feedback" URL is not accessible. The reason is that this URL is missing the API stage (i.e. /prod in my case), so I have one line of code to change in the landing page HTML.
But that's it... Once I change that and submit feedback, I get the message below (again, I didn't have to specify this in my prompt)...
...and the feedback is recorded in my DynamoDB table!
Does it improve the quality of my architecture?
As I previously wrote, CDK constructs can help you build better abstractions of your architecture, but this is not always obvious to do. That's where, I believe, GenAI can give you ideas.
If I look at the structure of my resources from the first generation (with only technical constructs), I get something like this:
If I look at the structure of my resources from the updated generation (with abstracted constructs), I get something more like this:
We can see a difference (even though it's not big): in the second hierarchy, the Lambda functions are grouped together in their own construct, and the other components each live in their own construct.
Of course, it's far from perfect, that could be discussed, but it's a very good start with such a little effort (just one simple update prompt!).
However, we can start to see the power of constructs.
For example, I can see that a "FeedbackNotifications" construct was generated, which isolates this part (let's say this domain) of the architecture into its own construct.
I find this interesting and more explicit than "FeedbackTopic" from the first generation, which is specifically related to Amazon SNS (Simple Notification Service).
So now I can make this Construct evolve independently.
For example, I can find a better way to send the feedback confirmation to my user (while keeping SNS as the point of contact with the rest of the architecture). And I could extend this construct with another type of notification, like an SMS notification with Amazon SNS, or use even more sophisticated services like Amazon SES or Amazon Pinpoint.
I feel like I have (or, more honestly, Q CLI has) abstracted away this notification part and now, as it is decoupled, it can have its own life, be tested independently, etc., as explained in the Q CLI summary 😊
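Sketching how that evolution might look (again my own illustration, assuming aws-cdk-lib v2, not generated code): the construct can grow a second delivery channel while its public surface, the topic, stays unchanged:

```python
from aws_cdk import aws_sns as sns, aws_sns_subscriptions as subscriptions
from constructs import Construct

class FeedbackNotifications(Construct):
    """Same public surface (self.topic), more delivery options inside."""

    def __init__(self, scope: Construct, construct_id: str, *,
                 user_email: str | None = None,
                 user_phone: str | None = None) -> None:
        super().__init__(scope, construct_id)

        # The topic remains the only thing the rest of the app sees
        self.topic = sns.Topic(self, "Topic")

        if user_email:
            self.topic.add_subscription(subscriptions.EmailSubscription(user_email))
        if user_phone:
            # SNS can deliver straight to a phone number as an SMS
            self.topic.add_subscription(subscriptions.SmsSubscription(user_phone))
```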
Conclusion
I could carry on talking to my diagram, and maybe ask for more things. That was possible with /dev or the VS Code Q extension + @workspace, but I feel we can go a step further with Q CLI, and I am keen to carry on experimenting and use it in my everyday job!
The only thing I regret, which is a detail but an important one, is that it does not integrate well with my VS Code terminal yet (probably due to my VS Code & OS configuration), so I have to use an external terminal to run Q CLI. But once this is solved, I feel it will be a terrific tool!
Epilogue
I wanted to write this epilogue, because a few days later I could easily configure my VS Code terminal with Q CLI, and that works well😊
But when I retried my prompts, I didn't always get the same kind of response and behavior. Basically, GenAI is built on probability, and I think that, for my first try with Q CLI, I was lucky 😊
Anyway, I modified one prompt and now I get very similar results each time (even if, again, GenAI is still not deterministic). And by the way, after a lot of trials I haven't exhausted my Builder ID free tier yet. The free tier seems quite generous so far!
So, if you want to try what is in this blog post, you should first ask (and I didn't change that):
can you generate application from the drawio diagram (I want the code of the lambdas to be written in python and the infrastructure as code with the python cdk v2)
...and then ask (this is the prompt I changed):
modify the drawio diagram to split the architecture diagram into well defined cdk constructs (use colors and legend)
That proves that prompt engineering is an important part of working with GenAI!
Here is also a YouTube video that shows this in action 👉https://www.youtube.com/watch?v=D6cYFDoX1Es