Looking at the state of software development, you can see that serverless is starting to go mainstream. As software grows more complex and more integrations are introduced to build dynamic, effective applications, you are bound to need more tooling.
In this article, I want to introduce Architect, a serverless framework that might make you take another look at going serverless. Architect is an opinionated serverless framework focused on reducing the complexity of working with multiple AWS cloud services. It prioritizes speed with fast local development, smart configurable defaults, and flexible infrastructure as code.
When you see the config file you should be able to tell what is going on and what resources are available. Architect supports multiple runtimes, but for the sake of simplicity, we will be using Node.js.
In this introduction to Architect, we will build a REST endpoint that takes user data, converts it into a QR code, and sends it out in an email using SES.
💡 Please be sure that the email address you use for this example is verified with SES.
First, create a new Architect app with the following command, which scaffolds a boilerplate project with an app.arc file.
npm init @architect architect-qr-email-service
cd architect-qr-email-service
Your app.arc file is your configuration file, which is used to create your resources such as Lambdas, databases, and queues, as you can see in the code below. There is not much going on yet, but this file will grow over time.
Architect takes the complexity of CloudFormation, AWS's schema language, and refines it down into a declarative specification. When you look at the app.arc file, you can determine exactly what is going on and which resources were created.
@app
architect-qr-email-service
@http ## http endpoint get
get /
# @aws
# profile default
# region us-west-1
Now that we have a starting point, let’s start creating some services. First, we will need a POST endpoint that takes some user data so we can store it for record-keeping. The name record-creation
will not only be your route, but also the name of your directory, since Architect uses it to determine what to build for each resource.
Configuration File
@app
architect-qr-email-service
@http
post /record-creation
**Folder Structure**
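At this point the project layout looks roughly like the sketch below; the exact HTTP function directory name is generated by Architect, so yours may differ slightly (running npx arc init will create it for you).
app.arc
package.json
src/
  http/
    post-record-creation/
      index.js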
Let’s install Architect’s helper functions so you can see a bit more of the tooling arc offers; they will help us parse the JSON body and more.
npm i @architect/functions -S
Now let’s add starter code that takes user data and returns a friendly message confirming it is running.
let arc = require('@architect/functions')

exports.handler = async function http(req) {
  // parse the incoming request body (handles base64-encoded bodies from API Gateway)
  const body = arc.http.helpers.bodyParser(req)
  if (!body || !body.email || !body.name) {
    return {
      statusCode: 400,
      body: 'Missing user email and/or name'
    }
  }
  console.info({ ...body })
  return {
    statusCode: 200,
    headers: {
      'content-type': 'application/json'
    },
    body: JSON.stringify({
      message: `User Data Received`
    })
  }
}
One of the benefits of using Architect is how simple it is to run serverless locally, since it manages its own sandbox. Just run npm run start
and, if your files and app.arc are correct, a service should start running on http://localhost:3333. Now that that is done, just hit the POST endpoint.
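A minimal local test with curl might look like this (assuming the default sandbox port; the fields match what the handler checks):
curl --request POST \
  --url http://localhost:3333/record-creation \
  --header 'Content-Type: application/json' \
  --data '{ "email": "you@example.com", "name": "Your Name" }'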
The response should look something like this:
So let’s take a step back and look at what we just did: we created an API Gateway locally that points to a directory holding a Lambda function called record-creation.
Now let’s do a bit more with this. We should take advantage of AWS Lambda and utilize one of the best tools the serverless ecosystem has to offer: queues, which in the AWS world means Amazon Simple Queue Service (SQS).
Here is what we will be doing:
Your app.arc configuration file will have to be updated to add two new queues, called notification-handler
and record-handler.
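After adding the @queues pragma, app.arc should look something like this (the @tables and @macros sections come later):
@app
architect-qr-email-service

@http
post /record-creation

@queues
record-handler
notification-handler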
Now that we have updated the configuration file, we will create a similar file structure, but instead of adding to the http directory we will create it under a queues directory. You can do this manually or run npx arc init;
this command reads the app.arc file and creates all the new corresponding files.
If you look at the app.arc file, you will notice that each queue has its own corresponding directory. So the record-handler
directory will handle all record-handler queue events that are published to it.
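A rough sketch of the layout after npx arc init, assuming queue function names map directly to directories under src/queues:
src/
  queues/
    record-handler/
      index.js
    notification-handler/
      index.js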
Let’s modify the record-creation
handler so it publishes to the record-handler
queue.
let arc = require('@architect/functions')

exports.handler = async function http(req) {
  const body = arc.http.helpers.bodyParser(req)
  if (!body || !body.email || !body.name || !body.twitter) {
    console.error(body, 'Missing user email, name, or twitter handle')
    return {
      statusCode: 400,
      body: 'Missing user email, name, and/or twitter handle'
    }
  }
  // publish to the record-handler queue defined in app.arc
  await arc.queues.publish({
    name: 'record-handler',
    payload: { ...body },
  })
  console.info('message published to record-handler')
  return {
    statusCode: 200,
    headers: {
      'content-type': 'application/json'
    },
    body: JSON.stringify({
      message: `User Data Received`
    })
  }
}
Now let’s modify the record handler so it saves the record to DynamoDB and pushes it to a new queue; this function is only responsible for saving the record. You can see we use @architect/functions, which lets us connect to the DynamoDB instance and save events; for more on this, see the arc.codes docs.
let arc = require('@architect/functions')

exports.handler = async function queue(event) {
  console.info('incoming message to queue record-handler', event)
  // arc.tables() returns a client for the tables declared in app.arc
  let client = await arc.tables()
  // handle the incoming event, which can contain multiple records
  await Promise.all(event.Records.map(async record => {
    // save a record for each one and publish a new event
    const parsedBody = JSON.parse(record.body)
    // Math.random() is only a quick demo id; use a proper uuid in real code
    const body = { ...parsedBody, id: Math.random() }
    await client.records.put(body)
    await arc.queues.publish({
      name: 'notification-handler',
      payload: body,
    })
  }))
}
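Note that arc.tables() relies on the records table being declared in app.arc. The @tables section (also part of the full configuration shown later) looks like this:
@tables
records
  email *String
  name **String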
Now that we are publishing to the notification-handler
queue, let’s do a little more: let’s handle the event and send out an email using the aws-sdk, with a QR code of the user’s Twitter handle.
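The qrcode package is not part of the Lambda runtime, so it needs to be installed first; depending on your Architect version and setup, this may belong in the function’s own directory rather than the project root:
npm i qrcode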
let QRCode = require('qrcode')
let AWS = require('aws-sdk');

exports.handler = async function queue(event) {
  try {
    console.info('incoming message to queue notification-handler', event)
    const AwsSes = new AWS.SES({ apiVersion: '2010-12-01' })
    // handle the incoming event, which can contain multiple records
    await Promise.all(event.Records.map(async record => {
      const parsedBody = JSON.parse(record.body)
      const qrcodeString = `https://twitter.com/${parsedBody.twitter}`
      const base64qr = await new Promise((resolve, reject) => {
        // print the generated code to the terminal for debugging purposes
        QRCode.toString(qrcodeString, { type: 'terminal' }, function (err, QRcode) {
          if (err) return console.error('error occurred', err)
          console.info(QRcode)
        })
        // generate the QR code as a base64 data URL to embed in the email
        QRCode.toDataURL(qrcodeString, function (err, code) {
          if (err) return reject(err)
          resolve(code)
        })
      })
      const result = await AwsSes.sendEmail({
        Destination: {
          ToAddresses: [
            parsedBody.email,
          ]
        },
        Message: {
          Body: {
            Html: {
              Charset: "UTF-8",
              Data: `<img src="${base64qr}" alt="${parsedBody.twitter}" />`
            },
            Text: {
              Charset: "UTF-8",
              Data: qrcodeString
            }
          },
          Subject: {
            Data: 'Architect QR Email Service ' + parsedBody.twitter
          }
        },
        Source: parsedBody.email,
      }).promise();
      console.info("Email Sent", result)
    }))
  } catch (error) {
    console.error('Failed to handle notification message', error)
    throw new Error('Failed to handle notification message')
  }
}
Let’s test what we have now locally by running npm run start
and triggering the same REST endpoint as before (note the payload now also needs a twitter field); the result should be a QR code in your terminal and an email.
I would like to note again that the email address you use must be verified in AWS SES for this to work, so sort that out first if you run into any issues.
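If you prefer the CLI to the console, verification can be kicked off with a command like this (AWS then emails a confirmation link to that address):
aws ses verify-email-identity --email-address you@example.com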
The result in your email should be something like the following
One additional thing you will need to do is create a custom AWS policy to give your functions access to the SES service. Go to IAM in your AWS console and create a policy for the SES service, manually adding the JSON policy below. Please name the policy ArcSESPolicy,
as the name needs to match your custom macro later.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ses:*",
      "Resource": "*"
    }
  ]
}
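If you would rather do this from the CLI, something like the following should work, assuming the JSON above is saved as ses-policy.json:
aws iam create-policy --policy-name ArcSESPolicy --policy-document file://ses-policy.json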
A macro extends the functionality of your Architect app with standard CloudFormation. It lets developers add new resources or modify existing ones, extending Architect into the entire AWS ecosystem supported by CloudFormation.
So for us, we will add a new section to app.arc and add a new directory called macro. In that directory, add add-policies.js.
Configuration File
@app
architect-qr-email-service
@http
post /record-creation ## http endpoint that will listen for our rest call
@queues
record-handler ## handles messages published by the record-creation endpoint and saves them
notification-handler ## triggers the email notification
## configuration to create a DynamoDB table
@tables
records
  email *String
  name **String
@macros
add-policies ## modify your CloudFormation during build time to add the SES policy permission
File structure
Code to be added to add-policies.js:
/**
 * Add Managed Policies
 *
 * @param {object} arc - Parsed `app.arc` value
 * @param {object} cfn - Generated CloudFormation template
 * @param {string} stage - Deployment target environment, 'staging' or 'production'
 * @returns {object} Modified CloudFormation template
 */
module.exports = async function macro(arc, cfn) {
  if (cfn.Resources.Role) {
    const newPolicies = [
      {
        PolicyName: 'ArcSESPolicy', // remember, this should match your SES policy
        PolicyDocument: {
          Statement: [
            {
              Effect: 'Allow',
              Action: ['ses:*'],
              Resource: '*',
            },
          ],
        },
      },
    ]
    // append to any existing inline policies on the generated Lambda role
    cfn.Resources.Role.Properties.Policies = [...(cfn.Resources.Role.Properties.Policies || []), ...newPolicies]
  }
  return cfn
}
Let’s deploy this baby to AWS. All you have to do is run npx arc deploy,
which deploys a staging stack for you. This deploy command reads your app.arc configuration file and generates the CloudFormation template used to create all the resources.
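By default this targets the staging environment; when you are ready, the same command takes a production flag:
npx arc deploy --production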
Once that is completed, you can verify all the resources were created.
SQS queues:
Lambdas:
Dynamo table:
API Gateway:
Now that we have verified all the resources, let’s test the URL that your deploy script should have produced. Mine looked something like this: https://ka3ube9vj6.execute-api.us-west-2.amazonaws.com.
With that, I will just trigger the gateway with the same record-creation route as before:
curl --request POST \
--url https://ka3ube9vj6.execute-api.us-west-2.amazonaws.com/record-creation \
--header 'Content-Type: application/json' \
--data '{
"email": "[email protected]",
"name": "Martin Patino",
"twitter": "thisguymartin"
}'
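A successful call should return an HTTP 200 along with the JSON body from the handler:
{ "message": "User Data Received" }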
If everything works correctly, you should get a QR code email from your AWS service, same as before.
🛑 If for some reason you do not get an email, it could be due to an error, possibly a permissions issue. I would recommend that you purge the queue of any dangling messages and debug the issue; otherwise your message will stay stuck there until the issue is resolved. See CloudWatch for logging information.
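Purging can be done from the SQS console, or with the AWS CLI if you grab the queue URL from the console (the URL here is just a placeholder):
aws sqs purge-queue --queue-url https://sqs.us-west-2.amazonaws.com/123456789012/your-queue-name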
Once you are done and no longer want the project in your AWS account, simply run npx arc destroy --app architect-qr-email-service --force,
which should delete all resources and DynamoDB tables.
Thank you for joining me on this serverless journey, and I hope this gave you a small taste of the power of Architect.
Here is the GitHub repo if you need further info or assistance: https://github.com/thisguymartin/architect-qr-email-service
Also published on: https://dev.to/thisguymartin/architect-a-easy-way-to-jump-into-the-serverless-world-285h