Rust + Lambda #1: Project Structure for a Hexagonal Architecture
Most blog posts about using Rust on AWS Lambda give a basic overview of getting Rust up and running on Lambda (I am guilty of this too), but in reality that does not represent a production-ready workflow. Over a series of blog posts I will present my findings from two years of writing Rust on AWS Lambda.
The Project Structure (That I Like)
I've found great satisfaction working with a "hexagonal architecture" (although I am no expert on the topic) when it comes to Rust Lambdas. This particular layout strikes a balance between organization, flexibility, and testability. Here's how it's structured:
Handler Organization:
Each handler has its designated place within its own crate, residing snugly in the bin folder.
These handlers have a specific role: orchestration and dependency injection. They're like the conductors of our application's orchestra.
Core Logic:
In the lib folder, there's a crucial crate named 'domain'. This is where all the magic happens.
'Domain' is home to your business logic, ports, and adapters: the essential components that define your application's functionality.
Service Implementations:
lib isn't just about 'domain'. It also houses the implementations of critical services, including AWS services, tailored to align with the traits defined in 'domain'.
This alignment ensures seamless integration of external services with your core application logic.
Building the Bridge:
Now, here's where it gets interesting. In the bin crates, the AWS service implementations from lib come to life.
They materialize as concrete implementations of your ports and adapters, ready to collaborate with the business logic, almost like a bridge between your application and external services.
├── Cargo.toml
├── bin
│ ├── create
│ │ ├── Cargo.toml
│ │ └── src
│ │ └── main.rs
│ ├── update
│ │ ├── Cargo.toml
│ │ └── src
│ │ └── main.rs
│ └── order
│ ├── Cargo.toml
│ └── src
│ └── main.rs
└── lib
├── domain
│ ├── Cargo.toml
│ └── src
│ ├── storage_adapter.rs
│ ├── message_bus_adapter.rs
│ ├── models.rs
│ └── lib.rs
├── dynamodb_storage_adapter
│ ├── Cargo.toml
│ └── src
│ └── lib.rs
└── eventbridge_message_bus_adapter
├── Cargo.toml
└── src
└── lib.rs
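The root Cargo.toml ties all of these crates together as a Cargo workspace. A minimal sketch (member paths assumed from the tree above) might look like:

```toml
# Cargo.toml (workspace root) - illustrative sketch
[workspace]
resolver = "2"
members = [
    "bin/create",
    "bin/update",
    "bin/order",
    "lib/domain",
    "lib/dynamodb_storage_adapter",
    "lib/eventbridge_message_bus_adapter",
]
```

Each member crate then keeps its own Cargo.toml, with the bin crates depending on 'domain' plus the adapter crates they need.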
Ordering a Pizza
Using the principles described above, let's design a microservice for building and ordering a pizza.
create: the start of the pizza's journey. It creates an empty pizza and returns the unique ID that should be used throughout.
update: updates the pizza's toppings.
order: finalizes the pizza's design and puts the order through to the restaurant.
Pizza Model
Each pizza we order needs a unique ID and the list of toppings that should go on it.
// lib/domain/src/models.rs
pub struct Pizza {
    pub(crate) id: String,
    pub(crate) toppings: Vec<String>,
}
Storage Adapter Trait
Let's say we want to create a trait for storage; in our example we need to be able to create, get, and update. For simplicity, create and update will both use an upsert. The trait might look something like this:
use anyhow::Result;
use async_trait::async_trait;

#[async_trait]
pub trait StorageAdapter {
    async fn get<T>(&self, primary_key: &str) -> Result<Option<T>>
    where
        T: serde::de::DeserializeOwned + Sync + Send;

    async fn set<T>(&self, item: &T) -> Result<()>
    where
        T: serde::Serialize + Sync + Send;
}
Here we have defined the contract that an implementation of StorageAdapter must follow. I usually use the async_trait procedural macro for adapter traits, as they are most likely handling some sort of IO.
This trait places a few constraints on any type T that we want to store: it must be serializable and deserializable using serde, the standard Rust crate for marking something as de/serializable.
We can now update our Pizza model to adhere to these constraints.
// lib/domain/src/models.rs
#[derive(serde::Serialize, serde::Deserialize, Clone)]
pub struct Pizza {
    pub(crate) id: String,
    pub(crate) toppings: Vec<String>,
}
DynamoDB Storage Adapter Implementation
Using DynamoDB's get_item and put_item commands, we can easily map these to the StorageAdapter trait's requirements.
Notice that the serde::de::DeserializeOwned and serde::Serialize constraints allow us to use the serde_dynamo crate to convert a DynamoDB item to and from our type T. Hypothetically, if we were writing an implementation for S3, we could instead use the serde_json crate to write the file as JSON.
// lib/dynamodb_storage_adapter/src/lib.rs
use anyhow::Result;
use async_trait::async_trait;
use aws_sdk_dynamodb::{types::AttributeValue, Client};
use domain::storage_adapter::StorageAdapter;

pub struct DynamoDbStorageAdapter {
    client: Client,
    table_name: String,
}

impl DynamoDbStorageAdapter {
    pub fn new(table_name: String, config: aws_config::SdkConfig) -> Self {
        let client = Client::new(&config);
        Self { client, table_name }
    }
}

#[async_trait]
impl StorageAdapter for DynamoDbStorageAdapter {
    async fn get<T>(&self, primary_key: &str) -> Result<Option<T>>
    where
        T: serde::de::DeserializeOwned + Sync + Send,
    {
        let response = self
            .client
            .get_item()
            .table_name(&self.table_name)
            .key("id", AttributeValue::S(primary_key.to_owned()))
            .send()
            .await?;

        response
            .item()
            .map(|item| serde_dynamo::from_item(item.clone()).map_err(anyhow::Error::from))
            .transpose()
    }

    async fn set<T>(&self, item: &T) -> Result<()>
    where
        T: serde::Serialize + Sync + Send,
    {
        self.client
            .put_item()
            .table_name(&self.table_name)
            .set_item(Some(serde_dynamo::to_item(item)?))
            .send()
            .await?;

        Ok(())
    }
}
Business Logic
Create Pizza
use anyhow::Result;
use nanoid::nanoid;

pub struct CreateContext<'a, T: StorageAdapter> {
    pub storage_adapter: &'a T,
}

pub async fn create_pizza<T>(context: &CreateContext<'_, T>) -> Result<String>
where
    T: StorageAdapter,
{
    let pizza = Pizza {
        id: nanoid!(),
        toppings: Vec::new(),
    };

    // Do business logicy things

    context
        .storage_adapter
        .set(&pizza)
        .await
        .map(|_| pizza.id)
}
Update Pizza
pub struct UpdateContext<'a, T: StorageAdapter> {
    pub storage_adapter: &'a T,
}

pub async fn update_pizza<T>(id: &str, topping: &str, context: &UpdateContext<'_, T>) -> Result<()>
where
    T: StorageAdapter,
{
    let mut pizza: Pizza = context
        .storage_adapter
        .get(id)
        .await?
        .ok_or_else(|| anyhow::anyhow!("No pizza found"))?;

    pizza.toppings.push(topping.to_string());

    // Do business logicy things

    context.storage_adapter.set(&pizza).await
}
Order Pizza
pub struct OrderContext<'a, T, M>
where
    T: StorageAdapter,
    M: MessageBusAdapter,
{
    pub storage_adapter: &'a T,
    pub message_bus_adapter: &'a M,
}

pub async fn order_pizza<T, M>(id: &str, context: &OrderContext<'_, T, M>) -> Result<()>
where
    T: StorageAdapter,
    M: MessageBusAdapter,
{
    let pizza: Pizza = context
        .storage_adapter
        .get(id)
        .await?
        .ok_or_else(|| anyhow::anyhow!("No pizza found"))?;

    // Do business logicy things

    context.message_bus_adapter.send(pizza).await
}
Cool... So what?
Our business logic is completely agnostic of the concrete adapter implementations. This means we can unit test the business logic by providing mock implementations of the adapters, and we can swap our storage and message bus implementations without touching the business logic.
Stay tuned for Part 2 where we will be implementing our mocks and building out the Lambdas.