With No Moss Co, I’m working with a startup that has a large Hasura deployment. When I joined the project, I realised they didn’t have a way for teams to develop and test apps locally and were instead relying on a single, shared development environment. This resulted in problems like:
Delayed Development Cycle
Increased Debugging Complexity
Reduced Productivity
To make things easier, I introduced LocalStack, which works alongside Hasura so developers can build and test their code right on their machines. If you know Hasura but haven’t set it up locally, here’s a simple guide to get you started:
Hasura provides a powerful solution for creating GraphQL APIs by automatically generating them from your database schema. By drastically reducing the need for manual backend coding, Hasura accelerates the development process, enabling developers to focus on building features and delivering value quickly and efficiently.
Here are some key features and benefits of Hasura:
Automatic API Generation
Real-time GraphQL
Authorization and Permissions
Remote Schemas and Actions
Event Triggers
Data Federation
Admin Console
Extensibility
At a high level, a Hasura deployment is made up of the following components:
Client Apps
Hasura GraphQL Engine
Metadata Store
PostgreSQL Database
Hasura Actions
Event Triggers
A typical request flows through these stages:
Client Requests
GraphQL Engine Processing
Custom Actions Handling
Event Handling
This architecture ensures that Hasura provides a robust, scalable, and real-time GraphQL API layer on top of a PostgreSQL database, with extensibility for integrating additional data sources, custom business logic, and event-driven functionality.
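To make automatic API generation concrete, here is a minimal illustration. Assuming you have tracked a hypothetical users table with id, username, and created_at columns (these names are examples, not from the project), Hasura exposes a query like this without any backend code:
# Hypothetical example: assumes a tracked "users" table with id, username and created_at columns
query RecentUsers {
  users(order_by: { created_at: desc }, limit: 5) {
    id
    username
  }
}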
A local development environment, aka a local dev environment or simply a local environment, refers to a self-contained and isolated setup and configuration of software and tools on your local machine (such as a laptop or desktop computer) to facilitate software development. It allows software developers to efficiently build, test, and debug their applications on their own machines before deploying them to a production environment. It also serves as a replica of the production environment, enabling developers to work on their code in a controlled and familiar setting.
The local environment provides developers with a sandbox-like environment where they can experiment, iterate, and troubleshoot their code without impacting the live production environment. This allows for faster development cycles and reduces the risk of introducing bugs or breaking the application’s existing functionality.
One of the key advantages of using a local development environment is the ability to closely replicate the production environment. This ensures that developers are working with the same configurations, libraries, and dependencies as the final deployment environment, thereby minimising the chances of encountering unexpected issues when the application is deployed to production.
Moreover, a local development environment offers developers greater control and flexibility over their development process. They can easily switch between different versions of software components, experiment with new technologies, and simulate various scenarios to test the robustness and performance of their applications. This level of control allows developers to optimise their code and ensure its compatibility with different operating systems and platforms.
Local development offers speed, efficiency, privacy, and control. It allows developers to work offline, experiment freely, and control their development setup fully. However, it may lack scalability testing and can present collaboration and platform compatibility challenges.
LocalStack is an open-source tool that provides a local development environment for building and testing cloud-native applications. It emulates AWS services, allowing developers to use the same APIs and SDKs as they would in a real AWS environment, but without the need for an actual AWS account or the associated costs. It enables faster feedback loops, offline development, and cost-free experimentation with cloud services.
LocalStack has a couple of prerequisites: you need Docker installed and running on your machine (we also use Docker Compose below).
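Because LocalStack speaks the same wire protocol as AWS, application code usually only needs its endpoint changed. Here’s a minimal sketch using the AWS SDK for JavaScript v3; the SDK choice, the dummy credentials, and the ListBuckets call are illustrative assumptions, not part of this project’s code:
// Minimal sketch: point the AWS SDK v3 at LocalStack instead of real AWS.
// The endpoint and dummy credentials are LocalStack defaults; nothing else changes.
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "us-east-1",
  endpoint: "http://localhost.localstack.cloud:4566", // LocalStack gateway
  credentials: { accessKeyId: "test", secretAccessKey: "test" }, // any values work locally
  forcePathStyle: true, // path-style URLs avoid per-bucket DNS lookups locally
});

async function main() {
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  console.log(Buckets?.map((bucket) => bucket.Name));
}

main().catch(console.error);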
When setting up Hasura GraphQL Engine with Docker and LocalStack, you might organise your project directory structure to keep things clean and manageable. Here’s an example of how you can structure your Hasura project directory:
hasura-project/
├── config.yaml
├── migrations/
│   └── <migration_files>
├── metadata/
│   └── actions.yaml
├── seeds/
│   └── <seed_files>
├── actions/
│   └── <action_files>
├── docker-compose-local.yaml
├── serverless-local.yaml
├── src/
│   └── handlers/
│       └── action_get_user_id_by_username.ts
├── dump/
│   └── dump_and_restore.sh
├── .env
├── .gitignore
└── README.md
config.yaml
migrations/
metadata/
seeds/
actions/
docker-compose-local.yaml
serverless-local.yaml
src/
dump/
.env
.gitignore
Creating a database dump and restoring it using a PostgreSQL URI involves the pg_dump and pg_restore (or psql) tools. In your .env file, create a URI variable PG_DEV_DATABASE_URL in the following format: postgres://username:password@hostname:port/dbname
PG_DEV_DATABASE_URL=postgres://username:password@hostname:port/dbname
The dump_and_restore.sh script inside your dump folder should look like this:
echo "Job started: $(date)"
FILE="db_dump.dump"
# Dump the dev database in custom format
pg_dump --file=$FILE --format=custom $PG_DEV_DATABASE_URL
# Restore the dump into the local postgres database
pg_restore --verbose --clean --no-acl --no-owner --username "postgres" --dbname "postgres" --no-password $FILE
echo "Job finished: $(date)"
Since you’re accessing other resources like Hasura and Postgres from containers as well, your application container should be configured to use LocalStack as its DNS server. Once this is done, the domain name localhost.localstack.cloud will resolve to the LocalStack container, as will all of its subdomains, e.g. the API Gateway default URLs.
To configure your application container:
version: "3.6"
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566" # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559" # external services port range
    environment:
      # LocalStack configuration: https://docs.localstack.cloud/references/configuration/
      - DEBUG=${DEBUG:-0}
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      ls:
        # Set the container IP address in the 10.0.2.0/24 subnet
        ipv4_address: 10.0.2.20
  postgres:
    image: postgres:15
    restart: always
    volumes:
      - ./dump:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_PASSWORD: postgrespassword
      REPLICA_PG_DATABASE_URL: $REPLICA_PG_DATABASE_URL
    dns:
      # Set the DNS server to be the LocalStack container
      - 10.0.2.20
    networks:
      - ls
  graphql-engine:
    image: hasura/graphql-engine:v2.38.0
    ports:
      - "8080:8080"
    restart: always
    environment:
      ## postgres database to store Hasura metadata
      HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## this env var can be used to add the above postgres database to Hasura as a data source. this can be removed/updated based on your needs
      PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to run console offline (i.e load console assets from server instead of CDN)
      # HASURA_GRAPHQL_CONSOLE_ASSETS_DIR: /srv/console-assets
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
      HASURA_GRAPHQL_METADATA_DEFAULTS: '{"backend_configs":{"dataconnector":{"athena":{"uri":"http://data-connector-agent:8081/api/v1/athena"},"mariadb":{"uri":"http://data-connector-agent:8081/api/v1/mariadb"},"mysql8":{"uri":"http://data-connector-agent:8081/api/v1/mysql"},"oracle":{"uri":"http://data-connector-agent:8081/api/v1/oracle"},"snowflake":{"uri":"http://data-connector-agent:8081/api/v1/snowflake"}}}}'
    depends_on:
      data-connector-agent:
        condition: service_healthy
    dns:
      # Set the DNS server to be the LocalStack container
      - 10.0.2.20
    networks:
      - ls
  data-connector-agent:
    image: hasura/graphql-data-connector:v2.38.0
    restart: always
    ports:
      - 8081:8081
    environment:
      QUARKUS_LOG_LEVEL: ERROR # FATAL, ERROR, WARN, INFO, DEBUG, TRACE
      ## https://quarkus.io/guides/opentelemetry#configuration-reference
      QUARKUS_OPENTELEMETRY_ENABLED: "false"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8081/api/v1/athena/health"]
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 5s
volumes:
  db_data:
networks:
  ls:
    ipam:
      config:
        # Specify the subnet range for IP address allocation
        - subnet: 10.0.2.0/24
Name the file docker-compose-local.yaml, put it in your root directory, and bring the stack up:
docker-compose -f docker-compose-local.yaml --env-file .env up
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6235f13c0cd4 hasura/graphql-engine:v2.38.0 "/bin/sh -c '\"${HGE_…" 12 seconds ago Up 5 seconds (health: starting) 0.0.0.0:8080->8080/tcp hasura-events-graphql-engine-1
9dd026ea7f78 postgres:15 "docker-entrypoint.s…" 12 seconds ago Up 11 seconds 5432/tcp hasura-events-postgres-1
b86721219b5c localstack/localstack "docker-entrypoint.sh" 12 seconds ago Up 11 seconds (healthy) 127.0.0.1:4510-4559->4510-4559/tcp, 127.0.0.1:4566->4566/tcp, 5678/tcp localstack-main
77cac6f80ae8 hasura/graphql-data-connector:v2.38.0 "/app/run-java.sh" 12 seconds ago Up 11 seconds (healthy) 5005/tcp, 0.0.0.0:8081->8081/tcp
Now restore the dev database dump into your local Postgres container:
docker exec -it hasura-events-postgres-1 bash /docker-entrypoint-initdb.d/dump_and_restore.sh
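To confirm the restore worked, you can list the tables in the local database (the container name here is taken from the docker ps output above):
docker exec -it hasura-events-postgres-1 psql -U postgres -d postgres -c '\dt'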
Next, install and configure the serverless-localstack plugin:
npm install -D serverless-localstack
Your serverless-local.yaml should look like this:
app: hasura-events-local
service: hasura-events
provider:
  name: aws
  runtime: nodejs20.x
  timeout: 30
  memorySize: 1024
  architecture: arm64
  apiGateway:
    apiKeys:
      - apiKey
  environment:
    ### Update with dev db url
    DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
    ### Update with dev replica url
    DATABASE_URL_REPLICA: postgres://postgres:postgrespassword@postgres:5432/postgres
functions:
  action_get_user_id_by_username:
    handler: src/handlers/action_get_user_id_by_username.handler
    events:
      - http:
          path: /get-user-id-by-username
          method: post
          private: false
plugins:
  - serverless-plugin-typescript
  - serverless-localstack
# only include the Prisma binary required on AWS Lambda while packaging
custom:
  localstack:
    stages:
      - local
    lambda:
      mountCode: true
    host: http://localhost.localstack.cloud
    edgePort: 4566
    autostart: true
  serverlessPluginTypescript:
    tsConfigFileLocation: "./tsconfig.json"
  prune:
    automatic: true
    includeLayers: true
    number: 10
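For reference, the handler wired up above (src/handlers/action_get_user_id_by_username.handler) might look something like the sketch below. The request and response shapes follow Hasura’s Action webhook convention (the action arguments arrive under input, and the response must match the action’s output type); the lookup itself is a placeholder, not the project’s real implementation, which would query Postgres via DATABASE_URL:
// Illustrative sketch of src/handlers/action_get_user_id_by_username.ts
// Hasura Actions POST a JSON body of the shape { action, input, session_variables }.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const body = JSON.parse(event.body ?? "{}");
  const username: string | undefined = body?.input?.arg1?.username;

  if (!username) {
    // Hasura treats non-2xx responses as action errors
    return { statusCode: 400, body: JSON.stringify({ message: "username is required" }) };
  }

  // Placeholder lookup -- the real handler would query Postgres using DATABASE_URL
  const user = { id: "a1b2c3", username };

  // The returned object must match the action's output type (id, username)
  return { statusCode: 200, body: JSON.stringify(user) };
};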
sls deploy --config serverless-local.yaml --stage local
The console output will look something like this:
✔ Service deployed to stack hasura-events-local (94s)
api keys:
apiKey: h3PDCL0T5y4ueQR29EalVqzvdAimJZsxnFgUjoSX
endpoint: http://localhost:4566/restapis/k5zem1d6pv/local/_user_request_
functions:
action_get_user_id_by_username: hasura-events-local-action_get_user_id_by_username
Grab the apiKey and endpoint values from the output above. Also, make sure to change localhost:4566 to localhost.localstack.cloud:4566 in the endpoint URL.
Open metadata/actions.yaml and find the getUserIDByUsername action, which looks something like this:
actions:
  - name: getUserIDByUsername
    definition:
      kind: ""
      handler: 'http://localhost.localstack.cloud:4566/restapis/k5zem1d6pv/local/_user_request_/get-user-id-by-username'
      forward_client_headers: true
      headers:
        - name: x-api-key
          value: h3PDCL0T5y4ueQR29EalVqzvdAimJZsxnFgUjoSX
      request_transform:
        method: POST
        query_params: {}
        version: 2
Update handler with the endpoint from the deploy output, appending /get-user-id-by-username to it, and update the x-api-key value under headers with the apiKey from the same output.
hasura metadata apply --endpoint http://localhost:8080
Open the Hasura console at http://localhost:8080/console, go to Actions, and scroll down to getUserIDByUsername. Verify that the Webhook (HTTP/S) Handler and x-api-key values match what you set in actions.yaml.
Still in the console at http://localhost:8080/console, verify that getUserIDByUsername works by running the following query in the API tab:
query MyQuery {
  getUserIDByUsername(arg1: {username: "orenweiss"}) {
    id
    username
  }
}
The response should look something like this:
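(The id value below is illustrative and will depend on the data in your database.)
{
  "data": {
    "getUserIDByUsername": {
      "id": "a1b2c3",
      "username": "orenweiss"
    }
  }
}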
In this article, we learned how to set up Hasura locally and connect it to a local database. We also learned how to deploy Hasura Actions to LocalStack and tested the end-to-end flow from Hasura Actions to LocalStack in the Hasura console.
I hope you enjoyed this do-it-along-with-me session and learned something new. If you did, please give this article a clap and share it with your friends. Also, feel free to follow me on Medium and connect with me on LinkedIn at Ankita Madan. I would love to hear your feedback and suggestions for future articles. Reach out here to connect with No Moss.
Thank you for reading!