
Local Development with Hasura and LocalStack

Written by Ankita Madan
Published on 20 Jun, 2024

With No Moss Co, I’m working with a startup that has a large Hasura deployment. When I joined the project I realised they didn’t have a way for teams to develop and test apps locally, and instead relied on a single development environment. This resulted in problems like:

Delayed Development Cycle

  • Longer Feedback Loop: Without local deployment, developers need to rely on remote environments for testing, which slows down the feedback loop and hinders rapid iteration.
  • Dependency on External Resources: Development becomes dependent on network availability and external resources, leading to delays if these resources are slow or unavailable.

Increased Debugging Complexity

  • Environment Discrepancies: Differences between local and remote environments can lead to hard-to-diagnose issues that only appear in production or staging environments.
  • Limited Debugging Tools: Remote environments often lack the full suite of debugging tools available locally, making it harder to troubleshoot and resolve issues efficiently.

Reduced Productivity

  • Context Switching: Developers may need to switch contexts frequently between writing code and deploying/testing in remote environments, reducing overall productivity.
  • Inability to Work Offline: Without local deployment, development work is hindered when offline, limiting flexibility and productivity.

To make things easier, I introduced LocalStack, which works with Hasura, so developers can build and test their code right on their machines. If you know Hasura but haven’t set it up locally, here’s a simple guide to get you started:

What is Hasura?

Hasura provides a powerful solution for creating GraphQL APIs by automatically generating them from your database schema. By drastically reducing the need for manual backend coding, Hasura accelerates the development process, enabling developers to focus on building features and delivering value quickly and efficiently.

Here are some key features and benefits of Hasura:

Key Features

Automatic API Generation

  • Hasura automatically generates GraphQL APIs for your PostgreSQL database tables, views, and stored procedures.

Real-time GraphQL

  • Supports subscriptions for real-time updates. Any changes in the database can be pushed to the client instantly.
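
For example, a subscription against a hypothetical orders table might look like this (the table and field names are illustrative, not part of this project):

subscription WatchRecentOrders {
  orders(order_by: { created_at: desc }, limit: 10) {
    id
    status
    total
  }
}

Whenever the underlying rows change, Hasura re-runs the query and pushes the updated result to every subscribed client.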

Authorization and Permissions

  • Fine-grained access control to manage who can query or mutate which data.
  • Role-based access control (RBAC) to define permissions at a granular level.
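
As a rough sketch, a select permission for a hypothetical orders table might be expressed in Hasura’s table metadata like this (role name, table, and columns are assumptions for illustration):

- table:
    schema: public
    name: orders
  select_permissions:
    - role: user
      permission:
        columns:
          - id
          - status
          - total
        filter:
          user_id:
            _eq: X-Hasura-User-Id

With a rule like this, users with the user role can only read rows where user_id matches their own X-Hasura-User-Id session variable.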

Remote Schemas and Actions

  • Integrate external GraphQL schemas and REST endpoints, allowing you to extend your API capabilities.

Event Triggers

  • Trigger serverless functions or webhooks on database events, enabling event-driven architectures.

Data Federation

  • Combine multiple data sources into a single GraphQL API, making it easier to query and manage diverse datasets.

Admin Console

  • A web-based console to manage your database schema, API, and permissions. It also provides tools for data exploration and debugging.

Extensibility

  • Supports custom business logic via actions, event triggers, and remote schemas.
  • Integrate with other services and APIs.

High-level architecture diagram and interactions between Hasura, PostgreSQL and Hasura Actions

Explanation

Client Apps:

  • These are the front-end applications, which can be web, mobile, or desktop applications. They interact with the Hasura GraphQL Engine via GraphQL queries, mutations, and subscriptions.

Hasura GraphQL Engine:

  • GraphQL API: The core of Hasura, where it automatically generates a GraphQL API based on the connected PostgreSQL database schema.
  • GraphQL Engine: This engine processes incoming GraphQL queries, mutations, and subscriptions, handles real-time subscriptions, and enforces authorization rules.

Metadata Store:

  • Hasura stores metadata about the GraphQL schema, permissions, relationships, and custom configurations. This metadata is crucial for the GraphQL engine to interpret the database schema and enforce security rules.

PostgreSQL Database:

  • The primary data source for Hasura, where all your application data is stored. Hasura generates the GraphQL schema based on the tables, views, and relationships defined in this database.
  • The database is accessed directly by the Hasura GraphQL Engine to fulfill client queries and mutations.

Hasura Actions:

  • Actions allow you to extend Hasura’s capabilities by defining custom business logic and custom GraphQL mutations or queries that are not directly tied to the PostgreSQL database.
  • These actions are essentially custom REST endpoints that are defined as part of the GraphQL schema. They can call external APIs, invoke microservices, or perform complex business logic.
  • Example: A client request to create a new order may invoke a custom action that checks inventory, processes payment, and then updates the database.
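
From the client’s point of view, an action is just another field in the GraphQL schema. A hypothetical createOrder action, for example, could be invoked like any other mutation (the action name, arguments, and return fields below are made up for illustration):

mutation {
  createOrder(items: [{ productId: 42, quantity: 1 }]) {
    orderId
    status
  }
}

Hasura forwards this request to the action’s REST handler and returns the handler’s response to the client as the mutation result.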

Event Triggers:

  • Event triggers enable Hasura to call webhooks or serverless functions (e.g., AWS Lambda, Google Cloud Functions) in response to changes in the database. This allows for real-time event-driven architectures and workflows.
  • Example: When a new user is added to the users table, an event trigger could send a welcome email or update an external CRM system.
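
As a sketch, an event trigger on a users table might be declared in Hasura’s table metadata like this (the trigger name and webhook URL are placeholders):

- table:
    schema: public
    name: users
  event_triggers:
    - name: send_welcome_email
      definition:
        enable_manual: false
        insert:
          columns: '*'
      retry_conf:
        num_retries: 3
        interval_sec: 10
        timeout_sec: 60
      webhook: https://example.com/hooks/send-welcome-email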

Data Flow

Client Requests:

  • Clients send GraphQL queries, mutations, or subscriptions to the Hasura GraphQL Engine.

GraphQL Engine Processing:

  • The Hasura GraphQL Engine interprets the requests and interacts with the PostgreSQL database to fetch or modify data.
  • For real-time subscriptions, Hasura uses PostgreSQL’s capabilities to listen for changes and push updates to clients.

Custom Actions Handling:

  • If a client request involves a custom action, the Hasura GraphQL Engine forwards the request to the appropriate custom REST endpoint defined by the action.
  • The custom endpoint performs the necessary business logic and returns a response to the Hasura GraphQL Engine, which then forwards it to the client.

Event Handling:

  • For events (e.g., row insert, update, delete), Hasura triggers the configured webhooks or serverless functions, enabling real-time workflows.

This architecture ensures that Hasura provides a robust, scalable, and real-time GraphQL API layer on top of a PostgreSQL database, with extensibility for integrating additional data sources, custom business logic, and event-driven functionality.

What is a Local Development Environment?

A local development environment, aka a local dev environment or simply a local environment, refers to a self-contained and isolated setup and configuration of software and tools on your local machine (such as a laptop or desktop computer) to facilitate software development. It allows software developers to efficiently build, test, and debug their applications on their own machines before deploying them to a production environment. It also serves as a replica of the production environment, enabling developers to work on their code in a controlled and familiar setting.

The local environment provides developers with a sandbox-like environment where they can experiment, iterate, and troubleshoot their code without impacting the live production environment. This allows for faster development cycles and reduces the risk of introducing bugs or breaking the application’s existing functionality.

Why have one?

One of the key advantages of using a local development environment is the ability to closely replicate the production environment. This ensures that developers are working with the same configurations, libraries, and dependencies as the final deployment environment, thereby minimising the chances of encountering unexpected issues when the application is deployed to production.

Moreover, a local development environment offers developers greater control and flexibility over their development process. They can easily switch between different versions of software components, experiment with new technologies, and simulate various scenarios to test the robustness and performance of their applications. This level of control allows developers to optimise their code and ensure its compatibility with different operating systems and platforms.

Local development offers speed, efficiency, privacy, and control. It allows developers to work offline, experiment freely, and control their development setup fully. However, it may lack scalability testing and can present collaboration and platform compatibility challenges.

How does LocalStack help?

LocalStack is an open-source tool that provides a local development environment for building and testing cloud-native applications. It emulates AWS services, allowing developers to use the same APIs and SDKs as they would in a real AWS environment, but without the need for an actual AWS account or the associated costs. It enables:

  • Emulation of AWS Services: LocalStack replicates various AWS services locally, including but not limited to S3, DynamoDB, SQS, SNS, and Lambda. By emulating these services, developers can interact with them using the same APIs and SDKs they would use in a real AWS environment.
  • Isolation and Replication: LocalStack allows developers to isolate their local development environment from external dependencies and production systems. It provides a sandbox environment where developers can experiment, test, and iterate without impacting actual AWS resources or incurring costs.
  • Integration with Development Workflow: LocalStack integrates with popular development tools and frameworks, such as Docker, Docker Compose, and serverless frameworks like AWS SAM and Serverless Framework. This integration streamlines the development workflow and allows developers to easily spin up and tear down emulated AWS services as needed.
  • Testing and Debugging Capabilities: LocalStack facilitates testing and debugging of AWS-related code by providing a local environment that closely resembles the production AWS environment. Developers can write unit tests, integration tests, and end-to-end tests against emulated AWS services to validate their code and identify potential issues early in the development process.
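
For example, once LocalStack is running, regular AWS tooling can be pointed at it simply by overriding the endpoint (the bucket name below is arbitrary):

aws --endpoint-url=http://localhost:4566 s3 mb s3://my-local-bucket
aws --endpoint-url=http://localhost:4566 s3 ls

LocalStack also provides an awslocal CLI wrapper (from the awscli-local package) that sets this endpoint for you.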

LocalStack has a couple of prerequisites:

  • Docker: Ensure Docker is installed and running on your machine.
  • Docker Compose: Used to define and run multi-container Docker applications.
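
You can quickly confirm both are available from a terminal:

docker --version
docker-compose --version   # or: docker compose version, depending on your install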

How to set up Hasura GraphQL Engine using Docker and LocalStack

When setting up Hasura GraphQL Engine with Docker and LocalStack, you might organise your project directory structure to keep things clean and manageable. Here’s an example of how you can structure your Hasura project directory:

hasura-project/
├── config.yaml
├── migrations/
│   └── <migration_files>
├── metadata/
│   └── actions.yaml
├── seeds/
│   └── <seed_files>
├── actions/
│   └── <action_files>
├── docker-compose-local.yaml
├── serverless-local.yaml
├── src/
│   └── handlers
│       └── action_get_user_id_by_username.ts
├── dump/
│   └── dump_and_restore.sh
├── .env
├── .gitignore
└── README.md

Explanation of the Directory Structure

config.yaml

  • It’s used to configure the Hasura CLI and manage metadata for the Hasura GraphQL Engine. This file typically includes settings for connecting to the Hasura GraphQL Engine, defining project-specific configurations, and specifying directories for migrations and metadata.
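
A minimal config.yaml for this setup might look like the following sketch (assuming Hasura CLI config version 3 and the local endpoint used later in this guide; adjust if you set an admin secret):

version: 3
endpoint: http://localhost:8080
metadata_directory: metadata
migrations_directory: migrations
seeds_directory: seeds
# admin_secret: myadminsecretkey   # uncomment if HASURA_GRAPHQL_ADMIN_SECRET is set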

migrations/

  • Contains database migration files. These are automatically generated by Hasura when you make changes to your database schema.

metadata/

  • Contains metadata files managed by Hasura. These files define the GraphQL schema, permissions, and other configurations.

seeds/

  • Contains seed files for populating the database with initial data.

actions/

  • Contains files related to Hasura Actions, such as custom resolvers or webhook handlers.

docker-compose-local.yaml

  • Docker Compose file to define and run multi-container Docker applications, including Hasura, Postgres, and LocalStack.

serverless-local.yaml

  • Serverless Framework configuration that describes your serverless functions, plugins, and other necessary settings.

src/

  • handlers/: Contains custom logic for Hasura Actions or event triggers.

dump/

  • dump_and_restore.sh: Script for creating a dump and restoring it.

.env

  • Contains environment variables used by Docker Compose and other parts of the application.

.gitignore

  • Specifies files and directories to be ignored by version control (e.g., Git).

Steps

Step 1: Create a dump and restore it using PostgreSQL URI

Creating a database dump and restoring it using a PostgreSQL URI involves the pg_dump and pg_restore (or psql) tools. In your .env file, create a URI variable PG_DEV_DATABASE_URL in the following format:

PG_DEV_DATABASE_URL=postgres://username:password@hostname:port/dbname

The dump_and_restore.sh script inside your dump folder should look like this:

echo "Job started: $(date)"
FILE="db_dump.dump"
pg_dump --file "$FILE" --format=custom "$PG_DEV_DATABASE_URL"
pg_restore --verbose --clean --no-acl --no-owner --username "postgres" --dbname "postgres" --no-password "$FILE"
echo "Job finished: $(date)"

Step 2: Create a Docker Compose file with the content below

Since the other containers, such as Hasura and Postgres, also need to reach LocalStack, your application containers should be configured to use LocalStack as their DNS server. Once this is done, the domain name localhost.localstack.cloud will resolve to the LocalStack container. All subdomains of localhost.localstack.cloud will also resolve to the LocalStack instance, e.g. API Gateway default URLs.

To configure your application container:

  • add a user-managed docker network;
  • either determine your LocalStack container IP, or configure your LocalStack container to have a fixed known IP address;
  • set the DNS server of your application container to the IP address of the LocalStack container.
version: "3.6"
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
    environment:
      # LocalStack configuration: https://docs.localstack.cloud/references/configuration/
      - DEBUG=${DEBUG:-0}
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      ls:
        # Set the container IP address in the 10.0.2.0/24 subnet
        ipv4_address: 10.0.2.20
  postgres:
    image: postgres:15
    restart: always
    volumes:
      - ./dump:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_PASSWORD: postgrespassword
      REPLICA_PG_DATABASE_URL: $REPLICA_PG_DATABASE_URL
    dns:
      # Set the DNS server to be the LocalStack container
      - 10.0.2.20
    networks:
      - ls
  graphql-engine:
    image: hasura/graphql-engine:v2.38.0
    ports:
      - "8080:8080"
    restart: always
    environment:
      ## postgres database to store Hasura metadata
      HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## this env var can be used to add the above postgres database to Hasura as a data source. this can be removed/updated based on your needs
      PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to run console offline (i.e load console assets from server instead of CDN)
      # HASURA_GRAPHQL_CONSOLE_ASSETS_DIR: /srv/console-assets
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
      HASURA_GRAPHQL_METADATA_DEFAULTS: '{"backend_configs":{"dataconnector":{"athena":{"uri":"http://data-connector-agent:8081/api/v1/athena"},"mariadb":{"uri":"http://data-connector-agent:8081/api/v1/mariadb"},"mysql8":{"uri":"http://data-connector-agent:8081/api/v1/mysql"},"oracle":{"uri":"http://data-connector-agent:8081/api/v1/oracle"},"snowflake":{"uri":"http://data-connector-agent:8081/api/v1/snowflake"}}}}'
    depends_on:
      data-connector-agent:
        condition: service_healthy
    dns:
      # Set the DNS server to be the LocalStack container
      - 10.0.2.20
    networks:
      - ls
  data-connector-agent:
    image: hasura/graphql-data-connector:v2.38.0
    restart: always
    ports:
      - 8081:8081
    environment:
      QUARKUS_LOG_LEVEL: ERROR # FATAL, ERROR, WARN, INFO, DEBUG, TRACE
      ## https://quarkus.io/guides/opentelemetry#configuration-reference
      QUARKUS_OPENTELEMETRY_ENABLED: "false"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8081/api/v1/athena/health"]
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 5s
volumes:
  db_data:
networks:
  ls:
    ipam:
      config:
        # Specify the subnet range for IP address allocation
        - subnet: 10.0.2.0/24

Name the file docker-compose-local.yaml and put it in your root directory

Step 3: Start docker with the following command

docker-compose -f docker-compose-local.yaml --env-file .env up

Step 4: Check if Postgres is healthy with the following command

docker ps -a
CONTAINER ID   IMAGE                                   COMMAND                   CREATED          STATUS                            PORTS                                                                    NAMES
6235f13c0cd4   hasura/graphql-engine:v2.38.0           "/bin/sh -c '\"${HGE_…"   12 seconds ago   Up 5 seconds (health: starting)   0.0.0.0:8080->8080/tcp                                                   hasura-events-graphql-engine-1
9dd026ea7f78   postgres:15                             "docker-entrypoint.s…"    12 seconds ago   Up 11 seconds                     5432/tcp                                                                 hasura-events-postgres-1
b86721219b5c   localstack/localstack                   "docker-entrypoint.sh"    12 seconds ago   Up 11 seconds (healthy)           127.0.0.1:4510-4559->4510-4559/tcp, 127.0.0.1:4566->4566/tcp, 5678/tcp   localstack-main
77cac6f80ae8   hasura/graphql-data-connector:v2.38.0   "/app/run-java.sh"        12 seconds ago   Up 11 seconds (healthy)           5005/tcp, 0.0.0.0:8081->8081/tcp                

Step 5: Apply the database dump once Postgres is in a healthy state using the following command

docker exec -it hasura-events-postgres-1 bash /docker-entrypoint-initdb.d/dump_and_restore.sh

Step 6: Deploy Serverless with LocalStack

Install and configure the serverless-localstack plugin

npm install -D serverless-localstack
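
The configuration below also lists serverless-plugin-typescript in its plugins section; if it isn’t already part of the project, it can presumably be installed the same way:

npm install -D serverless-plugin-typescript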

serverless-local.yaml should look like this:

app: hasura-events-local
service: hasura-events

provider:
  name: aws
  runtime: nodejs20.x
  timeout: 30
  memorySize: 1024
  architecture: arm64
  apiGateway:
    apiKeys:
      - apiKey

  environment:
    ### Update with dev db url
    DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
    ### Update with dev replica url
    DATABASE_URL_REPLICA: postgres://postgres:postgrespassword@postgres:5432/postgres

functions:
  action_get_user_id_by_username:
    handler: src/handlers/action_get_user_id_by_username.handler
    events:
      - http:
          path: /get-user-id-by-username
          method: post
          private: false

plugins:
  - serverless-plugin-typescript
  - serverless-localstack

# only include the Prisma binary required on AWS Lambda while packaging

custom:
  localstack:
    stages:
      - local
    lambda:
      mountCode: true
    host: http://localhost.localstack.cloud
    edgePort: 4566
    autostart: true
  serverlessPluginTypescript:
    tsConfigFileLocation: "./tsconfig.json"
  prune:
    automatic: true
    includeLayers: true
    number: 10
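
The handler referenced above, src/handlers/action_get_user_id_by_username.ts, isn’t shown in this article. As a rough sketch only (assuming the pg and @types/aws-lambda packages are installed, that Hasura forwards its default action payload of the shape { input: { arg1: { username } } }, and that the users table has id and username columns), it might look something like this:

// Hypothetical sketch; the real handler may differ, especially since the
// action in metadata/actions.yaml defines a request_transform that can
// reshape the payload.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { Client } from "pg";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Hasura's default action payload is { action, input, session_variables }
  const body = JSON.parse(event.body ?? "{}");
  const username: string | undefined = body?.input?.arg1?.username;

  if (!username) {
    return { statusCode: 400, body: JSON.stringify({ message: "username is required" }) };
  }

  // DATABASE_URL is set in serverless-local.yaml above
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const result = await client.query(
      "SELECT id, username FROM users WHERE username = $1 LIMIT 1",
      [username]
    );
    if (result.rows.length === 0) {
      // Non-2xx responses surface as errors on the Hasura side
      return { statusCode: 404, body: JSON.stringify({ message: "user not found" }) };
    }
    return { statusCode: 200, body: JSON.stringify(result.rows[0]) };
  } finally {
    await client.end();
  }
};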

Step 7: Deploy Serverless on LocalStack with the following command

sls deploy --config serverless-local.yaml --stage local

The console output will look something like this:

✔ Service deployed to stack hasura-events-local (94s)

api keys:
  apiKey: h3PDCL0T5y4ueQR29EalVqzvdAimJZsxnFgUjoSX
endpoint: http://localhost:4566/restapis/k5zem1d6pv/local/_user_request_
functions:
  action_get_user_id_by_username: hasura-events-local-action_get_user_id_by_username

Grab the apiKey and endpoint values from the output above.

Also make sure to change localhost:4566 in the endpoint to localhost.localstack.cloud:4566.

Step 8: Update the action URLs generated above

Open metadata/actions.yaml and find the getUserIDByUsername action, which looks something like this:

actions:
  - name: getUserIDByUsername
    definition:
      kind: ""
      handler: 'http://localhost.localstack.cloud:4566/restapis/k5zem1d6pv/local/_user_request_/get-user-id-by-username'
      forward_client_headers: true
      headers:
        - name: x-api-key
          value: h3PDCL0T5y4ueQR29EalVqzvdAimJZsxnFgUjoSX
      request_transform:
        method: POST
        query_params: {}
        version: 2

Update handler with the endpoint you got in Step 7, appending /get-user-id-by-username to it.

Update the x-api-key value under headers with the apiKey you got in Step 7.

Step 9: Apply metadata

hasura metadata apply --endpoint=http://localhost:8080

Step 10: Open the Hasura console

http://localhost:8080/console

Go to Actions

Scroll down to getUserIDByUsername and verify that the Webhook (HTTP/S) Handler and x-api-key values match the ones you set in Step 8 (from the Step 7 output).

Step 11: Go back to Hasura console

http://localhost:8080/console

Verify getUserIDByUsername works

query MyQuery {
  getUserIDByUsername(arg1: {username: "orenweiss"}) {
    id
    username
  }
}

The response should look something like this (the exact shape depends on the action’s output type; the values below are illustrative):
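
{
  "data": {
    "getUserIDByUsername": {
      "id": 101,
      "username": "orenweiss"
    }
  }
}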

Conclusion

In this article, we learned how to set up Hasura locally and connect it to a local database, how to deploy Hasura Actions onto LocalStack, and how to test the end-to-end flow from Hasura Actions to LocalStack in the Hasura console.

I hope you enjoyed this do-it-along-with-me session and learned something new. If you did, please give this article a clap and share it with your friends. Also, feel free to follow me on Medium and connect with me on LinkedIn (Ankita Madan). I would love to hear your feedback and suggestions for future articles. Reach out here to connect with No Moss.

Thank you for reading!
