Deploying a Vendure Application

A Vendure application is essentially a Node.js application, and can be deployed to any environment that supports Node.js.

The bare minimum requirements are:

  • A server with Node.js installed
  • A database server (if using MySQL/Postgres)

A typical pattern is to run the Vendure app on the server, e.g. at http://localhost:3000, and then use nginx as a reverse proxy to direct requests from the Internet to the Vendure application.
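As a sketch, a minimal nginx reverse proxy for this pattern might look like the following. The server_name and upstream port are assumptions to adapt to your own setup:

```nginx
server {
    listen 80;
    server_name shop.example.com;  # hypothetical domain

    location / {
        # Forward all requests to the Vendure server process
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

In production you would typically also terminate TLS at this layer (e.g. with certbot-managed certificates) rather than in the Node process itself.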

Security Considerations

For a production Vendure server, there are a few security-related points to consider when deploying:

  • Set the Superadmin credentials to something other than the default.
  • Consider taking steps to harden your GraphQL APIs against denial-of-service (DoS) attacks. Use the ApiOptions to set up appropriate Express middleware for things like request timeouts and rate limits. A tool such as graphql-query-complexity can be used to mitigate resource-intensive GraphQL queries.
  • You may wish to restrict the Admin API to only be accessed from trusted IPs. This could be achieved for instance by configuring an nginx reverse proxy that sits in front of the Vendure server.
  • By default, Vendure uses auto-increment integer IDs as entity primary keys. While easier to work with in development, sequential primary keys can leak information such as the number of orders or customers in the system. For this reason you should consider using the UuidIdStrategy for production.
    import { UuidIdStrategy, VendureConfig } from '@vendure/core';

    export const config: VendureConfig = {
      entityIdStrategy: new UuidIdStrategy(),
      // ...
    };
  • Consider using helmet as middleware (add to the apiOptions.middleware array) to handle security-related headers.
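The middleware-related points above can be sketched in the Vendure config. This is an illustrative example assuming the helmet and express-rate-limit packages are installed; the route and limit values are placeholders to tune for your own traffic:

```typescript
import { VendureConfig } from '@vendure/core';
import helmet from 'helmet';
import rateLimit from 'express-rate-limit';

export const config: VendureConfig = {
  apiOptions: {
    middleware: [
      {
        // Apply security-related headers to all API routes
        handler: helmet(),
        route: '/',
      },
      {
        // Illustrative limit: 120 requests per IP per minute
        handler: rateLimit({ windowMs: 60_000, max: 120 }),
        route: '/',
      },
    ],
  },
  // ...
};
```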

Serverless / multi-instance deployments

Vendure supports running in a serverless or multi-instance (horizontally scaled) environment. The key consideration in configuring Vendure for this scenario is to ensure that any persistent state is managed externally from the Node process, and is shared by all instances. Namely:

  • The JobQueue should be stored externally using the DefaultJobQueuePlugin (which stores jobs in the database) or the BullMQJobQueuePlugin (which stores jobs in Redis), or some other custom JobQueueStrategy.
  • A custom SessionCacheStrategy must be used which stores the session cache externally (such as in the database or Redis), since the default strategy stores the cache in-memory and will cause inconsistencies in multi-instance setups.
  • When using cookies to manage sessions, make sure all instances are using the same cookie secret:
    const config: VendureConfig = {
      authOptions: {
        cookieOptions: {
          secret: 'some-secret',
        },
      },
    };
  • Channel and Zone data gets cached in-memory as this data is used in virtually every request. The cache time-to-live defaults to 30 seconds, which is probably fine for most cases, but it can be configured in the EntityOptions.
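As an example of the session cache point above, a Redis-backed SessionCacheStrategy could be sketched as follows. This is a minimal illustration assuming the ioredis package; the key prefix and TTL are arbitrary choices:

```typescript
import { CachedSession, SessionCacheStrategy, VendureConfig } from '@vendure/core';
import Redis from 'ioredis';

export class RedisSessionCacheStrategy implements SessionCacheStrategy {
  private client: Redis;
  private readonly ttlSeconds = 300; // arbitrary cache lifetime

  init() {
    this.client = new Redis(); // connects to localhost:6379 by default
  }

  async get(sessionToken: string): Promise<CachedSession | undefined> {
    const cached = await this.client.get(this.key(sessionToken));
    return cached ? (JSON.parse(cached) as CachedSession) : undefined;
  }

  async set(session: CachedSession) {
    await this.client.set(this.key(session.token), JSON.stringify(session), 'EX', this.ttlSeconds);
  }

  async delete(sessionToken: string) {
    await this.client.del(this.key(sessionToken));
  }

  clear() {
    // Entries expire via the TTL; a full clear could scan the key prefix.
  }

  private key(token: string) {
    return `vendure-session:${token}`;
  }
}

export const config: VendureConfig = {
  authOptions: {
    sessionCacheStrategy: new RedisSessionCacheStrategy(),
  },
  // ...
};
```

Because the cache now lives in Redis, every instance sees the same session state, and invalidating a session on one instance is immediately visible to the others.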

Health/Readiness Checks

If you wish to deploy with Kubernetes or some similar system, you can make use of the health check endpoints.


Server

This is a regular REST route (note: not GraphQL), available at /health.

REQUEST: GET http://localhost:3000/health

  {
    "status": "ok",
    "info": {
      "database": {
        "status": "up"
      }
    },
    "error": {},
    "details": {
      "database": {
        "status": "up"
      }
    }
  }

Health checks are built on the NestJS Terminus module. You can also add your own health checks by creating plugins that make use of the HealthCheckRegistryService.
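As a sketch, a plugin registering an extra health check might look like this. The indicator name and the pinged URL are hypothetical stand-ins for a real external dependency such as a payment gateway:

```typescript
import { HealthCheckRegistryService, PluginCommonModule, VendurePlugin } from '@vendure/core';
import { HttpHealthIndicator, TerminusModule } from '@nestjs/terminus';

@VendurePlugin({
  imports: [PluginCommonModule, TerminusModule],
})
export class PaymentGatewayHealthPlugin {
  constructor(
    registry: HealthCheckRegistryService,
    private http: HttpHealthIndicator,
  ) {
    // The result appears alongside "database" in the /health response
    registry.registerIndicatorFunction(() =>
      this.http.pingCheck('payment-gateway', 'https://example.com/health'),
    );
  }
}
```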


Worker

Although the worker is not designed as an HTTP server, it contains a minimal HTTP server specifically to support HTTP health checks. To enable this, you need to call the startHealthCheckServer() method after bootstrapping the worker:

  bootstrapWorker(config)
    .then(worker => worker.startJobQueue())
    .then(worker => worker.startHealthCheckServer({ port: 3020 }))
    .catch(err => {
      console.log(err);
      process.exit(1);
    });

This will make the /health endpoint available. When the worker instance is running, it will return the following:

REQUEST: GET http://localhost:3020/health

  {
    "status": "ok"
  }

Note: there is also an internal health check mechanism for the worker, which does not use HTTP. This is used by the server's own health check to verify whether at least one worker is running. It works by adding a check-worker-health job to the JobQueue and checking that it got processed.

Admin UI

If you have customized the Admin UI with extensions, it can make sense to compile your extensions ahead-of-time as part of the deployment process.
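For instance, a build-time compile script using the compileUiExtensions function from @vendure/ui-devkit/compiler could be sketched as follows, so that the Angular compilation does not run on the production server. The extension paths, route, and module names are placeholder assumptions:

```typescript
import { compileUiExtensions } from '@vendure/ui-devkit/compiler';
import path from 'path';

compileUiExtensions({
  // Output the compiled Admin UI app to a local directory
  outputPath: path.join(__dirname, 'admin-ui'),
  extensions: [
    {
      // Hypothetical plugin UI extension
      extensionPath: path.join(__dirname, 'plugins/my-plugin/ui'),
      ngModules: [
        {
          type: 'lazy',
          route: 'my-plugin',
          ngModuleFileName: 'my-plugin-ui.module.ts',
          ngModuleName: 'MyPluginUiModule',
        },
      ],
    },
  ],
})
  .compile?.()
  .then(() => {
    process.exit(0);
  });
```

Running this script as part of your CI/CD pipeline produces a static Admin UI bundle that the AdminUiPlugin can then serve directly in production.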