API Micro Service Skeleton Project

Dave Redfern

Published: 21 Feb 02:48 in Symfony


Symfony Micro Service Starter Project

This is a skeleton project that pre-configures a Symfony 5+ project for use as a micro service. This project is intended to be used in conjunction with: Data Service

The setup includes:

  • doctrine
  • doctrine-fixtures
  • doctrine-migrations
  • messenger
  • profiler
  • command/query/domain event buses
  • test helpers
  • docker configuration for app and redis containers
  • docker app container is configured without local mounts
  • shell scripts in bin/ that call libs in docker
  • PHP container uses php-pm as the application server
  • Mutagen via SyncIt with a default configuration

If you are working with micro services, be sure to check out Project Manager, a CLI toolkit that makes working with multiple services a little bit easier.

Be sure to read on to find out how this project skeleton is structured and how to make effective use of it.

Getting Started


Create a new project using composer:

composer create-project somnambulist/symfony-micro-service <folder> --no-scripts

Customise the base files as you see fit; change names (especially the service names), config values, etc. to suit your needs. Then run docker-compose up -d to start the docker environment in dev mode. Be sure to read Service Discovery to understand how the docker environment is set up.

Note: to use the latest version, add dev-master as the last argument when creating a project. This will check out and use the current master version instead of a tagged release.

Alternatively, if using Project Manager with the default templates, run spm new:service <service_name> api, or omit the service name/template to use the wizard.

Recommended First Steps

This project uses App and example.dev throughout. Your first step should be to change the base PHP namespace (if desired). PhpStorm's refactoring/renaming is highly recommended for this.

The domain name is set in several places; it is strongly recommended to change this to something more useful. The following files should be updated:

  • .env
  • docker-compose*.yml

You should be sure to read Compiled Containers.

Configured Services

The following docker services are pre-configured for development:

  • Redis
  • PHP 8.0 running php-pm 2.X

The test config includes all the services needed to successfully run the tests.

The release/production config only defines the app, as it is intended to be deployed into a cluster.

Docker Service Names

The Docker container names will be prefixed by a project name defined in the .env file via the constant COMPOSE_PROJECT_NAME. If you remove it, the current folder name will be used instead. For example: if you create a new project called "invoice-service" without setting the COMPOSE constant, the containers started via docker-compose will be prefixed with invoice-service_. If you have a lot of docker projects, they may have similar folder names, so using this constant avoids collisions.

The second constant that needs setting is APP_SERVICE_APP: the name of the PHP application container. By default this is app. It is strongly recommended to change this to something more unique. If you do change it, be sure to change the container name in the docker-compose*.yml files, otherwise it will not be used. This name is used by SyncIt to resolve the application container and by the bin/dc-* scripts.
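As a sketch, the two constants for the hypothetical "invoice-service" mentioned earlier might be set in .env as follows (both values are illustrative, not defaults from the skeleton):

```shell
# .env (values are illustrative)
COMPOSE_PROJECT_NAME=invoice-service
APP_SERVICE_APP=invoice-service-app
```

Remember to use the same container name in the docker-compose*.yml files.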

DNS Resolution

DNS and the proxy were moved to the data service.

The Domain


The domain represents the solution to a business problem. It includes all the code necessary to implement and solve that problem, without relying too heavily on third party or framework code. It should (ideally) be framework agnostic and portable to other frameworks, if they prove to be a better fit, with a minimum of modifications to the core domain classes, i.e. you do not couple to a framework validator or service container, and you avoid injecting implementations, using interfaces instead.

The domain is typically discovered during project setup through discussions with the main stakeholders and domain experts - the people who really know and understand how the business operates. That information is then used to create the software solution. The most important output of this process is the language that is discovered: a shared vocabulary that allows everyone to communicate effectively and know what is meant by specific terms. The language is not set in stone and changes over time as knowledge is gained or processes are improved. It is important to keep these changes up-to-date, and this includes the code itself.

This project suggests and has the following folder layout for the domain:

  • Commands
  • Events
  • Models
  • Queries
  • Services

These are suggestions and you are free to change this up if you wish.


This project is centred around a Domain Driven Design approach, with Doctrine providing persistence for the main domain objects. These models are located in: src/Domain/Models. All domain models should be located here, including enumerations, value objects, and other data centric models. Unlike standard Symfony projects, models should not contain Doctrine mapping annotations. Add these to the config/mappings folder in a separate folder (default is models).

Your models should focus on the domain "state" and how various actions should be applied to it. This means enforcing valid state changes i.e.: you do not need getters and setters. In fact you should avoid adding these as the role of the models is to manage the state and not provide an API to query that state. Essentially your models represent the write operations. In many cases these will use value-objects and enumerables to ensure valid data is passed to the domain at all times. When using simple scalars, strict-types should be enabled and all scalar type hints used.
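To illustrate, here is a minimal sketch of a model driven by a value object and named state transitions rather than getters/setters. EmailAddress and Customer are hypothetical names, not classes shipped with the skeleton:

```php
<?php declare(strict_types=1);

// Hypothetical value object: guarantees only valid data enters the domain
final class EmailAddress
{
    public function __construct(private string $email)
    {
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException(sprintf('"%s" is not a valid email address', $email));
        }
    }

    public function toString(): string
    {
        return $this->email;
    }
}

// Hypothetical model: state changes are expressed as named business
// actions, not setActive(true)
final class Customer
{
    public function __construct(private EmailAddress $email, private bool $active = false)
    {
    }

    public function activate(): void
    {
        $this->active = true;
    }

    public function isActive(): bool
    {
        return $this->active;
    }
}
```

The constructor either produces a valid Customer or throws; there is no way to put the model into an invalid state afterwards.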

Within your domain models there will be some that are key and are accessed externally. These are likely to be your aggregate roots. Each aggregate root should raise appropriate domain events after each critical state transition. A doctrine listener is pre-enabled to listen for and propagate the domain events to the pre-configured RabbitMQ fan-out exchange (note that at the time of writing php-amqp is not yet available for PHP 8). Examples of aggregate roots may include User, Account, Order etc. however it will depend on your domain.
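The event-raising pattern can be sketched generically as below. The skeleton's actual base classes come from somnambulist/domain, so treat this as a stand-in: User, UserActivated, and releaseEvents() are illustrative names only.

```php
<?php declare(strict_types=1);

// Hypothetical domain event recording a critical state transition
final class UserActivated
{
    public function __construct(public string $userId)
    {
    }
}

// Hypothetical aggregate root that records events as state changes happen
class User
{
    private array $events = [];

    public function __construct(private string $id, private bool $active = false)
    {
    }

    public function activate(): void
    {
        $this->active = true;
        // recorded here; a Doctrine listener would publish these after flush
        $this->events[] = new UserActivated($this->id);
    }

    // drain the recorded events so each is only published once
    public function releaseEvents(): array
    {
        $events = $this->events;
        $this->events = [];

        return $events;
    }
}
```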

In general your domain models will follow the business concepts and use terminology that is familiar to the business. For example: if creating a service for the sales team, and they work with "leads" then your domain should have a "Lead" model and it should have whatever properties they consider to be important. The sales team should be able to look at the code and at least grasp the names and concepts that it expresses.


Services should contain classes that interact with the domain or provide additional support to the core domain models, e.g. transformations or translations between data types/formats. Repositories are part of the domain services. A key idea is that domain services are not dependent on framework code: they are standalone and encapsulated, just like the models.

For example, a currency converter could be a domain service, or an authenticator that checks whether an object is accessible by another object based on domain rules (not framework rules).
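A minimal sketch of the currency converter idea, assuming rates are supplied relative to a base currency (the class name and rate format are illustrative; note there is no framework code involved):

```php
<?php declare(strict_types=1);

// Hypothetical domain service: pure logic, no framework dependencies
final class CurrencyConverter
{
    /**
     * @param array<string, float> $rates rates relative to a base currency
     */
    public function __construct(private array $rates)
    {
    }

    public function convert(float $amount, string $from, string $to): float
    {
        if (!isset($this->rates[$from], $this->rates[$to])) {
            throw new InvalidArgumentException('Unknown currency code');
        }

        // convert to the base currency, then to the target currency
        return round($amount / $this->rates[$from] * $this->rates[$to], 2);
    }
}
```

Being standalone, a service like this is trivially unit testable without booting the framework.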


Each aggregate root should have a Repository service defined for it. This should be an interface that then receives a Persistence implementation. The interface should be kept as simple as possible, typically:

  • find(Uuid $id): Object
  • store(Object $object): bool
  • destroy(Object $object): bool

The interface should be coded to a specific object type. Under the hood this may use Doctrine ObjectManager to persist and delete objects.
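As a sketch of that shape, an interface for a hypothetical Order aggregate might look like the following. Order is an invented example, and Uuid here is a local stand-in for whatever identity type your domain uses (e.g. from somnambulist/domain):

```php
<?php declare(strict_types=1);

// Stand-in identity type for the purposes of this sketch
final class Uuid
{
    public function __construct(public string $value)
    {
    }
}

// Hypothetical aggregate root
final class Order
{
    public function __construct(public Uuid $id)
    {
    }
}

// The repository interface is coded to the specific aggregate type;
// a Persistence implementation (e.g. Doctrine) fulfils it
interface OrderRepository
{
    public function find(Uuid $id): Order;

    public function store(Order $order): bool;

    public function destroy(Order $order): bool;
}
```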

Note that it is not necessary to call ->flush() as a command bus is used that includes DB transaction wrapping. However, if you do need to persist data outside of commands, you will need either to manage your EntityManager directly or add the flush call to the repository.


Command Query Responsibility Segregation

CQRS is a design pattern that splits reading and writing operations into separate concerns i.e. you have a data model dedicated to managing changes to your data and a separate model for reading data. This allows your reads to be tailored specifically for the information you need and not be constrained by the requirements of the write side of the application.


A "command" is a request to make a change to the system, such as "create a user" or "activate a thing". Commands are dispatched via a CommandBus that does not return any output. A command should be fully encapsulated, with all the data needed to action that request. This includes any generated IDs: with this system you should not rely on database auto-increments or sequences (these are surrogate identities used to make database modelling easier). Instead, expose only the UUIDs of the main objects; only if necessary expose internal IDs, or use an aggregate ID generation strategy such as a counter that increments continually as records are added.

When the command is dispatched, the command bus handles it along with any errors that may occur. Errors are raised as exceptions that the custom JSON Exception subscriber collects and transforms into API error messages. This behaviour can be overridden by adding appropriate error handling.

The command bus uses the following middleware:

  • validation
  • doctrine_transaction

Additional middlewares can be configured in the config/packages/messenger.yaml file.
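As a sketch, the messenger configuration for the command bus looks something like the following (the bus name is illustrative; verify the names against your own config/packages/messenger.yaml):

```yaml
# config/packages/messenger.yaml (bus name illustrative)
framework:
    messenger:
        buses:
            command.bus:
                middleware:
                    - validation
                    - doctrine_transaction
```

Both validation and doctrine_transaction are built-in Symfony Messenger middleware; doctrine_transaction wraps each handled message in a DB transaction, which is why handlers do not need to call flush() themselves.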

Commands may only be handled by one handler, but a handler may dispatch further commands if deemed appropriate. However, even in this instance it would be better to write an event listener for a domain event and respond to that, as domain events are broadcast after all Doctrine operations have been flushed to the data store.

The command handler may make whatever changes are necessary via calling into the domain models. This includes creating new objects, loading existing ones, interacting with the repository or other services.

Typically your commands will correspond to actual actions that the business carries out and should be named as such.
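Putting that together, a command and its handler might be sketched as below. RegisterCustomer and its handler are hypothetical, and the array standing in for a repository is purely for illustration:

```php
<?php declare(strict_types=1);

// Hypothetical command named after a business action; it carries the
// pre-generated id and all the data needed, and produces no output
final class RegisterCustomer
{
    public function __construct(
        public string $id,    // UUID generated by the caller, not the DB
        public string $name,
        public string $email
    ) {
    }
}

final class RegisterCustomerHandler
{
    // public array standing in for a repository, for illustration only
    public array $stored = [];

    public function __invoke(RegisterCustomer $command): void
    {
        // make the change via the domain; note: nothing is returned
        $this->stored[$command->id] = [
            'name'  => $command->name,
            'email' => $command->email,
        ];
    }
}
```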


A query is a request for information from the system. The query might be "Find me X by Id" or "find all products matching these criteria...". A QueryBus then executes the query command and returns a result. The query encapsulates all the data that has been requested and should never include the originating request object. It is safe to use value objects and primitives. Several abstract query commands are included for basic actions (provided by somnambulist/domain).

Query commands are immutable and should not be changed; the only concession is when using the includes support to load sub-objects, where a with() method is added.

The query command is handled by a QueryHandler that accepts that command as an argument to the magic __invoke method. How the query is handled is entirely up to the implementor. It could be pure SQL, API calls, DQL, parse some files, return hard coded responses etc etc.

For example a query command may look like:


use Somnambulist\Components\Domain\Queries\AbstractQuery;

class FindObjectById extends AbstractQuery
{
    private $id;

    public function __construct($id)
    {
        $this->id = $id;
    }

    public function getId()
    {
        return $this->id;
    }
}
This would then be executed by a QueryHandler that would have the following signature:


class FindObjectByIdQueryHandler
{
    public function __invoke(FindObjectById $query)
    {
        // do some operations to find the thing

        return $object;
    }
}
Using a QueryBus allows the query handling to be changed at any time by replacing the query handler with another implementation. For example: we start off with a service that gets large and requires splitting up, queries into the part that is split off do not need to change, only the handler needs updating to make API calls instead and can still return the same objects as before. No changes would be needed in the controllers.

The downside to this approach is many small files; however, each file is completely testable in isolation.



The Delivery folder is for any output mechanisms that produce a response from the system. This is where any API or web controllers live, console commands, etc. ViewModels also live in this part of the system.

Each major output type should be kept segregated in its own namespace to avoid polluting e.g. the web responses with API responses.

By default Api and Console are provided and are mapped as services already in the services.yaml file.

The API is intended to be fully versioned right from the get-go, to ensure backwards compatibility. This versioning should be done at the controller, form, and transformer level: each version should have its own controllers, form requests, and transformers. If a particular version does not change an output, you can re-use the previous version if need be.

FormRequests are a concept from Laravel where you can type hint a validated request object that ensures the request contains the data defined in the rules. It provides a somewhat cleaner setup than the Symfony Form library, which can be rather complex to deal with. Using this library is entirely optional. See Form Request Bundle for more details.

For controllers it is best to group them around an aggregate root, e.g. there is a User aggregate, so there would be a Users folder in src/Delivery/Api/V1. Within this folder you could arrange it with folders for Forms and Transformers, or include specific ViewModels too.

For the controllers it is best to follow a single-controller-per-action approach, e.g.: instead of one controller that contains methods for create, update, destroy, view, and list, these become separate controllers: CreateController, ListController, ViewController, etc. It is up to you how you name these; they could instead be named DisplayUserAsJson rather than ViewController, etc. Whatever naming strategy is used, it should be applied consistently.
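A single-action controller built over a query bus can be sketched as follows. The QueryBus interface, FindUserById, and ViewUserController are all stand-ins for the skeleton's actual bus classes, purely to show the shape:

```php
<?php declare(strict_types=1);

// Hypothetical query command (see the query bus section above)
final class FindUserById
{
    public function __construct(public string $id)
    {
    }
}

// Stand-in for the skeleton's actual query bus
interface QueryBus
{
    public function execute(object $query): mixed;
}

// One controller per action: the controller stays thin, merely
// dispatching a query and shaping the response
final class ViewUserController
{
    public function __construct(private QueryBus $queryBus)
    {
    }

    public function __invoke(string $id): array
    {
        $user = $this->queryBus->execute(new FindUserById($id));

        return ['data' => $user];
    }
}
```

All the real work happens in the query handler, so swapping the handler implementation never touches the controller.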

To help with the typical request/response cycle of a controller, a helper library (somnambulist/api-bundle) is included. This integrates the Fractal response transformer through a system similar to DingoAPI. When used in conjunction with the command and query buses, this allows for very thin, light-weight controllers, keeping most of the business logic within the command and query handlers.

View Models a.k.a Presenters

For querying the system, e.g. for an API response, create a ViewModel instead of using the main domain models. This allows customised representations to be used, including presentation logic, without filling the domain models with it. The package somnambulist/read-models is included to provide this functionality via an active-record approach; however, pure SQL/PDO could be used instead.
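A hand-rolled view model can be as simple as the sketch below (UserView and its fields are hypothetical; in practice you would hydrate it from a read-models query or a plain PDO row):

```php
<?php declare(strict_types=1);

// Hypothetical view model: a read-only representation with presentation
// logic that would otherwise pollute the domain model
final class UserView
{
    public function __construct(
        public string $name,
        public string $createdAt  // raw value as read from the database
    ) {
    }

    // presentation logic lives here, not on the domain model
    public function displayDate(): string
    {
        return (new DateTimeImmutable($this->createdAt))->format('j M Y');
    }
}
```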

See the read-models documentation for more details of working with the library.


Database Migrations

Database migrations are handled by Doctrine Migrations. When writing your migrations it is strongly recommended not to use the entity manager to persist records: if you later change the schema or structure, this can make managing older migrations much more difficult.

However, if you do need the entity manager, you must add a factory override in the doctrine_migrations.yaml file and create a decorator like the following:

<?php declare(strict_types=1);

namespace App\Resources\Factories;

use Doctrine\Migrations\AbstractMigration;
use Doctrine\Migrations\Version\DbalMigrationFactory;
use Doctrine\Migrations\Version\MigrationFactory;
use Symfony\Component\DependencyInjection\ContainerAwareInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Class MigrationFactoryDecorator
 *
 * From: https://symfony.com/doc/master/bundles/DoctrineMigrationsBundle/index.html#migration-dependencies
 *
 * @package    App\Resources\Factories
 * @subpackage App\Resources\Factories\MigrationFactoryDecorator
 */
class MigrationFactoryDecorator implements MigrationFactory
{
    private MigrationFactory $factory;
    private ContainerInterface $container;

    public function __construct(DbalMigrationFactory $migrationFactory, ContainerInterface $container)
    {
        $this->factory   = $migrationFactory;
        $this->container = $container;
    }

    public function createVersion(string $migrationClassName): AbstractMigration
    {
        $instance = $this->factory->createVersion($migrationClassName);

        if ($instance instanceof ContainerAwareInterface) {
            $instance->setContainer($this->container);
        }

        return $instance;
    }
}
The same goes for any other services you would like to inject into the migrations.

Compiled Containers


A compiled container is a Docker container that does not use mounted local folders i.e.: there are no mappings in the docker-compose file to expose the application files to the container. This is the preferred way to run the containers as the performance is much higher and on Docker for Mac there are far fewer issues with Docker resources timing out.

The downside to this approach is that all the files are loaded into the container and are never updated until the container is rebuilt. This is not desirable while developing, as developers need to see code changes in real time, or near real time.

There are several products that help keep a local file system in sync with a running Docker container, namely Docker Sync and Mutagen. The preference is Mutagen, as it works with remote docker hosts where Docker Sync does not.

Mutagen is an application that can synchronize files from a source to a target. This can be done by several mechanisms, but includes direct docker support (as well as FTP/SSH etc). The project is still in development but works quite well and is pretty quick. Read more about it on the Mutagen site linked previously.

To install Mutagen, use brew: brew install mutagen-io/mutagen/mutagen.

To help work with Mutagen a helper library is available: SyncIt for Mutagen. To install this library, follow the instructions and optionally use the lazy install. Note that lazy install is performed at your own risk.

SyncIt is a PHP Phar archive that wraps some of the Mutagen functionality in an easier interface, using a YAML file format to configure sync tasks. Mutagen 0.10.0 has experimental support for a similar setup; however, it is highly experimental and not currently used.

Setup Mutagen Client

The first thing to do is create a default Mutagen global configuration. This is to prevent accidentally deleting files by using a two-way sync. A good set of defaults would be:

sync:
  defaults:
    mode: one-way-replica
    ignore:
      vcs: true
      paths:
        # System files
        - ".DS_Store"
        - "._*"

        # Vim files
        - "*~"
        - "*.sw[a-p]"

        # Common folders and files
        - ".idea"
        - "node_modules"
        - "var/*"
        - "docker-compose*.yml"
    symlink:
      mode: ignore
    permissions:
      defaultFileMode: 0644
      defaultDirectoryMode: 0755
This should be stored in ~/.mutagen.yml (your users home folder).

Using SyncIt on a Project

A default configuration file is included in the skeleton project and provides sync tasks for:

  • src
  • vendor
  • composer.json/lock
  • migrations

The SyncIt file will use ENV vars defined in the project .env as well as any in the current shell scope. You can check all available ENV vars by running: syncit params. Note this requires that the config file be valid. To substitute an ENV var use Bash expansion syntax: ${VAR_NAME}.

To start the SyncIt tasks: syncit start and then choose which ones you want to start.

To stop the SyncIt tasks: syncit stop and again choose what to stop.

You can get extended information by running syncit view to get the details of a sync task.

Additionally, all commands can be debugged by adding -vvv. This will output the underlying calls to mutagen for debugging. For example:

$ syncit start -vvv
Would you like to start the daemon? (y/n) y
Which task would you like to start? 
  [0] app_source_files
  [1] app_vendor_files
  [2] composer_json
  [3] composer_lock
  [4] All
 > 0
Starting 1 sync tasks
  RUN  'mutagen' 'create' '/Users/anon/Projects/app-service' ... <lots more options>
Created session <hash>                            
  RES  Command ran successfully
 RUN  started session for "app_source_files" successfully

You can get the current task status by running: syncit status

$ syncit status
+------------------+----- Sync-It -- Active Tasks -- Mutagen (v0.10.0) -------+----------------------+
| Label            | Identifier                           | Conn State        | Sync Status          |
+------------------+--------------------------------------+-------------------+----------------------+
| app_source_files | <hash>                               | Connected         | Watching for changes |
| app_vendor_files | --                                   | --                | stopped              |
| composer_json    | --                                   | --                | stopped              |
| composer_lock    | --                                   | --                | stopped              |
+------------------+--------------------------------------+-------------------+----------------------+
| Run: "mutagen list" for raw output; or view <label> for more details                               |
+----------------------------------------------------------------------------------------------------+

Important! Once you are done working on a project be sure to stop ALL syncit tasks to avoid issues.

Service Discovery


When running with the data service it is possible to set up service auto-discovery.

Traefik is used as a load-balancer and proxy for the main data service, and provided you follow a few simple steps, any number of micro services can be associated with it.

Automatic Service Discovery

Traefik acts as a proxy and load balancer in a similar way to nginx. It listens on port 80 (or any other) and provides a GUI (usually on 8080, though proxy.example.dev:80 gives access as well). LetsEncrypt can be set up to provide SSL, as well as HTTP auth, etc.

To register containers with Traefik (called proxy in the data project), you need to label the container with specific tags. Any web service should be labeled with:

  • traefik.enable: true
  • traefik.http.routers.service.rule: "Host(`service.example.dev`)"
  • traefik.http.routers.service.tls: true
  • traefik.http.services.service.loadbalancer.server.port: 8080

For example: to expose the example App API and have Traefik route it:

services:
  app:
    build:
      context: .
      dockerfile: src/Resources/docker/dev/app/Dockerfile
    networks:
      - app_example_network
    labels:
      traefik.enable: true
      traefik.http.routers.app.rule: "Host(`app.example.dev`)"
      traefik.http.routers.app.tls: true
      traefik.http.services.app.loadbalancer.server.port: 8080

The port is the INTERNAL container port; the router rule is how the service should be accessed via Traefik. We could require the previous port by changing the rule to: Host(`app.example.dev:4011`)

All that is left to do is dc up -d; Traefik will pick up the new container and it will be available immediately.

If you dc down, the services will be automatically removed.

As the Traefik config is done through labels, they can be added safely to docker-compose files without interfering with any other configuration.

Note: you need to ensure that the network is defined as an external type so that your containers will join the same resources. This network name is defined in the data-service project.

Note: the Traefik options must reference the container name. In the example above the container is named app, and the labels reference this as routers.app.* and services.app.*. If you name your container something else, e.g. foobarbaz, then these would be routers.foobarbaz.* and services.foobarbaz.*.
