How to build your Property Management System integration using Microservices

This is a guest post by Rafael Neves, Head of Enterprise Architecture at ALICE, a NY-based hospitality technology startup. While the domain is Property Management, it's also a good microservices intro.

In the fragmented world of hospitality systems, integration is a necessity. Your system will need to interact with different systems from different providers, each exposing its own Application Programming Interface (API). What's more, as you integrate with more hotel customers, you will have more instances to connect to and more connections to manage. A Property Management System (PMS) is the core system of any hotel, and integration is paramount as the industry moves to become more connected.

To provide software solutions in the hospitality industry, you will almost certainly need to establish a two-way integration with PMS providers. The challenge is building and managing these connections at scale, with multiple PMS instances across multiple hotels. There are several approaches you can leverage to implement these integrations. Here, I present one simple architectural design for building an integration foundation that will increase ROI as you grow: microservices.

What are microservices?

Martin Fowler, a thought leader in the design of enterprise software, provides a comprehensive definition of microservices:

The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.

In essence, microservices are very small components of software that focus on doing one thing really well. Instead of writing one big monolithic application, you can break it apart into microservices, with each one managing one focused function in a given domain.

As such, they are autonomous, executing independently of each other: a change to one service should not require a change to another. As you grow and make changes, you do not need to worry about impacting the other microservices.

Microservices are:

  • Small
  • Focused
  • Loosely coupled
  • Highly cohesive

Why Microservices are powerful

There are a lot of benefits provided by the microservices architecture. Key benefits include:


Scaling

  • Your single system will integrate with multiple PMS instances at different properties. At scale, when integrating with 1000 properties, you will need to explicitly manage 1000 different integrations, even if they are all running the same PMS from the same vendor. To add more complexity, these instances can be from different providers.
  • As you add more PMS instances and properties, microservices scale elegantly. If you have a monolithic application, you have to scale everything as one single big piece. During traffic spikes, it might be difficult to understand where the performance bottleneck is, whereas with microservices there is a lot more transparency.
  • When you have microservices, it is very clear which services are presenting performance issues, and you can easily adjust their capacity (the underlying hardware) without having to increase capacity for other services that are running normal traffic loads.


Resilience

  • The PMS instance at the hotel may be down or suffering performance issues, and this will not impact the performance or uptime of your system.
  • You can implement as many microservices as you want. The more granular your services, the more fault tolerance and control over changes you have.

Tech Stack independence

  • You can have multiple technology stacks; each service will have the tech stack that suits it best. Your guest profiles might be in a relational database whereas their requests could be in a NoSQL database.
  • There is no long-term commitment to a specific tech stack; after all, you can have multiple.

Adding, changing or killing features & refactoring

  • Microservices are very small (often just a few hundred lines of code).
  • It is easier to understand the code, as it is cohesive: it does one thing.
  • Changes to it don't impact other services directly.
  • Deleting an entire microservice is easier and carries little risk to the system at large.


Easier deployments and rollbacks

  • Hotels want to deliver exceptional service. They can't deliver it if your system is down or not providing all of its features (like not pushing a guest check-in request to your PMS or reading updates from it).
  • Rollback is also easier: it is easier to roll back one microservice with its own database than to roll back the entire system with one single database.
  • Deploying the next version of your monolithic application is always a pain: you have to deploy the entire application at once, even if you just added one single feature. Deploying everything together is risky.

Keep in mind that if you are integrating with just one PMS, microservices are overkill. If you are integrating with 1000 PMS instances at 1000 different properties, that is when the benefits of this architecture become evident.

The traditional approach: Monolithic applications

It makes sense to start your business and to start a new product with the traditional approach (a monolithic application) as it is easier. At this time, you are still learning about your domain and how it is all integrated.

The monolith is easier to develop and deploy, and starting there makes it easier to eventually answer questions such as how you should model a reservation service or how guest folio actions should be implemented in a guest profile microservice.

However, just like your company, the monolith grows very rapidly, and as your system grows it becomes difficult to scale:

New features are added, new lines of code get pushed, and when the system becomes too big, it becomes a monster that is hard to tame and understand. Changing the system requires intense regression testing to make sure new features do not break other parts of the system, as monolithic applications are usually neither cohesive nor loosely coupled.

Troubleshooting and debugging are harder since there is more code with more interdependence. Updating a service may require changes in shared infrastructure code, and that can lead to contagion when a bug is introduced. A large code base also makes it hard to onboard and train new developers.

It also impacts deployment: the bigger the application, the longer it takes to start up. Yes, you can simply add more servers and copy your application across all of them, but you still have one single database. Not only that, but some parts of your system might be memory-intensive whereas others need a lot of CPU. So what do you do when you cannot scale each component individually? You add more servers!

This is very costly. As such, once you get a better sense of your domain, you might want to begin breaking your system into microservices.

Architecture Overview

Now that we have explained the long-term benefits of adopting a microservices architecture, let's look into some details of how we can design this microservices framework:

The key here is segregation: systems being integrated should operate independently of one another. That is, your core system is independent of the property management system running at property X and independent of the system running at property Y.

This segregation can be achieved by introducing a connector (a middleware) between your core system and all the property management systems you are integrating with.

The Middleware is composed of message queues and background workers:

  • Examples of services that can be used to implement:
    • Message queues: RabbitMQ, IronMQ, etc.
    • Background workers: IronWorker, AWS Lambda, etc.

Message queues provide asynchronous communication between systems: the sender (e.g. your system) posts a message into a queue, where it sits until it is processed at a later time by a background worker that subscribes to that queue.

The background worker is then responsible for processing the message (parsing its contents) and managing the integration with the PMS API. It also saves data into the middleware database.
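To make the pattern concrete, here is a minimal sketch using Python's standard library in-process queue as a stand-in for a real broker such as RabbitMQ or IronMQ; `sync_reservation_to_pms` is a hypothetical placeholder for the vendor's API call, not a real one:

```python
import json
import queue
import threading

message_queue = queue.Queue()  # stands in for a RabbitMQ/IronMQ queue
middleware_db = []             # stands in for the middleware database

def sync_reservation_to_pms(payload):
    # A real worker would call the vendor's PMS API here.
    return {"pms_confirmation": "OK", "reservation_id": payload["reservation_id"]}

def background_worker():
    # Subscribes to the queue and processes messages as they arrive.
    while True:
        raw = message_queue.get()
        if raw is None:  # sentinel used only to end this demo cleanly
            break
        payload = json.loads(raw)  # parse the message contents
        result = sync_reservation_to_pms(payload)
        middleware_db.append(result)  # persist the outcome

# The sender (your core system) posts and moves on -- asynchronous by design.
message_queue.put(json.dumps({"reservation_id": "R-1001", "action": "create"}))
message_queue.put(None)

worker = threading.Thread(target=background_worker)
worker.start()
worker.join()
```

The sender never waits on the PMS: it returns as soon as the message is queued, which is exactly the segregation the middleware is meant to provide.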

Keep in mind that the background worker can be a cloud service like AWS Lambda, an application developed internally in Java, or a Windows service. As we will see in detail below, the technology stack of the microservice does not matter.

One should keep in mind that a single message queue guarantees FIFO (First In, First Out), so all messages in that queue are processed in the order they come in. If you have multiple queues, however, a message posted to queue X at a later time can be processed earlier than a message posted to queue Y. This should be considered in your design.
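One common way to cope with cross-queue reordering, sketched below under the assumption that every message carries a per-entity version number (an assumption, not something every broker gives you), is to have workers simply drop messages older than the state they have already applied:

```python
# Hypothetical guard against cross-queue reordering: each message carries a
# monotonically increasing version per entity, and workers skip stale ones.
last_applied = {}  # entity id -> highest version applied so far

def apply_if_fresh(entity_id, version):
    """Return True (and record the version) only if the message is newer."""
    if version <= last_applied.get(entity_id, -1):
        return False  # stale message that arrived late from another queue
    last_applied[entity_id] = version
    return True

# Two messages about the same reservation arriving out of global order:
applied_late_checkin = apply_if_fresh("R-1001", version=2)  # newer: applied
applied_old_confirm = apply_if_fresh("R-1001", version=1)   # older: skipped
```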

Also, while the PMS may be on premises at the hotel, your system needn't be; it can be in the cloud or on premises.

Keep in mind that this service controls all the lifecycle events associated with the reservation so it is not just a CRUD wrapper. If you need to assign a room to this reservation, add an accompanying guest or even check-in a reservation, you would send the appropriate requests to this same worker.
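As a sketch, such a worker might route each lifecycle event through a dispatch table keyed on the message's action; the action names and handlers here are illustrative, not any vendor's actual API:

```python
# Toy lifecycle dispatcher for the reservation worker: one worker, many
# lifecycle events, not just CRUD. Handlers are stubs for illustration.
def assign_room(msg):
    return f"room {msg['room']} assigned to {msg['reservation_id']}"

def add_accompanying_guest(msg):
    return f"guest {msg['guest']} added to {msg['reservation_id']}"

def check_in(msg):
    return f"{msg['reservation_id']} checked in"

HANDLERS = {
    "assign_room": assign_room,
    "add_accompanying_guest": add_accompanying_guest,
    "check_in": check_in,
}

def handle_reservation_message(msg):
    handler = HANDLERS.get(msg["action"])
    if handler is None:
        raise ValueError(f"unknown action: {msg['action']}")
    return handler(msg)

result = handle_reservation_message(
    {"action": "check_in", "reservation_id": "R-1001"})
```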

Now let's analyze each of the main characteristics of microservices, as per Fowler's description, and how our architecture implements them:

1- Organization around business capability

Each worker implements a piece of the logic of how to integrate with a PMS. You can have multiple workers plugged into the same PMS instance in a property, add more workers connected to the same Property Management System (same vendor) in a different hotel and also add other workers connected to a different Property Management System (different vendor) at other properties.

So, suppose you need to change how you interact with some API methods: you just deploy the new version to one of the workers without impacting the others. You can have one worker to deal with reservations and another to deal with guest profiles.

Some background workers can be scheduled in the Linux crontab to execute recurrently on a given schedule. Others are always running, processing messages from the queue as they arrive. Background workers can also call back your core system's API to insert or update information collected from the PMS (i.e. fetching/reading data from the PMS and loading it into your core system).

In his book Building Microservices, Sam Newman states that "smaller teams working on smaller codebases tend to be more productive." This is achieved through microservices.

Not only does this enable higher productivity, but it also allows you to shift teams or individuals from one microservice to another (universal sharing of the code base).

It also promotes innovation: individuals or teams working on the same project for a long time may only come up with a limited scope of new ideas, whereas if you allow your team to switch between products and projects, they may come up with many more.

2- Automated Deployment

Microservices need to be deployed in an automated fashion. Why? First, because you have so many of them: deploying each one manually would be error-prone and time-consuming, and the effort only grows with the number of microservices you have. You release each service individually, independently of the others.

Notice that I mean deployment of a new version to the microservice here, not spinning up new worker instances. You already have the workers running, but you want to deploy a new version of the code. Let's illustrate this with an example:

  • Let's say you have to integrate with 1000 properties, where 500 of them use PMS from vendor 1 (PMS_1) and 500 of them use a PMS from vendor 2 (PMS_2).
  • Since the domain context here is very similar irrespective of PMS vendor, you will very likely have a similar number of workers per PMS instance, unless you want to scale a given connection by adding more workers of the same type. To simplify, let's assume you have 5 workers per PMS instance (one for Reservations, one for Guest Profiles, etc.).
  • Since PMS_1 API is different from PMS_2 API, the Reservation service that integrates with PMS_1 has a different code than the Reservation service that integrates with PMS_2.
  • Across 1000 properties, you then have 5000 workers:
    • 2500 workers for PMS_1
      • 500 Reservation Workers, one per property with PMS_1
      • 500 Guest Profile Workers, one per property with PMS_1
      • 500 X workers, one per property with PMS_1
      • 500 Y workers, one per property with PMS_1
      • 500 Z workers, one per property with PMS_1
    • 2500 workers for PMS_2
      • 500 Reservation Workers, one per property with PMS_2
      • 500 Guest Profile Workers, one per property with PMS_2
      • 500 X workers, one per property with PMS_2
      • 500 Y workers, one per property with PMS_2
      • 500 Z workers, one per property with PMS_2
  • Let's say that you made a change to the code of the Reservation service that integrates with PMS_1; it was tested and is ready to release.
  • Assuming you have one source code repository and one build in your Continuous Integration tool of choice per microservice, which is highly advisable, you need to deploy your code to 500 workers (the 500 Reservation workers that integrate with PMS_1).

Second, one of the purposes of microservices is agility, and to have agility you need automation. This is where the magic of Continuous Integration (CI) and Continuous Delivery (CD) shows up:

CI is a development practice that requires developers to integrate code into a shared repository several times a day. The commit then triggers a build. If the build fails, everyone gets alerted (principle of alerting on deviations). The key here is to identify issues with the commit (i.e. issues with the code) as early as possible. If it builds successfully, it can then be deployed to your application server. This leads us to Continuous Delivery:

Continuous Delivery is the practice of ensuring that the artifact of the successful build above can be quickly deployed to production. This is achieved by first deploying your application to a staging environment (which has the same characteristics as your production environment). The application can then be deployed to your production environment by just pressing the "Deploy" button. What's best here is that, since it is just a button, you don't need to interrupt your software engineers' work.

  • Examples of services that can be used to implement CI/CD: Atlassian Bamboo, TeamCity, Jenkins, etc.

However, please don't confuse Continuous Delivery with Continuous Deployment. I will not get into the details here, but I'll leave you with a great article by PuppetLabs on the differences between Continuous Delivery and Continuous Deployment.

Also note that microservices make deployments much safer than a monolithic application does, which helps with automation.

3- Intelligence in the endpoints

The background workers encapsulate the logic of how to integrate with the PMS. If we need to change the logic, or if the PMS API has changed, we just need to change it in one place - and that place is not your main system (this isolates it from changes to the downstream API).

Also, each PMS has its own API, so you can isolate the logic to communicate with each API from your core system.

4- Decentralized control of languages and data

Each microservice can have its own technology stack, allowing you to embrace technology heterogeneity.

For example, if we need to improve the performance of a specific component, we can choose a different tech stack that achieves the desired performance. New services are not tied to older technology choices and can leverage new languages or technologies when appropriate.

Each worker can be built with a different technology: worker 1 can be developed in Java with a MySQL database to handle Guest Profiles whereas worker 2 can be developed in C# with a NoSQL DB to handle guest messages. Remember: they are independent.

You do need to worry about their integration, i.e. how they actually communicate with each other. Are you going to implement a RESTful API returning JSON, or a SOAP API talking XML?

Now let’s dive deeper into the middleware.

The Middleware

The Middleware provides the segregation between your system and the multiple property management systems you are integrating with. It is composed of message queues and background workers.

The Middleware should not hold state: the systems on each end (i.e. your system and the PMS) are responsible for holding state about hotels, guest profiles, reservations, etc. The Middleware just creates a mapping between the two. The reason for this is that you don't want to introduce a new component that also stores state, which can lead to consistency problems: you would have the same entity (say, a Reservation) stored in 3 different systems (the hotel's property management system, your middleware, and your core application), and they could all differ due to some bug in the integration. In that case, who holds the truth about the reservation's real, valid state?
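The mapping the Middleware does keep can be as simple as a two-way id lookup. A minimal sketch (the class and ids are illustrative, and a real mapping would be persisted in the middleware database rather than held in memory):

```python
# Sketch of the only state the Middleware keeps: a two-way mapping between
# your core system's ids and the PMS's ids for the same entity.
class IdMapping:
    def __init__(self):
        self._core_to_pms = {}
        self._pms_to_core = {}

    def link(self, core_id, pms_id):
        self._core_to_pms[core_id] = pms_id
        self._pms_to_core[pms_id] = core_id

    def to_pms(self, core_id):
        return self._core_to_pms[core_id]

    def to_core(self, pms_id):
        return self._pms_to_core[pms_id]

mapping = IdMapping()
mapping.link(core_id="RES-42", pms_id="PMS-9001")  # same reservation, two ids
```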

The Middleware must provide ways that allow you to reconcile what you have in your core system with what is in the PMS. So if a new request was created in your core system but for some reason (the instance was offline, a bug in the software, a network issue, etc.) it was not saved into the PMS (a charge posted to the guest folio, for example), the Middleware should alert users to the issue and offer ways to reprocess the integration. It must provide a clear view of where every message stands on the queues and the status of each background worker. It must allow you to see that a given message failed and why it failed, and provide mechanisms to retry (reprocess) a given message.
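The core of that reconciliation is a diff between the two systems. A sketch, with both systems stubbed as plain data structures for illustration:

```python
# Reconciliation sketch: diff what the core system believes was sent against
# what the PMS actually has, and surface anything that never made it across.
core_requests = {"REQ-1": "posted", "REQ-2": "posted", "REQ-3": "posted"}
pms_records = {"REQ-1", "REQ-3"}  # REQ-2 was lost (PMS offline, bug, ...)

def find_unsynced(core, pms):
    """Return the requests the core system has that the PMS never received."""
    return sorted(req for req in core if req not in pms)

to_reprocess = find_unsynced(core_requests, pms_records)
# Each unsynced request can now be alerted on and re-queued for a worker.
```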

You can have a caching layer on top of the Middleware database in order to make access to common objects like city codes, credit card types, etc. a lot faster. Some property management systems implement these as enums with key-value pairs. Examples of tools that can provide a caching layer: self-hosted Redis, managed Redis solutions (e.g. AWS ElastiCache), and Memcached.
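The access pattern is simple enough to sketch with a dict-based time-to-live cache; in production this layer would be Redis or Memcached, and the `load_city_code` lookup is a made-up stand-in for a database read:

```python
# Minimal time-to-live cache for slow-changing lookups such as city codes
# or credit card types.
import time

class TtlCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        value, expiry = self._store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value  # cache hit: skip the database entirely
        value = loader(key)  # cache miss: load from the middleware database
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

db_calls = []

def load_city_code(key):
    db_calls.append(key)  # track round trips to the "database"
    return {"NYC": "New York"}[key]

cache = TtlCache(ttl_seconds=60)
first = cache.get("NYC", load_city_code)   # loads from the database
second = cache.get("NYC", load_city_code)  # served from the cache
```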

Challenges with using microservices

There are always challenges when building any software, especially with integrated systems at scale.

In Building Microservices, Newman reminds us that "failure becomes a statistical certainty at scale," and this is certainly the case when implementing microservices (and monoliths, for that matter).

Even once you accept and embrace the fact that things will fail (your hard disks, the network, etc.), handling the failure of multiple independent services is hard.

Distributed and asynchronous architectures are hard to implement and hard to debug. You need to look through logs scattered across multiple instances and reason about distributed transactionality to understand why you ended up with a wacky state. If errors occur in the middle of a workflow, rolling back state is difficult. Getting a sense of where failures happened is difficult since work may be happening in parallel, and race conditions may be introduced that are hard to manage.

Ensuring consistency at scale with microservices is an additional challenge. Imagine that you have one service that manages guest profiles and another that manages reservations. If a new guest books a reservation for the first time at your hotel, the reservation microservice will create a new reservation record and the guest profile microservice will need to create a new guest profile. What if the guest profile service has a bug and doesn't create the new guest profile successfully? If you don't manage this correctly, you will end up with an orphan reservation that is not tied to any guest profile. At scale, this can be very hard to track and manage.
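One way to avoid the orphan reservation is a compensating action: if the guest profile step fails, undo the reservation step. This is a toy, saga-style sketch with all services stubbed in memory; a real implementation would call the microservices' APIs and handle partial failures of the compensation itself:

```python
# If creating the guest profile fails, cancel the reservation we just made
# so no orphan record survives. All "services" are in-memory stubs.
profiles, reservations = {}, {}

def create_reservation(res_id, guest):
    reservations[res_id] = {"guest": guest}

def cancel_reservation(res_id):
    reservations.pop(res_id, None)  # compensating action

def create_profile(guest, fail=False):
    if fail:
        raise RuntimeError("guest profile service bug")
    profiles[guest] = {"name": guest}

def book(guest, res_id, profile_fails=False):
    create_reservation(res_id, guest)
    try:
        create_profile(guest, fail=profile_fails)
    except RuntimeError:
        cancel_reservation(res_id)  # undo so no orphan reservation remains
        return False
    return True

booked_ok = book("Ada", "R-1")
booked_failed = book("Bob", "R-2", profile_fails=True)
```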

Asynchronous distributed architectures can lead to other problems. Imagine that a certain type of request sent by your system to a transacted queue makes your workers crash, and that you have added multiple workers pulling messages from the same queue to speed up processing. The first worker pulls the message from the queue and dies. When it dies, the lock on the message times out and the original message is put back into the queue. The next worker then pulls the same message from the queue and meets the same fate: it dies too.
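A common guard against that crash loop, sketched below with invented limits and in-memory queues, is to track how many times a message has been delivered and, past a threshold, park it in a dead-letter queue for a human to inspect instead of redelivering it forever:

```python
# Poison-message guard: after MAX_DELIVERIES failed attempts, move the
# message to a dead-letter queue instead of putting it back on the queue.
import queue

MAX_DELIVERIES = 3
work_queue = queue.Queue()
dead_letter_queue = queue.Queue()

def process(message):
    # Stands in for the handler that this particular message always crashes.
    raise RuntimeError("poison message")

def consume_once():
    message = work_queue.get()
    message["deliveries"] += 1
    try:
        process(message)
    except RuntimeError:
        if message["deliveries"] >= MAX_DELIVERIES:
            dead_letter_queue.put(message)  # stop the crash loop here
        else:
            work_queue.put(message)  # normal redelivery after a worker dies

work_queue.put({"body": "bad request", "deliveries": 0})
while not work_queue.empty():
    consume_once()
```

Most real brokers expose something equivalent (delivery counts plus a dead-letter destination), so this logic can usually be configured rather than hand-written.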

Another challenge is that you will have to monitor hundreds of services that are constantly being re-deployed. This drives the need for a dedicated DevOps resource or team to manage such a large number of services.

Overall system performance will also need to be considered as you have lots of remote calls all going through the network. As we all know, the network is unreliable: packets might be delayed, lost, etc. Also, the messages between the systems don't flow in real-time: you post a message to a queue and then at some point in the (near) future, it will be processed.

Lastly, implementing versioning effectively is hard with microservices: you will eventually make a change to a service's interface. How will you manage that?
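One lightweight approach is to version every message envelope and keep handlers for old versions until every sender has upgraded. The field names and versions below are invented for illustration; schema-evolution tooling exists for the same job at larger scale:

```python
# Version-tolerant message handling: route each envelope to the handler
# that understands its version, so old and new senders can coexist.
def handle_v1(body):
    parts = body["name"].split()  # v1 senders sent a single "name" field
    return {"first": parts[0], "last": parts[-1]}

def handle_v2(body):
    return {"first": body["first_name"], "last": body["last_name"]}

HANDLERS_BY_VERSION = {1: handle_v1, 2: handle_v2}

def handle(envelope):
    return HANDLERS_BY_VERSION[envelope["version"]](envelope["body"])

old_sender = handle({"version": 1, "body": {"name": "Ada Lovelace"}})
new_sender = handle({"version": 2,
                     "body": {"first_name": "Ada", "last_name": "Lovelace"}})
```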

With every architecture approach there are tradeoffs. While there are challenges with microservices, there are challenges with every approach. When managing multiple PMS integrations at scale, the benefits of using microservices far outweigh the costs.

Considering the economics of implementation at scale

Here are some comparative costs to consider when implementing microservices:

  • When at scale, 100 different PMS integrations may require 100 servers.
  • With a monolithic approach, these servers are always running.
  • In a microservices world, a service wakes up when needed and shuts down when not.
  • With cloud services like AWS Lambda or IronWorker, you are paying for CPU time rather than server time.
  • When embracing on-demand provisioning systems like those provided by Amazon Web Services, we can apply this scaling on demand for those pieces that need it. This allows us to control our costs more effectively.

Thus, over time, microservices can be much more cost effective: you pay for computing power as there is demand, which allows you to manage your costs more closely and ultimately reduce waste. It is not often that an architectural approach can be so closely correlated with an almost immediate cost saving.

So, where to go from here?

Breaking apart the monolith

A question I always hear is: "I've already built my monolithic application. Do I need to re-build it from scratch to implement a microservices architecture?"

The short answer is: no.

The long answer is that you can break apart your monolith piece by piece. It should be an incremental approach, so you can learn more about your core functionality and how its parts interact. This is essential for building an understanding of what your services should be and how they will communicate with each other. Take a "learn as you go" approach and define which parts of the system should become a microservice, piece by piece. I will leave the details of strategies for designing and implementing this to a future post.

I hope this explains the benefits of using a microservices approach to building your PMS integration. As your system and the number of microservices grow, this approach will help you scale in a more flexible, efficient and cost-effective manner.