Speeding up Data Access for Microservices

Mobile and web apps have been on the rise in recent years, and as more people use them and the average usage time increases, this reliance on mobile and web apps brings with it a demand for real-time response. Consumers want information, and they want it now.

Aside from responding to user demands, applications must also address requests from other applications or handle data delivered by connected monitors or sensors. Any delay in response will negatively affect customer touch points and operational efficiency, leading to missed transactions and decreased revenue. Current applications must be prepared and equipped to meet these demands if a business is to stand any chance in today’s extremely competitive market.

A challenge for companies looking to scale and evolve existing applications is the sheer size and structure of those applications. Large-scale applications are difficult to evolve: changes can lead to downtime and long migration processes that affect an organization for years, not to mention the work of integrating cloud services and other newer applications the business depends on. The practical solution is to break these large apps into smaller, more manageable parts, and microservices were born out of this necessity. Microservices, or the microservice architecture, give structure to large applications by breaking them up into a collection of services that can be deployed independently, are easy to maintain, are organized around business capabilities, and are loosely coupled. Ideally, each microservice is owned by a small team, allowing for quick, reliable, and frequent delivery of complex applications.

As older and larger applications are decomposed into microservices, a valid question arises: “What do you do with the data; specifically, what do you do to ensure data access speed and integrity?”

The Need for Speed in Microservices

Microservices can be scaled more dynamically than older, larger applications because the scope of software being scaled is limited to the microservices that need it. There’s no need to scale the entire application, especially if it will be scaled back down once peak demand has passed. This makes a microservices-based approach the go-to solution for companies that run web-scale applications and those that develop their own custom software. Breaking large applications into more manageable microservices also aligns with the availability benefits and horizontal scalability of a distributed computing platform. By scaling each microservice individually, computing power can be allocated where it’s needed, and each microservice can be hosted on the infrastructure most appropriate for its specific workload.

A common approach to scaling microservices is simply adding more instances. On its own, this can be cumbersome and impractical, because it adds resources to the application as a whole instead of targeting the specific areas that need higher performance. It also doesn’t scale the data access layer, which makes scaling inefficient because only the business logic tier grows. An in-memory data grid architecture solves this by providing a fast data access layer from which all application instances can read and write data.
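As a rough illustration of that fast data access layer, the sketch below uses a cache-aside read path: data is served from memory, and the slow disk-backed store is consulted only on a miss. The class and function names are hypothetical, and a process-local dict stands in for what would be a distributed in-memory data grid in production.

```python
import time

# Hypothetical stand-in for an in-memory data grid's cache-aside read path.
# In a real deployment this store would be distributed across grid nodes,
# not a process-local dict.
class DataGridCache:
    def __init__(self, loader):
        self._store = {}      # in-memory data, fast to read
        self._loader = loader # fallback to the slow system of record

    def get(self, key):
        # Cache-aside: serve from memory, fall back to the slow store once.
        if key not in self._store:
            self._store[key] = self._loader(key)
        return self._store[key]

def slow_db_lookup(key):
    time.sleep(0.01)  # simulate disk-backed latency
    return {"id": key, "value": key.upper()}

cache = DataGridCache(slow_db_lookup)
first = cache.get("user-42")   # misses, loads from the "database"
second = cache.get("user-42")  # served straight from memory
```

The second `get` never touches the slow loader, which is the latency win the article describes: frequent reads are absorbed by RAM instead of disk.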

Because many businesses today rely on cloud-based architectures, microservices must scale elastically: a single microservice should be able to expand into several instances under load and contract back to one afterward. A common way to address the data requirements in this situation is to share a caching layer across all instances of a microservice. Disk-based storage can also be scaled, but at a cost; since traditional databases scale vertically, this requires ever more powerful hardware. There’s also the problem of speed: data access is much slower on disk than in RAM, even when solid-state drives (SSDs) are used.
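The benefit of a caching layer shared across instances can be sketched as follows. Two instances of the same microservice read through one shared layer, so the backing store is hit only once; the names here are illustrative, not a real grid API.

```python
# Sketch: two instances of a microservice sharing one caching layer.
# In production this layer would be a distributed in-memory data grid;
# here a single object stands in for it.
class SharedCache:
    def __init__(self):
        self.store = {}
        self.misses = 0  # trips to the backing store

    def get(self, key, load):
        if key not in self.store:
            self.misses += 1
            self.store[key] = load(key)
        return self.store[key]

def load_order(key):
    # Hypothetical loader hitting the disk-based system of record.
    return f"order:{key}"

shared = SharedCache()

# Instance A and instance B of the same microservice use the same layer,
# so the second read is a cache hit rather than a second database trip.
a = shared.get(7, load_order)  # instance A: miss, loads from store
b = shared.get(7, load_order)  # instance B: hit, served from memory
```

If each instance instead kept a private cache, every new instance would start cold and repeat the same loads, which is exactly the inefficiency a shared layer avoids.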

Speeding Up with In-Memory Data Grids

In developing and scaling microservices, the most cost-effective solution is to work with an in-memory data grid, which provides the following benefits:

  • Simple lookups (reads)
    Lookups are the most common data access pattern for microservices, making them a prime target for optimization. A lookup retrieves a small amount of data from a larger dataset, and it happens very frequently. In-memory data grids make this faster by minimizing access to disk, allowing for low latency and high throughput.
  • MapReduce-style processing
    This speeds up data processing by executing code locally against the relevant data partitions in parallel. Shipping the code to where the data resides eliminates the need to move the data across the network before processing it.
  • Fast transactions (writes)
    Writes occur less frequently than lookups, and an in-memory data grid can be configured to support ACID transactions. This avoids data consistency issues even when performance is the main focus.
  • Session state data
    Keeping session state in the grid supports scaling up or down through the addition or removal of microservice instances. Although this data lives only as long as the session, storing it in the grid allows a new instance to pick up where a failed one left off.
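The MapReduce-style processing described above can be sketched in a few lines: a "map" step runs against each partition where the data lives, and only the small partial results are combined. The partition layout and field names below are purely illustrative.

```python
from collections import defaultdict
from functools import reduce

# Illustrative partitioned dataset: each grid node owns one partition.
partitions = {
    "node-1": [{"region": "eu", "amount": 10}, {"region": "us", "amount": 5}],
    "node-2": [{"region": "eu", "amount": 7}, {"region": "us", "amount": 3}],
}

def map_partition(rows):
    # Runs locally on the node that owns the partition, so raw rows
    # never leave the node.
    totals = defaultdict(int)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

def combine(left, right):
    # Only these small per-partition summaries cross the network.
    merged = dict(left)
    for key, value in right.items():
        merged[key] = merged.get(key, 0) + value
    return merged

partials = [map_partition(rows) for rows in partitions.values()]
result = reduce(combine, partials, {})
# result == {"eu": 17, "us": 8}
```

The point of the pattern is the size asymmetry: each node scans its own rows in parallel, and only the compact per-region totals are moved and merged.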

Speeding Into the Future

The demand for speed and the amount of data businesses handle each day show no signs of subsiding anytime soon. Although technology has made leaps and bounds in data processing, there’s still a long way to go, and in-memory data grids are the map to that destination. Microservices-based architectures demand performance and scalability to reach their full potential, and the in-memory data grid is well positioned to meet that demand for the foreseeable future.
