
The title of this article is a bit of an exaggeration. Monoliths are evil and not so evil at the same time. How is that possible? Let’s find out.
Every application starts simple, so a monolithic architecture is the right initial choice. When a project starts, there is often very little information available, so it should begin with the most straightforward approach. One thing to keep in mind, however, is that the project is bound to grow as more features are introduced. Therefore, it needs to be architected with the ultimate goal of microservices in mind.
Modular design following the Single Responsibility Principle can help us achieve this goal: boundaries between classes and services need to follow functional lines, as sketched below. If these principles are neglected, the once simple monolithic application eventually turns into tangled spaghetti.
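As a rough illustration of what such functional boundaries can look like, here is a minimal Python sketch; the class and method names are hypothetical and not taken from any particular project.

```python
# Hypothetical sketch: classes split along business functions, each with a
# single responsibility, so any of them could later become its own service.

class PaymentService:
    """Owns payment processing only."""
    def charge(self, order_id, amount):
        print(f"charging {amount} for {order_id}")

class NotificationService:
    """Owns customer notifications only."""
    def order_confirmed(self, customer_id, order_id):
        print(f"notifying {customer_id} about {order_id}")

class OrderService:
    """Owns the order lifecycle; delegates everything else."""
    def __init__(self, payments, notifications):
        self._payments = payments
        self._notifications = notifications

    def place_order(self, customer_id, items):
        order_id = f"order-{customer_id}-{len(items)}"  # placeholder persistence
        self._payments.charge(order_id, sum(i["price"] for i in items))
        self._notifications.order_confirmed(customer_id, order_id)
        return order_id

shop = OrderService(PaymentService(), NotificationService())
shop.place_order("cust-1", [{"price": 25.0}, {"price": 17.5}])
```

Because each class has exactly one reason to change, the order, payment, and notification responsibilities could later be extracted into separate services without untangling shared code.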
Organizations often focus on adopting the best agile processes and practices. After a while, however, they realize that engineers are still struggling to meet deadlines and that feature development and releases have slowed to a crawl. What they fail to realize is that the application is suffering from monolith fever; no matter how much they improve the process, they are only applying band-aids rather than fixing the core problem.
Monolithic applications have both advantages and disadvantages. For large and complex projects, however, the disadvantages far outweigh the advantages.
Monolithic applications are tightly coupled to their technology stack. As time passes and that technology becomes obsolete, it is challenging to move to newer frameworks, or even to upgrade an existing framework if the new version is not backward compatible. This often necessitates rewriting the complete application from scratch on a modern tech stack.
Because of the obsolete technology, it is difficult to find talent interested in working on the old stack. Engineers maintaining these applications often have to learn aging technologies that do little for their career advancement, whereas it is far easier to recruit engineers who want to learn and apply cutting-edge technologies.
Monolithic applications are also tough to scale; they often have one centralized database, which is a single point of failure. If that database is down or struggling under heavy load, the only way to scale it without additional development work is to use more powerful hardware, i.e., vertical scaling. The unresponsive database becomes a bottleneck and in turn takes down the whole application. The application tier can handle increased load by launching more instances behind the load balancer, but the database remains the limiting factor.
Another major problem with monolithic applications is that there is no way to use hardware optimized for individual features. If one feature is memory intensive while another is computationally intensive, a monolithic application is out of luck: it can only run on general-purpose hardware.
Because monolithic applications tend to be large and complex, testing them becomes error-prone and challenging. Engineers are afraid to make any significant refactoring because figuring out all the paths affected by the refactored code is time-consuming and tedious. This causes a ripple effect, creating more technical debt.
The deployment cycle is yet another victim of monolithic architecture. Since the application is hard to test, even a minor change requires a complete retest. Testing takes longer, so the deployment cycle drags out.
As the code base of a monolith is quite extensive, the learning curve is steep, which in turn affects the productivity of the whole team. No single engineer can become a subject-matter expert (SME) for the entire application.
Large applications take a long time to build and start, so development time and the feedback cycle stretch out. Continuous integration runs take a long time as well, and if the build fails, it becomes very time-consuming to figure out the cause.
To avoid breaking the build, developers work on features in long-lived branches, which becomes very problematic when those branches are merged back to the master branch. The merges are error-prone, so instead of helping, the strategy ends up hurting.
If a nasty bug such as a memory leak creeps in, it affects the whole application, causing production outages and hurting customers.
In short, monolithic architecture works against all the qualities required of a modern software application:
Maintainability
Extensibility
Testability
Scalability
Reliability
Releasability
Monolithic architecture is an excellent choice for small applications because it offers several advantages.
Monolithic applications can catch most bugs at compile time since they only have binary dependencies and no dependency on external services.
Monolithic applications are inherently more secure, as the attack surface is minimal and centralized, so enforcing security is simpler.
The deployment procedure is very straightforward and only requires deploying one artifact across all instances.
For monolithic applications, setting up the development environment is very straightforward. Usually, only one code repository needs to be checked out, and the IDE has access to the application's entire code base.
Debugging is easier in general, as the engineer can step through the code quickly without having to worry about external calls.
Transaction management is very easy to implement in a monolithic architecture, since it deals with only a single database.
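As a minimal sketch of why this is straightforward, the snippet below wraps two related writes in a single database transaction using Python's standard sqlite3 module; the schema and values are purely illustrative.

```python
import sqlite3

# Illustrative single-database transaction: both tables live in the same
# database, so one commit or rollback covers the whole business operation.
conn = sqlite3.connect("shop.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE IF NOT EXISTS payments (order_id INTEGER, amount REAL)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (42.0,))
        conn.execute(
            "INSERT INTO payments (order_id, amount) VALUES (?, ?)",
            (cur.lastrowid, 42.0),
        )
except sqlite3.Error:
    # Any failure leaves both tables unchanged; no compensating logic is needed.
    raise
finally:
    conn.close()
```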
Testing a monolithic application is simple, as there are no external dependencies to mock. Setting up the data required to run a test is straightforward because only one database is involved.
For a comparable task, a monolithic application can be more performant because it makes local API calls rather than going over the network to fetch the equivalent data. However, this advantage comes at the cost of scalability.
Before the internet, computers were used for limited use cases. The dawn of the internet allowed multiple computers to exchange data in a meaningful way, providing the capability to solve many more interesting problems.
On this premise, various distributed applications have been developed ever since. These applications need to run 24/7 to provide services to their users. Data centers are facilities that host computer servers connected to the internet, and applications are installed on those servers to serve user requests.
Setting up the proper environment in these data centers required lots of effort and was very time-consuming, hindering companies from innovating quickly and efficiently. Companies had to purchase or rent racks in these data centers, then purchase computer servers, provision them, install them in the racks, and connect them to the internet.
Once they were done setting up the environment, they had to provide ongoing support: patching the OS, monitoring for security threats, handling hardware failures, and replacing old servers with new ones.
All of this required specialized teams and was a waste of time and resources for most organizations. It took away precious time that could instead be spent developing features to differentiate their business and get ahead of the competition.
Cloud computing became popular very quickly because it resolved these issues and took over the heavy lifting for these organizations. Cloud computing essentially means renting someone else's computers and paying for what you use, for example by the hour.
It is analogous to how we use electricity: when we flip the switch to turn on a light, we are charged for the usage. In the same way, cloud computing provides on-demand usage of compute, network, and storage services.
Cloud providers operate under a shared responsibility model with their consumers. The cloud provider manages the security of the infrastructure and the virtualization hosts, whereas consumers are responsible for securely running their applications.
An organization willing to move to the cloud can benefit from the following characteristics of the cloud.
In summary, there is quite a bit of attraction in moving to the cloud. However, doing so requires cloud knowledge and the ability to architect cost-effective and secure solutions. CRE8IVELOGIX Inc. was founded by a former AWS Sr. Solutions Architect who not only has in-depth knowledge of AWS services but has also helped various startups and enterprises with their transformation strategy. We would love to partner with you.
Although every team has its own flavor of following and implementing the Agile software development process, some best practices can be followed to keep development streamlined.
Vision: Every project starts with a vision, captured as a high-level (60,000-foot) view of the project. The stakeholders define their vision in a few sentences.
I want to develop a mobile banking platform.
Project Planning in Agile Software Development: Once the vision is defined, it's time to get together with stakeholders, project managers, engineers, etc., and develop a high-level (30,000-foot) view of the project by defining Epics. These Epics capture the feature set required to accomplish the overall vision.
EPIC-1: Customers can send money to other members using their phone numbers.
Milestone Planning: After capturing Epics, project milestones need to be defined, which will also help in release planning. Based on priorities, Epics are assigned to milestones in an Agile Software Development Process.
Milestone 1.0 will include EPIC-1, EPIC-2, and EPIC-3.
Milestone 2.0 will consist of EPIC-4 and EPIC-5.
Sprint Planning: Engineers commit to developing these prioritized features during the sprint planning sessions. In these sessions, engineers get together with the product owners and stakeholders to dive deep into the committed features. It is essential to define acceptance criteria during this process.
Story Estimation: At this stage of the Agile Software Development process, engineers vote to estimate each story. The estimate reflects the level of effort required to implement the story. Every team has its own flavor of estimation: some follow the Fibonacci series, others use the number of days, etc. If a story is too big to be completed in one sprint, it is broken into smaller stories; each story should be small enough to finish within the sprint.
Iteration Start: Stories are then assigned to the next iteration, and engineers start working on them when the iteration begins. Iterations represent the heartbeat of the project. They usually run one to four weeks, though most teams follow two-week iterations. At the end of each iteration, there should be a demo-able product.
Daily Stand-Ups: Quick feedback is vital to the Agile Software Development process; therefore, the engineering team meets with the product owners daily and provides updates on the project's progress. These updates are short and focus on three things.
What did I do yesterday?
What do I plan on doing today?
Is anything blocking me?
QA: Good-quality software is essential to the project's success. Therefore, it is also crucial to ensure the software is built per the acceptance criteria, can handle various error scenarios, and meets the security and performance criteria. All of this is verified and tested during the QA phase. Once the engineer feels comfortable with the implementation, the story becomes demoable.
Demo: At the end of each iteration, there should be some demo-able stories. During the demo meeting, engineers demo the completed stories to the product owner and stakeholders. If the demo meets the acceptance criteria defined in the story, the story is accepted and considered complete; otherwise, it goes back to in progress.
Retrospectives: A retrospective meeting usually follows a demo meeting. During the retrospective, the team reviews their performance and deficiencies. They focus on identifying what worked and areas that need improvement. Once the team identifies problem areas, they brainstorm for a solution. They pick a solution to try out in the next iteration, and that’s how they engage in a continuous improvement cycle. Retrospectives are also the time for the team members to appreciate or encourage each other.
Release: After achieving a milestone during the Agile Development Process, a release may be necessary to get the project into the hands of customers and gather early feedback.
Feedback: After the software is released, feedback is collected, and more stories or epics are added to either improve an existing feature or add a new feature.
We follow all these steps of the Agile Software Development process while developing our clients' applications. For that reason, we can deliver quality products within tight deadlines without compromising on quality.
I would like to set the stage for this article with a single opening sentence:
Microservice architecture is not a Swiss Army knife.
Understanding this phrase will prepare us to dive into the world of microservices and discover their advantages as well as their drawbacks. It will also help us judge whether a microservices architecture is the right choice for our next project.
Microservices architecture is a way of building software applications out of multiple self-contained services. Each service provides specific business functionality and follows its own development and deployment cycle. It is analogous to Lego bricks, which can be connected to create different objects; similarly, multiple microservices can be composed into an application that provides more concrete value to end users.
Microservices thus offer reusable infrastructure that can be used in different contexts. For example, a messaging microservice encapsulates the messaging domain, so it can be used by a banking app, an e-commerce app, a ticketing app, and so on, none of which have to reinvent the wheel for messaging.
Microservices are genuinely independent of each other; they are loosely coupled, with no binary dependency between them. They use each other's services only through messaging protocols or external API contracts.
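As an illustration, the sketch below talks to a hypothetical messaging service purely through its HTTP contract; the URL, payload shape, and response field are assumptions for the example, not a real API.

```python
import requests

def send_message(recipient_id: str, body: str) -> str:
    """Call the (hypothetical) messaging service; the only coupling is the HTTP contract."""
    response = requests.post(
        "https://messaging.internal.example.com/v1/messages",  # placeholder URL
        json={"recipientId": recipient_id, "body": body},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["messageId"]
```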
Each of these services can evolve independently. One thing to keep in mind is that they need to evolve in a backward-compatible way; if care is not taken, other microservices will have to be deployed in lock-step.
Microservices are small, easily manageable applications; therefore, it is easy for engineers to get up to speed and contribute without understanding the overall application architecture. If the whole system is reasonably complex, this also helps with creating agile teams that focus on particular microservices.
Microservices are easy to test: writing end-to-end acceptance or integration tests does not require piles of data to be set up. Each microservice focuses on its own domain, so data setup for testing is relatively straightforward.
Each properly crafted microservice has its own database, chosen based on the requirements of that particular domain. Significant schema changes in one microservice do not affect the others, which helps tremendously in reducing downtime.
If one microservice suffers an outage, it does not have a significant impact on the usability of the entire system, as the other microservices continue to provide their services. This, however, may not be true for a microservice that is central to the system, such as an authentication service.
Each microservice can use the technology stack, including development language, frameworks, databases, etc., best suited to solving problems in its particular domain.
In terms of security, microservices have a larger surface area that needs to be secured, but even in the case of a breach, only the data managed by the compromised microservice is at risk. Other microservices are not affected, as they either have their own separate databases or, if they share a database, use a different schema and credentials.
Best of all, each service can be deployed on the hardware most optimized for it. If one service requires more memory while another requires more CPU, each can be deployed on hardware that fulfills those requirements.
Microservices have a fairly small footprint; therefore, they take less time to build and launch. This helps engineers tremendously, giving them the ability to make changes and get much quicker feedback.
Deployment of microservices is effortless, provided they are enhanced in a backward-compatible fashion. Utmost attention should be paid to not making changes that break the contract with external clients, as that would require the clients to be upgraded in lockstep.
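One common way to stay backward compatible is to make only additive changes to a response payload, as in the hypothetical sketch below; renaming or removing fields is what forces clients into a lockstep upgrade.

```python
def get_account(account_id: str) -> dict:
    """Original response shape that existing clients already depend on."""
    return {"id": account_id, "balanceCents": 12050}

def get_account_v_next(account_id: str) -> dict:
    """Backward-compatible evolution: only add optional fields, never rename or remove."""
    payload = get_account(account_id)
    payload["currency"] = "USD"  # additive; old clients simply ignore it
    return payload

print(get_account_v_next("acct-42"))
```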
Microservices applications are also easy to scale: it is easy to identify the most heavily utilized services and scale just those, rather than scaling the whole infrastructure.
So far we have focused entirely on the good parts of microservices. However, as mentioned earlier, it is not a one-size-fits-all architecture. It is well suited to some applications but not to others.
One issue is designing the microservices themselves, as it is difficult to come up with microservices that have well-defined boundaries. If they are not crafted with the correct boundaries, they can drive the application toward a big, messy dependency web, where each microservice depends on various other services, which in turn depend on more services, and so on. Microservices need to be modularized along business-domain or functional boundaries that have minimal dependencies on each other.
Transaction management across multiple services is the most challenging part of the microservices architecture. Imagine an e-commerce website where a customer places an order through an order management microservice, while payment processing is handled by another microservice. What happens if the order is placed successfully but the payment processing fails? Appropriate infrastructure needs to be in place to reverse the order when the payment fails, which is far more challenging than handling the transaction at the database level.
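One widely used mitigation is a compensating action, sketched below with hypothetical order and payment clients: when the payment step fails, the already-created order is explicitly cancelled, since no database-level rollback spans both services.

```python
class PaymentDeclined(Exception):
    """Raised by the (hypothetical) payment client when a charge fails."""

def place_order_with_payment(orders, payments, customer_id, items, amount):
    # Step 1: create the order in the order management service.
    order_id = orders.create_order(customer_id, items)
    try:
        # Step 2: charge the customer via the payment service.
        payments.charge(customer_id, order_id, amount)
    except PaymentDeclined:
        # Compensating action instead of a database rollback.
        orders.cancel_order(order_id, reason="payment_failed")
        raise
    return order_id
```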
If we look at the overall application, composed of multiple microservices, there are more moving parts and therefore many more places where things can go wrong.
Debugging an issue on a microservices-based platform is also exceptionally difficult. A request travels through multiple microservices, so pinpointing the exact location where an error or exception occurred is cumbersome.
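One common way to make such debugging tractable, sketched below with assumed header and service names, is to attach a correlation ID to each incoming request, log it everywhere, and forward it on every downstream call so a single failure can be traced across service boundaries.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("order-service")  # hypothetical service name

def handle_request(headers: dict) -> dict:
    # Reuse the caller's correlation ID, or mint one at the edge of the system.
    correlation_id = headers.get("X-Correlation-Id", str(uuid.uuid4()))
    log.info("correlation_id=%s msg=order received", correlation_id)
    # Forward the same ID on every downstream call so logs can be joined later.
    return {"X-Correlation-Id": correlation_id}

print(handle_request({}))
```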
Hopefully, the advantages and disadvantages outlined in this article will help the reader decide which architecture to choose when designing an application.
Storage is an essential component of any architecture because it holds the valuable data produced by applications. Whether you are an architect or an application developer, you need to understand the various storage options, their differences, and the applicable use cases. Knowing this will help you choose the right storage solution for your application and avoid headaches down the road.
This blog covers the different storage options and the portfolio of equivalent storage services offered by AWS, along with the use cases each solution supports.
The choice of storage option is influenced by the semantics of the data that needs to be stored or processed. These semantics often define specific scalability, durability, and availability requirements. Therefore, it is vital to understand the project's needs and the criticality and sensitivity of the data before selecting a particular storage technology.
Block Storage: This is the storage technology architects and developers are most familiar with. Each file is divided into several blocks, or chunks, and stored on a hard disk attached to the server. The disk is usually formatted as NTFS, ext3, or some other standard. This technology is mostly used in environments that require frequent updates of stored files, such as databases: because files are stored in chunks, only the changed section of a file needs to be updated.
Amazon Elastic Block Store (EBS) is the block storage solution in the AWS Cloud. Before using an EBS volume, you need to attach it to an Elastic Compute Cloud (EC2) instance. Once attached, it cannot be attached to another EC2 instance at the same time, so it does not support simultaneous access to the data from different compute resources. If the EC2 instance stops or terminates, the data on the EBS volume is not lost, and the volume can be attached to another EC2 instance for access.
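As a minimal boto3 sketch of that workflow (the region, availability zone, instance ID, and device name below are placeholders), a volume is created, waited on, and then attached to a single EC2 instance:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 20 GiB gp3 volume in the same availability zone as the target instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,
    VolumeType="gp3",
)

# Wait until the volume is ready to be attached.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# An EBS volume attaches to one instance at a time.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```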
File Storage: File storage is also well known among consumers and developers alike. Nowadays, it is common to have external Network Attached Storage (NAS) to store large files or keep backups; NAS servers power this type of file storage. This storage solution allows data to be shared between different servers over the network.
Amazon Elastic File System (EFS) is the file-based storage solution available in the AWS Cloud. Data in EFS-backed storage can easily be shared between multiple EC2 instances.
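A minimal boto3 sketch of provisioning an EFS file system is shown below; the subnet and security group IDs are placeholders, and in practice you would wait for the file system to become available before creating the mount target that EC2 instances use to mount it over NFS.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create the shared file system; CreationToken makes the call idempotent.
fs = efs.create_file_system(
    CreationToken="shared-app-data",
    PerformanceMode="generalPurpose",
)

# A mount target exposes the file system inside a subnet so that multiple
# EC2 instances in that subnet can mount and share it.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",       # placeholder
    SecurityGroups=["sg-0123456789abcdef0"],   # placeholder
)
```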
For Windows-based high-performance workloads, Amazon FSx provides a similar file storage solution.
Object Storage: Object storage is the modern storage technology of the internet. While both EBS and EFS need to be attached or mounted to EC2 instances, object storage is an entirely independent storage service. Any client that supports the HTTP protocol can communicate with object storage over the internet using API calls. When a file changes in this storage solution, the complete object needs to be replaced.
Amazon Simple Storage Service (S3) is the object store provided by AWS. There are multiple ways to interact with S3 (the AWS Console, the AWS CLI, or an AWS SDK), but regardless of the method, the underlying communication occurs as API calls over the HTTP protocol. S3 provides a virtually limitless amount of data storage and can power various workloads such as data lakes.
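A minimal boto3 sketch of writing and then reading back an object is shown below; the bucket name and key are placeholders, and the bucket is assumed to already exist.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder; bucket names are globally unique

# Every call below is ultimately an HTTPS API request to the S3 endpoint.
s3.put_object(Bucket=bucket, Key="reports/2024/summary.json", Body=b'{"status": "ok"}')

obj = s3.get_object(Bucket=bucket, Key="reports/2024/summary.json")
print(obj["Body"].read().decode("utf-8"))
```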
In summary, AWS provides multiple storage solutions to support different workload requirements. Understanding the pros and cons of each storage technology will help you use them effectively.