Software Architecture

Definition:

Philippe Kruchten, Grady Booch, Kurt Bittner, and Rich Reitman derived and refined a definition of architecture based on work by Mary Shaw and David Garlan (Shaw and Garlan 1996). Their definition is:

“Software architecture encompasses the set of significant decisions about the organization of a software system including the selection of the structural elements and their interfaces by which the system is composed; behavior as specified in collaboration among those elements; composition of these structural and behavioral elements into larger subsystems; and an architectural style that guides this organization. Software architecture also involves functionality, usability, resilience, performance, reuse, comprehensibility, economic and technology constraints, tradeoffs and aesthetic concerns.”

In Patterns of Enterprise Application Architecture, Martin Fowler outlines some common recurring themes when explaining architecture. He identifies these themes as:

“The highest-level breakdown of a system into its parts; the decisions that are hard to change; there are multiple architectures in a system; what is architecturally significant can change over a system’s lifetime; and, in the end, architecture boils down to whatever the important stuff is.”

Software application architecture is the process of defining a well-structured solution that meets all of the technical and operational requirements. The architecture should take into account, and improve upon, common quality attributes such as performance, security, and manageability.

The main focus of software architecture is how the major elements and components within an application are used by, or interact with, other major elements and components within the application. The selection of data structures and algorithms, and the implementation details of individual components, are design concerns rather than architectural ones, although design and architecture concerns do sometimes overlap.

Before architecting any software system, there are some basic questions we should strive to answer. They are as follows:

  • How will the users of the system interact with the system?
  • How will the application be deployed into production and managed?
  • What are the various non-functional requirements for the application, such as security, performance, concurrency, internationalization, and configuration?
  • How can the application be designed to be flexible and maintainable over time?
  • What are the architectural trends that might impact your application now or after it has been deployed?

Goals of Software Architecture

The main goal of any software architecture is to build a bridge between business requirements and technical requirements by identifying the requirements that affect the basic structure of the application. Good architecture reduces the business risks associated with building a technical solution, while a good design is flexible enough to handle the changes that will occur over time in hardware and software technology, as well as in user scenarios and requirements. An architect must consider the overall effect of design decisions, the inherent tradeoffs between quality attributes (such as performance and security), and the tradeoffs required to address user, system, and business requirements.

Principles of Software Architecture

The basic assumption of any architecture should be the belief that the design will evolve over time and that one cannot know everything one needs to know up front. The design will generally need to evolve during the implementation stages of the application, as one learns more and tests the design against real-world requirements.

Keeping the above statement in mind, let’s list some key architectural principles:

  • Build the system to change instead of building it to last.
  • Model the architecture to analyze and reduce risk.
  • Use models and visualizations as a communication and collaboration tool.
  • The key engineering decisions should be identified and acted upon upfront.

Architects should consider using an incremental and iterative approach to refining their architecture. Start with a baseline architecture to get the big picture right, and then evolve candidate architectures as one iteratively tests and improves the design. Do not try to get it all right the first time; design just enough to start testing the design against requirements and assumptions. Iteratively add details to the design over multiple passes to make sure that you get the big decisions right first, and then focus on the details. A common pitfall is to dive into the details too quickly and get the big decisions wrong by making incorrect assumptions, or by failing to evaluate your architecture effectively.

When testing your architecture, consider the following questions:

  • What were the main assumptions that were made while architecting the system?
  • What requirements, both explicit and implicit, is this architecture satisfying?
  • What are the key risks with this architectural approach?
  • What countermeasures are in place to mitigate key risks?
  • In what ways is this architecture an improvement over the baseline or the last candidate architecture?

Design Principles

When getting started with software design, one should keep in mind the proven principles that minimize cost and maintenance requirements and promote usability and extensibility. The key principles of any software design are:

  • Separation of concerns: Minimize the interaction points between independent feature sets to achieve high cohesion and low coupling.
  • Single Responsibility principle: Each component or module should be independent in itself and responsible for only a specific feature or functionality.
  • Principle of Least Knowledge: A component or object should not know about the internal details of other components or objects (see the sketch after this list).
  • Don’t repeat yourself (DRY): The intent or implementation of any feature or functionality should be expressed in only one place. It should never be repeated in some other component or module.
  • Minimize upfront design: This principle is also sometimes known as YAGNI (“You aren’t gonna need it”). Design only what is necessary. Especially in agile development, one can avoid big design up front (BDUF). If the application requirements are unclear, or if there is a possibility of the design evolving over time, one should avoid making a large design effort prematurely.
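
As a minimal illustration of the Principle of Least Knowledge, consider the following C# sketch. The Wallet, Customer, and PaymentService names are hypothetical, invented purely for this example:

```csharp
using System;

// Hypothetical example: a service that reached through a Customer into its
// Wallet (customer.GetWallet().Deduct(...)) would violate the Principle of
// Least Knowledge. Instead, the Customer exposes the operation itself.
public class Wallet
{
    public decimal Balance { get; private set; }
    public Wallet(decimal balance) { Balance = balance; }

    public void Deduct(decimal amount)
    {
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }
}

public class Customer
{
    private readonly Wallet _wallet;
    public Customer(Wallet wallet) { _wallet = wallet; }

    // Callers never learn that a Wallet exists internally.
    public void Pay(decimal amount) { _wallet.Deduct(amount); }
}

public class PaymentService
{
    // This class talks only to Customer, never to Customer's internals.
    public void Charge(Customer customer, decimal amount) { customer.Pay(amount); }
}
```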

Design Practices

  • Keep design patterns consistent within each layer.
  • Do not duplicate functionality within an application.
  • Prefer composition to inheritance. Wherever possible, use composition rather than inheritance when reusing functionality, because inheritance increases the dependency between parent and child classes, thereby limiting the reuse of the child classes. Composition also keeps inheritance hierarchies shallow; deep hierarchies can become very difficult to deal with (see the sketch after this list).
  • Establish a coding style and naming convention for development.
  • Maintain system quality using automated QA techniques during development. Use unit testing and other automated quality analysis techniques, such as dependency analysis and static code analysis, during development.
  • Consider not only development but also the operation of your application. Determine what metrics and operational data are required by the IT infrastructure to ensure the efficient deployment and operation of your application.
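
Here is a minimal C# sketch of the composition-over-inheritance guidance; the IExporter and ReportPrinter names are hypothetical, made up for this illustration:

```csharp
// Instead of inheriting from a concrete exporter class (which would couple
// ReportPrinter to a single parent implementation), the export behavior is
// composed in through an interface.
public interface IExporter
{
    string Export(string content);
}

public class PdfExporter : IExporter
{
    public string Export(string content) { return "[PDF] " + content; }
}

public class HtmlExporter : IExporter
{
    public string Export(string content) { return "<html>" + content + "</html>"; }
}

public class ReportPrinter
{
    private readonly IExporter _exporter;

    // The export strategy is injected, so it can be swapped without
    // subclassing ReportPrinter.
    public ReportPrinter(IExporter exporter) { _exporter = exporter; }

    public string Print(string report) { return _exporter.Export(report); }
}
```

Switching the output format is then a matter of constructing new ReportPrinter(new HtmlExporter()) rather than creating a new subclass of the printer.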

Application Layers

While architecting and designing a system, one needs to carefully consider the various layers into which the application will be divided. There are some key considerations to keep in mind while doing that:

  • Separate the areas of concern. Break your application into distinct features that overlap in functionality as little as possible. The main benefit of this approach is that a feature or functionality can be optimized independently of other features or functionality.
  • Be explicit about how layers communicate with each other.
  • Use abstraction to implement loose coupling between layers (see the sketch after this list).
  • Do not mix different types of components in the same logical layer. For example, the UI layer should not contain business processing components, but instead should contain components used to handle user input and process user requests.
  • Keep the data format consistent within a layer or component.
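
As a minimal illustration of using abstraction to achieve loose coupling between layers, consider the following C# sketch; the IOrderRepository and OrderService names are hypothetical:

```csharp
// The business layer depends only on this abstraction, not on any
// concrete data access technology.
public interface IOrderRepository
{
    Order GetById(int orderId);
    void Save(Order order);
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Business-layer component: it can be compiled, unit tested, and evolved
// without referencing the data access layer's implementation.
public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }

    public void ApplyDiscount(int orderId, decimal percent)
    {
        var order = _repository.GetById(orderId);
        order.Total -= order.Total * percent / 100m;
        _repository.Save(order);
    }
}
```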

Components, Modules, and Functions: Key Considerations

In the previous sections we discussed key considerations to keep in mind while architecting or designing an application, and we touched upon what to keep in mind when dividing an application into layers. In this section, let’s take a look at some key considerations for designing components, modules, and functions.

  • A component or an object should not rely on internal details of other components or objects.
  • Never overload the functionality of a component. For example, a UI processing component should not contain data access code or attempt to provide additional functionality.
  • Explicitly state how the components will communicate with each other. This requires an understanding of the deployment scenarios your application must support. You must determine if all components will run within the same process, or if communication across physical or process boundaries must be supported—perhaps by implementing message-based interfaces.
  • Keep crosscutting code (such as logging and performance) abstracted from the application business logic as far as possible.
  • Present a clear contract for components. Components, modules, and functions should define a contract or interface specification that describes their usage and behavior clearly (a hypothetical contract is sketched after this list).
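
As an example of a clear component contract, here is a hypothetical C# interface whose documentation spells out usage and behavior; all the names are assumptions made for illustration:

```csharp
using System.Collections.Generic;

/// <summary>
/// Hypothetical contract for a component that validates customer input.
/// The documentation states the behavior that callers can rely on.
/// </summary>
public interface ICustomerValidator
{
    /// <summary>Validates the supplied customer e-mail address.</summary>
    /// <param name="email">Customer e-mail address; must not be null.</param>
    /// <returns>
    /// A list of human-readable validation errors; empty if the input is
    /// valid. Never returns null.
    /// </returns>
    IReadOnlyList<string> Validate(string email);
}
```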


4 Dimensions of Project Management

There are four dimensions to any software project: Duration, Cost, Scope, and Risk. Successful and efficient project managers use these four dimensions as trade-off and leveraging techniques when dealing with management, handling change requests while maintaining the feasibility of the project.

Scenario 1 – A Project Manager with only one dimension: Duration

The PM presents the plan and schedule to the management and is inevitably asked the question that is always asked: “How can we make this faster?”

PM response: The PM slumps in his chair and starts stuttering, “We can’t change anything, this is the only way of doing it and changing anything at this stage will lead to disaster.”

Management response: They go ahead anyway, slash the duration to what they want, and ask the PM to do it regardless. Note that management relishes the opportunity to make strategic decisions based on data; they don’t respond well to vague terms like “disaster” or “no change.”

Here the project manager was caught in a typical one-dimension situation, wherein the only option he was able to give the management was Duration. He was either unprepared or unwilling to offer any alternatives or trade-off options, and that led to a disastrous scenario. Now the PM is stuck with a shorter duration, no increase in the budget, and no reduction in the scope of the project.

A different outcome could have been accomplished if the PM had come up with multiple duration and cost options. When faced with the inevitable question of “How can we do it faster?”, the better response to the management is: “Sure, we can do it faster, and we can do it cheaper too. I have these options which you can look at before making an informed decision. Option 1 completes the project in 6 months at a cost of half a million dollars. In another option we can complete the project in only 4 months, but I would have to hire a graphics designer and an extra developer, which would add a cost of 60 thousand dollars to the project.”

What is the immediate effect of presenting these multiple options to the management? The management starts respecting the PM, and he is perceived as somebody who is intelligent and knows what he is doing. In short, they become aware that they are dealing with somebody who cannot be bulldozed with statements like “This is when I want things completed; please make sure that it is done.” Secondly, management is happy to be asked to make a strategic decision based on data instead of vague threats of disaster.

There is a high possibility that even after being presented with multiple options and various dimensions, the management still goes ahead and plays the one-dimension card. This is when the PM has to be intelligent and prepared enough to bring out the third dimension: the Scope card.

“Sure, if you still want the duration of the project to be shorter at no additional cost, then I will have to reduce the scope of the project to achieve that, and here are the options.”

There is a very high probability that even after playing all three dimensions of project management, the PM might not get everything he desired, but he will surely be able to negotiate something and at least come out of the meeting with a feasible project. Last but not least, the PM will have gained the respect and credibility of the management, which goes a long way in maintaining the sanity of the project as it moves along. The trade-offs (Duration, Cost, Scope, and Risk) are very effective tools that need to be presented to the decision-makers so that they can make realistic decisions. To be able to present these trade-offs, the PM should be able to quantify each of them and present as many options as possible to the executives.

One-Dimension Projects: Most of the internal applications developed in an organization are one-dimensional. The only tangible option the executives have is the duration. There is no explicit project budget, and no assessment of risks is done. Not surprisingly, most of the discussions and decisions revolve around the duration of the project, as that is the only measurable entity before the management. Scope creep, bloated budgets, and slipped deadlines are common in one-dimension projects.

Two-Dimension Projects: Some organizations do add a second dimension to their plans: Duration and Cost. The benefit of adding this dimension is tremendous, as the management now understands that adding or changing a feature is not “free” anymore; there is a cost involved. Even if the cost of the project might not be paid by the end client, the management now becomes aware of scope creep and the cost involved with it. This knowledge is a major step forward and helps tremendously in controlling scope creep.

Three-Dimension Projects: Things get much better if the PM adds the third dimension and is able to quantify the scope of the project. By decomposing and quantifying the scope, the management gains a measurable view of each feature set and its impact on the business results. This trade-off becomes more powerful when complemented with the two dimensions mentioned above, Duration and Cost. Consider these two options, and the effect of presenting this dimension alongside the other two becomes clear:

Option 1: “Develop this particular feature capable of serving 90,000 users at a cost of $100,000 in 6 months.”

Option 2: “Develop this particular feature capable of serving 50,000 users at a cost of $60,000 in 4 months.”

Now the management directly sees the desired business result that they are going to get and at what cost and duration.

Four-Dimension Projects: The fourth dimension adds the assessment of risk to the project. When this dimension is added, management gains the ability to see what level of certainty they want to achieve in the project, and at what cost and duration. When the management hears “We can deliver the agreed scope of the application with 60% confidence at a cost of 1 million in 6 months,” they are more confident in making the decision and are wary of scope creep, as they are now aware that any change request will affect the duration, the cost, and the risk.

 

Big Data – Challenges and opportunities


Every challenge represents a new set of opportunities: big challenges bring forth big opportunities, and big data is a big challenge. It challenges the way we have been viewing, storing, analyzing, and interpreting our data. The term “Big Data” is something of a misnomer, since it implies that the only issue with today’s data is its sheer size. There is a lot more to big data than size: the term applies to data sets that can’t be processed, or gained insight into, using traditional tools and techniques.

The amount of data in our world has been exploding. We capture trillions of bytes of information about customers, suppliers, and operations, and millions of networked sensors are being embedded in the physical world in devices such as mobile phones and automobiles, sensing, creating, and communicating data. Individuals with smartphones and on social network sites continue to fuel exponential growth. Big data (large pools of data that can be captured, communicated, aggregated, stored, and analyzed) is now part of every sector and function of the global economy. Let’s look at some facts about how data is proliferating:

  • There are 5 billion phones in the world right now, generating data by the second.
  • Facebook alone generates 30 billion pieces of content every month.
  • The Library of the US Congress holds around 235 terabytes of data.
  • Twitter alone generates around 7 terabytes of data every month.
  • The Large Hadron Collider experiments involve about 150 million sensors delivering data 40 million times per second.
  • Wal-Mart handles more than 1 million customer transactions every hour.

These facts are just the tip of the iceberg, and they demonstrate how massive data is becoming by the second. Digital data is everywhere: in every sector, in every company, in every economy. Today we store everything: environmental data, financial data, medical data, surveillance data, and the list goes on. According to an MGI estimate, in 2010 corporations stored more than 7 exabytes of data on their hard drives, and individual consumers stored more than 6 exabytes. Google’s executive chairman Eric Schmidt brings it to a point: “From the dawn of civilization until 2003, humankind generated five exabytes of data. Now we produce five exabytes every two days… and the pace is accelerating.”

The possibilities and opportunities presented by the proliferation of big data are constantly evolving, driven by innovation in technologies, platforms, and analytical capabilities. Big data can be defined by three main characteristics: the Volume, Variety, and Velocity of data. So far we have discussed only the size, or volume, of data. With the proliferation of sensors, smart devices, and social collaboration tools, enterprises today are faced not only with traditional relational data but also with data in raw, semi-structured, or unstructured form. The sheer variety of data being captured today presents a unique set of problems for our traditional tools and techniques for storing and analyzing it. And just as the volume and variety of captured data have changed, so has the sheer velocity at which data is generated. With the emergence of RFID sensors and other information streams everywhere, data is generated at a pace that has made it impossible for traditional tools to keep up. For many applications, the speed of data creation is even more important than the volume: real-time or nearly real-time information makes it possible for a company to be much more agile than its competitors.

The opportunities presented by big data are huge. According to various research estimates, there is potential value addition to the tune of 300 billion dollars in US healthcare alone; a potential to generate 250 million pounds in the public sector in Europe; and a potential to generate 600 billion dollars in consumer sectors if we start analyzing location-centric data. In the coming 5-7 years, big data is expected to generate 1.5 to 1.8 million jobs in the US alone in the fields of deep data analytics, big data management, and engineering. The opportunities presented by big data are immense if we just keep pace by evolving our technologies and tools to keep up with the volume and velocity at which we are generating data.

Ignoring Risk Management: A Disaster!


Some staggering facts on software failures

According to the Standish report:  In the United States, we spend more than $250 billion each year on IT application development of approximately 175,000 projects. The average cost of a development project for a large company is $2,322,000; for a medium company, it is $1,331,000; and for a small company, it is $434,000. The Standish Group research shows a staggering 31.1% of these projects will be cancelled before they ever get completed. Further results indicate 52.7% of projects will cost 189% of their original estimates.

According to a study done by McKinsey & Company in conjunction with the University of Oxford: 17 percent of large IT projects go so badly that they can threaten the very existence of the company. On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted.

According to a study by KPMG (New Zealand): Survey shows an incredible 70% of organizations have suffered at least one project failure in the prior 12 months. 50% of respondents also indicated that their project failed to consistently achieve what they set out to achieve!

Ignoring Risk Management: A Recipe for Disaster

Software project failures are the result of the multiplicity of risks inherent in the software project environment. Software development projects are collections of large programs with many interactions and dependencies, and they involve creating something that has never been done before, even though the development process is similar across projects. As a result, software development projects have a dismal track record of cost and schedule overruns and of quality and usability problems. Time-to-market is the most critical factor for commercial consumer software products, yet project success is difficult to predict because project scope is changed by continuously shifting market requirements, and resources are constantly reallocated to accommodate the latest market conditions. Projects for specific customers also carry a large degree of requirements uncertainty due to customized technical attributes. Many software projects and programs involve multiple entities, such as companies and divisions, that may each have their own interests. There is often a feeling of disconnection between software developers and their management, each believing that the other is out of touch with reality, resulting in misunderstanding and lack of trust. Research shows that 45% of all the causes of delayed software deliverables are related to organizational issues. Looking at the facts and reasons mentioned above, it would seem quite obvious that risk management would be an integral part of the software development process. Wrong!

Kwak and Ibbs (2000) identified risk management as the least practiced discipline among the different project management knowledge areas. Boehm and DeMarco (1997) mentioned that “our culture has evolved such that owning up to risks is often confused with defeatism”. In many organizations, the tendency to ‘shoot the messenger’ often discourages people from bringing imminent problems to the attention of management. This attitude is the result of a misunderstanding of risk management. Most software developers and project managers perceive risk management processes and activities as extra work and expense, and risk management processes are the first thing to be removed from the project activities when the project schedule slips.

MVVM Unplugged – Definition, Benefits, Classes, and Their Interactions

Definition
MVVM is an architectural pattern that facilitates a clear separation of the GUI from the underlying logic. It provides a clean separation between an application’s user interface, its presentation logic, and its business logic and data by separating each into its own class. The MVVM pattern is a close variant of the Presentation Model pattern, optimized to leverage some of the core capabilities of WPF and Silverlight, such as data binding, data templates, commands, and behaviors.

Benefits
The separation of application logic and UI helps to address numerous development and design issues and can make the application much easier to test, maintain, and evolve. It can also greatly improve code re-use opportunities and allows developers and UI designers to more easily collaborate when developing their respective parts of the application.

Some of the benefits achieved by using MVVM are explained in detail below:

  • Concurrent Development: One of the biggest advantages is that during the development process, developers and designers can work more independently and concurrently on their components. The designers can concentrate on the view, and if they are using Expression Blend, they can easily generate sample data to work with, while the developers work on the view model and model components.
  • Testability: The developers can create unit tests for the view model and the model without using the view.
  • Easy Redesign of the UI: It is easy to redesign the UI of the application without touching the code, because the view is implemented entirely in XAML. A new version of the view can easily be built and plugged into the existing view model.

Details
In the MVVM pattern, the UI of the application and the underlying presentation and business logic are separated into three classes: the view, which encapsulates the UI and UI logic; the view model, which encapsulates presentation logic and state; and the model, which encapsulates the application’s business logic and data. The view interacts with the view model through data binding, commands, and change notification events. The view model queries, observes, and coordinates updates to the model, converting, validating, and aggregating data as necessary for display in the view.
The interaction between the classes is illustrated in the diagram below:

[Figure: MVVM Class Interactions Diagram]

Characteristics of the View Class

  • The view in the MVVM pattern defines the structure and appearance of what one sees on the screen.
  • As a rule of thumb, one should not put into the view any logic code that needs to be covered by unit tests.
  • The view is a visual element. The view defines the controls contained in the view and their visual layout and styling.
  • The view can customize the data binding behavior between the view and the view model by using value converters to format the data, or by using validation rules to provide additional input data validation to the user.
  • The view defines and handles UI visual behavior, such as animations or transitions that may be triggered from a state change in the view model or via the user’s interaction with the UI.

Characteristics of the View Model Class

  • The view model is a non-visual class and does not derive from any WPF or Silverlight base class.
  • It encapsulates the presentation logic and is testable independently of the view and the model.
  • The view model does not reference the view. It implements properties and commands to which the view can data bind, and it notifies the view of any state changes through change notification events, using the INotifyPropertyChanged and INotifyCollectionChanged interfaces (a minimal sketch follows this list).
  • The view model acts as coordinator between the view and the model. It may convert or manipulate data so that it can be easily consumed by the view and may implement additional properties that may not be present on the model.
  • It may also implement data validation via the IDataErrorInfo or INotifyDataErrorInfo interfaces.
  • The view model may define logical states that the view can represent visually to the user.
  • Typically, there is a one-to-many relationship between the view model and the model classes.
  • In most cases, the view model will define commands or actions that can be represented in the UI and that the user can invoke. 
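
To make the change notification mechanism concrete, here is a minimal C# view model sketch; the CustomerViewModel class and its Name property are hypothetical, assumed purely for illustration:

```csharp
using System.ComponentModel;

// Minimal hypothetical view model: a non-visual class with no WPF or
// Silverlight base class, testable without any view attached.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    // The view data-binds to this property; the setter raises the change
    // notification event so the binding engine can refresh the UI.
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

In XAML, the view would bind to this with something like <TextBox Text="{Binding Name}" />, with no code-behind required.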

Characteristics of the Model Class 

  • The model in the MVVM pattern encapsulates business logic and data. Business logic is defined as any application logic that is concerned with the retrieval and management of application data, and with enforcing any business rules that ensure data consistency and validity.
  • As a rule of thumb, models should not contain any use case-specific application logic.
  • Typically, the model represents the client-side domain model for the application. It can define data structures based on the application’s data model, along with any supporting business and validation logic.
  • The model classes do not directly reference the view or view model classes and have no dependency on how they are implemented.
  • The model classes are typically used in conjunction with a service or repository that encapsulates data access and caching.
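
As a closing illustration, here is a minimal hypothetical C# model class; the Customer name and its credit limit rule are assumptions made for this sketch:

```csharp
using System;

// Hypothetical model class: plain business data plus a business rule,
// with no reference to any view or view model class.
public class Customer
{
    public string Email { get; set; }
    public decimal CreditLimit { get; private set; }

    // A business rule kept in the model: credit limits can never go negative.
    public void SetCreditLimit(decimal limit)
    {
        if (limit < 0)
            throw new ArgumentOutOfRangeException("limit", "Credit limit cannot be negative.");
        CreditLimit = limit;
    }
}
```

In practice such a class would be loaded and persisted through a repository or service, keeping data access and caching concerns out of the model itself.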