Commands in WPF and Silverlight: A Detailed Explanation

In the WPF and Silverlight world, commands provide a convenient way to represent actions or operations that can be easily bound to controls in the UI. They encapsulate the code that implements the action or operation and keep it decoupled from its visual representation in the view. Commands serve two basic purposes:

a) To separate the semantics and the object that invokes a command from the logic that executes the command.

b) To indicate whether an action is available at the time it is invoked.

A command can be represented and invoked in many different ways: mouse clicks, shortcut key presses, touch gestures, or any other input events. Interaction between the UI controls in the view and the command can be two-way: the command can be invoked as the user interacts with the UI, and the UI can be automatically enabled or disabled as the underlying command becomes enabled or disabled.

Commands are based on the Command design pattern. Typically, with commands you have an invoker and a receiver and some infrastructure in between to keep those two parties decoupled from one another as much as possible. The invoker is typically a UI element of some sort like a button, menu, or possibly a combo box selection. The receiver is the handling code that takes some action based on the command being invoked.

The command pattern is a behavioral design pattern in which an object is used to represent and encapsulate all the information needed to call a method at a later time: the method name, the object that owns the method, and the values for the method parameters. This object is typically called a Command object.

Four terms always associated with the command pattern are command, receiver, invoker and client. A command object has a receiver object and invokes a method of the receiver in a way that is specific to that receiver’s class. The receiver then does the work. A command object is separately passed to an invoker object, which invokes the command, and optionally does bookkeeping about the command execution. Any command object can be passed to the same invoker object. Both an invoker object and several command objects are held by a client object. The client contains the decision making about which commands to execute at which points. To execute a command, it passes the command object to the invoker object. If you want to dig deeper into the Command Design pattern, look at this nice explanation: http://www.dofactory.com/Patterns/PatternCommand.aspx#_self1

using System;
using System.Collections.Generic;

class Client
{
    static void Main(string[] args)
    {
        Invoker i = new Invoker();

        // Save undo to position 100
        ICommand a = new UndoCommand(100);
        i.AddCommand(a);

        // Save undo to position 200
        ICommand b = new UndoCommand(200);
        i.AddCommand(b);

        // Perform the undo; the client does not need to know the details of the undo
        i.RunCommand();
    }
}

public interface ICommand
{
    void Execute();
}

public class UndoCommand : ICommand
{
    private int location;

    public int Location
    {
        get { return location; }
    }

    public UndoCommand(int originalLocation)
    {
        location = originalLocation;
    }

    public void Execute()
    {
        // The command knows its receiver (UndoPerformer) and delegates the work to it
        new UndoPerformer().Undo(this);
    }
}

public class Invoker
{
    // A typed stack so Pop() returns ICommand and Execute() can be called directly
    private Stack<ICommand> commandList = new Stack<ICommand>();

    public void RunCommand()
    {
        while (commandList.Count > 0)
            commandList.Pop().Execute();
    }

    public void AddCommand(ICommand c)
    {
        commandList.Push(c);
    }
}

public class UndoPerformer
{
    public void Undo(ICommand c)
    {
        if (c is UndoCommand)
        {
            int originalLocation = (c as UndoCommand).Location;
            Console.WriteLine("Moving back to position: " + originalLocation);
        }
    }
}

DelegateCommand and CompositeCommand

In the following sections, we will focus specifically on DelegateCommand and CompositeCommand, two command implementations provided by the Prism framework. Both implement the ICommand interface defined by WPF and Silverlight, which has three members: an Execute method, a CanExecute method, and a CanExecuteChanged event. The Execute method is the crux of the ICommand interface: it is what will be invoked whenever the triggering action happens on the invoker element (for example, a button with a command hooked up to it is clicked). The CanExecute method can optionally be used to conditionally indicate whether the associated invoker should be enabled; for example, if a document is not dirty, the Save command should be disabled. The associated CanExecuteChanged event is the way for the supporting logic to notify the invoker that it should re-evaluate the CanExecute state of the command and update the UI appropriately as things change behind the scenes (for example, the document just became dirty because the user typed some text, so the Save command should now be enabled).
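For reference, here is the ICommand interface as it is defined in the System.Windows.Input namespace; this is the contract that both DelegateCommand and CompositeCommand fulfill:

namespace System.Windows.Input
{
    public interface ICommand
    {
        // Called by the command source (e.g., a clicked Button) to run the action
        void Execute(object parameter);

        // Queried by command sources to decide whether to enable themselves
        bool CanExecute(object parameter);

        // Raised to tell command sources to re-query CanExecute
        event EventHandler CanExecuteChanged;
    }
}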

There are two ways in which commands can be implemented in view models: either as a command method or as a command object. In either case, the view's interaction with the command can be defined declaratively, without requiring complex event handling code in the view's code-behind file. Most WPF and Silverlight controls inherently support commands and provide a Command property that can be data bound to an ICommand object provided by the view model. In other cases, a command behavior can be used to associate a control with a command method or command object provided by the view model.
Let's take a deeper look at each of these two ways of implementing commands in view models.

Implementing DelegateCommand as a Command Object

The DelegateCommand<T> class encapsulates two delegates that each reference a method implemented within your view model class. It inherits from the DelegateCommandBase class, which implements the ICommand interface's Execute and CanExecute methods by invoking these delegates. You specify the delegates to your view model methods in the DelegateCommand<T> constructor, which is defined as follows:

public class DelegateCommand<T> : DelegateCommandBase
{
    public DelegateCommand(Action<T> executeMethod, Func<T, bool> canExecuteMethod)
        : base((o) => executeMethod((T)o), (o) => canExecuteMethod((T)o))
    {
        ...
    }
}

The following code example shows how a DelegateCommand instance, which represents a Submit command, is constructed by specifying delegates to the OnSubmit and CanSubmit view model methods.

public class ViewModel
{
    public ViewModel()
    {
        this.SubmitCommand = new DelegateCommand<object>(
            this.OnSubmit, this.CanSubmit);
    }

    public ICommand SubmitCommand { get; private set; }

    private void OnSubmit(object arg)  {...}
    private bool CanSubmit(object arg) { return true; }
}

Invoking Command Objects from the View
There are a number of ways in which a control in the view can be associated with a command object proffered by the view model. Certain WPF and Silverlight 4 controls, notably ButtonBase-derived controls such as Button or RadioButton, as well as Hyperlink and MenuItem-derived controls, can be easily data bound to a command object through their Command property. WPF also supports binding a view model's ICommand to a KeyGesture.

<Button Command="{Binding Path=SubmitCommand}" CommandParameter="SubmitOrder" />

A command parameter can also be optionally defined using the CommandParameter property. The type of the expected argument is specified in the Execute and CanExecute target methods. The control will automatically invoke the target command when the user interacts with that control, and the command parameter, if provided, will be passed as the argument to the command’s Execute method. In the preceding example, the button will automatically invoke the SubmitCommand when it is clicked. Additionally, if a CanExecute handler is specified, the button will be automatically disabled if CanExecute returns false, and it will be enabled if it returns true.
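When the state behind CanExecute changes, the view model must prompt the UI to re-query the command. Prism's DelegateCommand exposes a RaiseCanExecuteChanged method for exactly this. The sketch below illustrates the pattern; the IsValid property is a hypothetical stand-in for whatever state gates submission:

using Microsoft.Practices.Prism.Commands; // Prism 4 DelegateCommand

public class OrderViewModel
{
    private bool isValid;

    public OrderViewModel()
    {
        this.SubmitCommand = new DelegateCommand<object>(this.OnSubmit, this.CanSubmit);
    }

    public DelegateCommand<object> SubmitCommand { get; private set; }

    // Hypothetical flag representing whatever condition gates submission
    public bool IsValid
    {
        get { return isValid; }
        set
        {
            isValid = value;
            // Raises CanExecuteChanged so bound controls re-evaluate CanExecute
            this.SubmitCommand.RaiseCanExecuteChanged();
        }
    }

    private void OnSubmit(object arg) { /* submit the order */ }
    private bool CanSubmit(object arg) { return this.IsValid; }
}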

An alternative approach is to use Expression Blend interaction triggers and InvokeCommandAction behavior.

<Button Content="Submit" IsEnabled="{Binding CanSubmit}">
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="Click">
            <i:InvokeCommandAction Command="{Binding SubmitCommand}"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

This approach can be used for any control to which you can attach an interaction trigger. It is especially useful if you want to attach a command to a control that does not derive from ButtonBase, or when you want to invoke the command on an event other than the click event. Again, if you need to supply parameters for your command, you can use the CommandParameter property.
Unlike controls that can be bound directly to a command, InvokeCommandAction does not automatically enable or disable the control based on the command’s CanExecute value. To implement this behavior, you have to data bind the IsEnabled property of the control directly to a suitable property on the view model, as shown earlier.

Invoking Command Methods from the View
An alternative approach to implementing commands as ICommand objects is to implement them simply as methods in the view model and then to use behaviors to invoke those methods directly from the view.
This can be achieved in a similar way to the invocation of commands from behaviors, as shown in the previous section. However, instead of using InvokeCommandAction, you use the CallMethodAction. The following code example calls the (parameter-less) Submit method on the underlying view model.

<Button Content="Submit" IsEnabled="{Binding CanSubmit}">
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="Click">
            <ei:CallMethodAction TargetObject="{Binding}" MethodName="Submit"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

The TargetObject is bound to the underlying data context (which is the view model) by using the {Binding} expression. The MethodName property specifies the method to invoke. (The ei prefix maps to the Microsoft.Expression.Interactivity.Core namespace from the Expression Blend SDK, where CallMethodAction lives.)

Composite Commands

In many cases, a command defined by a view model will be bound to controls in the associated view so that the user can directly invoke the command from within the view. However, in some cases, you may want to be able to invoke commands on one or more view models from a control in a parent view in the application’s UI.

For example, if your application allows the user to edit multiple items at the same time, you may want to allow the user to save all the items using a single command represented by a button in the application’s toolbar or ribbon. In this case, the Save All command will invoke each of the Save commands implemented by the view model instance for each item. This is where the CompositeCommand comes in.

The CompositeCommand class represents a command that is composed from multiple child commands. When the composite command is invoked, each of its child commands is invoked in turn. It is useful in situations where you need to represent a group of commands as a single command in the UI or where you want to invoke multiple commands to implement a logical command.

Similar to the DelegateCommand, the CompositeCommand implements the ICommand interface. The CompositeCommand class maintains a list of child commands (DelegateCommand instances). The Execute method of the CompositeCommand class simply calls the Execute method on each of the child commands in turn. The CanExecute method similarly calls the CanExecute method of each child command, but if any of the child commands cannot be executed, the CanExecute method will return false. In other words, by default, a CompositeCommand can only be executed when all the child commands can be executed.
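A simplified sketch of this behavior might look like the following. This is not Prism's actual implementation (which also handles weak event subscriptions, thread safety, and CanExecuteChanged notifications), just an illustration of the Execute and CanExecute semantics described above:

using System;
using System.Collections.Generic;
using System.Windows.Input;

public class CompositeCommandSketch : ICommand
{
    private readonly List<ICommand> registeredCommands = new List<ICommand>();

    // The real implementation raises this as children register or change state
    public event EventHandler CanExecuteChanged;

    public void RegisterCommand(ICommand command)
    {
        registeredCommands.Add(command);
    }

    public void UnregisterCommand(ICommand command)
    {
        registeredCommands.Remove(command);
    }

    // Enabled only when there is at least one child and every child can execute
    public bool CanExecute(object parameter)
    {
        bool hasCommands = false;
        foreach (ICommand command in registeredCommands)
        {
            if (!command.CanExecute(parameter))
                return false;
            hasCommands = true;
        }
        return hasCommands;
    }

    // Invokes each child command in turn
    public void Execute(object parameter)
    {
        foreach (ICommand command in registeredCommands)
            command.Execute(parameter);
    }
}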

Apart from the Execute and CanExecute methods, there are two more methods that are worth looking into: RegisterCommand and UnregisterCommand, both of which appeared in the sketch above.

Registering and Unregistering Child Commands with the Composite Command

Child commands are registered or unregistered using the RegisterCommand and UnregisterCommand methods.

commandProxy.SubmitAllOrdersCommand.RegisterCommand(
    orderCompositeViewModel.SubmitCommand);
commandProxy.CancelAllOrdersCommand.RegisterCommand(
    orderCompositeViewModel.CancelCommand);

Composite commands at the parent view level will often be used to coordinate how commands at the child view level are invoked. In some cases, you will want the commands for all shown views to be executed, as in the Save All command. In other cases, you will want the command to be executed only on the active view. In this case, the composite command will execute the child commands only on views that are deemed to be active; it will not execute the child commands on views that are not active. To support this scenario, Prism provides the IActiveAware interface. The IActiveAware interface defines an IsActive property that returns true when the implementer is active, and an IsActiveChanged event that is raised whenever the active state is changed.

You can implement the IActiveAware interface on child views or view models. It is primarily used to track the active state of a child view within a region. Whether or not a view is active is determined by the region adapter that coordinates the views within the specific region control.

The DelegateCommand class also implements the IActiveAware interface. The CompositeCommand can be configured to evaluate the active status of child DelegateCommands (in addition to the CanExecute status) by specifying true for the monitorCommandActivity parameter in the constructor. When this parameter is set to true, the CompositeCommand class will consider each child DelegateCommand’s active status when determining the return value for the CanExecute method and when executing child commands within the Execute method.

When the monitorCommandActivity parameter is true, the CompositeCommand class exhibits the following behavior:

  • CanExecute – Returns true only when all active commands can be executed. Child commands that are inactive will not be considered at all.
  • Execute – Executes all active commands. Child commands that are inactive will not be considered at all.

By implementing the IActiveAware interface on your child view models, you will be notified when your child view becomes active or inactive within the region. When the child view's active status changes, you can update the active status of the child commands.
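As a hedged sketch of how these pieces fit together (the DocumentViewModel type and its SaveCommand are invented for illustration), a child view model can forward its own active state to its commands, and a composite command constructed with monitorCommandActivity set to true will then skip inactive children:

using System;
using Microsoft.Practices.Prism;          // IActiveAware (Prism 4)
using Microsoft.Practices.Prism.Commands; // DelegateCommand, CompositeCommand

public class DocumentViewModel : IActiveAware
{
    private bool isActive;

    public DocumentViewModel()
    {
        this.SaveCommand = new DelegateCommand(this.Save, this.CanSave);
    }

    public DelegateCommand SaveCommand { get; private set; }

    public event EventHandler IsActiveChanged;

    // The region adapter sets IsActive as the view is activated or deactivated
    public bool IsActive
    {
        get { return isActive; }
        set
        {
            isActive = value;
            // Keep the command's active status in sync with the view model's
            this.SaveCommand.IsActive = value;
            var handler = this.IsActiveChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }

    private void Save() { /* save this document */ }
    private bool CanSave() { return true; }
}

// Elsewhere, a composite command that monitors child activity:
// CompositeCommand saveAllCommand = new CompositeCommand(true);
// saveAllCommand.RegisterCommand(documentViewModel.SaveCommand);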

Differences : CompositeCommand and DelegateCommand

There are several differences between the CompositeCommand implementation and the DelegateCommand implementation. The first is that the CompositeCommand is an aggregation of other commands; internally, it holds a list of ICommand references. It allows you to hook up multiple command targets to a single root command that itself can be hooked up to a command source such as a button or menu item. The CompositeCommand can hold references to any ICommand object, but typically you will use it in conjunction with DelegateCommands. When the CompositeCommand.Execute method is invoked, it will invoke the Execute method on each of the child commands. When CompositeCommand.CanExecute is called to determine whether the command is enabled, it polls its child commands for their result from CanExecute. Child commands register themselves with the composite command and can also unregister if appropriate. In this way, it is very similar to subscribing and unsubscribing to an event, but with the additional ability to enable and disable the commands based on custom logic.

Another difference is that CompositeCommands can be hooked up to a source control ahead of time, before any child commands have registered. DelegateCommands have to be pointed to their target methods through delegates at the point where they are constructed. As a result, CompositeCommands allow an extra layer of separation between the source (i.e., a toolbar button or menu item) and the target (the handling method); the two are decoupled in lifetime. Because they can target multiple child commands, they also work well for distributed logic, such as a Save All command that needs to be dispatched to multiple open documents, each of which has its own handling logic in its view model.

Software Architecture


Definition:

Philippe Kruchten, Grady Booch, Kurt Bittner, and Rich Reitman derived and refined a definition of architecture based on work by Mary Shaw and David Garlan (Shaw and Garlan 1996). Their definition is:

“Software architecture encompasses the set of significant decisions about the organization of a software system including the selection of the structural elements and their interfaces by which the system is composed; behavior as specified in collaboration among those elements; composition of these structural and behavioral elements into larger subsystems; and an architectural style that guides this organization. Software architecture also involves functionality, usability, resilience, performance, reuse, comprehensibility, economic and technology constraints, tradeoffs and aesthetic concerns.”

In Patterns of Enterprise Application Architecture, Martin Fowler outlines some common recurring themes when explaining architecture. He identifies these themes as:

“The highest-level breakdown of a system into its parts; the decisions that are hard to change; there are multiple architectures in a system; what is architecturally significant can change over a system’s lifetime; and, in the end, architecture boils down to whatever the important stuff is.”

Software application architecture is the process of defining a well-structured solution that meets all of the technical and operational requirements. The architecture should take into account and improve upon common quality attributes such as performance, security, and manageability.

The main focus of software architecture is how the major elements and components within an application are used by, or interact with, other major elements and components within the application. The selection of data structures and algorithms, or the implementation details of individual components, are design concerns rather than architectural concerns, although design and architecture concerns sometimes overlap.

Before starting to architect any software, there are some basic questions that we should strive to answer. They are as follows:

  • How will the users of the system interact with the system?
  • How will the application be deployed into production and managed?
  • What are the various non-functional requirements for the application, such as security, performance, concurrency, internationalization, and configuration?
  • How can the application be designed to be flexible and maintainable over time?
  • What are the architectural trends that might impact your application now or after it has been deployed?

Goals of Software Architecture

Building the bridge between business requirements and technical requirements is the main goal of any software architecture. The goal of architecture is to identify the requirements that affect the basic structure of the application. Good architecture reduces the business risks associated with building a technical solution while a good design is flexible enough to be able to handle the changes that will occur over time in hardware and software technology, as well as in user scenarios and requirements. An architect must consider the overall effect of design decisions, the inherent tradeoffs between quality attributes (such as performance and security), and the tradeoffs required to address user, system, and business requirements.

Principles of Software Architecture

The basic assumption of any architecture should be the belief that the design will evolve over time and that one cannot know everything one needs to know up front. The design will generally need to evolve during the implementation stages of the application as one learns more, and as one tests the design against real-world requirements.

Keeping the above statement in mind, let's list some of the key architectural principles:

  • The system should be built to change instead of built to last.
  • Model the architecture to analyze and reduce risk.
  • Use models and visualizations as a communication and collaboration tool.
  • The key engineering decisions should be identified and acted upon upfront.

Architects should consider using an incremental and iterative approach to refining their architecture. Start with a baseline architecture to get the big picture right, and then evolve candidate architectures as you iteratively test and improve the architecture. Do not try to get it all right the first time; design just as much as you can in order to start testing the design against requirements and assumptions. Iteratively add details to the design over multiple passes to make sure that you get the big decisions right first, and then focus on the details. A common pitfall is to dive into the details too quickly and get the big decisions wrong by making incorrect assumptions, or by failing to evaluate your architecture effectively.

When testing your architecture, consider the following questions:

  • What were the main assumptions that were made while architecting the system?
  • What are the requirements, both explicit and implicit, that this architecture is satisfying?
  • What are the key risks with this architectural approach?
  • What countermeasures are in place to mitigate key risks?
  • In what ways is this architecture an improvement over the baseline or the last candidate architecture?

Design Principles

When getting started with software design, one should keep in mind proven principles and principles that minimize costs and maintenance requirements and promote usability and extensibility. The key principles of any software design are:

  • Separation of concerns: The key factor to be kept in mind is minimization of interaction points between independent feature sets to achieve high cohesion and low coupling.
  • Single Responsibility principle: Each component or module should be independent in itself and responsible for only a specific feature or functionality.
  • Principle of Least Knowledge: A component or object should not know about internal details of other components or objects.
  • Don’t repeat yourself (DRY): The intent or implementation of any feature or functionality should be done at only one place. It should never be repeated in some other component or module.
  • Minimize upfront design: This principle is also sometimes known as YAGNI (“You ain’t gonna need it”). Design only what is necessary. Especially for agile development, one can avoid big design upfront (BDUF). If the application requirements are unclear, or if there is a possibility of the design evolving over time, one should avoid making a large design effort prematurely.

Design Practices

  • Keep design patterns consistent within each layer.
  • Do not duplicate functionality within an application.
  • Prefer composition to inheritance. If possible, use composition over inheritance when reusing functionality, because inheritance increases the dependency between parent and child classes, thereby limiting the reuse of child classes. This also reduces inheritance hierarchies, which can become very difficult to deal with (see the sketch after this list).
  • Establish a coding style and naming convention for development.
  • Maintain system quality using automated QA techniques during development. Use unit testing and other automated quality analysis techniques, such as dependency analysis and static code analysis, during development.
  • Consider not only the development but also the operation of your application. Determine what metrics and operational data are required by the IT infrastructure to ensure the efficient deployment and operation of your application.
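As a minimal sketch of the composition-over-inheritance practice mentioned above (the ReportGenerator and CsvExporter types are invented for illustration), a class can reuse export behavior by holding a reference to it rather than inheriting from it:

// Instead of: public class ReportGenerator : CsvExporter { ... }
// compose the behavior behind a small interface:

public interface IExporter
{
    void Export(string data);
}

public class CsvExporter : IExporter
{
    public void Export(string data)
    {
        // Write the data out as CSV
        System.Console.WriteLine("CSV: " + data);
    }
}

public class ReportGenerator
{
    private readonly IExporter exporter;

    // The export strategy is supplied from outside, so it can be swapped
    // (e.g., for an XML or PDF exporter) without touching this class
    public ReportGenerator(IExporter exporter)
    {
        this.exporter = exporter;
    }

    public void Generate()
    {
        this.exporter.Export("quarterly figures");
    }
}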

Application Layers: While architecting and designing the system, one needs to carefully consider the various layers into which the application will be divided. There are some key considerations that need to be kept in mind while doing that:

  • Separate the areas of concern. Break your application into distinct features that overlap in functionality as little as possible. The main benefit of this approach is that a feature or functionality can be optimized independently of other features or functionality.
  • Be explicit about how layers communicate with each other.
  • Abstraction should be used to implement loose coupling between layers.
  • Do not mix different types of components in the same logical layer. For example, the UI layer should not contain business processing components, but instead should contain components used to handle user input and process user requests.
  • Keep the data format consistent within a layer or component.

Components, Modules, and Functions: Key Considerations

In the previous sections, we talked about key considerations that need to be kept in mind while architecting or designing an application. We also touched upon what needs to be kept in mind when dividing an application into various layers. In this section, let's take a look at some key considerations for designing components, modules, and functions.

  • A component or an object should not rely on internal details of other components or objects.
  • Never overload the functionality of a component. For example, a UI processing component should not contain data access code or attempt to provide additional functionality.
  • Explicitly state how the components will communicate with each other. This requires an understanding of the deployment scenarios your application must support. You must determine if all components will run within the same process, or if communication across physical or process boundaries must be supported—perhaps by implementing message-based interfaces.
  • Keep crosscutting code (such as logging and performance) abstracted from the application business logic as far as possible.
  • Present a clear contract for components. Components, modules, and functions should define a contract or interface specification that describes their usage and behavior clearly.


4 Dimensions of Project Management

There are 4 D's of any software project management. Successful and efficient project managers use these 4 D's as trade-offs or leveraging techniques when dealing with management. The 4 D's of project management, namely Duration, Cost, Scope, and Risk, are used by project managers to handle change requests while maintaining the feasibility of the project.

Scenario 1 – Project Managers with only 1 dimension: Duration

The PM presents the plan and schedule to the management and is inevitably asked the question which is always asked: “How can we make this faster?”

PM response: The PM slumps in his chair and starts stuttering, “We can’t change anything, this is the only way of doing it and changing anything at this stage will lead to disaster.”

Management response: They go ahead and slash the duration to what they want, and ask the PM to do it anyway. Please note, management relishes the opportunity to make strategic decisions based on data; they do not respond well to vague terms like disaster or to being told nothing can change.

Here the project manager was caught in a typical 1-dimension situation in which the only option or alternative that he was able to give to the management was duration. He was either unprepared or unwilling to give any alternatives or trade-off options to the management, and that led to a disastrous scenario. So now the PM is stuck with a shorter duration, with no increase in the budget and no reduction in the scope of the project.

A different outcome could have been accomplished if the PM had come up with multiple duration and cost options. If faced with the inevitable question of “How can we do it faster?”, a better response to the management is: “Sure, we can do it faster, and we can do it cheaper too. I have these options which you can take a look at and then make an informed decision. Option 1 will complete the project in 6 months at a cost of half a million dollars. In option 2, we can complete the project in only 4 months, but in that scenario I will have to hire a graphics designer and an extra developer, which will add a cost of 60 thousand dollars to the project.”

What is the immediate effect of presenting these multiple options to the management? The management starts respecting the PM, and he is perceived as somebody who is intelligent and knows what he is doing. In short, they become aware of the fact that they are dealing with somebody who cannot be bulldozed with statements like “This is when I want things to complete, so please make sure that it is done.” Secondly, management is happy that they are being asked to make a strategic decision based on data instead of vague threats of disaster.

There is a high possibility that even after being presented with the multiple options and various dimensions, the management still goes ahead and starts playing the 1-dimension card. This is when the PM has to be intelligent and ready enough to bring out the third dimension: the Scope card.

“Sure, if you still want the duration of the project to be shorter at no additional cost, then I will have to reduce the scope of the project, and here are the options for doing so.”

There is a very high probability that even after playing all three dimensions of project management, the PM might not get everything that he desired, but he will surely be able to negotiate something and at least come out of the meeting with a feasible project. Last but not least, the PM will have gained the respect and credibility of the management, which goes a long way in maintaining the sanity of the project as it goes along. The trade-offs (Duration, Cost, Scope, and Risk) are very effective tools that need to be presented to the decision-makers so that they can make realistic decisions. To be able to present these trade-offs, the PM should be able to quantify each of them and present as many options as possible to the executives.

One-Dimension Projects: Most internal applications developed in an organization are one-dimensional. The only tangible option that the executives have is the duration. There is no explicit project budget, and no assessment of risks is done. Not surprisingly, most of the discussions and decisions revolve around the duration of the project, as that is the only measurable entity before the management. Scope creep, bloated budgets, and slipped deadlines are common in one-dimension projects.

Two-Dimension Projects: Some organizations do add a second dimension to their plans: Duration and Cost. The benefit of adding this dimension is tremendous, as now the management understands that adding a new feature or changing a feature is not “free” anymore; there is a cost involved. Even if the cost of the project might not be paid by the end client, the management now becomes aware of scope creep and the cost involved with it. This knowledge is a major step forward and helps tremendously in controlling scope creep.

Three-Dimension Projects: Things get much better if the PM adds the third dimension and is able to quantify the scope of the project. By decomposing and quantifying the scope of the project, the management now has a measurable view of each of the feature sets and their impact on the business results. This trade-off becomes more powerful when it is complemented with the two dimensions mentioned above, Duration and Cost. Consider these two options and one will realize the effect of presenting this dimension along with the other two:

Option 1: “Develop this particular feature capable of serving 90,000 users at a cost of 100,000 dollars in 6 months.”

Option 2: “Develop this particular feature capable of serving 50,000 users at a cost of 60,000 dollars in 4 months.”

Now the management directly sees the desired business result that they are going to get and at what cost and duration.

Four-Dimension Projects: The fourth dimension adds the assessment of risks to the project. When this dimension is added, management gains the ability to see what level of certainty they want to achieve in the project, and at what cost and duration. When the management hears “We can deliver the agreed scope of the application with 60% confidence at a cost of 1 million in 6 months,” they are more confident in making the decision and are wary of scope creep, as they are now aware that any change request will affect duration, cost, and risk alike.

 

Big Data – Challenges and opportunities


Every challenge represents a new set of opportunities; big challenges bring forth big opportunities, and big data is a big challenge. It challenges the way we have been viewing, storing, analyzing, and interpreting our data. The term “Big Data” is something of a misnomer, since it implies that the only issue with today’s data is its sheer size; there is a lot more to big data than just size. Big data applies to sets of data that can’t be processed or gained insight into using traditional tools and techniques. The amount of data in our world has been exploding. We capture trillions of bytes of information about customers, suppliers, and operations, and millions of networked sensors are being embedded in the physical world in devices such as mobile phones and automobiles, sensing, creating, and communicating data. Individuals with smartphones and on social network sites continue to fuel exponential growth. Big data (large pools of data that can be captured, communicated, aggregated, stored, and analyzed) is now part of every sector and function of the global economy.

Let’s look at some of the facts about how data is proliferating. There are 5 billion phones in the world right now generating data by the second; Facebook alone generates 30 billion pieces of content every month; the Library of US Congress holds around 235 terabytes of data; Twitter alone generates around 7 terabytes of data every month; the Large Hadron Collider experiments represent about 150 million sensors delivering data 40 million times per second; and Wal-Mart handles more than 1 million customer transactions every hour. These are just the tip of the iceberg, and they demonstrate how data is becoming massive by the second. Digital data is everywhere: in every sector, in every company, in every economy. Today we store everything: environmental data, financial data, medical data, surveillance data, and the list goes on and on. According to an MGI estimate, in 2010 corporations stored more than 7 exabytes of data on their hard drives, and individual consumers stored more than 6 exabytes. Google’s executive chairman Eric Schmidt brings it to a point: “From the dawn of civilization until 2003, humankind generated five exabytes of data. Now we produce five exabytes every two days…and the pace is accelerating.”

The possibilities and opportunities presented by the proliferation of big data are constantly evolving, driven by innovation in technologies, platforms, and analytical capabilities. Big data can be defined by three main characteristics: the Volume, Variety, and Velocity of data. So far we have discussed only the size, or volume, of data. With the proliferation of sensors, smart devices, and social collaboration tools, enterprises today are faced not only with traditional relational data but also with data in raw, semi-structured, or unstructured form. The sheer variety of data being captured today presents a unique set of problems for our traditional tools and techniques for storing and analyzing it. And just as the volume and variety of captured data have changed, so has the sheer velocity at which data is generated. With the emergence of RFID sensors and other information streams everywhere, data is being generated at a pace that traditional tools cannot handle. For many applications, the speed of data creation is even more important than the volume. Real-time or nearly real-time information makes it possible for a company to be much more agile than its competitors.

The opportunities presented by big data are huge. According to various research studies, there is a potential value addition to the tune of 300 billion dollars in US healthcare alone; a potential to generate 250 million pounds in the public sector in Europe; and a potential to generate 600 billion dollars in consumer sectors just by analyzing location-centric data. In the coming 5-7 years, big data is going to generate 1.5 to 1.8 million jobs in the US alone in the fields of deep data analytics and big data management and engineering. The opportunities presented by big data are immense if we just keep pace by evolving our technologies and tools to keep up with the volume and velocity at which we are generating data.

Ignoring Risk Management: A Disaster!


Some staggering facts on software failures

According to the Standish report:  In the United States, we spend more than $250 billion each year on IT application development of approximately 175,000 projects. The average cost of a development project for a large company is $2,322,000; for a medium company, it is $1,331,000; and for a small company, it is $434,000. The Standish Group research shows a staggering 31.1% of these projects will be cancelled before they ever get completed. Further results indicate 52.7% of projects will cost 189% of their original estimates.

According to a study report done by McKinsey & Company in conjunction with the University of Oxford: 17 percent of large IT projects go so badly that they can threaten the very existence of the company. On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted.

According to a study by KPMG (New Zealand): Survey shows an incredible 70% of organizations have suffered at least one project failure in the prior 12 months. 50% of respondents also indicated that their project failed to consistently achieve what they set out to achieve!

Ignoring Risk Management: A Recipe for Disaster

Software project failures are the result of the multiplicity of risks inherent in the software project environment. Software development projects are collections of larger programs with many interactions and dependencies. They involve the creation of something that has never been done before, even though the development processes are similar to those of other projects. As a result, software development projects have a dismal track record of cost and schedule overruns and quality and usability problems. Time-to-market is the most critical factor for consumers of commercial software products; however, project success is difficult to predict because project scope is changed by continuous market requirements, and resources are constantly reallocated to accommodate the latest market conditions. Projects for specific customers also have a large degree of requirements uncertainty due to customized technical attributes. Many software projects and programs involve multiple entities, such as companies and divisions, each of which may have its own interests. There is often a feeling of disconnection between software developers and their management, each believing that the other is out of touch with reality, resulting in misunderstanding and lack of trust. Research shows that 45% of all the causes of delayed software deliverables are related to organizational issues. Looking at the facts and reasons mentioned above, it would seem quite obvious that risk management would be an integral part of the software development process. Wrong!

Kwak and Ibbs (2000) identified risk management as the least practiced discipline among the different project management knowledge areas. Boehm and DeMarco (1997) mentioned that “our culture has evolved such that owning up to risks is often confused with defeatism.” In many organizations, the tendency to ‘shoot the messenger’ often discourages people from bringing imminent problems to the attention of management. This attitude is the result of a misunderstanding of risk management. Most software developers and project managers perceive risk management processes and activities as extra work and expense, and risk management processes are the first thing to be removed from the project activities when the project schedule slips.

Agile and Scrum: A Detailed Perspective

AGILE MANIFESTO |

  • Individuals and interactions over processes and tools.
  • Working software over comprehensive documentation.
  • Customer collaboration over contract negotiation.
  • Responding to change over following a plan.

 AGILE PRINCIPLES |

  • Satisfy the Customer – Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Embrace Change – Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Frequent Delivery – Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Cross-Functional Collaboration – Business people and developers must work together daily throughout the project.
  • Support and Trust – Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • Face-to-Face Conversation – The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working Software – Working software is the primary measure of progress.
  • Sustainable Pace – Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Technical Excellence – Continuous attention to technical excellence and good design enhances agility.
  • Keep it Simple – Simplicity–the art of maximizing the amount of work not done–is essential.
  • Self-Organization – The best architectures, requirements, and designs emerge from self-organizing teams.
  • Inspect and Adapt – At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

SCRUM

Scrum is a framework, a specific implementation of the agile methodology.  It is an iterative and incremental agile software development framework for managing software projects and product or application development. As mentioned in the Agile Process, Scrum relies on a self-organizing, cross-functional team. The scrum team is self-organizing in that there is no overall team leader who decides which person will do which task or how a problem will be solved. Those are issues that are decided by the team as a whole. The Scrum team is cross-functional; everyone is necessary to take a feature from idea to implementation.

ROLES | that are part of the Scrum framework

There are three core roles and some ancillary roles in the Scrum framework. There is a joke about a chicken and a pig. They are talking and the chicken says, “Let’s start a restaurant.” The pig replies, “Good idea, but what should we call it?” “How about ‘Ham and Eggs,’” says the chicken. “No thanks,” says the pig, “I’d be committed, you’d only be involved.”

The roles that are committed are the core roles and the roles that are only involved in the project are the ancillary roles.

CORE ROLES  | The core roles are those committed to the project in the Scrum process—they are the ones producing the product (objective of the project). They represent the scrum team.

  • Product Owner – The Product Owner represents the stakeholders and is the voice of the customer. He or she is accountable for ensuring that the team delivers value to the business. The Product Owner writes (or has the team write) customer-centric items (typically user stories), prioritizes them, and adds them to the product backlog.
  • Development Team – The Development Team is responsible for delivering potentially shippable product increments at the end of each Sprint. A Development Team is made up of 3–9 people with cross-functional skills who do the actual work (analyze, design, develop, test, technical communication, document, etc.). The Development Team in Scrum is self-organizing, even though they may interface with project management organizations (PMOs).
  • Scrum Master – Scrum is facilitated by a Scrum Master, who is accountable for removing impediments to the ability of the team to deliver the sprint goal/deliverables. The Scrum Master is not the team leader, but acts as a buffer between the team and any distracting influences. The Scrum Master ensures that the Scrum process is used as intended.

ANCILLARY ROLES | The ancillary roles in Scrum teams are those with no formal role and infrequent involvement in the Scrum process—but nonetheless, they must be taken into account.

  • Stakeholders – The stakeholders are the customers and vendors. They are the people who enable the project and for whom the project produces the agreed-upon benefits that justify its production. They are only directly involved in the process during the sprint reviews.
  • Managers – People who control the work environment.

ARTIFACTS | produced during scrum framework implementations

  • Product – The primary and the most important artifact of Scrum project is, of course, the product itself. The Scrum model expects the team to bring the product or system to a potentially shippable state at the end of each Scrum sprint.
  • Product backlog – It is a prioritized features list, containing short descriptions of all functionality desired in the product. When using Scrum, it is not necessary to start a project with a lengthy, upfront effort to document all requirements. Typically, a Scrum team and its product owner begin by writing down everything they can think of for agile backlog prioritization. This agile product backlog is almost always more than enough for a first sprint. The Scrum product backlog is then allowed to grow and change as more is learned about the product and its customers. Typically, a product backlog consists of the following items: features, bugs, technical work, and knowledge acquisition. The most predominant way for a Scrum team to express features on the agile product backlog is in the form of user stories, which are short, simple descriptions of the desired functionality told from the perspective of the user.
  • Sprint backlog – The sprint backlog is the list of tasks identified by the Scrum team during sprint planning. During sprint planning, the team selects some number of product backlog items, usually in the form of user stories, and identifies the tasks necessary to complete each user story. Most teams also estimate how many hours each task will take someone on the team to complete. It is critical that the team selects the items and size of the sprint backlog; because they are the ones committing to completing the tasks, they must be the ones to choose what they are committing to. The sprint backlog is very commonly maintained as a spreadsheet, but it is also possible to use your defect tracking system or any of a number of software products designed specifically for Scrum or agile.
  • Sprint burn down chart – The sprint burn down chart is a publicly displayed chart showing remaining work in the sprint backlog. Updated every day, it gives a simple view of the sprint progress. It also provides quick visualizations for reference. There are also other types of burn down, for example the release burn down chart that shows the amount of work left to complete the target commitment for a Product Release (normally spanning through multiple iterations) and the alternative release burn down chart, which basically does the same, but clearly shows scope changes to Release Content, by resetting the baseline.

OTHER KEYWORDS | in the Sprint framework

  • Daily Scrum – On each day of a sprint, the team holds daily meetings (“the daily scrum”). Meetings are typically held in the same location and at the same time each day. Ideally, the daily scrum meeting is held in the morning, as it helps set the context for the coming day’s work. These Scrum daily standup meetings are strictly time-boxed to 15 minutes, which keeps the discussion brisk but relevant. During the daily Scrum, each team member answers the following three questions: What did you do yesterday? What will you do today? Are there any impediments in your way? The daily Scrum meeting is not a status update meeting in which a boss is collecting information about who is behind schedule. Rather, it is a meeting in which team members make commitments to each other.
  • Sprint – A time period (typically 1–4 weeks) in which development occurs on a set of backlog items that the team has committed to. Also commonly referred to as a Time-box or iteration.
  • User Story – A feature described in the product backlog is commonly explained using a story and has a specific suggested structure. The structure of a story is: “As a <user type> I want to <do some action> so that <desired result>” This is done so that the development team can identify the user, action and required result in a request and is a simple way of writing requests that anyone can understand.
  • Epic – An epic is a group of related stories, mainly used in product roadmaps and the backlog for features that have not yet been analyzed enough to break down into component stories.
  • Spike – A period used to research a concept and/or create a simple prototype. Spikes can either be planned to take place in between sprints or it might be accepted as one of many sprint delivery objectives.
  • Tasks – Typically at the beginning of a sprint, the user stories are broken down into tasks and are assigned hours to them.
  • Tracer Bullet – The tracer bullet is a spike with the current architecture, current technology set, and current set of best practices which results in production quality code. It might just be a very narrow implementation of the functionality but is not throw away code.
  • Sashimi – A report that something is “done”. The definition of “done” may vary from one Scrum team to another, but must be consistent within one team.
  • Planning Poker – In the Sprint Planning Meeting, the team sits down to estimate its effort for the stories in the backlog. The Product Owner needs these estimates so that he or she is empowered to effectively prioritize items in the backlog and, as a result, forecast releases based on the team’s velocity.
  • Sprint retrospective – Following one of the core principles of Agile, inspect and adapt, the Scrum team believes that there is always opportunity to improve. Although a good Scrum team will be constantly looking for improvement opportunities, the team should set aside a brief, dedicated period at the end of each sprint to deliberately reflect on how they are doing and to find ways to improve. This occurs during the sprint retrospective.

SUMMARY | of Scrum framework

  • A product owner creates a prioritized wish list called a product backlog.
  • During sprint planning, the team pulls a small chunk from the top of that wish list, a sprint backlog, and decides how to implement those pieces.
  • The team has a certain amount of time, a sprint, to complete its work – usually two to four weeks – but meets each day to assess its progress (daily scrum).
  • Along the way, the ScrumMaster keeps the team focused on its goal.
  • At the end of the sprint, the work should be potentially shippable, as in ready to hand to a customer, put on a store shelf, or show to a stakeholder.
  • The sprint ends with a sprint review and retrospective.
  • As the next sprint begins, the team chooses another chunk of the product backlog and begins working again.

MVVM UnPlugged

In Part 1 of the MVVM series, “MVVM Unplugged – Definition, Benefits, Classes and its interactions,” we saw that the MVVM pattern provides a clean separation between your application’s user interface, its presentation logic, and its business logic and data by separating each into separate classes. In this blog, we are going to focus on the data binding aspect of implementing the MVVM pattern, especially data binding of properties using the INotifyPropertyChanged interface.

Data Binding

Data binding is one of the most important mechanisms through which the various classes in the MVVM pattern interact with each other. WPF and Silverlight both provide powerful data binding capabilities and support multiple data binding modes. With one-way data binding, UI controls can be bound to a view model so that they reflect the value of the underlying data when the display is rendered. Two-way data binding will also automatically update the underlying data when the user modifies it in the UI.

To ensure that the UI is kept up to date when the data changes in the view model, the view model should implement the appropriate change notification interfaces. If there are properties in the view model that need to be data bound to the view, the view model should implement the INotifyPropertyChanged interface. If the view model represents a collection, it should implement the INotifyCollectionChanged interface or derive from the ObservableCollection class, which provides an implementation of that interface. Both interfaces define an event that is raised whenever the underlying data changes, and any data-bound controls will be automatically updated when these events are raised. WPF and Silverlight data binding also supports binding to nested properties via the Path property. The rule of thumb for data binding in WPF or Silverlight is that any view model or model that is accessible to the view should implement either the INotifyPropertyChanged or INotifyCollectionChanged interface, as appropriate.

In this blog, we are going to focus on the implementation of the INotifyPropertyChanged interface. If we look at INotifyPropertyChanged in the object browser, we see that it is a very simple interface: it just asks the implementing classes to expose a single event of the PropertyChangedEventHandler delegate type.

namespace System.ComponentModel
{
    // Summary: Notifies clients that a property value has changed.
    public interface INotifyPropertyChanged
    {
        // Summary: Occurs when a property value changes.
        event PropertyChangedEventHandler PropertyChanged;
    }
}

The PropertyChangedEventHandler delegate defines the method that will handle the INotifyPropertyChanged.PropertyChanged event, which is raised when a property is changed on a component. It takes two parameters (the full delegate signature is shown after this list):

  •  sender – The source of the event.
  •  e – A System.ComponentModel.PropertyChangedEventArgs that contains the event data.
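Those two parameters correspond to the delegate's signature as it is declared in the System.ComponentModel namespace:

namespace System.ComponentModel
{
    // Represents the method that will handle the PropertyChanged event
    public delegate void PropertyChangedEventHandler(object sender, PropertyChangedEventArgs e);
}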

If we look into PropertyChangedEventArgs, it has a constructor that takes a propertyName parameter as a string; this parameter is the name of the property that changed:

public PropertyChangedEventArgs(string propertyName);

Now that we have a basic understanding of how the INotifyPropertyChanged interface works and what constitutes the PropertyChangedEventHandler and PropertyChangedEventArgs, let's take a look at an example of how to implement this interface.

View

In our view, we have created a simple window that depicts a user and has three properties defined on it: First Name, Last Name, and Age. Let's go ahead and create a view model for our simple view.

ViewModel

Let's create a view model for our User view and name it UserViewModel. To implement the data binding, let's implement the INotifyPropertyChanged interface. As seen in the code below, implementing INotifyPropertyChanged means declaring a public PropertyChanged event of the PropertyChangedEventHandler delegate type. Then, for all the properties that we want to have data binding capabilities, we raise the PropertyChanged event in the property setter.

class UserViewModel : INotifyPropertyChanged
{
    private int _age;
    private string _firstName;
    private string _lastName;
    public event PropertyChangedEventHandler PropertyChanged;
    public UserViewModel(){}
    public string LastName
    {
       get { return _lastName; }
       set
       {
          if (value != _lastName)
          {
             _lastName = value;
             if (PropertyChanged != null)
             {
                this.PropertyChanged(this, new PropertyChangedEventArgs("LastName"));
             }
          }
       }
    }
    public string FirstName
    {
        get { return _firstName; }
        set
        {
           if (value != _firstName)
           {
             _firstName = value;
             if (PropertyChanged != null)
             {
                this.PropertyChanged(this, new PropertyChangedEventArgs("FirstName"));
             }
           }
        }
    }
    public int Age
    {
        get { return _age; }
        set
        {
           if (value != _age)
           {
              _age = value;
              if (PropertyChanged != null)
              {
                 this.PropertyChanged(this, new PropertyChangedEventArgs("Age"));
              }
           }
        }
    }
}

To make this code work, we now just need to define these bindings in the User view’s XAML and provide the hook-up code between the view and the view model.

<TextBox Height="23" HorizontalAlignment="Left" Margin="242,27,0,0" Name="firstNameTextBox" Text ="{Binding FirstName,Mode=TwoWay}" VerticalAlignment="Top" Width="120" Grid.Column="1" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="242,69,0,0" Name="lastNameTextBox" Text ="{Binding LastName,Mode=TwoWay}" VerticalAlignment="Top" Width="120" Grid.Column="1" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="242,115,0,0" Name="ageTextBox" Text ="{Binding Age,Mode=TwoWay}" VerticalAlignment="Top" Width="120" Grid.Column="1" />

As you can see from the code above, we have bound the text boxes to the properties defined in our UserViewModel class.

Now, as a final step, we just need to hook up the view and the view model. One way to do this is to create an instance of the view model in the view’s constructor and then set it as the view’s data context.

// In the view's constructor, after InitializeComponent():
UserViewModel viewModel = new UserViewModel();
this.DataContext = viewModel;

Problems with this implementation of INotifyPropertyChanged

If we look at the approach described above closely, we have to implement the INotifyPropertyChanged interface on every one of our view models, and we have to repeatedly specify the property name in the event argument as a string. Because the compiler never checks that string, this is an easy source of errors. Let’s look at a better way of implementing the same thing.
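
For example, a simple typo in the property name string compiles without complaint but silently breaks change notification (the misspelling below is deliberate):

// "FristName" does not match any property, so the binding for FirstName
// is never refreshed -- and the compiler cannot catch the mistake.
this.PropertyChanged(this, new PropertyChangedEventArgs("FristName"));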

Better Approach: Using Prism's NotificationObject

The Prism Library provides a convenient base class, called NotificationObject, that implements the INotifyPropertyChanged interface in a type-safe manner and from which you can derive your view model classes. Let’s take a look at its definition.

public abstract class NotificationObject : INotifyPropertyChanged
{
   protected NotificationObject();
   // Summary: Raised when a property on this object has a new value.
   public event PropertyChangedEventHandler PropertyChanged;
   // Summary: Raises this object's PropertyChanged event.
   // Parameters: 
   // propertyExpression:
   // A Lambda expression representing the property that has a new value.
   // Type parameters:
   // T: The type of the property that has a new value
   protected void RaisePropertyChanged<T>(Expression<Func<T>> propertyExpression);
   // Summary: Raises this object's PropertyChanged event for each of the properties.
   // Parameters:
   // propertyNames: The properties that have a new value.
   protected void RaisePropertyChanged(params string[] propertyNames);
   // Summary:
   // Raises this object's PropertyChanged event.
   // Parameters:
   // propertyName: The property that has a new value.
   protected virtual void RaisePropertyChanged(string propertyName);
}

In the code above, we see that Prism defines a NotificationObject class that implements the INotifyPropertyChanged interface. It also provides three overloads for raising the PropertyChanged event: the first takes a lambda expression representing the property that has a new value, the second takes an array of names of properties that have new values, and the third is similar to our own implementation above in that it takes the name of the property that has a new value as a plain string.
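
As a quick illustration of the second overload, consider a hypothetical FullName property computed from FirstName and LastName; when FirstName changes, both properties can be refreshed with a single call:

// Hypothetical: FullName is derived from FirstName, so notifications
// for both properties are raised in one call to the params overload.
_firstName = value;
RaisePropertyChanged("FirstName", "FullName");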

Let’s implement a new view model class for our user view; this one will derive from the NotificationObject class provided by Prism.

using Microsoft.Practices.Prism.ViewModel;

class UserBindingUsingPrismViewModel : NotificationObject
{
   private int _age;
   private string _firstName;
   private string _lastName;
   public string LastName
   {
      get { return _lastName; }
      set
      {
         if (value != _lastName)
         {
           _lastName = value;
           RaisePropertyChanged(() => this.LastName);
         }
      }
   }
   public string FirstName
   {
       get { return _firstName; }
       set
       {
          if (value != _firstName)
          {
            _firstName = value;
            RaisePropertyChanged(() => this.FirstName);
          }
       }
   }
   public int Age
   {
      get { return _age; }
      set
      {
         if (value != _age)
         {
           _age = value;
           RaisePropertyChanged(() => this.Age);
         }
      }
   }
}

As we can see from the above code, our inherited view model class is now raising the property change event by invoking RaisePropertyChanged using a lambda expression that refers to the property. Using a lambda expression in this way involves a small performance cost because the lambda expression has to be evaluated for each call. The benefit is that this approach provides compile-time type safety and refactoring support if you rename a property. Although the performance cost is small and would not normally impact your application, the costs can accrue if you have many change notifications. In this case, one should consider using the non-lambda method overload.
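
As a sketch of that trade-off, a frequently changing property could fall back to the string-based overload shown earlier:

public int Age
{
   get { return _age; }
   set
   {
      if (value != _age)
      {
        _age = value;
        // String-based overload: avoids evaluating a lambda on every call,
        // at the cost of compile-time checking of the property name.
        RaisePropertyChanged("Age");
      }
   }
}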

MVVM Unplugged – Definition, Benefits, Classes and Their Interactions

Definition
MVVM is an architectural pattern that facilitates a clear separation of the GUI from the application logic. It provides a clean separation between an application’s user interface, its presentation logic, and its business logic and data by placing each in its own class. The MVVM pattern is a close variant of the Presentation Model pattern, optimized to leverage some of the core capabilities of WPF and Silverlight, such as data binding, data templates, commands, and behaviors.

Benefits
The separation of application logic and UI helps to address numerous development and design issues and can make the application much easier to test, maintain, and evolve. It can also greatly improve code re-use opportunities and allows developers and UI designers to more easily collaborate when developing their respective parts of the application.

Some of the benefits achieved by using MVVM are explained in detail below:

  • Concurrent Development: One of the biggest advantages is that developers and designers can work more independently and concurrently on their components during the development process. The designers can concentrate on the view, and if they are using Expression Blend, they can easily generate sample data to work with, while the developers work on the view model and model components.
  • Testability: The developers can create unit tests for the view model and the model without using the view.
  • Easy Redesign of the UI: Because the view is implemented entirely in XAML, it is easy to redesign the UI of the application without touching the code. A new version of the view can easily be created and plugged into the existing view model.

Details
In the MVVM pattern, the UI of the application and the underlying presentation and business logic are separated into three classes: the view, which encapsulates the UI and UI logic; the view model, which encapsulates presentation logic and state; and the model, which encapsulates the application’s business logic and data. The view interacts with the view model through data binding, commands, and change notification events. The view model queries, observes, and coordinates updates to the model, converting, validating, and aggregating data as necessary for display in the view.
The interaction between the classes is explained in the diagram below:

[Figure: MVVM class interactions diagram]

Characteristics of the View Class

  • The view in the MVVM pattern defines the structure and appearance of what one sees on the screen.
  • As a rule of thumb, one should not put any logic that needs to be covered by unit tests in the view.
  • The view is a visual element. The view defines the controls contained in the view and their visual layout and styling.
  • The view can customize the data binding behavior between the view and the view model by using value converters to format the data (see the sketch after this list) or by using validation rules to provide additional input validation to the user.
  • The view defines and handles UI visual behavior, such as animations or transitions that may be triggered from a state change in the view model or via the user’s interaction with the UI.
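
As a sketch of the value converter point above, the following hypothetical AgeToForegroundConverter formats an Age value for display in WPF; the class name and the threshold are illustrative, not part of the example project:

using System;
using System.Globalization;
using System.Windows.Data;
using System.Windows.Media;

public class AgeToForegroundConverter : IValueConverter
{
    // Formats data flowing from the view model to the view:
    // ages of 60 and above are rendered in red.
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        int age = (int)value;
        return age >= 60 ? Brushes.Red : Brushes.Black;
    }

    // Not needed for a one-way display conversion.
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}

Such a converter would be referenced from a binding via its Converter property, for example Foreground="{Binding Age, Converter={StaticResource ageToForeground}}".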

Characteristics of the View Model Class

  • The view model is a non-visual class and does not derive from any WPF or Silverlight base class.
  • It encapsulates the presentation logic and is testable independently of the view and the model.
  • The view model does not reference the view. It implements properties and commands to which the view can data bind, and it notifies the view of any state changes through change notification events, using the INotifyPropertyChanged and INotifyCollectionChanged interfaces.
  • The view model acts as coordinator between the view and the model. It may convert or manipulate data so that it can be easily consumed by the view and may implement additional properties that may not be present on the model.
  • It may also implement data validation via the IDataErrorInfo or INotifyDataErrorInfo interfaces (see the sketch after this list).
  • The view model may define logical states that the view can represent visually to the user.
  • Typically, there is a one-to-many relationship between the view model and the model classes.
  • In most cases, the view model will define commands or actions that can be represented in the UI and that the user can invoke. 
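
As a sketch of the validation point above, a variant of our user view model could implement IDataErrorInfo; the class name and the validation rule below are illustrative:

using System.ComponentModel;
using Microsoft.Practices.Prism.ViewModel;

class ValidatingUserViewModel : NotificationObject, IDataErrorInfo
{
    private int _age;

    public int Age
    {
        get { return _age; }
        set
        {
            if (value != _age)
            {
                _age = value;
                RaisePropertyChanged(() => this.Age);
            }
        }
    }

    // Entity-level error; unused in this sketch.
    public string Error
    {
        get { return null; }
    }

    // The binding engine queries this indexer for each bound property
    // when ValidatesOnDataErrors=True is set on the binding.
    public string this[string propertyName]
    {
        get
        {
            if (propertyName == "Age" && _age < 0)
                return "Age cannot be negative.";
            return null;
        }
    }
}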

Characteristics of the Model Class 

  • The model in the MVVM pattern encapsulates business logic and data. Business logic is defined as any application logic that is concerned with the retrieval and management of application data and with making sure that any business rules that ensure data consistency and validity are enforced.
  • As a rule of thumb, models should not contain any use case–specific application logic.
  • Typically, the model represents the client-side domain model for the application. It can define data structures based on the application’s data model and any supporting business and validation logic (see the sketch after this list).
  • The model classes do not directly reference the view or view model classes and have no dependency on how they are implemented.
  • The model classes are typically used in conjunction with a service or repository that encapsulates data access and caching.
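
To close, here is a minimal sketch of what a model class for our user example might look like; the User type and its validation rule are illustrative:

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }

    // A business rule lives in the model so that it holds regardless of
    // which view or view model consumes the data.
    public bool IsValid()
    {
        return !string.IsNullOrEmpty(FirstName)
            && !string.IsNullOrEmpty(LastName)
            && Age >= 0;
    }
}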