View -> ViewModel -> Model

Apr 22, 2009 at 11:54 PM
I am looking for a solid example of the MVVM practice. I have scoured the web and found excellent explanations that got me started, but I have not been able to find a coded skeleton that I could open up and look at.

Does anyone have or know of source code available showing the base code used to define these three application pieces?

I'm not articulating that very well, but hopefully you get what I am looking for. I understand the theory, but I need to see what is done in C#, what's in XAML, what objects these should inherit from, with a couple of methods showing how data gets from the Model to the VM and then bound to the View.

Thanks ahead of time.
Apr 23, 2009 at 1:02 AM
Here are some good code examples of MVVM:

Apr 23, 2009 at 5:37 PM
Edited Apr 23, 2009 at 5:38 PM
Thank you very much; I'm sure those will prove very helpful. Yesterday I was reading an article ( and found that Figure 4 and Figure 11 helped me understand the modeling a bit.

Separate from the View and ViewModel classes, the author made a clear distinction between the Model (I'll call it DataModel) and the DataAccessClass, which he called Customer and CustomerRepository respectively.

If I understand right:

1. The View is data bound to the ViewModel (likely in XAML).
2. The ViewModel (likely implementing INotifyPropertyChanged and IDataErrorInfo) is something of a wrapper or window for the DataModel, in that it exposes the model via delegation.
3. The DataAccessClass (IDataErrorInfo) is a class (static?) that fetches data from and persists data to the db, and is probably used by all DataModels.

What then are the guts of the DataModel? A more tangible version of the ViewModel, there so that the ViewModel's collections do not have to immediately reflect what is in the DataModel (and thus be persisted to the db) until told to? Should it be doing the validation work and then using the DataAccess class as a tool to sync its collections with the db?

Am I getting it? Or still way out in space with ziggy stardust?
Apr 23, 2009 at 6:50 PM
Sorry to stack up another post in a row, but I am working on writing a ViewModelBase.cs class to begin poking around and understanding its purpose. Along those lines I've got a couple more questions:

1. -- Which practice for implementing INotifyPropertyChanged would you suggest? My guess is #2, but I wanted your opinion.

2. I keep hearing arguments between using IDisposable, the WeakEvent pattern, and other approaches for cleaning up these objects. I think I'm a bit lost here. Assuming ViewModelBase is an abstract class inherited by more specific ViewModels, IDisposable sounds like the right choice, but I can't find a solid example of how it should be implemented in the base class.

Once again, thank you guys for all your help.
Apr 23, 2009 at 7:57 PM
Model - Objects that fully implement the domain or business logic of an application. This includes, but is not limited to, data persistence.  It also includes business rules and logic.  It does not contain any presentation logic or metadata.  Generally speaking, validation logic probably shouldn't go at this level, though that's not a hard and fast rule.

ViewModel - Provides presentation data and logic. Some of the data and logic is likely to be delegated to the Model, though not necessarily directly (for instance the ViewModel might do data conversion). It will also likely include data and logic that's view specific that doesn't exist in the Model at all (for instance, selected item tracking).  This is typically where your validation logic will go, though it might just delegate to the Model if you found there was a reason to put validation at that level.

View - Provides the presentation.  No business logic should be included at this level.  Some presentation logic may be (make this text red if the value is invalid, for instance).
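To make those three concrete, here's a minimal sketch (the Customer/CustomerViewModel names and the Name property are purely illustrative, not taken from any particular sample):

using System.ComponentModel;

// Model: a plain domain object with no presentation concerns.
public class Customer
{
    public string Name { get; set; }
}

// ViewModel: wraps the Model, exposing presentation data via delegation.
public class CustomerViewModel : INotifyPropertyChanged
{
    private readonly Customer _customer;

    public CustomerViewModel(Customer customer)
    {
        _customer = customer;
    }

    // Delegates to the Model and notifies the View of changes.
    public string Name
    {
        get { return _customer.Name; }
        set
        {
            if (_customer.Name == value) return;
            _customer.Name = value;
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Name"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

// View: XAML that binds to the ViewModel, e.g.
// <TextBox Text="{Binding Name, UpdateSourceTrigger=PropertyChanged}" />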

For INPC, I'd use #3 with a slight variation: you should still make the virtual On* method follow framework design guidelines, and I'd do that by including a non-virtual version that takes just the name and delegates to the virtual version.
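A sketch of that variation (a minimal reading of it; the method names are just one reasonable choice):

using System.ComponentModel;

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Non-virtual convenience overload: takes just the property name
    // and delegates to the virtual version below.
    protected void OnPropertyChanged(string propertyName)
    {
        OnPropertyChanged(new PropertyChangedEventArgs(propertyName));
    }

    // Virtual version following the framework design guidelines, so
    // derived classes can observe or extend the notification.
    protected virtual void OnPropertyChanged(PropertyChangedEventArgs e)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, e);
    }
}

A derived ViewModel then just calls OnPropertyChanged("SomeProperty") from its setters.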

IDisposable is tricky here.  Whether or not you need it will depend on factors we don't have here.  Who "owns" the ViewModel (i.e. who will be responsible for disposing)?  The MVVM pattern doesn't dictate this, and people have differing opinions on it.  Further, not all ViewModels will need disposing.  There are three main reasons to need IDisposable on any class: 1) the class has acquired unmanaged resources that need to be released, 2) the class "owns" an object that implements IDisposable, 3) you need to break a chain of references that could keep an object alive longer than it should be, such as often occurs with event handlers.  For this reason, I don't think a ViewModelBase would implement IDisposable. As for how to implement IDisposable, that's well documented as the "dispose pattern" (
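For reference, the dispose pattern boils down to roughly this (a sketch only; the class name is a placeholder and, as above, most ViewModels won't need it at all):

using System;

public class SomeDisposableViewModel : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Release managed resources here: dispose owned IDisposables,
            // unhook event handlers, etc.
        }
        // Release unmanaged resources here (rare for a ViewModel).
        _disposed = true;
    }
}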
Apr 23, 2009 at 8:18 PM

Thanks for the reply. Again, I think I get the general idea behind the three main pieces. I kinda just need to see some hard code illustrations. The InternationalizedWizard project Vincent left above is helping already, but I am curious as to when, where, and how to separate the DataModel from the DataAccessClass roles.

What I'm trying to build should look like this:

View -> ViewModel -> DataModel -> DataAccessClass -> OracleDB

I would like to see a coded example of how to implement a DataAccessClass and make it talk to the DataModel. From there I could tie test controls directly to the DataModel to make sure things are working the way I like, then develop the ViewModel to sit between them and do all the presentation logic as you suggest.

I guess a question that comes before that is: is there any reason that the DataModel and DataAccessClass should be separate at all? Does the DataAccessClass simply contain the connectivity stuff like the Oracle provider, ciphered connection string, and methods to query/execute commands against the db? And does the DataModel simply outline the properties, collections, and datasets that the stuff returned from the DAC is loaded into?

On IDisposable, I think I'll leave it out for now. The link you provided was the example I used, but I am not sure I see a necessity for it. I guess start small, and implement it if/when necessary?

Sorry if I am lagging on this; once I get beyond theory into working examples, I can usually piece stuff together and understand better. Thanks again for your insight and advice.
Apr 24, 2009 at 12:49 PM
Strictly speaking, from the perspective of M-V-VM, the DataAccessClass is part of the Model.  This isn't to say you shouldn't abstract out the data persistence into its own layer, but it's simply not relevant to the M-V-VM pattern.  That's probably why you're struggling here.  Most M-V-VM samples aren't going to focus on this, and probably have a Model design that's not factored in this way.

BTW, the DataAccessClass isn't likely to "talk to the DataModel".  Communication is going to go the other direction.  I'm also not sure what you mean by "tie test controls directly to the DataModel"?  If "test controls" means some piece of UI, you've failed to understand why this pattern is so powerful.  Your tests should be unit tests with no reliance on UI automation.

There are reasons why the Model and the DataAccessClass should be separated.  The Single Responsibility Principle ( is one reason.  For why SRP is important here, think about the need to change how the data is persisted (different DBs, through web services, in an XML file, etc.).  If the data access is intermixed with your business logic, you're going to find it hard to change the persistence.  However, when you have a separate data access layer, you can easily just replace that layer and the business logic isn't affected.

On IDisposable, yes "start small".  Most classes should not require IDisposable.  If you use FXCop/CodeAnalysis, you'll usually know when you need it.
Apr 24, 2009 at 5:30 PM
Edited Apr 24, 2009 at 10:00 PM
Thanks for the added info. I don't think I was articulating myself very well yesterday. I did understand that, in theory, the data access stuff is part of the Model. I guess I just remember the days of ASP.Net and earlier development where common practice was to pull out the data access and write a utility class for the connectivity and execution of commands against a database. A few MVVM examples I read did this, a few did not, so I was trying to get an idea of how to split them and how much should be factored out into a data access class for utility purposes (e.g. whether it should be static, handle security, or go so far as to fill sets/sources). By "talk to" I simply meant that they would interact, but yeah, I get that it would be the DataModels using this sort of utility class.

As far as "test controls", yes, I meant some simple UI crap just to see if the business objects were getting instantiated, collections populated, etc. before I built the ViewModel. This is only because I have absolutely no idea how to write unit tests in VS.Net and I like to simply confirm that I'm learning one thing at a time. But I definately get the fundamentals of "loose coupling" between layers and the purpose for each layer's minimal awareness (if any) of the other layers. I just don't know how to do the unit testing the way you suggest. MVVM Demo App ( looks as if it has an entire project of unit tests, so I can probably dig up some examples there rather than temporarily tying UI controls to directly to the model.

I guess one of the places I'm still most stuck is factoring out the DA logic from the DM logic.

On a slightly different note, I have two main DataModel classes, FC42Master and FC42Details. The details record, as you can guess, is a child of the master record. All the examples I found model a single class of very little complexity. I am almost finished, but two items are left with the models: (1) I am kind of curious about if/how I should model the relationship between the two. (2) I am curious how I should model collections of the two: simply by fetching generic lists of type FC42Master, or by going so far as to create an FC42MasterCollection object that extends ObservableCollection<>? Any advice?

Once again, thanks for the wisdom Wekempf. -T
Apr 24, 2009 at 7:08 PM
Edited Apr 27, 2009 at 3:06 PM
How you write unit tests is at least somewhat dependent on the testing framework you use.  I generally use MSTest because it's integrated with VS if you're using the right SKU (Team), and my day jobs have always required it.  There are some issues with MSTest (performance mostly, though the test runner could use some help UI-wise as well), so most folks prefer one of the open source testing frameworks: NUnit, xUnit and MbUnit are the most popular.  At the most basic level, unit testing simply consists of writing code to exercise a "unit" of your code and assert that return values and other side effects are what you expect.  At a higher level, there's a definite art to unit testing.  Here's a simple tutorial for using NUnit:  Unit testing with any of the frameworks is very similar, with syntactic variations.  I'd also recommend you read a little about TDD and BDD.

As for your modeling questions: this is straying way off topic for this forum, but I'll provide a little guidance.  Generally, you'd model the Master as an object with a collection of Detail objects as a member.  How this is implemented will depend a lot on your persistence mechanism, but usually you'll want to lazy load the details.  You can create a MasterCollection, but really, just using an ObservableCollection<Master> is often good enough.  The choice depends on your needs (do you need to modify or add to the functionality of ObservableCollection<>?) and preferences (some like to keep it simple, while others prefer the addition of a non-generic type).  At the Model layer, however, you may not require INotifyCollectionChanged or INotifyPropertyChanged, choosing instead to add this functionality in the ViewModel layer.
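A rough sketch of that master/detail shape (using your FC42 names, but the properties and the load call are placeholders for whatever your persistence layer actually provides):

using System.Collections.Generic;

public class FC42Details
{
    public int LineNumber { get; set; }   // placeholder property
}

public class FC42Master
{
    private List<FC42Details> _details;

    public int DocNumber { get; set; }    // placeholder property

    // Lazy load the child records the first time they're asked for.
    public IList<FC42Details> Details
    {
        get
        {
            if (_details == null)
            {
                _details = LoadDetails(DocNumber);
            }
            return _details;
        }
    }

    private static List<FC42Details> LoadDetails(int docNumber)
    {
        // ... however your persistence layer fetches the detail rows ...
        return new List<FC42Details>();
    }
}

At the ViewModel layer, an ObservableCollection<FC42Master> exposed as a property is then usually enough for binding.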

Static Reflection: Be careful there. I really like the implementation on that blog post, but I've found that static reflection is orders of magnitude slower than even dynamic reflection, making it a poor choice for implementing INPC and for the other places you'd love to use it. We really need Microsoft to add nameof() and/or symbols to the language. :(
Apr 25, 2009 at 12:29 AM
Edited Apr 25, 2009 at 12:56 AM
Sorry if I was straying. I can see the parent/child thing is off topic. Considering that modeling the object collections can be done as an exposed property in the ViewModel layer, or as a full Model layer class, I thought I was still on target. If you know a more appropriate place to discuss this, let me know. My work email address is trey.white @ -- we could talk there, or I could send you my project. While it is certainly not a disciple's work, I hope you'd find I understand more than I seem to articulate here.

Collection Modeling - VM vs. DM
I think I understand you on the collection modeling. Marlon's example ( modeled a singular ProjectDetails.cs offering a static IEnumerable<ProjectDetails> GetProjects() method. ProjectDetailsVM wrapped and exposed a single public ProjectDetails property. ProjectListVM wrapped and exposed a public ObservableCollection<ProjectDetails> property, later filled by calling the data model's GetProjects() method.
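Roughly the shape I'm describing, paraphrased from memory (this is not Marlon's actual code, and the Name property is just a stand-in):

using System.Collections.Generic;
using System.Collections.ObjectModel;

// Data model with a static fetch method.
public class ProjectDetails
{
    public string Name { get; set; }   // stand-in property

    public static IEnumerable<ProjectDetails> GetProjects()
    {
        // ... fetch and yield the project records ...
        yield break;
    }
}

// List ViewModel exposing a collection filled from the model's method.
public class ProjectListVM
{
    public ObservableCollection<ProjectDetails> Projects { get; private set; }

    public ProjectListVM()
    {
        Projects = new ObservableCollection<ProjectDetails>();
        foreach (ProjectDetails project in ProjectDetails.GetProjects())
        {
            Projects.Add(project);
        }
    }
}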

Is this a healthy practice to you? I'm trying to evaluate the pros and cons of collection modeling at the ViewModel level vs. writing a full-on MasterCollection.cs data model (which was the approach in the first example I found, from a Silverlight sample months back). What's your opinion on this one?

If it helps, here's the general idea. It's all payroll data imported monthly from a text file. The records are parsed and dumped into Master and Detail tables in Oracle. I imagine setting up a UI with tabbed windows, much like browsers - the main window displaying a filterable MasterRecord collection in a ListView of some sort. The user opens a record and it becomes a new tab, its view displaying MasterRecord data laid out much like a header (text boxes, drop downs, etc.) and its associated DetailsRecord(s) in an editable ListView.

DataModel - Factoring Out Common Function
It's funny you said "how this is implemented depends a lot on your persistence mechanism" because that's exactly it; I am working with Oracle's ODP.Net, WPF for the first time, MVVM for the first time, and trying to figure out how to glue data models to the db source properly. If I'm understanding better, 99% of the CRUD mechanics stay in the model classes -- AddMasterRecord() would execute insert sprocs, GetMasterRecords() would fill an IEnumerable collection and expose it -- and then the only code likely to be factored out to its own "data access class" would be that which manages the OracleConnection object, provides OracleDataAdapters and OracleDataReaders to the models, and encrypts/decrypts connection strings. Am I understanding better?

For the record, this image, WPF Line of Business M-V-VM Application, is what is making it hard for me to understand the persistence mechanics and their relation to the Model and ViewModel... I had thought that what they are calling the "business layer" was incorporated into the Model layer's classes, and that the Model was always what talked to the data store.

Note: I'll peek at that unit testing. The relational modeling between Master and Detail records I'll put on hold; I'm gonna spend more time consuming the 4-5 examples of MVVM for now, trying to make sure I understand it tangibly rather than just in theory, and to get a single model done right first with just the Master record. The static reflection looks cool, but yeah, that would be like me putting gold rims on an old bondo'd Cutlass... I'll wait till I have a car worth pimping. You're right though, nameof() would be amazing. Have a good weekend.
Apr 25, 2009 at 1:34 PM
That picture is definitely confusing, IMHO.  Let me discuss a few of the layers they have there.  We'll work from the bottom up.

Data Layer - This is where the persistence goes.  This layer is strictly concerned with moving data into and out of the persistent storage, whether that be a DB, an XML file or a remote service.  It's fairly important that this layer be abstracted in a fashion that you can change how the data is persisted without impacting higher layers.  Google "repository pattern" for ideas in this area.
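A minimal sketch of that kind of abstraction (the interface, class, and method names here are just illustrative, with MasterRecord standing in for your domain class):

using System.Collections.Generic;

public class MasterRecord { }   // stand-in for your domain class

// The rest of the application depends only on this interface.
public interface IMasterRecordRepository
{
    MasterRecord GetMasterRecord(int docNumber);
    IEnumerable<MasterRecord> GetMasterRecords();
    void Save(MasterRecord record);
}

// Oracle-backed implementation; it could be swapped for an XML file,
// a web service, or an in-memory fake for unit tests without touching
// the layers above it.
public class OracleMasterRecordRepository : IMasterRecordRepository
{
    public MasterRecord GetMasterRecord(int docNumber)
    {
        // ... call the stored procedure via ODP.NET and map the row ...
        return new MasterRecord();
    }

    public IEnumerable<MasterRecord> GetMasterRecords()
    {
        // ... query and map rows to MasterRecord instances ...
        return new List<MasterRecord>();
    }

    public void Save(MasterRecord record)
    {
        // ... execute the insert/update sproc ...
    }
}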

Business Layer - To be totally honest, I'm not sure why they've made a separate layer for this.  IMHO, this is just an aspect of the model layer.  The difference between the two, if I understand the diagram, is that the model layer is strictly data, while the business layer is strictly logic.  That makes no sense to me, personally. I see this as a single layer, and would expect my models to include business logic in traditional OO design.

Model Layer - The diagram strictly states that this layer implements INotifyPropertyChanged and IDataErrorInfo.  INPC is strictly optional.  If you're not dealing with data that's somehow updated automatically, there may be no need for INPC at this level.  Note, however, that the diagram also (correctly) indicates that the VM may expose the M as a property and the V will directly bind to the M in that case, which likely would require INPC.  IDEI is also optional, as validation can be done strictly in the VM, though there's lots of reasons why it may be more appropriate at the M layer.

ViewModel Layer - Lots of the stuff stated as "fact" in that diagram is strictly just one approach to implementing MVVM.  For example, it states the VM exposes commands as ICommand properties.  That's a very easy and understandable approach, but it misses a few concepts.  For instance, this means you're not using "standard" commands, which are defined as static RoutedUICommand properties on a few different classes in the BCL.  It *IS* possible to continue to use commands in the "traditional" fashion dictated by these standard commands, it just takes more work (though it can be handled by a framework).  Check out Onyx ( for one way to continue to use "traditional" command architectures.  It also states "business layer performs actions on Model"... but it may be the VM itself that performs these actions.  That's going to depend on what the logic is... if it's logic specific to the presentation, it goes in the VM, while if it's general business logic, it goes in the M (business layer).  The diagram also mentions INPC and IDEI, which are important at this layer (though IDEI depends on whether or not there's any necessary validation), but it fails to mention IEditableObject, which I think is just as important.  Note, however, that if you're using IEO, you won't be exposing the M to the V at all.  Instead you'll always be delegating changes through the VM.
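For reference, the "VM exposes ICommand properties" approach usually boils down to a small command class like this (a minimal sketch of the common relay/delegate-command idea; this is not Onyx and not taken from the diagram):

using System;
using System.Windows.Input;

// Wraps a ViewModel method so the View can bind to it as a command.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute) : this(execute, null) { }

    public DelegateCommand(Action execute, Func<bool> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }

    // Let WPF's CommandManager drive requerying of CanExecute.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

The ViewModel then exposes something like public ICommand SaveCommand { get; private set; } and initializes it with new DelegateCommand(Save, CanSave).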

Marlon's example works fine, but it's going to depend on how large your application is and how dogmatic you are about architectural design.  For a small sample application, his approach works and is easy to code and understand.  For a larger project, however, I'd put more of that logic in the M layer, where dogmatic folks would tell you it always belongs.  I'm not dogmatic.  I'm pragmatic.  My designs are more fluid, using what I think is appropriate for the amount of code, the amount of expected maintenance, and the time constraints I'm facing.  I would have coded the sample in much the same way Marlon did, I think.
Apr 27, 2009 at 5:29 PM
You're an amazing help Kempf.

My application is small. It's going to be a db of master/child records that need CRUD work and then some reporting, but I want to get a very idealistic frame done so I have something to start with on other applications. Otherwise I am a pragmatic person like you. When it comes to learning something new and getting a handle on common/best practices I have to be dogmatic. It's like being a little kid with a new bike: if you pop wheelies before you get how to use the brakes you're gonna fly over the handles. ;)

That said, I don't see the need for splitting the "model layer" into two pieces either.

The last (I know I shouldn't say that) question I have for now is: how do the two approaches to linking the three layers compare?
   1 - The VM reimplements the Model object in its entirety - (yellow box in the image linked in my last post)
   2 - The VM exposes the Model object as a model-class public property - (blue box in the image linked in my last post)

You mentioned IEditableObject. How does that fit into these patterns? Most of the examples I have found expose the Model as a property, rather than reimplementing it like a facsimile... doesn't this kind of go against the purpose of MVVM?

Aside from that, my next goal is digging up a good example of a "data persistence layer" - it's really too bad I can't simply use the Entity Framework. I poked around with that a month or two back and fell in love.
Apr 27, 2009 at 8:13 PM
If you use IEditableObject, it's much simpler if you follow option 1.  IEditableObject.BeginEdit copies the values from the Model into the ViewModel.  IEditableObject.CancelEdit needs to do nothing, generally.  IEditableObject.EndEdit sets the (modified) values in the ViewModel back onto the Model.  Doing this, the Model is only ever modified in response to the user actually clicking an "OK" button on the form.  Of course, this also means the validation (IDataErrorInfo) logic is in the ViewModel instead of/in addition to the Model, which some folks won't agree with.  Personally, I see validation as a function of the UI, and prefer it there.  At the Model layer I'll use more rigid preconditions (throw an exception if properties are set to invalid values).
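A sketch of that shape under option 1 (MasterRecord stands in for your Model class, Description is a placeholder property, and the PropertyChanged plumbing is omitted for brevity):

using System.ComponentModel;

public class MasterRecord
{
    public string Description { get; set; }   // placeholder property
}

public class MasterRecordViewModel : IEditableObject
{
    private readonly MasterRecord _model;

    public MasterRecordViewModel(MasterRecord model)
    {
        _model = model;
        Description = model.Description;
    }

    // Would normally raise PropertyChanged; kept simple here.
    public string Description { get; set; }

    // Copy the Model's current values into the ViewModel.
    public void BeginEdit()
    {
        Description = _model.Description;
    }

    // Effectively nothing to do: the Model was never touched, so the
    // ViewModel just re-syncs its copy and the edits are discarded.
    public void CancelEdit()
    {
        Description = _model.Description;
    }

    // Push the (validated) ViewModel values back onto the Model.
    public void EndEdit()
    {
        _model.Description = Description;
    }
}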

I will note that many people prefer to put IEditableObject in the Model layer, but I find this awkward.  It generally means adding functionality to the Model to copy and store a "memento" of the state, and then restore it on IEditableObject.CancelEdit or discard it on IEditableObject.EndEdit.  This often means you move the state between three objects: the "memento", the Model and the ViewModel, assuming your ViewModel follows option 1 for other reasons (and there are several other reasons you might do that).  At the very least, it requires an extra type for the "memento", even if you're following option 2.  Also, I personally see IEditableObject as a function of the UI, the same as IDataErrorInfo.  My Model layer is kept entirely UI unaware.  After all, there may not be a UI (scripted, run as a service, etc.).

No, exposing the Model doesn't necessarily go against the purpose of MVVM.  It may go against other architectural design principles if you're dogmatic, but when you consider these are samples and/or if you prefer pragmatic over dogmatic (fully reimplementing a Model interface in the ViewModel can be EXTREMELY tedious coding, with sometimes little or no benefit beyond meeting architectural principles), the practice doesn't really violate anything in the MVVM pattern or its goals.

There's nothing wrong with using EF for the data persistence layer, IMHO.  Members of the Alt.NET crowd would disagree, but they are more dogmatic than I am.  They prefer to use NHibernate, which would also be a fine choice.  Honestly, though, the persistence layer is the least interesting, at least to me.
Apr 27, 2009 at 9:43 PM
Edited Apr 27, 2009 at 9:45 PM
Gotcha. Being an ASP.Net programmer, I always viewed validation as a UI function as well, and I like the idea of using it there. I think I am getting stuck at defining what my data access layer needs to be like, and how separate it is or should be from the Model layer. Right now I am using Oracle's ODP.Net access classes and provider, and I understand CRUD functionality should be found in the Model class. But do I go so far as to actually include the methods that push/pull from the db? How much should be factored out into a separate class? For instance:

using System.Data;                      // CommandType, DataSet
using Oracle.DataAccess.Client;         // ODP.NET provider types

public class MasterRecord
{
    private int _docNumber;

    // Option 1: the Model method does the ADO.NET work itself.
    public MasterRecord GetMasterRecord(int docNumber)
    {
        using (OracleConnection connection = new OracleConnection(/* connection string */))
        using (OracleCommand cmd = connection.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "GetMasterRecord";
            OracleParameter parameter = cmd.Parameters.Add("DocNo", OracleDbType.Int32);
            parameter.Value = docNumber;
            // execute the sproc transaction
            // fill a MasterRecord with the returned row data
            return new MasterRecord();
        }
    }

    // Option 2: or should the Model class's method simply use an access-layer class that
    // manages connections and transactions, and just pass it the sproc name and params[]?
    public MasterRecord GetMasterRecordViaDataAccess(int docNumber)
    {
        MasterRecord record = new MasterRecord();
        OracleParameter[] parameters = new OracleParameter[0];   // fill params[]
        DataSet dataset = DataAccess.ExecuteSproc("GetMasterRecord", parameters);
        // parse the row in the dataset and stuff it into record
        return record;
    }
}

Also, this is a dumb question, but I've had trouble retrieving the connection string from App.config. Some were saying it's kind of a dumb way to handle connections in WPF at all, and that I should instead toss it in the Settings bag within the application class (App.xaml.cs). What's your opinion?
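Here's roughly what I've been trying, in case it helps (this assumes a connectionStrings entry I've named "OracleDb" and a project reference to System.Configuration; the names are just mine):

using System.Configuration;   // requires a reference to System.Configuration.dll

// App.config:
// <configuration>
//   <connectionStrings>
//     <add name="OracleDb" connectionString="Data Source=...;User Id=...;Password=..." />
//   </connectionStrings>
// </configuration>

public static class ConnectionStrings
{
    public static string OracleDb
    {
        get { return ConfigurationManager.ConnectionStrings["OracleDb"].ConnectionString; }
    }
}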

Note: oh yeah, as far as EF goes, I would love to, but with Oracle it takes a third-party provider since Microsoft hasn't included one. My company is scared of "new technology" and won't spend $300 on a provider from a well-established company... so it's learning PL/SQL for me, and programming some old-school ADO.Net stuff that I'm way out of practice on, in a new application model (MVVM) I am learning. Nice, huh?
Apr 29, 2009 at 12:02 AM
Kempf -
I think I am getting close to having it worked out:
* I have written a low-level "data access" plumbing class that is much like the Enterprise Library example (though I'm sure much less robust). It handles the connections and executes statements/sprocs against the db, returning DataSets, DataReaders, counts, etc.
* I have my Model class outfitted with Fields/Properties to expose the object.
* View <-> ViewModel connectivity I think I understand well.
* ViewModel <-> Model connectivity I think I understand enough, and will polish as the last piece.

My largest concern is whether I've implemented my Model class correctly, and it can be summarized in 2-3 related questions:
1. If I continue using my data access class, the Model class needs to know how to convert the raw data (sets, rows, etc.) into instances of the object (itself) -- is that right? (Rough sketch of what I mean after these questions.)
2. Is there a better route than the traditional ADO.Net helper methods (ExecuteDataSet(), ExecuteNonQuery(), etc.), like the "data sources" that VS.Net provides? My understanding is that these are "table adapters" of sorts - same purpose as a Model class. While I prefer not to use the data source wizards, they provide the functionality I am trying to accomplish, only tied to my Oracle.DataAccess provider and stored procedures.
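For question 1, this is the kind of conversion I mean (the column name and property are placeholders):

using System;
using System.Data;

public class MasterRecord
{
    public int DocNumber { get; set; }   // placeholder property

    // The Model knows how to build itself from a row handed back
    // by the data access class.
    public static MasterRecord FromDataRow(DataRow row)
    {
        return new MasterRecord
        {
            DocNumber = Convert.ToInt32(row["DOC_NO"])   // placeholder column name
        };
    }
}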

I think this is my last major hangup with MVVM, and I know it's not suitable for this topic/page anymore.

If you have the time, would you mind discussing the last piece of this by email or on another forum of your liking? My email is

Once again, thank you in advance. I hope to hear from you. If not, I understand you are a very busy man and not my personal trainer.

Jun 3, 2012 at 8:37 AM
Edited Jun 3, 2012 at 8:37 AM

Here is a nice example (you can download the code at the bottom of the page): Model-View-ViewModel Pattern (MVVM)